Chapter 1. NoSQL: It’s about making intelligent choices

This chapter covers

  • What’s NoSQL?
  • NoSQL business drivers
  • NoSQL case studies

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year...Certainly over the short term this rate can be expected to continue, if not to increase.

Gordon Moore, 1965

...Then you better start swimmin’...Or you’ll sink like a stone...For the times they are a-changin’.

Bob Dylan

In writing this book we have two goals: first, to describe NoSQL databases, and second, to show how NoSQL systems can be used as standalone solutions or to augment current SQL systems to solve business problems. Though we invite anyone who has an interest in NoSQL to use this as a guide, the information, examples, and case studies are targeted toward technical managers, solution architects, and data architects who are interested in learning about NoSQL.

This material will help you objectively evaluate SQL and NoSQL database systems to see which business problems they solve. If you’re looking for a programming guide for a particular product, you’ve come to the wrong place. Here you’ll find information about the motivations behind NoSQL, as well as related terminology and concepts. There may be sections and chapters of this book that cover topics you already understand; feel free to skim or skip over them and focus on the unknown.

Finally, we feel strongly about and focus on standards. The standards associated with SQL systems allow applications to be ported between databases using a common language. Unfortunately, NoSQL systems can’t yet make this claim. In time, NoSQL application vendors will pressure NoSQL database vendors to adopt a set of standards to make them as portable as SQL.

In this chapter, we’ll begin by giving a definition of NoSQL. We’ll talk about the business drivers and motivations that make NoSQL so intriguing to and popular with organizations today. Finally, we’ll look at five case studies where organizations have successfully implemented NoSQL to solve a particular business problem.

1.1. What is NoSQL?

One of the challenges with NoSQL is defining it. The term NoSQL is problematic since it doesn’t really describe the core themes in the NoSQL movement. The term originated from a group in the Bay Area who met regularly to talk about common concerns and issues surrounding scalable open source databases, and it stuck. Descriptive or not, it seems to be everywhere: in trade press, product descriptions, and conferences. We’ll use the term NoSQL in this book as a way of differentiating a system from a traditional relational database management system (RDBMS).

For our purpose, we define NoSQL in the following way:

NoSQL is a set of concepts that allows the rapid and efficient processing of data sets with a focus on performance, reliability, and agility.

Seems like a broad definition, doesn’t it? It doesn’t even exclude SQL or RDBMS systems, and that’s not a mistake. What’s important is that we identify the core themes behind NoSQL: what it is and, most importantly, what it isn’t.

So what is NoSQL?

  • It’s more than rows in tables —NoSQL systems store and retrieve data from many formats: key-value stores, graph databases, column-family (Bigtable) stores, document stores, and even rows in tables.
  • It’s free of joins —NoSQL systems allow you to extract your data using simple interfaces without joins.
  • It’s schema-free —NoSQL systems allow you to drag-and-drop your data into a folder and then query it without creating an entity-relationship model.
  • It works on many processors —NoSQL systems allow you to store your database on multiple processors and maintain high-speed performance.
  • It uses shared-nothing commodity computers —Most (but not all) NoSQL systems leverage low-cost commodity processors that have separate RAM and disk.
  • It supports linear scalability —When you add more processors, you get a consistent increase in performance.
  • It’s innovative —NoSQL offers alternatives to a single way of storing, retrieving, and manipulating data. NoSQL supporters (also known as NoSQLers) have an inclusive attitude about NoSQL and recognize SQL solutions as viable options. To the NoSQL community, NoSQL means “Not only SQL.”

Equally important is what NoSQL is not:

  • It’s not about the SQL language —The definition of NoSQL isn’t an application that uses a language other than SQL. SQL as well as other query languages are used with NoSQL databases.
  • It’s not only open source —Although many NoSQL systems have an open source model, commercial products use NoSQL concepts as well. You can still have an innovative approach to problem solving with a commercial product.
  • It’s not only big data —Many, but not all, NoSQL applications are driven by the inability of a current application to efficiently scale when big data is an issue. Though volume and velocity are important, NoSQL also focuses on variability and agility.
  • It’s not about cloud computing —Many NoSQL systems reside in the cloud to take advantage of its ability to rapidly scale when the situation dictates. NoSQL systems can run in the cloud as well as in your corporate data center.
  • It’s not about a clever use of RAM and SSDs —Many NoSQL systems focus on the efficient use of RAM or solid-state drives to increase performance. Though this is important, NoSQL systems can run on standard hardware.
  • It’s not an elite group of products —NoSQL isn’t an exclusive club with a few products. There are no membership dues or tests required to join. To be considered a NoSQLer, you only need to convince others that you have innovative solutions to their business problems.

NoSQL applications use a variety of data store types (databases). From the simple key-value store that associates a unique key with a value, to graph stores used to associate relationships, to document stores used for variable data, each NoSQL type of data store has unique attributes and uses as identified in table 1.1.

Table 1.1. Types of NoSQL data stores—the four main categories of NoSQL systems, and sample products for each data store type

Key-value store—A simple data storage system that uses a key to access a value
  Typical usage: image stores, key-based filesystems, object caches, systems designed to scale
  Examples: Berkeley DB, Memcache, Redis, Riak, DynamoDB

Column family store—A sparse matrix system that uses a row and a column as keys
  Typical usage: web crawler results, big data problems that can relax consistency rules
  Examples: Apache HBase, Apache Cassandra, Hypertable, Apache Accumulo

Graph store—For relationship-intensive problems
  Typical usage: social networks, fraud detection, relationship-heavy data
  Examples: Neo4j, AllegroGraph, Bigdata (RDF data store), InfiniteGraph (Objectivity)

Document store—Storing hierarchical data structures directly in the database
  Typical usage: high-variability data, document search, integration hubs, web content management, publishing
  Examples: MongoDB (10Gen), CouchDB, Couchbase, MarkLogic, eXist-db, Berkeley DB XML
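
To make these categories concrete, the following is a minimal, hypothetical sketch of the data shapes each store type works with. The keys, values, and field names are invented for illustration and aren’t tied to any particular product.

```python
# Illustrative data shapes for the four NoSQL store types (all names invented).

# Key-value store: an opaque value addressed by a unique key.
kv_store = {"customer:1234:avatar": b"\x89PNG...image bytes..."}

# Column family store: a sparse matrix addressed by row key and column key;
# rows need not share the same columns.
column_store = {("com.example/index.html", "anchor:about"): "About Us"}

# Graph store: nodes connected by typed relationships.
graph_edges = [("alice", "FRIEND_OF", "bob"),
               ("bob", "FLAGGED_BY", "fraud-rule-7")]

# Document store: a hierarchical record stored and retrieved as a single unit.
document = {
    "customer_id": 1234,
    "name": "Alice",
    "orders": [{"sku": "A-17", "qty": 2}],  # nested, repeated structure
}
```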

NoSQL systems have unique characteristics and capabilities that can be used alone or in conjunction with your existing systems. Many organizations considering NoSQL systems do so to overcome common issues such as volume, velocity, variability, and agility, the business drivers behind the NoSQL movement.


1.2. NoSQL business drivers

The scientist-philosopher Thomas Kuhn coined the term paradigm shift to identify a recurring process he observed in science, where innovative ideas came in bursts and impacted the world in nonlinear ways. We’ll use Kuhn’s concept of the paradigm shift as a way to think about and explain the NoSQL movement and the changes in thought patterns, architectures, and methods emerging today.

Many organizations supporting single-CPU relational systems have come to a crossroads: the needs of their organizations are changing. Businesses have found value in rapidly capturing and analyzing large amounts of variable data, and making immediate changes in their businesses based on the information they receive.

Figure 1.1 shows how the demands of volume, velocity, variability, and agility play a key role in the emergence of NoSQL solutions. As each of these drivers applies pressure to the single-processor relational model, its foundation becomes less stable and in time no longer meets the organization’s needs.

Figure 1.1. In this figure, we see how the business drivers volume, velocity, variability, and agility apply pressure to the single CPU system, resulting in the cracks. Volume and velocity refer to the ability to handle large datasets that arrive quickly. Variability refers to how diverse data types don’t fit into structured tables, and agility refers to how quickly an organization responds to business change.

1.2.1. Volume

Without a doubt, the key factor pushing organizations to look at alternatives to their current RDBMSs is a need to query big data using clusters of commodity processors. Until around 2005, performance concerns were resolved by purchasing faster processors. In time, the ability to increase processing speed was no longer an option. As chip density increased, heat could no longer be dissipated fast enough to prevent chips from overheating. This phenomenon, known as the power wall, forced systems designers to shift their focus from increasing speed on a single chip to using more processors working together. The need to scale out (also known as horizontal scaling), rather than scale up (faster processors), moved organizations from serial to parallel processing, where data problems are split into separate paths and sent to separate processors to divide and conquer the work.
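
To see the divide-and-conquer idea in miniature, here is a toy Python sketch that splits a dataset across worker processes and combines the partial results. It illustrates the scale-out pattern only; it isn’t how any particular database implements parallelism.

```python
from multiprocessing import Pool

def count_errors(chunk):
    # Each worker scans only its own slice of the data.
    return sum(1 for record in chunk if record["status"] == "error")

if __name__ == "__main__":
    data = [{"status": "ok"}] * 900 + [{"status": "error"}] * 100
    chunks = [data[i::4] for i in range(4)]        # divide the problem four ways
    with Pool(processes=4) as pool:
        partials = pool.map(count_errors, chunks)  # conquer in parallel
    print(sum(partials))                           # combine the results: 100
```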

1.2.2. Velocity

Though big data problems are a consideration for many organizations moving away from RDBMSs, the ability of a single-processor system to rapidly read and write data is also key. Many single-processor RDBMSs are unable to keep up with the demands of real-time inserts and online queries made by public-facing websites. RDBMSs frequently index many columns of every new row, a process that decreases system performance. When single-processor RDBMSs are used as a back end to a web storefront, random bursts in web traffic slow down response for everyone, and tuning these systems can be costly when both high read and write throughput are desired.

1.2.3. Variability

Companies that want to capture and report on exception data struggle when attempting to use rigid database schema structures imposed by RDBMSs. For example, if a business unit wants to capture a few custom fields for a particular customer, all customer rows within the database need to store this information even though it doesn’t apply. Adding new columns to an RDBMS requires the system be shut down and ALTER TABLE commands to be run. When a database is large, this process can impact system availability, costing time and money.
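
The contrast is easy to sketch. In the hypothetical example below (field names invented), a schema-free document store lets each customer record carry only the fields that apply, with no ALTER TABLE and no downtime.

```python
# Rigid schema: capturing a custom field means every row gets the new column,
# typically via something like:
#   ALTER TABLE customers ADD COLUMN loyalty_tier VARCHAR(10);

# Schema-free documents: each record carries only the fields it needs.
customers = [
    {"id": 1, "name": "Acme Corp"},                        # no custom fields
    {"id": 2, "name": "Blue Sky",                          # custom fields on
     "loyalty_tier": "gold", "regional_rep": "J. Smith"},  # this customer only
]

# Queries simply tolerate the missing fields.
gold_customers = [c for c in customers if c.get("loyalty_tier") == "gold"]
```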

1.2.4. Agility

The most complex part of building applications on RDBMSs is the process of putting data into and getting data out of the database. If your data has nested and repeated subgroups of data structures, you need to include an object-relational mapping layer. The responsibility of this layer is to generate the correct combination of INSERT, UPDATE, DELETE, and SELECT SQL statements to move object data to and from the RDBMS persistence layer. This process isn’t simple; it’s often the largest barrier to rapid change when developing new applications or modifying existing ones.

Generally, object-relational mapping requires experienced software developers who are familiar with object-relational frameworks such as Java Hibernate (or NHibernate for .NET systems). Even with experienced staff, small change requests can cause slowdowns in development and testing schedules.
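
The mapping burden is easiest to see side by side. The sketch below (table and field names invented) shows the statements a mapping layer must generate for one nested object, versus storing the same structure directly as a document.

```python
import json

# One order object with a nested, repeated subgroup (its line items).
order = {"order_id": 17, "customer": "Alice",
         "items": [{"sku": "A-1", "qty": 2}, {"sku": "B-9", "qty": 1}]}

# Relational persistence: the mapping layer must decompose the object into
# several statements across several tables, and reverse the process (with a
# join) to read it back. (Parameterized statements omitted for brevity.)
statements = ["INSERT INTO orders (order_id, customer) VALUES (17, 'Alice')"]
for item in order["items"]:
    statements.append(
        f"INSERT INTO order_items (order_id, sku, qty) "
        f"VALUES (17, '{item['sku']}', {item['qty']})"
    )

# Document persistence: the object is stored and retrieved as one unit.
stored = json.dumps(order)     # write the whole aggregate at once
restored = json.loads(stored)  # read it back with its nesting intact
```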

You can see how velocity, volume, variability, and agility are the high-level drivers most frequently associated with the NoSQL movement. Now that you’re familiar with these drivers, you can look at your organization to see how NoSQL solutions might impact these drivers in a positive way to help your business meet the changing demands of today’s competitive marketplace.


1.3. NoSQL case studies

Our economy is changing. Companies that want to remain competitive need to find new ways to attract and retain their customers. To do this, the technology and people who create it must support these efforts quickly and in a cost-effective way. New thoughts about how to implement solutions are moving away from traditional methods toward processes, procedures, and technologies that at times seem bleeding-edge.

The following case studies demonstrate how business problems have successfully been solved faster, cheaper, and more effectively by thinking outside the box. Table 1.2 summarizes five case studies where NoSQL solutions were used to solve particular business problems. It presents the problems, the business drivers, and the ultimate findings. As you view subsequent sections, you’ll begin to see a common theme emerge: some business problems require new thinking and technology to provide the best solution.

Table 1.2. The key case studies associated with the NoSQL movement—the name of the case study/standard, the business drivers, and the results (findings) of the selected solutions

LiveJournal’s Memcache
  Driver: Need to increase the performance of database queries.
  Finding: By using hashing and caching, data in RAM can be shared. This cuts down the number of read requests sent to the database, increasing performance.

Google’s MapReduce
  Driver: Need to index billions of web pages for search using low-cost hardware.
  Finding: By using parallel processing, indexing billions of web pages can be done quickly with a large number of commodity processors.

Google’s Bigtable
  Driver: Need to flexibly store tabular data in a distributed system.
  Finding: By using a sparse matrix approach, users can think of all data as being stored in a single table with billions of rows and millions of columns without the need for up-front data modeling.

Amazon’s Dynamo
  Driver: Need to accept web orders 24 hours a day, 7 days a week.
  Finding: A key-value store with a simple interface can be replicated even when there are large volumes of data to be processed.

MarkLogic
  Driver: Need to query large collections of XML documents stored on commodity hardware using standard query languages.
  Finding: By distributing queries to commodity servers that contain indexes of XML documents, each server can be responsible for processing data on its own local disk and returning the results to a query server.

1.3.1. Case study: LiveJournal’s Memcache

Engineers working on the blogging system LiveJournal started to look at how their systems were using their most precious resource: the RAM in each web server. LiveJournal had a problem. Their website was so popular that the number of visitors using the site continued to increase daily. The only way they could keep up with demand was to continue to add more web servers, each with its own separate RAM.

To improve performance, the LiveJournal engineers found ways to keep the results of the most frequently used database queries in RAM, avoiding the expense of rerunning the same SQL queries on their database. But each web server kept its own copy of the query results in RAM; there was no way for any web server to know that the server next to it in the rack already had a copy of the same results sitting in RAM.

So the engineers at LiveJournal created a simple way to generate a distinct “signature” for every SQL query. This signature, or hash, was a short string that represented a SQL SELECT statement. By sending a small message between web servers, any web server could ask the others whether they already had a copy of the results for that query. If one did, it would return those results and avoid an expensive round trip to the already overwhelmed SQL database. They called their new system Memcache because it managed a cache of results in RAM.
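
The signature-and-lookup pattern fits in a few lines of Python. This is a simplified, single-process illustration of the idea (the run_on_database function is a stand-in), not the actual memcached implementation.

```python
import hashlib

cache = {}  # stands in for the pool of RAM shared across web servers

def cached_query(sql, run_on_database):
    # Build a short signature (hash) that uniquely identifies this query.
    signature = hashlib.sha1(sql.encode("utf-8")).hexdigest()
    if signature in cache:         # some server already ran this exact SQL
        return cache[signature]    # serve the result from RAM
    result = run_on_database(sql)  # cache miss: pay for the round trip
    cache[signature] = result      # share the result for future requests
    return result

# The second call never reaches the (stand-in) database.
fake_db = lambda sql: [("row1",), ("row2",)]
cached_query("SELECT title FROM posts WHERE id = 42", fake_db)
cached_query("SELECT title FROM posts WHERE id = 42", fake_db)
```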

Many other software engineers had come across this problem in the past. The concept of large pools of shared-memory servers wasn’t new. What was different this time was that the engineers for LiveJournal went one step further. They not only made this system work (and work well), they shared their software using an open source license, and they also standardized the communications protocol between the web front ends (called the memcached protocol). Now anyone who wanted to keep their database from getting overwhelmed with repetitive queries could use their front end tools.

1.3.2. Case study: Google’s MapReduce—use commodity hardware to create search indexes

One of the most influential case studies in the NoSQL movement is Google’s MapReduce system. In the MapReduce paper, Google shared their process for transforming large volumes of web content into search indexes using low-cost commodity CPUs.

Though sharing of this information was significant, the concepts of map and reduce weren’t new. Map and reduce functions are simply names for two stages of a data transformation, as described in figure 1.2.

Figure 1.2. The map and reduce functions are ways of partitioning large datasets into smaller chunks that can be transformed on isolated and independent transformation systems. The key is isolating each function so that it can be scaled onto many servers.

The initial stages of the transformation are called the map operation. They’re responsible for data extraction, transformation, and filtering of data. The results of the map operation are then sent to a second layer: the reduce function. The reduce function is where the results are sorted, combined, and summarized to produce the final result.
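
A word count is the classic way to see the two stages. The minimal single-machine sketch below shows the shape of the pattern; a real MapReduce system runs the map calls and reduce calls on many machines in parallel.

```python
from collections import defaultdict
from functools import reduce

def map_stage(document):
    # Extract and transform each input into (key, value) pairs.
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Group the mapped pairs by key before reduction.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_stage(groups):
    # Combine and summarize each group into a final result.
    return {key: reduce(lambda a, b: a + b, values)
            for key, values in groups.items()}

docs = ["the cat sat", "the dog sat"]
mapped = [pair for doc in docs for pair in map_stage(doc)]
print(reduce_stage(shuffle(mapped)))  # {'the': 2, 'cat': 1, 'sat': 2, 'dog': 1}
```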

The core concepts behind the map and reduce functions are based on solid computer science work that dates back to the 1950s, when programmers at MIT implemented these functions in the influential LISP system. LISP was different from other programming languages because it emphasized functions that transformed isolated lists of data. This focus is now the basis for many modern functional programming languages that have desirable properties on distributed systems.

Google extended the map and reduce functions to reliably execute on billions of web pages on hundreds or thousands of low-cost commodity CPUs. Google made map and reduce work reliably on large volumes of data and did it at a low cost. It was Google’s use of MapReduce that encouraged others to take another look at the power of functional programming and the ability of functional programming systems to scale over thousands of low-cost CPUs. Software packages such as Hadoop have closely modeled these functions.

The use of MapReduce inspired engineers from Yahoo! and other organizations to create open source versions of Google’s MapReduce. It fostered a growing awareness of the limitations of traditional procedural programming and encouraged others to use functional programming systems.

1.3.3. Case study: Google’s Bigtable—a table with a billion rows and a million columns

Google also influenced many software developers when they announced their Bigtable system in a white paper titled Bigtable: A Distributed Storage System for Structured Data. The motivation behind Bigtable was the need to store results from the web crawlers that extract HTML pages, images, sounds, videos, and other media from the internet. The resulting dataset was so large that it couldn’t fit into a single relational database, so Google built their own storage system. Their fundamental goal was to build a system that would easily scale as their data increased without forcing them to purchase expensive hardware. The solution was neither a full relational database nor a filesystem, but what they called a “distributed storage system” that worked with structured data.

By all accounts, the Bigtable project was extremely successful. It gave Google developers a single tabular view of the data by creating one large table that stored all the data they needed. In addition, they created a system that allowed the hardware to be located in any data center, anywhere in the world, and created an environment where developers didn’t need to worry about the physical location of the data they manipulated.
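
Conceptually, a Bigtable-style cell is addressed by a row key, a column name, and a timestamp, and only cells that exist take up space. The sketch below approximates that sparse-matrix idea with an ordinary map; Bigtable’s actual storage engine is, of course, far more sophisticated.

```python
import time

table = {}  # sparse map: (row key, column, timestamp) -> value

def put(row_key, column, value):
    table[(row_key, column, time.time())] = value

def get_latest(row_key, column):
    # Return the newest version of this cell, if any exists.
    versions = [(ts, v) for (r, c, ts), v in table.items()
                if r == row_key and c == column]
    return max(versions)[1] if versions else None

# Rows need not share columns, so no up-front data modeling is required.
put("com.cnn.www", "contents:html", "<html>...</html>")
put("com.cnn.www", "anchor:cnnsi.com", "CNN")
print(get_latest("com.cnn.www", "anchor:cnnsi.com"))  # CNN
```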

1.3.4. Case study: Amazon’s Dynamo—accept an order 24 hours a day, 7 days a week

Google’s work focused on ways to make distributed batch processing and reporting easier, but it wasn’t intended to support the need for highly scalable web storefronts that run 24/7. This development came from Amazon, which in 2007 published another significant NoSQL paper: Dynamo: Amazon’s Highly Available Key-value Store. The business motivation behind Dynamo was Amazon’s need to create a highly reliable web storefront that supported transactions from around the world 24 hours a day, 7 days a week, without interruption.

Traditional brick-and-mortar retailers that operate in a few locations have the luxury of having their cash registers and point-of-sale equipment operating only during business hours. When not open for business, they run daily reports and perform backups and software upgrades. The Amazon model is different. Not only are their customers from all corners of the world, but they shop at all hours of the day, every day. Any downtime in the purchasing cycle could result in the loss of millions of dollars. Amazon’s systems need to be ironclad: reliable and scalable, without any loss of service.

In its initial offerings, Amazon used a relational database to support its shopping cart and checkout system. They had unlimited licenses for RDBMS software and a consulting budget that allowed them to attract the best and brightest consultants for their projects. In spite of all that power and money, they eventually realized that a relational model wouldn’t meet their future business needs.

Many in the NoSQL community cite Amazon’s Dynamo paper as a significant turning point in the movement. At a time when relational models were still used, it challenged the status quo and current best practices. Amazon found that because key-value stores had a simple interface, it was easier to replicate the data and more reliable. In the end, Amazon used a key-value store to build a turnkey system that was reliable, extensible, and able to support their 24/7 business model, making them one of the most successful online retailers in the world.
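
The simplicity that made replication easier can be sketched directly: when the whole interface is put and get, a write can go to every replica and a read can accept whatever a majority agrees on. This toy example shows only that core idea; the real Dynamo design adds consistent hashing, vector clocks, and hinted handoff, none of which appear here.

```python
from collections import Counter

REPLICAS = [dict(), dict(), dict()]  # three stand-in storage nodes

def put(key, value):
    # The simple key-value interface makes replication easy:
    # send the same write to every replica.
    for node in REPLICAS:
        node[key] = value

def get(key, quorum=2):
    # Read from all nodes and accept the value a majority agrees on, so the
    # store keeps answering even if one replica is stale or unreachable.
    votes = Counter(node.get(key) for node in REPLICAS)
    value, count = votes.most_common(1)[0]
    return value if count >= quorum else None

put("cart:alice", "book|lamp")  # values kept as strings for this sketch
print(get("cart:alice"))        # book|lamp
```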

1.3.5. Case study: MarkLogic

In 2001 a group of engineers in the San Francisco Bay Area with experience in document search formed a company that focused on managing large collections of XML documents. Because XML documents contained markup, they named the company MarkLogic.

MarkLogic defined two types of nodes in a cluster: query and document nodes. Query nodes receive query requests and coordinate all activities associated with executing a query. Document nodes contain XML documents and are responsible for executing queries on the documents in the local filesystem.

Query requests are sent to a query node, which distributes queries to each remote server that contains indexed XML documents. All document matches are returned to the query node. When all document nodes have responded, the query result is then returned.

The MarkLogic architecture, which moves queries to the documents rather than moving documents to the query server, allowed MarkLogic to achieve linear scalability over petabytes of documents.
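
That query-node/document-node split is a scatter-gather pattern, sketched below as a single hypothetical process. The node contents and matching logic are invented; a real cluster would use indexes and the network rather than substring tests and threads.

```python
from concurrent.futures import ThreadPoolExecutor

# Each document node holds its own local collection of documents.
document_nodes = [
    ["<doc>alpha</doc>", "<doc>beta</doc>"],
    ["<doc>alpha gamma</doc>"],
    ["<doc>delta</doc>"],
]

def search_node(docs, term):
    # The query runs where the documents live; only matches travel back.
    return [d for d in docs if term in d]

def query_node(term):
    # Scatter the query to every document node, then gather the matches.
    with ThreadPoolExecutor() as pool:
        per_node = list(pool.map(lambda docs: search_node(docs, term),
                                 document_nodes))
    return [match for matches in per_node for match in matches]

print(query_node("alpha"))  # ['<doc>alpha</doc>', '<doc>alpha gamma</doc>']
```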

MarkLogic found a demand for their products in US federal government systems that stored terabytes of intelligence information and large publishing entities that wanted to store and search their XML documents. Since 2001, MarkLogic has matured into a general-purpose highly scalable document store with support for ACID transactions and fine-grained, role-based access control. Initially, the primary language of MarkLogic developers was XQuery paired with REST; newer versions support Java as well as other language interfaces.

MarkLogic is a commercial product that requires a software license for any datasets over 40 GB. NoSQL is associated with commercial as well as open source products that provide innovative solutions to business problems.

1.3.6. Applying your knowledge

To demonstrate how the concepts in this book can be applied, we introduce you to Sally Solutions. Sally is a solution architect at a large organization that has many business units. Business units that have information management issues are assigned a solution architect to help them select the best solution to their information challenge. Sally works on projects that need custom applications developed and she’s knowledgeable about SQL and NoSQL technologies. Her job is to find the best fit for the business problem.

Now let’s see how Sally applies her knowledge in two examples. In the first example, a group that needed to track equipment warranties of hardware purchases came to Sally for advice. Since the hardware information was already in an RDBMS and the team had experience with SQL, Sally recommended they extend the RDBMS to include warranty information and create reports using joins. In this case, it was clear that SQL was appropriate.

In the second example, a group that was in charge of storing digital image information within a relational database approached Sally because the performance of the database was negatively impacting their web application’s page rendering. In this case, Sally recommended moving all images to a key-value store, which referenced each image with a URL. A key-value store is optimized for read-intensive applications and works with content distribution networks. After removing the image management load from the RDBMS, the web application as well as other applications saw an improvement in performance.

Note that Sally doesn’t see her job as a black-and-white, RDBMS versus NoSQL selection process. Sometimes the best solution involves using hybrid approaches.


1.4. Summary

This chapter began with an introduction to the concept of NoSQL and reviewed the core business drivers behind the NoSQL movement. We then showed how the power wall forced systems designers to use highly parallel processing designs and required a new type of thinking for managing data. You also saw that traditional systems that use object middle tiers and relational databases require complex object-relational mapping layers to manipulate the data. These layers often get in the way of an organization’s ability to react quickly to changes (agility).

When we venture into any new technology, it’s critical to understand that each area has its own patterns of problem solving. These patterns vary dramatically from technology to technology. Making the transition from SQL to NoSQL is no different. NoSQL is a new paradigm and requires a new set of pattern recognition skills, new ways of thinking, and new ways of solving problems. It requires a new cognitive style.

Opting to use NoSQL technologies can help organizations gain a competitive edge in their market, making them more agile and better equipped to adapt to changing business conditions. NoSQL approaches that leverage large numbers of commodity processors save companies time and money and increase service reliability.

As you’ve seen in the case studies, these changes impacted more than early technology adopters: engineers around the world realize there are alternatives to the RDBMS-as-our-only-option mantra. New companies focused on new thinking, technologies, and architectures have emerged not as a lark, but as a necessity to solving real business problems that don’t fit into a relational mold. As organizations continue to change and move into global economies, this trend will continue to expand.

As we move into our next chapter, we’ll begin looking at the core concepts and technologies associated with NoSQL. We’ll talk about simplicity of design and see how it’s fundamental to creating NoSQL systems that are modular, scalable, and ultimately lower-cost to you and your organization.
