
Figuring out Consistency Levels in Azure Cosmos DB

Apr 18, 2018   //   by Karen Lopez   //   Azure, Blog, Cosmos DB, Data, Database, Database Design  //  No Comments

Azure Cosmos DB’s five levels of consistency: Strong, Bounded Staleness, Session, Consistent Prefix and Eventual

I’ll have to admit: the first time I heard the term and explanation behind “Eventual Consistency”, I laughed.  That’s because I’ve spent my whole life fighting the good fight to ensure data is consistent.  That’s what transactions are for.  Fast forward several years, and we data professionals understand that some data stories don’t require strict consistency for every reader of the data.

The key to that statement is reader. For the most part, we still don’t want inconsistent writes.

Consistency in the real world is a continuum from strictly consistent to eventually consistent.  Notice that consistency is still the goal.  But because it’s a continuum, there are many consistency schemes along the way.  I’ve always struggled a bit with understanding and explaining these levels.

We need these consistency levels because of the CAP Theorem, which says a distributed system can guarantee at most two of Consistency, Availability and Partition Tolerance.  This is mostly due to physics: if I have distributed the same data over multiple locations, I have to give up one of the CAP properties to make the system work.

Let’s take a look at what the Cosmos DB documentation says about consistency levels (feel free to just scan this):

Consistency levels

You can configure a default consistency level on your database account that applies to all collections (and databases) under your Cosmos DB account. By default, all reads and queries issued against the user-defined resources use the default consistency level specified on the database account. You can relax the consistency level of a specific read/query request in each of the supported APIs. There are five types of consistency levels supported by the Azure Cosmos DB replication protocol that provide a clear trade-off between specific consistency guarantees and performance, as described in this section.


Strong:

  • Strong consistency offers a linearizability guarantee with the reads guaranteed to return the most recent version of an item.
  • Strong consistency guarantees that a write is only visible after it is committed durably by the majority quorum of replicas. A write is either synchronously committed durably by both the primary and the quorum of secondaries, or it is aborted. A read is always acknowledged by the majority read quorum, a client can never see an uncommitted or partial write and is always guaranteed to read the latest acknowledged write.
  • Azure Cosmos DB accounts that are configured to use strong consistency cannot associate more than one Azure region with their Azure Cosmos DB account.
  • The cost of a read operation (in terms of request units consumed) with strong consistency is higher than session and eventual, but the same as bounded staleness.
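If the quorum language is hard to picture, here’s a tiny Python sketch of the majority-quorum idea behind strong consistency: a write counts as committed only once a majority of replicas hold it, and a read consults a majority, so it can never return an uncommitted or stale value. This is an illustration of the concept only, not the actual Cosmos DB replication protocol:

```python
class QuorumStore:
    """Toy majority-quorum store: N replicas, majority write and read quorums."""

    def __init__(self, replica_count=3):
        self.replicas = [{} for _ in range(replica_count)]
        self.quorum = replica_count // 2 + 1  # majority

    def write(self, key, value, version):
        # A write is durable only once a majority of replicas accept it.
        acks = 0
        for replica in self.replicas:
            replica[key] = (version, value)
            acks += 1
            if acks >= self.quorum:
                return True  # committed
        return False  # aborted: could not reach a majority

    def read(self, key):
        # Consult a majority and return the highest-versioned value seen,
        # so the read always reflects the latest committed write.
        votes = [r[key] for r in self.replicas[: self.quorum] if key in r]
        return max(votes)[1] if votes else None

store = QuorumStore()
store.write("score", "2-1", version=1)
store.write("score", "3-1", version=2)
print(store.read("score"))  # "3-1": the most recent committed write
```

Because any write majority and any read majority must overlap in at least one replica, the read quorum always contains the latest committed version.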

Bounded staleness:

  • Bounded staleness consistency guarantees that the reads may lag behind writes by at most K versions or prefixes of an item or t time-interval.
  • Therefore, when choosing bounded staleness, the "staleness" can be configured in two ways: number of versions K of the item by which the reads lag behind the writes, and the time interval t
  • Bounded staleness offers total global order except within the "staleness window." The monotonic read guarantees exist within a region both inside and outside the "staleness window."
  • Bounded staleness provides a stronger consistency guarantee than session, consistent-prefix, or eventual consistency. For globally distributed applications, we recommend you use bounded staleness for scenarios where you would like to have strong consistency but also want 99.99% availability and low latency.
  • Azure Cosmos DB accounts that are configured with bounded staleness consistency can associate any number of Azure regions with their Azure Cosmos DB account.
  • The cost of a read operation (in terms of RUs consumed) with bounded staleness is higher than session and eventual consistency, but the same as strong consistency.
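Here’s what the “at most K versions” bound might look like in a toy Python model. Only the K-versions bound is modeled (not the time-interval option, and not Cosmos DB’s actual implementation):

```python
class BoundedStalenessReplica:
    """Toy model: a lagging replica that never trails the primary by more than K versions."""

    def __init__(self, k):
        self.k = k
        self.log = []      # the primary's ordered writes
        self.applied = 0   # how many of them this replica has applied

    def write(self, value):
        self.log.append(value)
        # Enforce the bound: force the replica to catch up until its lag is <= K.
        while len(self.log) - self.applied > self.k:
            self.applied += 1

    def read(self):
        # A read returns the newest value this replica has applied,
        # which may be stale -- but by at most K versions.
        return self.log[self.applied - 1] if self.applied else None

r = BoundedStalenessReplica(k=2)
for v in ["A", "B", "C", "D"]:
    r.write(v)
print(r.read())  # "B": trails the latest write "D" by exactly K=2 versions
```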


Session:

  • Unlike the global consistency models offered by strong and bounded staleness consistency levels, session consistency is scoped to a client session.
  • Session consistency is ideal for all scenarios where a device or user session is involved since it guarantees monotonic reads, monotonic writes, and read your own writes (RYW) guarantees.
  • Session consistency provides predictable consistency for a session, and maximum read throughput while offering the lowest latency writes and reads.
  • Azure Cosmos DB accounts that are configured with session consistency can associate any number of Azure regions with their Azure Cosmos DB account.
  • The cost of a read operation (in terms of RUs consumed) with session consistency level is less than strong and bounded staleness, but more than eventual consistency.
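The trick behind session consistency is a session token: each write hands the client a token recording its version, and later reads in the same session must not go behind that token. That’s what buys read-your-own-writes. A hedged Python sketch of the idea (not the real Cosmos DB session-token mechanics):

```python
class SessionStore:
    """Toy model of read-your-own-writes via a session token."""

    def __init__(self):
        self.versions = []        # global ordered write history
        self.replica_applied = 0  # how far a (possibly lagging) replica has caught up

    def write(self, value):
        self.versions.append(value)
        # The session token is simply the version number of this write.
        return len(self.versions)

    def read(self, session_token):
        # Before serving the read, the replica must catch up at least to the
        # session token's version: the session never reads behind its own writes.
        self.replica_applied = max(self.replica_applied, session_token)
        return self.versions[self.replica_applied - 1]

store = SessionStore()
token = store.write("my update")
print(store.read(token))  # "my update": a session always sees its own writes
```

Other sessions without that token get no such guarantee, which is exactly why this level is cheaper than strong or bounded staleness.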

Consistent Prefix:

  • Consistent prefix guarantees that in absence of any further writes, the replicas within the group eventually converge.
  • Consistent prefix guarantees that reads never see out-of-order writes. If writes were performed in the order A, B, C, then a client sees either A; A, B; or A, B, C, but never an out-of-order result like A, C or B, A, C.
  • Azure Cosmos DB accounts that are configured with consistent prefix consistency can associate any number of Azure regions with their Azure Cosmos DB account.
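The prefix guarantee is easy to demonstrate: however far behind a replica is, what it has applied is always some prefix of the ordered write log. A minimal sketch, with the replica’s lag chosen at random:

```python
import random

writes = ["A", "B", "C"]  # the global write order

def consistent_prefix_read(log):
    # The replica has applied some prefix of the ordered log -- maybe none of
    # it, maybe all of it, but never a gap or a reordering.
    applied = random.randint(0, len(log))
    return log[:applied]

seen = consistent_prefix_read(writes)
# Whatever we saw is a prefix: [], [A], [A, B], or [A, B, C] --
# never [A, C] and never [B, A, C].
assert seen == writes[: len(seen)]
print(seen)
```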


Eventual:

  • Eventual consistency guarantees that in absence of any further writes, the replicas within the group eventually converge.
  • Eventual consistency is the weakest form of consistency where a client may get the values that are older than the ones it had seen before.
  • Eventual consistency provides the weakest read consistency but offers the lowest latency for both reads and writes.
  • Azure Cosmos DB accounts that are configured with eventual consistency can associate any number of Azure regions with their Azure Cosmos DB account.
  • The cost of a read operation (in terms of RUs consumed) with the eventual consistency level is the lowest of all the Azure Cosmos DB consistency levels.
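“Eventually converge” just means that once writes stop, replicas exchanging state end up agreeing. A toy sketch using a last-writer-wins merge (one common convergence rule; real systems use various strategies):

```python
# Three replicas that diverged after concurrent writes, each holding a
# (version, value) pair. With no further writes, one round of last-writer-wins
# merging is enough for them to converge on the highest-versioned value.
replicas = [(1, "draft"), (3, "final"), (2, "edit")]

def converge(replicas):
    winner = max(replicas)  # last-writer-wins: highest version number
    return [winner] * len(replicas)

replicas = converge(replicas)
print(replicas)  # every replica now holds (3, "final")
```

Until that convergence happens, a client may read older values than ones it has already seen, which is what makes this the weakest level.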

It’s clear, isn’t it? No?  I’ll agree that reading text about consistency levels can be difficult to really understand.  In searching for more examples, I found a wonderful write-up that uses animations plus a baseball analogy. In that post, Michael Whittaker references the 2013 CACM article Replicated Data Consistency Explained Through Baseball (ACM subscription required) by Doug Terry of Microsoft Research.  If you don’t have access to the ACM library (you definitely should, by the way), you can find videos of talks he has given on this topic on the web.

Michael also has a more complex post on Visualizing Linearizability.  This is a topic I want to know more about, but first I have to tackle my challenge of saying Linearizability without stumbling.

Monte Carlo-ing Your Eventual Consistency Bets

Jan 10, 2012   //   by Karen Lopez   //   Blog, Database  //  No Comments

One of the features of not-only-SQL (NoSQL) data storage systems is the concept of eventual consistency (via Wikipedia):

Eventual Consistency… means that given a sufficiently long period of time over which no changes are sent, all updates can be expected to propagate eventually through the system and all the replicas will be consistent.

For those of us coming from a transactional system point of view, eventual consistency can be mind-boggling at first. Thinking about data being presented in an inconsistent manner is usually seen as a data quality failure — something to be avoided. But in non-transactional systems it’s worth the trade-off for speed and scalability. Think about your Facebook page for a minute: how bad would it be if one of your friend’s updates was not visible to you at the same time it was visible to someone else, but eventually you’d be able to see that update?

Paul Cannon has a great write up on using tools to estimate your eventual consistency with Cassandra:

"The best part is that they also provided the world with an interactive demo, which lets you fiddle with N, R, and W, as well as parameters defining your system’s read and write latency distributions, and gives you a nice graph showing what you can expect in terms of consistent reads after a given time.

See the interactive demo here.

This terrific tool actually runs thousands of Monte Carlo simulations per data point (turns out the math to create a full, precise formulaic solution was too hairy) to give a very reliable approximation of consistency for a range of times after a write."
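We can’t reproduce their full simulator here, but a stripped-down Monte Carlo sketch shows the idea: simulate many writes, model how long replication takes to reach each replica, and count how often a read started t milliseconds later sees the new value. The replica count, quorum sizes and the exponential latency model below are made-up illustration parameters, not measurements from any real system:

```python
import random

def p_consistent(t_ms, n=3, r=1, mean_latency_ms=10.0, trials=10_000):
    """Estimate the probability that a read of r random replicas, started
    t_ms after a write to an n-replica system, returns the new value."""
    hits = 0
    for _ in range(trials):
        # Each replica receives the write after a random exponential delay.
        arrivals = [random.expovariate(1 / mean_latency_ms) for _ in range(n)]
        # The read contacts r randomly chosen replicas; it is consistent if
        # any of them already holds the new value at time t_ms.
        read_set = random.sample(range(n), r)
        if any(arrivals[i] <= t_ms for i in read_set):
            hits += 1
    return hits / trials

print(p_consistent(0.0))    # immediately after the write: almost always stale
print(p_consistent(100.0))  # 100 ms later: almost certainly consistent
```

Crank up r (read more replicas) or wait longer, and the curve climbs toward 1.0, which is exactly the trade-off the interactive demo lets you explore.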

Being able to plan your architecture to best fit the business need is what is important, not necessarily data purity at the cost of speed or reliability.  Again, that sounds weird to a profession that has focused on fighting to keep data integrity on the radar of management, but the best design decisions are made by balancing cost, benefit and risk.  Those of us in the data world have come to understand that eventually consistent is often the best solution.  Even if it feels weird.

Having tools that help us understand how to best architect the trade-offs is the first step in delivering the right data consistency for what the business needs.
