
Figuring out Consistency Levels in Azure Cosmos DB

Apr 18, 2018   //   by Karen Lopez   //   Azure, Blog, Cosmos DB, Data, Database, Database Design  //  No Comments

Azure Cosmos DB's five levels of consistency: Strong, Bounded Staleness, Session, Consistent Prefix, and Eventual

I have to admit: the first time I heard the term and the explanation behind “Eventual Consistency”, I laughed. That’s because I’ve spent my whole career fighting the good fight to ensure data is consistent. That’s what transactions are for. Now fast forward several years, and we data professionals understand that some data stories don’t require strict consistency for every reader of the data.

The key word in that statement is reader. For the most part, we still don’t want inconsistent writes.

Consistency in the real world is a continuum from strictly consistent to eventually consistent. Notice that consistency is still the goal. But because it’s a continuum, there are many consistency schemes along the way. I’ve always struggled a bit with understanding and explaining these levels.

We need these consistency levels because of the CAP Theorem, which says a distributed system can deliver at most two of Consistency, Availability, and Partition Tolerance. This is mostly due to physics: if I have distributed the same data over multiple locations, I need to give up one of the CAP properties to make the system work.

Let’s take a look at what the Cosmos DB documentation says about consistency levels (feel free to just scan this):

Consistency levels

You can configure a default consistency level on your database account that applies to all collections (and databases) under your Cosmos DB account. By default, all reads and queries issued against the user-defined resources use the default consistency level specified on the database account. You can relax the consistency level of a specific read/query request in each of the supported APIs. There are five consistency levels supported by the Azure Cosmos DB replication protocol that provide a clear trade-off between specific consistency guarantees and performance, as described in this section.

Strong:

  • Strong consistency offers a linearizability guarantee with the reads guaranteed to return the most recent version of an item.
  • Strong consistency guarantees that a write is only visible after it is committed durably by the majority quorum of replicas. A write is either synchronously committed durably by both the primary and the quorum of secondaries, or it is aborted. A read is always acknowledged by the majority read quorum, a client can never see an uncommitted or partial write and is always guaranteed to read the latest acknowledged write.
  • Azure Cosmos DB accounts that are configured to use strong consistency cannot associate more than one Azure region with their Azure Cosmos DB account.
  • The cost of a read operation (in terms of request units consumed) with strong consistency is higher than session and eventual, but the same as bounded staleness.

Bounded staleness:

  • Bounded staleness consistency guarantees that the reads may lag behind writes by at most K versions or prefixes of an item, or by a time interval t.
  • Therefore, when choosing bounded staleness, the "staleness" can be configured in two ways: the number of versions K of the item by which the reads lag behind the writes, and the time interval t.
  • Bounded staleness offers total global order except within the "staleness window." The monotonic read guarantees exist within a region both inside and outside the "staleness window."
  • Bounded staleness provides a stronger consistency guarantee than session, consistent-prefix, or eventual consistency. For globally distributed applications, we recommend you use bounded staleness for scenarios where you would like to have strong consistency but also want 99.99% availability and low latency.
  • Azure Cosmos DB accounts that are configured with bounded staleness consistency can associate any number of Azure regions with their Azure Cosmos DB account.
  • The cost of a read operation (in terms of RUs consumed) with bounded staleness is higher than session and eventual consistency, but the same as strong consistency.

Session:

  • Unlike the global consistency models offered by strong and bounded staleness consistency levels, session consistency is scoped to a client session.
  • Session consistency is ideal for all scenarios where a device or user session is involved, since it provides monotonic reads, monotonic writes, and read-your-own-writes (RYW) guarantees.
  • Session consistency provides predictable consistency for a session, and maximum read throughput while offering the lowest latency writes and reads.
  • Azure Cosmos DB accounts that are configured with session consistency can associate any number of Azure regions with their Azure Cosmos DB account.
  • The cost of a read operation (in terms of RUs consumed) with session consistency level is less than strong and bounded staleness, but more than eventual consistency.

Consistent Prefix:

  • Consistent prefix guarantees that in absence of any further writes, the replicas within the group eventually converge.
  • Consistent prefix guarantees that reads never see out-of-order writes. If writes were performed in the order A, B, C, then a client sees either A; A, B; or A, B, C, but never an out-of-order sequence like A, C or B, A, C.
  • Azure Cosmos DB accounts that are configured with consistent prefix consistency can associate any number of Azure regions with their Azure Cosmos DB account.

Eventual:

  • Eventual consistency guarantees that in absence of any further writes, the replicas within the group eventually converge.
  • Eventual consistency is the weakest form of consistency where a client may get the values that are older than the ones it had seen before.
  • Eventual consistency provides the weakest read consistency but offers the lowest latency for both reads and writes.
  • Azure Cosmos DB accounts that are configured with eventual consistency can associate any number of Azure regions with their Azure Cosmos DB account.
  • The cost of a read operation (in terms of RUs consumed) with the eventual consistency level is the lowest of all the Azure Cosmos DB consistency levels.

    https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels

It’s clear, isn’t it? No? I’ll agree that reading text about consistency levels can be difficult to really understand. In searching for more examples, I found a wonderful write-up that uses animations plus a baseball analogy. In that post, Michael Whittaker references the 2013 CACM article Replicated Data Consistency Explained Through Baseball (ACM subscription required) by Doug Terry of Microsoft Research. If you don’t have access to the ACM library (you definitely should, by the way), you can find videos of talks he has given on this topic on the web.

Michael also has a more complex post on Visualizing Linearizability.  This is a topic I want to know more about, but first I have to tackle my challenge of saying Linearizability without stumbling.
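
If you’d like to experiment with these trade-offs yourself, here is a minimal sketch of configuring the consistency level on a client, assuming the azure-cosmos Python SDK (v4); the endpoint, key, and database/container names are placeholders I made up, not values from the documentation above:

```python
# A minimal sketch, assuming the azure-cosmos Python SDK (v4).
# The endpoint, key, and database/container names are placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient(
    "https://myaccount.documents.azure.com:443/",  # hypothetical endpoint
    credential="<account-key>",
    # One of: Strong, BoundedStaleness, Session, ConsistentPrefix, Eventual.
    # A client can only relax (weaken) the account default, never exceed it.
    consistency_level="Session",
)

container = client.get_database_client("mydb").get_container_client("items")

# Reads through this client get session guarantees: monotonic reads,
# monotonic writes, and read-your-own-writes within this client's session.
item = container.read_item(item="item-id", partition_key="partition-value")
print(item)
```

Note the direction of the override: per the documentation above, you can relax consistency below the account default for a specific client or request, but you can never strengthen it beyond the default.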

How Deep is My Non-Love? Nested Dependencies and Overly Complex Design

Dec 4, 2017   //   by Karen Lopez   //   Blog, Data Modeling, Database, Database Design, SQL Server, WTF  //  No Comments

Relational databases have this nifty concept of objects (just things, not code objects) being dependent upon other things. Sometimes those dependencies exist due to foreign key constraints, others via references to other objects. One example of the latter can be found in VIEWs. A database VIEW is an object that references TABLEs or other VIEWs. Of course, each of those referenced VIEWs must in turn reference TABLEs or yet another VIEW. And it’s that “or yet another VIEW” that can get modelers into trouble.

I reviewed a database design that had massively dependent VIEWs.  How did I know that? I used a proper data modeling tool to look at all the dependencies for one central VIEW.  And this is what my data modeling tool showed me:

Data Model with hundreds of dependencies (lines) between a handful of objects (squares)

That diagram shows how ONE VIEW is related to a whole bunch of other VIEWs and TABLEs in that design.  In reviewing the model, I saw that many of the VIEWs appeared to be duplicates or had very high overlap of content with other VIEWs. 

How do VIEWs Like This Happen?

There are many reasons one would create a nested VIEW. Like anything in a hierarchy, you could have objects that are used both independently and as part of a group on a regular basis. But that only explains one level of a VIEW hierarchy (nest). What about VIEWs that are nested dozens of levels deep? And why would a database have such a complex design around one VIEW? These are the most common reasons I run into bad practices with VIEWs (see the sketch after this list for one way to measure the nesting):

  • Designers who don’t understand the massive performance loss from massively nested VIEWs
  • Designers who design for theory, not for real world data stories
  • Designers who have no idea they are referencing another VIEW when they design their VIEW
  • Designers who are following a worst practice of creating a VIEW for every report and every window in an application
  • Designers who don’t collaborate with other designers and create their own set of VIEWs and dependencies
  • Designers who are compensated for doing work fast and not well
  • Designers who use DDL to do design, therefore never seeing the complexity of their designs
  • Data Governance policies that let anyone create objects in a database
  • A team environment where “everyone is a generalist”.
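
One way to see whether your own database suffers from this is to walk the dependency catalog. Here’s a hedged sketch (Python with pyodbc against SQL Server’s sys.sql_expression_dependencies catalog view; the connection string is a placeholder, and this is an illustration, not a tested tool) that estimates how many levels of nested VIEWs sit beneath each VIEW:

```python
# Sketch: report the levels of nested VIEWs beneath each VIEW in a
# SQL Server database by walking sys.sql_expression_dependencies
# with a recursive CTE. Connection details are placeholders.
import pyodbc

NESTING_QUERY = """
WITH view_chains AS (
    -- Anchor: every view starts a chain at depth 1
    SELECT v.object_id AS current_id, v.name AS root_view, 1 AS depth
    FROM sys.views AS v
    UNION ALL
    -- Recurse: follow only view-to-view references
    SELECT rv.object_id, c.root_view, c.depth + 1
    FROM view_chains AS c
    JOIN sys.sql_expression_dependencies AS d
      ON d.referencing_id = c.current_id
    JOIN sys.views AS rv
      ON rv.object_id = d.referenced_id
)
SELECT root_view, MAX(depth) - 1 AS nested_view_levels
FROM view_chains
GROUP BY root_view
ORDER BY nested_view_levels DESC;
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"  # placeholder
)
for root_view, levels in conn.execute(NESTING_QUERY).fetchall():
    print(f"{root_view}: {levels} level(s) of nested VIEWs")
```

Anything reporting more than a level or two of nesting deserves a look in a proper data modeling tool.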

I could go on. While I can’t go into details here, in my review I recommended a complete refactoring of this overly complex design. My guess is that this complexity was contributing to the performance problems experienced in this application. I also recommended that a professional designer be engaged to refactor other issues with the database design. I have no idea if this happened. But I doubted that this application was going to meet its large-scale web application goals.

Why Am I Sharing This?

Because so many design issues I find in reviews have the same causes as the performance and data quality issues I’ve listed above. I find that not using a real data modeling or design tool is the main contributing factor. There’s a reason why physical-world architects and engineers use drawings and architectural diagrams. Models are also how they make successful modifications to the things they build.

Yes, physical objects are different from software/application/database objects. My position is that these latter objects need models at least as much as buildings and devices do. We need tools to reverse engineer objects, to view the dependencies, to search, and to assess. In other words, to model. Engineering data solutions requires engineering tools like data modeling tools. And, yes, it requires data engineers who understand how to use those tools and how to model out the unnecessary complexity.

The Key to Keys at the North Texas SQL Server User Group – 17 March

Mar 15, 2016   //   by Karen Lopez   //   Blog, Data Modeling, Database, Database Design, DLBlog, Speaking, SQL Server  //  No Comments

I’m visiting Dallas this week to speak at the North Texas SQL Server User Group this Thursday.  I’ll be speaking about keys: primary keys, surrogate keys, clustered keys, GUIDs, SEQUENCEs, alternate keys…well, there’s a lot to cover about such a simple topic.  The reason I put this presentation together is I see a lot of confusion about these topics. Some of it’s about terminology (“I can’t find anything about alternate keys in SQL Server…what the heck is that, anyway”), some of it is misunderstandings (“what do you mean IDENTITIES aren’t unique! of course they are…they are primary keys!”), some of it is just new (“Why the heck would anyone want to use a SEQUENCE?”).
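
On that “IDENTITIES aren’t unique” misunderstanding: an IDENTITY property only generates values; nothing prevents duplicates unless you also declare a PRIMARY KEY or UNIQUE constraint. Here’s a hedged sketch (T-SQL issued through Python/pyodbc, with a made-up table name and placeholder connection string) of how duplicates can sneak in:

```python
# Sketch (hypothetical table name): IDENTITY alone does not guarantee
# uniqueness. Without a PRIMARY KEY or UNIQUE constraint, a reseed
# lets the same value be generated twice.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"  # placeholder
)
cur = conn.cursor()
cur.execute("CREATE TABLE dbo.Demo (Id INT IDENTITY(1,1), Name VARCHAR(50));")
cur.execute("INSERT INTO dbo.Demo (Name) VALUES ('first');")   # Id = 1
cur.execute("DBCC CHECKIDENT ('dbo.Demo', RESEED, 0);")        # reseed to 0
cur.execute("INSERT INTO dbo.Demo (Name) VALUES ('second');")  # Id = 1 again!
for row in cur.execute("SELECT Id, Name FROM dbo.Demo;").fetchall():
    print(row.Id, row.Name)  # two rows, both with Id = 1
conn.commit()
```

Declare the key explicitly and that second insert fails instead, which is exactly what you want.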

We’ll be chatting about all these questions and more on Thursday, 17 March at the Microsoft venue in Irving, Texas starting at 6PM.

Attendance is free, but you need to register at http://northtexas.sqlpass.org/ to help organizers plan for the event.

Don’t worry if you don’t know about SQL Server or don’t use it: this presentation will focus on some SQL Server specific features, but the discussion is completely portable to other DBMSs.

So many of us have learned database design approaches from working with one database or data technology. We may have used only one data modeling or development tool. That means our vocabularies around identifiers and keys tend to be product specific. Do you know the difference between a unique index and a unique key? What about the difference between RI, FK and AK? These concepts span data activities, and it’s important that your team members understand each other and where they, their tools, and their approaches need to support these features. We’ll look at the generic and proprietary terms for these concepts, as well as where they fit in the database design process. We’ll also look at implementation options in SQL Server and other DBMSs.

Hope to see you there!

7 Databases in 170 Minutes: Workshop at NoSQLNow!

Jan 26, 2016   //   by Karen Lopez   //   Blog, Database, Database Design, DLBlog, Events, NoSQL, Speaking, Training  //  No Comments


My friend Joey D’Antoni ( @jdanton | blog ) and I will be giving a workshop at NoSQLNow! about new database and datastore technologies like Hadoop, Neo4j, Cassandra, Vertica, DocumentDB, and others. This will be a fast-paced, demo-heavy, practical session for data professionals. We’ll talk about where a modern data architecture would best use these technologies and why it’s not an either/or question for relational solutions in a successful enterprise. And, as always, our goal is to make the time we spend fun and interactive. This session will be a great starting point for other sessions on Monday that go into data modeling for NoSQL, as well as for all the other in-depth, database-specific talks the rest of the week.

Sunday, April 17, 2016
Level: Intermediate

We’ve been busy keeping relational data consistent, high quality, and available. But over the last few years, new database and datastore technologies have come to the enterprise with different data stories. Do we need all our data to be consistent everywhere? What does data quality mean for analytics? Will we still need relational databases?

Learn how traditional and new database technologies fit in a modern data architecture. We will talk about the underlying concepts and terminology such as CAP, ACID and BASE and how they form the basis of evaluating each of the categories of databases. Learn about graph, Hadoop, relational, key value, document, columnar, and column family databases and how and when they should be considered. We’ll show you demos of each.

Finally, we will wrap up with 7+ tips for working with new hybrid data architectures: tools, techniques and standards.

 REGISTER

Use code “DATACHICK” to save:

$100 off Tutorials Only and Seminar Only registrations, and $200 off Full Event, Conference + Tutorials, Conference + Seminar, and Conference Only registrations.

Super early registration ends 29 January, so take advantage of both discounts now (yes, they stack!).

Follow Along Tech Field Day 10 #TFD10 Austin – Updated with Video Streaming

Jan 25, 2016   //   by Karen Lopez   //   Blog, Cloud, Database, DLBlog, Events, Professional Development, Speaking  //  No Comments

TFD Logo

Last year I participated in the first Data Field Day in San Jose.  I’m honoured to be a delegate for the tenth Tech Field Day which follows the same format.  On 3-5 February I’ll be in Austin, Texas visiting with vendors in the software, hardware and virtualization world.  There will be twelve of us participating, along with our fearless host, Stephen Foskett ( @SFoskett ).

We will be visiting these vendors during TFD10:

TFD10 vendor lineup

At each vendor visit, their presentation will be livestreamed while we discuss their products and services and ask questions. You can follow that stream above. Delegates are known for their brutal honesty, their insight, and even some fun observations.

You can also follow along on Twitter with the hashtag #TFD10, and post your own questions for these sessions using that hashtag.

What I love about field days is the mix of delegates with a wide background in business, tech, innovation, entrepreneurship, and data. This breadth means that we, as a team, look at the technology and business from a variety of viewpoints. And you get to watch it all live.

BTW, the next Data Field Day is scheduled for 8-10 June. If you have products or services you’d like to present to a team of independent data experts, contact me.

I hope you can follow along. It’s a great chance to see real world tech innovation discussions.

Database Design Throwdown, Texas Style

Jan 21, 2016   //   by Karen Lopez   //   Blog, Data, Data Modeling, Database, Database Design, DLBlog, Events, Fun, Snark, Speaking, SQL Server  //  3 Comments

SQLSaturday #461 - Austin 2016

It’s a new year, and I’ve given Thomas LaRock ( @sqlrockstar | blog ) a few months to recover and ramp up his training since our last Throwdown. The trophies from all my wins are really cluttering my office, and I feel bad that Tom has not yet had a chance to claim victory. So we will be battling again in just a few days.

I’ll be dishing out the knowledge, along with a handkerchief for Tom to wipe up his tears, at SQL Saturday #461 Austin, TX on 30 January 2016. This full-day, community-driven event features real database professionals giving free presentations on SQL Server and Data Platform topics. All you need to do is register (again, it’s free) before all the tickets are gone.

Database Design Throwdown

Speaker(s):  Karen Lopez Thomas LaRock

Duration: 60 minutes

Track: Application & Database Development

Everyone agrees that great database performance starts with a great database design. Unfortunately, not everyone agrees which design options are best. Data architects and DBAs have debated database design best practices for decades. Systems built to handle current workloads are unable to maintain performance as workloads increase. Attend this new and improved session and join the debate about the pros and cons of database design decisions. This debate includes topics such as logical design, data types, primary keys, indexes, refactoring, code-first generators, and even the cloud. Learn about the contentious issues that most affect your end users and how to avoid them.

One of the other great benefits of attending these events is that you get to network with other data professionals who are working on projects just like yours…or ones you will likely work on at some point.

Join us and other data pros to talk about data, databases, and projects. And make sure you give a #datahug to Tom after the Throwdown. He’s gonna need it.
