
Confusing Community with Sales

May 23, 2016   //   by Karen Lopez   //   Blog, Ethics, Events, Professional Development, Speaking, SQL Server, WTF  //  23 Comments

Litter Box Marketing

There have been some blog posts floating around about a new PASS Summit policy.  Most of the posts have been either misleading or ill-informed about why this new rule came about.  Last year there was a sh*tshow of bad marketing and sales practices:

  • Two vendors did a bulk drop of branded promotional items in the Community Zone.  They literally turned an area intended to be about chapters, networking, and #SQLFamily into their own company litter box.
  • A vendor left stacks of promotional items on booths of sponsors in the exhibit areas.  Yes, a vendor who did not pay to sponsor the event used the booths that other vendors paid for to attempt to distribute their marketing materials.
  • I heard of other things happening from sponsors, but did not witness them.  They were right along the lines of those two things above.

So PASS has come out with a new rule about exchanging stuff at the PASS Summit.  They are now going to attempt to limit exchanges to business cards only.  I think this is way too specific of a rule definition, but unlike the other bloggers, instead of making this a post about how awful the board is, I’m going to offer up win-win-win alternatives below.

Some of the comments on these posts have been made in an attempt to soften the “guerrilla marketing” bad behaviours I mentioned.  They claim that the board wants to limit small, personal exchanges of gifts like ribbons and stickers, both very common conference exchanges.  In the space community, these also include mission patches and pins. I don’t believe the board wants that, but they have certainly put that in writing.

First, the rule as written right now applies only to speakers. I’m not sure whether anything similar applies to attendees, but I’d want any such rules to apply to everyone at the event.

Feral Cats and What’s That Smell?

The issue isn’t about personal exchanges of gifts. The issue, as all of us know here but are pretending we don’t, is the literal carpet bombing of commercial collateral, including promotional, branded swag in community areas, empty session rooms, empty tables, restrooms, hallways, charging areas, etc.  I do not support the claims that this type of feral-cat-like spraying of vendor materials is “Community over Sponsors” behaviour. It’s about sales over members. Don’t kid yourself. Consultants are vendors. InfoAdvisors is a vendor. I’m a vendor at these events because I work for a vendor.


All that spraying smells. It’s only community if your business belongs in a back alley. It’s only community if you think of attendees as “prospective invoices”.  It’s all litter box marketing.

That isn’t about gifts. It’s not about community.

And what has happened is that the “arms race” mentioned in one post has now become such an embarrassment to the community that our professional association has had to step in and make a rule.

Update: One vendor claims that the sponsors asked him to drop swag on their tables.  “It just looks like litter boxing” (paraphrased). The two events I witnessed involved the sponsors throwing the swag in the garbage and asking “WTH was that?”  I’m going to guess that “being invited to give out swag at the booths” is a giant misunderstanding.  Ha ha. : ).

The New Rule Isn’t Right

I agree that limiting exchanges to business cards is an unacceptable way to draw the line on this “I don’t see you all as community but as potential invoices” behaviour. But the real fault lies with the people who need to treat the event as a “sell-first, avoid you later” event.

Saying they can’t afford to have a booth isn’t accurate.  Many smaller vendors have booths at SQL Saturdays and at the big show.  It’s very affordable, especially if you share with other vendors, which is also a great way to have a booth, because who wants to man/woman/kitten a booth for the entire conference?

Should you have to have a booth to exchange stickers or ribbons? No.  But when sponsors get other people’s swag dropped on their booths, or when the community zone becomes a porta-potty for marketing materials, we’ve lost our path.  No matter what someone tells you, that’s not community. It’s seeing our event not as a Connect. Share. Learn. event. It’s about seeing our event as a Speak and Sell event.

Blame for the new rule goes 100% to the folks who did these things.  Okay, maybe I’ll blame the board 10% for coming up with a new rule that isn’t quite a win-win-win solution.

This Ain’t the Tea Party

If you think telling sponsors “we’ll take your money, but others can turn the community zone into their own rogue exhibit hall” is a good conference sales pitch, I suggest we just give away exhibit booths and charge everyone the real price it costs to put this on.   I’m guessing that registration will cost about the same as a 7-day cruise.  Or it will be like a local user group meeting, with fewer people.   Austerity might be your political stance.  Telling people to just change jobs if their employer won’t pay $7k for them to attend Summit is a nonstarter.

The fact of the matter is that community events the size of Summit (thousands) can’t happen without sponsors.  Ensuring that sponsors get what they pay for is not “putting sponsors over the needs of attendees”.  It’s about running an event that is affordable and sustainable.  Sure, it’s a balance.  But pretending that somehow non-sponsoring vendors should be allowed to use sponsor resources for their own needs is naïve at best.  At worst, it’s painting the situation as being something it is not.

Data.  Get Your Data Right.

It’s misleading to say that these rules happened because PASS wants to cater to sponsors over community. A few overly-greedy, it’s-all-about-money people have caused this. Focus your ammo on the right malicious “users” of PASS.

What I Want the Rule to Be

I’ve talked to board members and PASS staff.  This is what I want the rule to be.  I think it’s a win-win-win for attendees, consultants, and sponsors.

 

Personal, one-on-one exchanges of low-cost items like the ones below should be allowed and even encouraged.

  • stickers
  • business cards
  • patches
  • buttons & pins
  • temp tattoos
  • ribbons
  • candy
  • stamps
  • etc.

 

I don’t care if those things have your name, your favourite tagline, your picture, your cat-owner’s photo, or your logo.  The key here is one-on-one, personal exchanges of low-value, often fun, things.  I also don’t want to have a detailed list.  People love to have a check box set of rules, but that just leads to people finding loopholes.  Heck, I love sharing space swag at non-space events. Especially collectibles that are older than most of the attendees.

Update: What do I mean by exchanges?  I mean giving out these low-cost items in trade for the other person’s similar item or for some other value.  One year at EDW I asked people to tell me they “loved their data” to get a ribbon.  Hearing people say that was a small but important value to me. I may have done that at Summit one year as well.  The key is these are still one-on-one exchanges. And none of them happened from the podium.  Selling while presenting should be a paid session.

Ribbons, stickers, and stamps are all part of the geek community, and I want that to continue to be a part of Summit.

 

Bulk distributions of marketing materials, flyers, and branded materials should require some sort of sponsorship level.  As should the distribution of more expensive swag, cars, real tattoos, kittens, and $20 bills.

Distribution of items on sponsor booths without their permission should not be allowed.  Bulk distribution on the exhibit floor without being a sponsor or in the Community Zone should not be allowed.

 

The Community Zone Should Be a Sales-free Zone

The Community Zone should be sales-free, as far as I’m concerned. It’s the violation of this rule that I think should cause people not to be invited back to the event.  Attendees should have one area where they aren’t treated like invoices.  Having to put this into a rule makes me sad. People should just understand this is how life works.

Maybe we need a $500 sponsorship level for those vendors whose business is doing so poorly they can’t afford a booth.  Or for independent consultants.  Again, this is for people and organizations that want to do mass distribution of marketing materials and collateral, not personal exchanges.

A professional association should indeed help all members be great at what they do.  Whether they are consultants, software vendors, contractors, full- or part-time employees, retired, whatever.  But that doesn’t mean that a professional association event must provide a sales opportunity in every part of the event.

This proposal is a win-win-win because attendees can keep doing what we’ve always done.  Vendors can still do their sales things, but appropriately.  Vendor sponsors can keep getting value out of their sponsorship dollars without some other vendor being a feral cat and bragging that “sponsoring a booth is stupid when you can just do guerrilla marketing.”  Our sponsors are part of our community, too.  In fact, organizations can be members of PASS if they sign up.

Finally…

The world does have bigger problems.  But the posts that have been coming out have not been giving the full picture, nor have they offered up a balanced solution. I think it’s good that this year several people came forward to complain to the board that the stuff people have been doing has crossed a line.   It may not really be an “arms race”. But it has been escalating.  Houston, we’ve had a problem. It stinks. It’s time to fix it.  Let’s all work together to get it right, before the urine smell kills the whole event.  If you have other ideas, I’d love to hear them.

This is some of the feedback I got for speaking up.
For the record: I’ve never attended SQL Saturday Ottawa (there’s always been a scheduling conflict). I was not in Ottawa that day. I was at a NASA Armstrong Teacher Educator event.

This is how nasty this whole discussion has become. A vendor took a bunch of my tweets over the last year, some about these behaviours, some about my dislike of the things that Mr. Trump says, and some about God knows what else, and made a video saying I’m mean. Then this video became a Facebook post on the vendor’s own Facebook wall.

Never Been to Ottawa.

A few people spoke up and this commenter deleted his comment after a while. The vendor did not delete it. The commenter did.  Remember this when you are thinking about win-win-win solutions. This is what’s at stake. This is why bad behaviour leads to more bad behaviour. I’ll still keep blogging about it. And people will still comment on ME instead of the issue. It’s what is broken with our community. Talk about bad behaviours, not people.

Happy 20th Birthday, ER/1…ER/Studio

Mar 29, 2016   //   by Karen Lopez   //   Blog, Data Modeling, Fun, News  //  1 Comment

 

I was preparing for my webinar tomorrow for Idera when I decided to look up how long ER/Studio Data Architect has been around.  I was happy to see from the press release for ER/1 (what it was called before they got into a bit of a trademark issue with the ERwin* folks) that it was released on 15 March 1996.

I started using ER/1 not too long after that.

 

Some Interesting ER/Studio Trivia

  • ER/1 listed for $1399 a seat, but there was a special deal for a few months to get it for $899.
  • It could handle “hundreds of entities”
  • It did not feature bi-directional updating of Logical to Physical
  • It did not yet feature on-diagram editing
  • You can still download the Documentation for ER/1 1.0
  • It supported:
    • Oracle 7
    • Sybase 11 and 10
    • Microsoft SQL Server 6
    • Informix
    • DB2
    • SQL Anywhere
    • Watcom
    • SQL Base
  • “ER/1 can x-ray your databases and extract their structure” < Love this.
  • It followed the IDEF1X methodology, adopted as part of the Federal Information Processing Standards
  • Submodelling (subject area diagramming) was not supported yet.
  • There was a separate product, ER/1 for Borland InterBase

Press Release

NEWS RELEASE

March 15, 1996

Embarcadero Technologies Ships ER/1 Data Modeling Tool

San Francisco, CA, March 15, 1996, Embarcadero Technologies today announced the general availability of ER/1, a new visual, entity-relationship modeling tool. ER/1 supports all major SQL database platforms, including Oracle7, Sybase 11 and 10, Microsoft SQL Server 6, Informix, DB2, SQL Anywhere, Watcom and SQL Base.

ER/1 delivers a slew of features that promote high-quality, functionally correct data models as well as unparalleled power, ease-of use and value. Its highly customizable design allows you to create visually appealing diagrams with such tools as dockable toolbars, diagram zooming, and print scaling. Powerful inheritance logic is built into ER/1 providing referential integrity throughout your data model. In addition, ER/1 provides you with the following major features to facilitate the creation of both logical and physical designs:

Accurate and Quick Reverse Engineering

ER/1 x-rays your databases and extracts their structure into entity-relationship diagrams capturing the complete definition of your tables, including constraints, primary keys, foreign keys, indexes, table and column comments and all table dependencies.

Automatic Database Builds

ER/1 uses an ODBC connection to create a physical implementation of the logical database design you created in ER/1. This one-step process involves the creation of tables, indexes, triggers, stored procedures, views, defaults, rules and user datatypes and properly orders the creation of these objects to eliminate dependency errors.

Data Dictionary

This feature promotes code-reuse by providing a central repository to store rules, defaults, and user-defined datatypes. Once you establish a business rule as a Data Dictionary object, it is re-usable throughout your diagram. In addition, the Data Dictionary supports global updates of these objects. Just make the change once in the dictionary and ER/1 automatically propagates these changes throughout your diagram.

Comprehensive Reports

ER/1 offers the most comprehensive reporting of any data modeling tool. It completely documents both your logical and physical designs and generates professionally formatted and structured reports at the summary or detail level.

Code Generation for Team Development

ER/1 can write SQL source code files ready for version control and team development. To facilitate team programming, you can generate separate source code files.

Pricing/Availability:

ER/1 for Windows 95 and Windows NT is priced at $1399 per user. Through April 30, 1996, Embarcadero Technologies is offering a special introductory price of only $899 per user.

About Embarcadero Technologies, Inc.:

Embarcadero Technologies is a software products company specializing in tools to design, create, administer, query, program and monitor Oracle, Sybase, Microsoft, and Informix databases. Embarcadero offers a suite of products marketed to corporate customers and database professionals worldwide and has rapidly become the leading provider of database administration tools for Sybase and Microsoft SQL Server. Embarcadero’s software has been recognized for excellence with outstanding independent product reviews conducted by PC Week, DBMS, Microsoft BackOffice Magazine and Databased Advisor.

Data Modeling Tools are Experienced

One of the reasons why some people find data modeling tools overwhelming is that they’ve been around for more than 20 years.  That’s a long time for these tools to get more customized, more feature-rich, more complex.

I should give a shout out to Greg Keller, who was the product manager during the time I started using ER/Studio.

So happy birthday, Embarcadero…I mean…Idera…ER/1….ER/Studio.  I’m going to have a cupcake in your honor! Maybe twenty.

*Say “ER One”  Then say “ER WIN”.  Yeah, almost a SOUNDEX trademark issue.

The Key to Keys at the North Texas SQL Server User Group – 17 March

Mar 15, 2016   //   by Karen Lopez   //   Blog, Data Modeling, Database, Database Design, DLBlog, Speaking, SQL Server  //  No Comments

I’m visiting Dallas this week to speak at the North Texas SQL Server User Group this Thursday.  I’ll be speaking about keys: primary keys, surrogate keys, clustered keys, GUIDs, SEQUENCEs, alternate keys…well, there’s a lot to cover about such a simple topic.  The reason I put this presentation together is I see a lot of confusion about these topics. Some of it’s about terminology (“I can’t find anything about alternate keys in SQL Server…what the heck is that, anyway”), some of it is misunderstandings (“what do you mean IDENTITIES aren’t unique! of course they are…they are primary keys!”), some of it is just new (“Why the heck would anyone want to use a SEQUENCE?”).

We’ll be chatting about all these questions and more on Thursday, 17 March at the Microsoft venue in Irving, Texas starting at 6PM.

Attendance is free, but you need to register at http://northtexas.sqlpass.org/ to help organizers plan for the event.

Don’t worry if you don’t know about SQL Server or don’t use it: this presentation will focus on some SQL Server specific features, but the discussion is completely portable to other DBMSs.

So many of us have learned database design approaches from working with one database or data technology. We may have used only one data modeling or development tool. That means our vocabularies around identifiers and keys tend to be product specific. Do you know the difference between a unique index and a unique key? What about the difference between RI, FK and AK? These concepts span data activities, and it’s important that your team members understand each other and where they, their tools, and their approaches need to support these features. We’ll look at the generic and proprietary terms for these concepts, as well as where they fit in the database design process. We’ll also look at implementation options in SQL Server and other DBMSs.
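
To make some of that terminology concrete, here’s a minimal T-SQL sketch (the table and names are mine, invented purely for illustration, not part of the presentation materials) showing a surrogate primary key, an alternate key declared as a unique constraint, a separate unique index, and a SEQUENCE used alongside IDENTITY:

```sql
-- Hypothetical example, not from the session itself.
CREATE SEQUENCE dbo.CustomerNumberSeq AS int START WITH 1000 INCREMENT BY 1;

CREATE TABLE dbo.Customer
(
    CustomerID     int IDENTITY(1,1) NOT NULL,   -- surrogate value; IDENTITY by itself does NOT guarantee uniqueness
    CustomerNumber int NOT NULL
        CONSTRAINT DF_Customer_Number DEFAULT (NEXT VALUE FOR dbo.CustomerNumberSeq),
    Email          nvarchar(320) NOT NULL,
    LastName       nvarchar(100) NOT NULL,

    -- Primary key: here it is also the clustered key, which is SQL Server's default
    CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerID),

    -- Alternate key: a candidate key declared as a UNIQUE constraint
    CONSTRAINT AK_Customer_Email UNIQUE (Email)
);

-- A unique index also enforces uniqueness, but it is an index object rather than
-- a declared constraint, so it shows up differently in metadata and modeling tools.
CREATE UNIQUE INDEX IX_Customer_CustomerNumber ON dbo.Customer (CustomerNumber);
```

Note that the uniqueness of CustomerID comes from the primary key constraint, not from IDENTITY, which is exactly the kind of terminology gap the session digs into.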

Hope to see you there!

ERwin Modeling Products Sale is Final

Mar 2, 2016   //   by Karen Lopez   //   Blog, Data Modeling, DLBlog, News  //  1 Comment


CA announced today that CA ERwin Data Modeler and the rest of the modeling business (people, content, communities, etc.) have been sold to Parallax Capital Partners:

CA has completed the sale of the ERwin data modeling business to Parallax Capital Partners, a private equity firm with an exceptional track record of transitioning divisions, subsidiaries and product lines into successful stand-alone entities.

The transaction, which closed on February 29, is a win-win scenario that was carefully designed to ensure mutual value and a seamless transition for customers, partners, and each of the approximately 60 ERwin employees worldwide. This move also aligns with our global partner strategy, which is an important component to CA’s growth model.

With this divestiture, ERwin is an independent company that will continue to be led by its current management team.

Parallax Capital is a private equity firm that specializes in lower middle market (between $5 and $100 million) software companies.  In looking at their current portfolio, I recognize only a couple of companies, with Micro Focus being the one that I recognized instantly, but they sold that in the early 2000s.  Parallax owns a diverse set of companies, so I’m not sure where they will go with the ERwin Modeling product set.

What I do know is that CA was clear after the failed Embarcadero purchase attempt that they were still intending to sell off ERwin, so a purchase is important to the ERwin user market.  I have no other information and expect that initial communications will be that everything is remaining the same until it changes.

This quote: “This move also aligns with our global partner strategy, which is an important component to CA’s growth model.” appears to imply that CA did not consider data modeling a growth area of the enterprise software business.  As sad as that is, I agree.

My initial feelings are that having the ERwin business owned by an entity that does not own a competing product is likely best for customers.  Competition is good, for technical quality, innovation and pricing.

UPDATE: a new, more upbeat announcement has gone up on ERwin.com http://erwin.com/resources/news/erwin-divested-from-ca-technologies/

What do you think the impact of this sale will be on you and the data modeling market?

Is Logical Data Modeling Dead?

Feb 16, 2016   //   by Karen Lopez   //   Blog, Data Modeling, Data Stewardship, Database Design  //  7 Comments

One of the most clichéd blogging tricks is to declare something popular as dead.  These click-bait, desperate posts are popular among click-focused bloggers, but not for me. Yet here I am, writing an “is dead” post.  Today, this is about sharing my responses to on-going social media posts. They go something like this:

OP: No one loves my data models any more.

Responses: Data modeling is dead.  Or…data models aren’t agile.  Or…data models died with the waterfalls. Or…only I know how to do data models and all of you are doing it wrong, which is why they just look dead.

I bet I’ve read that sort of conversation at least a hundred times, first on mailing lists, then on forums, now on social media.  It has been an ongoing battle for modelers since data models and dirt were discovered…invented…developed.

I think our issue around the love for data modeling, and logical data models specifically, is that we try to make these different types of models be different tasks.  They aren’t.  In fact, there are many types, many goals, and many points of view about data modeling.  So as good modelers, we should first seek to understand what everyone in the discussion means by that term.  And what do you know, even this fact is contentious.  More on that in another post.

I do logical data modeling when I’m physical modeling.  I don’t draw a whole lot of attention to it – it’s just how modeling is done on my projects.

Data Modeling is Dead Discussion

One current example of this discussion is taking place right now over on LinkedIn. Abhilash Gandhi posted:

During one of my project, when I raised some red flags for not having Logical Data Model, I was bombarded with comments – “Why do we need LDM”? “Are you kidding”? “What a waste of time!". The project was Data Warehouse with number of subject areas; possibility of number of data marts.

and

I have put myself into trouble by trying to enforce best practices for Data Modeling, Data Definitions, Naming Standards, etc. My question, am I asking or trying to do what may be obsolete or not necessary? Appreciate your comments.

There are responses that primarily back up the original poster’s feelings of being unneeded on modern development projects.  Then I added another viewpoint:

I’ll play Devil’s advocate here and say that we Data Architects have also lost touch with the primary way the products of our data modeling efforts will be used. There are indeed all kinds of uses, but producing physical models is the next step in most. And we have lost the physical skills to work on the physical side. Because we let this happen, we also have failed to make physical models useful for teams who need them.

We just keep telling the builders how much they should love our logical models, but have failed to make the results of logical modeling useful to them.

I’ve talked about this in many of my presentations, webinars (sorry about the autoplay, it’s a sin, I know)  and data modeling blog posts. It’s difficult to keep up with what’s happening in the modern data platform world.  So most of us just haven’t.  It’s not that we need to be DBAs or developers.  We should, though, have a literacy level of the features and approaches to implementing our data models for production use.  Why? I addressed that as well.  Below is an edited version of my response:

We Don’t All Have to Love Logical Data Modeling

First of all, the majority of IT professionals do not need to love an LDM. They don’t even need to need them. The focus of the LDM is the business steward/owner (and if I had my way, the customer, too). But we’ve screwed up how we think of data models as artefacts that are "something done on an IT project".  Sure, that’s how almost all funding gets done for modeling, and it’s broken. But it’s also the fact of life for the relatively immature world of data modeling.

We literally beat developers and project managers with our logical data modeling, then ask them “why don’t you want us to produce data models?” We use extortion to get our beautiful logical data models done, then sit back and wonder why everyone sits at another lunch table.

I don’t waste time or resources trying to get devs, DBAs or network admins to love the LDMs. When was the last time you loved the enterprise-wide AD architecture? The network topology? The data centre blueprints and HVAC diagrams?

Data Models form the infrastructure of the data architecture, as do conceptual models and all the models made that would fill the upper rows of the Zachman Framework. We don’t force the HVAC guys to wait to plan out their systems until a single IT application project comes along to fund that work. We do it when we need a full plan for a data centre. Or a network. Or a security framework.

But here we are, trying to whip together an application with no models. So we tell everyone to stop everything while we build an LDM. That’s what’s killing us.  Yes, we need to do it. But we don’t have to do it in a complete waterfall method.  I tell people I’m doing a data model. Then I work on both an LDM and the PDM at the same time. The LDM I use to drive data requirements from business owners, the PDM to start to make it actually work in the target infrastructure. Yes, I LDM more at first, but I’m still doing both at the same time. Yes, the PDM looks an awful lot like the LDM at first.
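
As a small, hypothetical sketch of what that looks like in practice (the entity and names below are invented for illustration, not from any real project), the same business content lives in both models; the PDM just layers platform decisions on top:

```sql
-- LDM view (what I review with the business owner):
--   Entity: Product
--   Attributes: Product Code (identifier), Product Name, List Price
--   Rules: every Product is uniquely identified by its Product Code,
--          and List Price can never be negative

-- PDM view (the same content, plus target-platform decisions):
CREATE TABLE dbo.Product
(
    ProductID   int IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_Product PRIMARY KEY,          -- surrogate key: a physical-design choice
    ProductCode varchar(20) NOT NULL
        CONSTRAINT AK_Product_Code UNIQUE,          -- the LDM identifier survives as an alternate key
    ProductName nvarchar(200) NOT NULL,
    ListPrice   decimal(19,4) NOT NULL
        CONSTRAINT CK_Product_ListPrice CHECK (ListPrice >= 0)  -- a business rule captured during LDM work
);
```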

Stop Yelling at the Clouds

The real risk we take is sounding like old men yelling at clouds when we insist on working and talking like it is 1980 all over again.  I do iterative data modeling. I’m agile. I know it’s more work for me. I’d love to have the luxury of spending six months embedded with the end users coming up with a perfect and lovely logical data model. But that’s not the project I’ve been assigned to. It’s not the team I’m on. Working against the team invites a demand that no data modeling be done and that database and data integration work be done by non-data professionals. You can stand on your side of the cubicle wall, screaming about how LDMs are more important, or you can work with the data modeling skills you have to make it work.

Are Your Data Models Agile or Fragile: Sprints
When I’m modeling, I’m working with the business team drawing out more clarity of their business rules and requirements. I am on #TeamData and #TeamBusiness. When the business sees you representing their interests, often to a hostile third party implementer, they will move mountains for you. This is the secret to getting CDMs, LDMs, and PDMs done on modern development projects. Just do them as part of your toolkit.  I would prefer to data model completely separately from everyone else. I don’t see that happening on most projects.

The #TeamData Sweet Spot

My sweet spot is to get to the point where the DBAs, Devs, QA analysts and Project Managers are saying "hey, do you have those database printouts ready to go with the DDL we just delivered? And do you have the user ones, as well?" I don’t care what they call them. I just want them to call them.  At that point, I know I’m also on #TeamIT.

The key to getting people to at least appreciate logical data models is to just do them as part of whatever modeling effort you are working on.  Don’t say “stop”.  Just model on.  Show your teams, don’t just tell them, where the business requirements are written down and where they live.  Then demonstrate how that leads to beautiful physical models as well.

Logical Data Modeling isn’t dead.  But we modelers need to stop treating it like it’s a weapon. Long Live Logical!

 

Thanks to Jeff Smith (@thatjeffsmith | blog ) for pointing out the original post.

Follow Up to State of the Union of Data Modeling 2016–Questions for You

Feb 1, 2016   //   by Karen Lopez   //   Blog, Data Modeling, DLBlog, Speaking  //  2 Comments


I had so many more questions I wanted to talk about during my recent State of the Union of Data Modeling 2016, but one hour goes by quickly when you have tools, industry, professionals, standards and user groups to cover.  I’m interested in your observations and comments about these questions:

  • Has data modeling accomplished all it needs to? Are we just in the maintenance phase of data modeling as a practice and profession?
  • What industry trends (tools, processes, methods, economics, whatever) are impacting (positive or negative) data modeling the most today?
  • How has the cost of data modeling changed since the 1980s?
  • How has the return on data modeling changed since the 1980s?
  • How has risk changed in data modeling since the 1980s?
  • Data modeling tools have so much feature maturity in them today, but prices have risen to reflect those changes.  How have the prices of enterprise data modeling tools impacted data modeling on enterprise projects?
  • Have you worked with any non-IDEF1X/IE data modeling notation recently?
  • Have you worked with any open source data modeling tools?
  • What new features/enhancements/changes would you like to see in data modeling tools? Processes? Notations?
  • Why haven’t we solved the “no one loves me or my models” problem more widely?

I’ll add my thoughts on these in the comments, but I’d like to hear your responses as well.

7 Databases in 170 Minutes: Workshop at NoSQLNow!

Jan 26, 2016   //   by Karen Lopez   //   Blog, Database, Database Design, DLBlog, Events, NoSQL, Speaking, Training  //  No Comments


My friend Joey D’Antoni ( @jdanton | blog ) and I will be giving a workshop at NoSQLNow! about new database and datastore technologies like Hadoop, Neo4j, Cassandra, Vertica, Document DB, and others.  This will be a fast-paced, demo-heavy, practical session for data professionals.  We’ll talk about where a modern data architecture would best use these technologies and why it’s not an either/or question for relational solutions in a successful enterprise. And, as always, our goal is to make the time we spend fun and interactive.   This session will be a great starting point for some other sessions on Monday that go into data modeling for NoSQL, as well as for all the other in-depth, database-specific talks the rest of the week.

Sunday, April 17, 2016
Level: Intermediate

We’ve been busy keeping relational data consistent, high quality, and available. But over the last few years, new database and datastore technologies have come to the enterprise with different data stories. Do we need all our data to be consistent everywhere? What does data quality mean for analytics? Will we still need relational databases?

Learn how traditional and new database technologies fit in a modern data architecture. We will talk about the underlying concepts and terminology such as CAP, ACID and BASE and how they form the basis of evaluating each of the categories of databases. Learn about graph, Hadoop, relational, key value, document, columnar, and column family databases and how and when they should be considered. We’ll show you demos of each.

Finally, we will wrap up with 7+ tips for working with new hybrid data architectures: tools, techniques and standards.

 REGISTER

Use code “DATACHICK” to save:

$100 off for Tutorials Only + Seminar Only Registration and $200 off for Full Event, Conference+Tutorials, Conference+Seminar, and Conference Only Registration.

Super early registration ends 29 January, so take advantage of both discounts now (yes, they stack!).
