
SQL Server 2014: New Datatype

Apr 1, 2014   //   by Karen Lopez   //   Blog, Data Modeling, Database Design, DLBlog, Fun, Parody, Snark, Space, SQL Server, WTF  //  18 Comments


Today is the general availability release date for the newest version of SQL Server, aptly named SQL Server 2014.  I'm excited about many of the new features being rolled out today, but the one that will impact data architects, modelers, and database designers most is the new datatype being introduced.  But first, for those of you who have your heads stuck in the deep piping and spit-managing of databases, some background about datatypes:

A datatype is a categorization of data items, based on the range and types of data that it can contain and a set of actions that can be validly taken against that data.

As such, applying a datatype to a column in a database makes it work as another type of constraint.  For example, a tinyint column can't hold my Starbucks name (Kitty) because it constrains the values to integers, and only a subset of all integers at that.
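
Want to see that constraint in action? A minimal, hypothetical sketch (the temp table and values are mine, not from any documentation):

-- tinyint accepts only whole numbers from 0 to 255, so it rejects everything else.
CREATE TABLE #DatatypeDemo ( FavouriteNumber tinyint );

INSERT INTO #DatatypeDemo (FavouriteNumber) VALUES (42);      -- works
INSERT INTO #DatatypeDemo (FavouriteNumber) VALUES (1000);    -- fails: arithmetic overflow
INSERT INTO #DatatypeDemo (FavouriteNumber) VALUES ('Kitty'); -- fails: conversion error

DROP TABLE #DatatypeDemo;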

The number and type of datatypes (yes, I'm being meta there) vary depending on the strength and quality of the tequila the DBMS product management teams were drinking at their last Vegas Blow Out team building retreat, as called for in the ISO Standards for databases, AKA

ISO/IEC JTC 1/SC 32 – Data management and interchange.  

One of the things that developers and DBAs will tell you is that choosing the right datatype is important for performance reasons.  And by that, they mean the smallest datatype you can fit most of the data in. And maybe a bit smaller.  Soooo much bad info out there, I know.  When Knowledge Conquers Fear, we can love our data.  Thank the Cosmos you have me here to help you out.

What’s new in SQL Server 2014: A New Datatype

This new datatype is exciting for me as a data & space enthusiast.  The new feature finally allows modern database designers to properly specify the constraints for tracking time and location data in the same column. Yes, this means that your developers and DBAs no longer have to use comma-delimited values in their relational database designs when they need to track how much time and personal space they need to get up to speed on professional database design.  And it’s big enough to store that many man-hours.  Yeah. I said that.

BTW, it seems that Stack Overflow is *the* place to find info on how to implement comma-delimited values in database columns.  Kids, don’t get your database design knowledge from random forums on the Internet.
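
For the record, here's a hypothetical before-and-after (the table names are mine). The first table commits the comma-delimited sin; the second gives each value its own row, where the database can actually constrain and index it:

-- The anti-pattern: many values jammed into one column.
CREATE TABLE PersonBad
(
    PersonID  int PRIMARY KEY,
    PhoneList varchar(500)  -- '555-0100,555-0101,555-0102'... have fun querying that
);

-- The designed alternative: one row per value.
CREATE TABLE PersonPhone
(
    PersonID    int         NOT NULL,
    PhoneNumber varchar(25) NOT NULL,
    CONSTRAINT PK_PersonPhone PRIMARY KEY (PersonID, PhoneNumber)
);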

Anyway, back to the news!

The new feature makes so much sense with Microsoft's push to the Cloud, its embrace of NoSQL technologies and all.  It's AWESOME.

 

spacetime (Transact-SQL)

Defines a time and location in a universe.

SQL Server 2014

spacetime Description

Syntax: spacetime [(fractional seconds precision)], (universe, 5DGeometry)

Usage:
DECLARE @MySpacetime spacetime (1000, 2014.12.0.2000.8, [5D geometry image]);
CREATE TABLE Table1 ( Column1 spacetime (1000, 2014.12.0.2000.8, [5D geometry image]) );

Time Range: to +∞ and beyond (I hope you have lots and lots of memory and storage)

Space Ranger: @cmdr_hadfield

Universe Range: Please check data.NASA.gov for the up-to-date listing of known Universes and Multiverses, as this changes beyond Microsoft's control. There is no control. There is no center.

5DGeometry Range: [you'll need a 5D monitor to view this range]

Timezone offset range: Thank Venus, no, nope, never. We are scientists here. Use Multiuniversal Universal Time Coordinates (UTMC).

Daylight saving aware: Oh, for Carl's sake. Do you really think something like spacetime needs to be sullied by DST?

Storage size: If you have to ask, you don't ever need to use this datatype. Seriously.

Accuracy: +/- 10 Plancks. Depending on how far your server is from the Sun. Earth's Sun, that is.

Default value: 1989-05-15 12:00:00.1000 2014.12.0.2000.8

Calendar: Hubble

SQL Azure Windows Azure Dammit! Microsoft Azure DB Support: Yes, of course.  But only in Premium plans and higher.

 

Special Considerations and Gotchas

Some gotchas with this new datatype:

  • Due to the highly multi-dimensional, multiuniversal nature of this datatype, there isn’t any backwards compatibility.  Unless, of course, you can fold spacetime and go back and change earlier versions of SQL Server. But if you could do that, you wouldn’t be reading my blog, would you?
  • Just like the confusion over timestamps, you can’t really treat this like a date or time datatype.  It’s special.  And spatial. 
  • This means you can’t convert it to date, time, datetime, timestamp or spatial datatypes, either.
  • The 5D geometry thing is way too complex to explain in a single blog post.  But for those of you who managed to stick it out through some college-level math, it involves parsecs (the correct usage of the term) and the double declining balance method of space depreciation.  In this first rollout of spacetime, the geometry completely ignores most OctoDeca Bands.  Except for Miller tracks.
  • You can't use normal date and geometrical math on data in the columns. You can bend or fold the values, but since space has no center, and time has no beginning or end, spacetime has no beginning or end. It is infinite.  So the usual infinity rules apply.
  • This datatype is only available via O365, but that makes sense since, as announced today, SQL Server 2014 is also only available via O365 subscriptions.
  • This datatype is only available on O365 plans at U3 and higher.  Wait, I don't think I should have said anything about the new Universe O365 plans.  Forget I said anything.  That's probably not going to be a rule in our universe.  Seriously.  No NDA broken.  I think.

 

Note

Some of this post may have been inspired by some bad veggie April Fish (poisson d’avril) I had last night.   If you want to get some real information about the new features of SQL Server 2014, you probably shouldn’t read random blogs on the internet on launch day.  Especially when it’s 1 April.

Did you catch all the special references in this post?  Let me know.

Data Modeling is Iterative. It’s not Waterfall

Mar 7, 2014   //   by Karen Lopez   //   Blog, Data, Data Governance, Data Modeling, Database Design, DLBlog  //  7 Comments

Sure, data modeling is taught in many training classes as a linear process for building software.  It usually goes something like this:

  1. Build a Conceptual Data Model.
  2. Review that with users.
  3. Build a Logical Data Model.
  4. Review that with users.
  5. Build a Physical Data Model.
  6. Give it to the DBA.
  7. GOTO step one on another project.

And most team members think it looks like this:

[image]

Training classes work this way because it’s a good way to learn notations, tools and methods.  But that’s not how data modeling works when the professionals do it on a real project.

Data modeling is an iterative effort. Those iterations can be sprints (typical for my projects) or have longer intervals. Sometimes the iterations exist just between efforts to complete the data models, prior to generating a database.  But it's highly iterative, just like the software development part of the project.

In reality, data modeling looks more like this:

[Diagram: Data Model Driven Development - Karen Lopez]

This is Data Model-Driven Development.  The high-level steps work like this:

  1. Discuss requirements.
  2. Develop data models (all of them, some of them, one of them).
  3. Generate Databases, XML schemas, file structures, whatever you might want to physically build. Or nothing physical, if that’s not what the team is ready for. 
  4. Refine.
  5. Repeat.

These, again, are small intervals, not the waterfall steps of an entire project.  In fact, I might do this several times even in the same sprint. Not all modeling efforts lead to databases or physical implementations.  That’s okay.  We still follow an iterative approach.  And while the steps here look like the same waterfall list, they aren’t the same.

  • There isn’t really a first step.  For instance, I could start with an in-production database and move around the circle from there.
  • We could start with existing data models. In fact, that’s the ideal starting point in a well-managed data model-driven development shop.
  • The data models add value because they are kept in sync with what’s happening elsewhere – as a natural part of the process, not as a separate deliverable.
  • The modeling doesn’t stop.  We don’t do a logical model, then derive a physical model, throwing away the logical model.
  • Data modelers are involved in the project throughout its lifecycle, not just some arbitrary phase. 
  • Modeling responsibilities may be shared among more roles.  In a strong data model-driven process, it is easier for DBAs and BAs to be hands-on with the data models.  Sometimes even users.  Really.

By the way, this iterative modeling approach isn't unique to data models.  All the models we might work on for a project (class diagrams, sequence diagrams, use cases, flow charts, etc.) should follow this process to deliver the value that has been invested in them.  That's what Agile means by "the right amount of [modeling] documentation". Data model-driven development means that models are "alive".

If you are a modeler reinforcing the wrong perception that data modeling needs a waterfall-like approach, you are doing it wrong.  You might be causing more pain for yourself than for anyone else on your project.

Data Models aren't just documentation checklist items.  They model the reality of living, breathing systems at all points in their lives.  They deliver value because they are accurate, not because they are "done".

Romancing the Data….


Today is Valentine's Day in many parts of the world.  That means either you are looking forward to a happy day full of fun and a night full of …fun… or you are planning on catching up with Frank Underwood on Netflix.  Both sound great to me.

Last year I wrote about 5 Naughty and Nice Ways to Love Your Data.  This year I’m going to focus on ways you can romance your data for a stronger, more lasting relationship.  So I’m assuming in the past you’ve followed my advice and have long since left the honeymoon phase of your data coffee dates.  But where are you now?  Are you starting to feel like maybe you need some more passion with your bits and bytes?  I’m here to help.

1.  Tell your data you love it.  Often. 

Heck, even show it you love it. Maybe one of the reasons your data has let itself go is that you haven’t told it how much you love it. Do you even remember the things you used to say to woo your data when you first met?  Do you have actively managed data models: conceptual, logical, and physical?  Do you give your database objects great names?  Do you keep good metadata about this data?  Do you follow data model-driven development? If you did all these in your early years of your relationship, are you still doing all that now? Are you doing all this in a modern way, not just the way you did it in 1980? Do you just talk a good game, but fail when it comes to actively showing it love?
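
One concrete way to whisper sweet metadata at your columns in SQL Server is an extended property. sp_addextendedproperty is real; the table, column, and sweet nothings below are hypothetical:

-- Store a column description right next to the data it describes.
-- (dbo.Customer and EmailAddress are made-up names for this sketch.)
EXEC sys.sp_addextendedproperty
    @name  = N'MS_Description',
    @value = N'Customer''s preferred e-mail address, verified at signup.',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Customer',
    @level2type = N'COLUMN', @level2name = N'EmailAddress';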

Some day, when I’m awfully low,
When the query is slow,
I will feel a glow just charting you
And the way you look tonight.

You’re lovely, with your axes so true
And your bars so blue
There is nothing for me but to report you,
And the way you look tonight.

With each crow’s foot your normalization grows,
Tearing my pages apart
And that CHAR that wraps your text,
Touches my foolish heart.

Yes you’re lovely, never, ever refactor
Keep that structured charm.
Won’t you never change it?
‘Cause I love you
Just the way you look tonight.

[Candy heart: Data FTW]

2. Stop with the games. 

We've all seen it in personal relationships.  One person makes everything a game.  Do you store your data in one format, say ZIP Codes as INTEGERs, but have to pad out all those missing leading zeros every time you have to deal with Northeastern United States postal codes?  Stop doing that. Do you pretend that doing something faster is always better than doing it good enough?  Forget perfect. Good enough.  Do you tell management you have data models but all you really do is reverse engineer them?   It's all games.
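
A sketch of the padding tax, with a made-up Boston-area ZIP:

-- Stored as an integer, the leading zero is gone forever...
DECLARE @Zip int = 02134;  -- parses as 2134
SELECT RIGHT('00000' + CAST(@Zip AS varchar(5)), 5) AS PaddedZip;  -- '02134', re-padded on every single query

-- Stored as char(5), it just stays a postal code.
DECLARE @ZipChar char(5) = '02134';
SELECT @ZipChar AS HonestZip;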

Daylight, alright
I don’t know, I don’t know if numbers are REAL
Been a LONG night and something ain’t right
You won’t SHOWPLAN, you won’t SHOWPLAN how you feel

No DATETIME ever seems right
To talk about the reasons why CAST and I fight
It’s DATETIME to end the TIMESTAMP
Put an end to this game before it’s too late

Data games, it’s you and me baby
Data games, and I can’t take it anymore
Data games, I don’t wanna play the…
Data games

[Candy heart: HAWT Data]

3. Know where your data lives

Do you have an active inventory of what data resides where?  No?  How can you romance data you don't know about?  If a server walked out the door of your organization, how long would it take you to figure out what was on it?  If a user had a legal need to access all the data the company held about a customer, would you be able to tell them?  If you really wanted a happy strong relationship with your data, you'd know.  Yes, it's a lot of work to track where your data is.  That's why they invented tools that do this.  And why data professionals are expected to use them.
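
If you want to start small in SQL Server, here's a sketch using the undocumented (but long-lived) sp_MSforeachdb to list every user table on one instance. A real inventory tool covers every instance, file share, and cloud bucket; treat this as the first coffee date, not the relationship:

-- List every user table in every database on this instance.
EXEC sp_MSforeachdb N'
    SELECT ''?'' AS DatabaseName, s.name AS SchemaName, t.name AS TableName
    FROM [?].sys.tables AS t
    JOIN [?].sys.schemas AS s ON s.schema_id = t.schema_id;';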

Data is bigger
It’s bigger than the drives and they are not PB
The servers it is spread to
The bits in your drives
Oh no, I’ve duplicated too much

I set it up
That’s me in the ETL
That’s me in the database
Losing my governance
Trying to keep up with it all
And I don’t know if I can do it
Oh no, I’ve deployed too much

I haven’t documented enough
I thought that I heard you laughing
I thought that I heard you coughing
I think, I thought, I saw you cry

[Candy heart: Data Kisses]

4. Stop faking it.

Yeah, sometimes little white lies are good for a relationship (BTW, You DO Look Beautiful!).  But the big ones? Nope, never.  The paranoia about NULLs often leads to a lot of lying.  Do you pretend that NULLs don’t exist by giving them various fake values like 999999 or N/A, UNKNOWN, WHO KNOWS or  __ ?  Does every developer get to choose their own NULL Imposter Text?  Are your aggregates all a huge lie due to all those zeros and 1980s dates you use to lie to your database?  Stop it.  It’s not helping that your queries are 2 ms faster when the data is one big lie.
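
Here's a hypothetical demonstration of how one imposter value wrecks an aggregate:

-- One fake "unknown" turns the average into fiction.
CREATE TABLE #Sales ( Amount decimal(10,2) );
INSERT INTO #Sales VALUES (100.00), (200.00), (999999.00);  -- 999999 means "we don't know"

SELECT AVG(Amount) AS LyingAverage FROM #Sales;   -- 333433.00

-- An honest NULL is simply left out of the math.
UPDATE #Sales SET Amount = NULL WHERE Amount = 999999.00;
SELECT AVG(Amount) AS HonestAverage FROM #Sales;  -- 150.00

DROP TABLE #Sales;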

Late at night a big database gets slower
I guess every normal form has its price
And it breaks her data to think her love is
Only given to a user with queries as fragile as ice

So it tells me it all adds up just fine
To aggregate the sales numbers for every town
But only the dev knows where those NULLs have been killed
And it's headed for the cheatin' UNKNOWN town

You can’t hide your lyin’ nines
And your N/A is a thin disguise
I thought by now you’d realize
There ain't no way to hide your lyin' underlines….

[Candy heart: Sexy Data]

5. Protect it.

Do you let just anyone throw code at your data without ensuring it's treated right?  Do you participate in security and privacy reviews of application code?  You have those, right? Do you have metadata that describes the privacy and sensitive data requirements for each data element? Do you ensure that things like SQL injection tests happen for every application?
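
And if your devs need a picture, here's a sketch of the difference (dbo.Customer is made up; sp_executesql is not):

-- The injectable version: user input concatenated straight into the SQL string.
DECLARE @Name nvarchar(100) = N'Smith'' OR 1=1 --';
DECLARE @BadSql nvarchar(max) =
    N'SELECT * FROM dbo.Customer WHERE LastName = ''' + @Name + N'''';
-- EXEC (@BadSql);  -- returns every customer in the table. Don't.

-- The parameterized version: input stays data, never becomes code.
EXEC sys.sp_executesql
    N'SELECT * FROM dbo.Customer WHERE LastName = @LastName',
    N'@LastName nvarchar(100)',
    @LastName = @Name;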

Oh where, oh where can my data be?
The dev took her away from me.

She’s gone to pastebin, so I’m gonna be sad, 
So I can see my data, by now I’m so mad.
We were out on a date in my modelling tool,
I had been too much a fool.

There in the database, all laid out,
a data was there, the database queried by a lout.
The dev allowed the inject, the data failed to be right.
I’ll never forget, the sound that night–
the screamin users, the bustin app,
the painful scream that I– heard crash.

Oh where, oh where can my data be?
The dev took her away from me.
She’s gone to pastebin, so I’m gonna be sad,
So I can see my data when my new job is had.


Keep saying it. Keep doing it. 

There's so much more you can do to revitalize your relationship with data.  But if you do these, your data will keep on loving you back. I promise.  Remember, your data wants to love you back. It's up to you to make sure it's still there in the morning.

Join Me at Enterprise Data World and Save $200 + $200

[Conference session photo]

I've been attending Enterprise Data World for more than 15 years.  This event, focused on data architectures, data management, data modeling, data governance and other great enterprise-class methods, is part technical training and part revival for data professionals.  It's just that good.

This year the big bash is being held in Austin, TX, a thriving tech-oriented community, 27 April to 1 May.  And this year's theme is "The Transformation to Data-Driven Business Starts Here."

And right now there’s a $200 Early Bird Discount going…plus if you use coupon code “DATACHICK” you can save $200 more on a multi-day registration or fifty bucks on a one day pass.  There.  I just saved you $400.  And no, I get no kickbacks with this discount code.  I don’t need them.  I need you to be at this event, sharing your knowledge and meeting other data professionals. I need you to be part of the community of data professionals.

Top 10 Reasons You Need to Go to EDW 2014

  1. Data is HOT HOT HOT.  I deemed 2013 The Year of Data and I see no signs that organizations are going back to software-is-everything thinking.  2014 is still going to be a year full of data. There's even an executive, invitation-only CDOvision event co-located.
  2. Not Just Bullet Points.  There are over 20 hours of scheduled networking events for you to chat with other data-curious people.  Chatting with other data professionals is my favourite part of this event.  Bring your business cards…er… .vcs contact file.
  3. Lots of Expertise. Not just data celebrities, but also other data professionals with thousands of hours of hands-on experiences, sharing their use cases around data.  And not just data modeling.  Big Data.  Analytics.  Methods.  Tools.  Open Data.  Governance. NoSQL. SQL. RDBMS. Fun.
  4. Certifications.  You can take advantage of the Pay-Only-If-You-Pass option for the CDMP on-site certification testing.
  5. Workshops. I’m doing a half day tutorial on Driving Development Projects with Enterprise Data Models.  I’ll be talking about how data models fit within real-life, practical, get-stuff-done development projects. No ivory towers here.
  6. SIGs.  There are special interest groups on data modeling products, industries and methods. You can meet people just like you and share your tips and tricks for data lovin'.  I will be leading the ER/Studio SIG.
  7. Ice Cream.  This conference has a tradition of the ice cream break on the exhibit floor.  Nice ice cream, even.
  8. Austin. Austin is one of the more vibrant cities in Texas.  So cool, it even has a Stevie Ray Vaughan statue. Museums, Theatres, indoor golf, clubs.  There’s a reason why SxSW is held here.
  9. Vendors. Yes, we love them, too.  Meet the product teams of the makers of the tools you use every day.  Or meet new teams and ask for a demo.  They are good people.
  10. Love Your Data.  There’s no better way to show your love than to network with other data professionals and learn from industry leaders.

Come learn how to help your organization love data better.  You might even see me in a lightning talk holding a martini.  Or taking impromptu pics of @data_model and other data professionals.  Or debating data management strategy with people from around the globe.  In other words, talking data. With people who love their data.  Join us.

Your Christmas Cookies Can Teach You About Data: Sugar Cookies


I don’t do a lot of baking. My kitchen is mostly the place where I blend my breakfast and enable my caffeine addiction. But my family has a tradition of making dozens and dozens of cookies every holiday season. Sugar cookies, No Bake Cookies, Snickerdoodles…the list just goes on and on.

As I was looking in my pantry for ingredients this year, I started thinking about how the process of producing cookies was a lot like data architectures. I may have been drinking. I’m pretty sure of it, actually. A lot. I mean I’m a lot sure I might have been drinking. A lot.

This week I bring to you a short series about Christmas Cookies and data.

Sugar Cookies

Yum! Probably the most common version of Christmas cookie is the decorated, cut out sugar cookies. Recipe books, blogs and food network shows make them look so easy. They contain just a few simple ingredients (butter, sugar, flour, salt, vanilla, eggs) that form the basis of almost all other higher forms of cookies.

What makes these special is what you do with that dough. The most exciting versions have you roll out the dough, cut it out with cute cookie cutters, bake, cool, then decorate them. It's just cutters and icing, right?

The Big Lie

I'm here to tell you that it's all a lie. First, unless you have a lot of practice, the dough never rolls out cleanly because a whole lot of things have to go right first. Then you cut them out and they fall apart or tear. You'll end up burning the first few batches until you know how your oven heats and how your baking pans work.  Maybe you need a Silpat liner. Or parchment paper.  Or an actual baker.

But no amount of equipment prepares you for the disaster of decorating them. They NEVER come out like the pictures. Those cookies on blogs and in recipe books are probably made by specialist magical cookie elves who spent their 10,000 hours learning to make cookies from Betty Crocker herself.  With Photoshop. I’m pretty sure every decorated cookie recipe is shopped worse than a Ralph Lauren model.

There are all kinds of warnings in the recipes: let the cookies cool on a rack. But who has time for that? Be agile and decorate them while the cookies are still in the cooling sprint. Oh. Crap. What the heck happened? If you haven't spent a lot of time doing some test and training baking, your first set of cookies is going to be an embarrassment.

[Image credit: Burnt Cookie by Flare http://www.flickr.com/photos/75898532@N00/Cookie - http://joshuafennessy.com/]

Silver Balls, Silver Balls…

And did you know that those little silver and gold balls that are the key part of the most beautiful cookies ARE NOT SUPPOSED TO BE EATEN? It says so right there on the label, “To be used as a decoration, not as a confection”. I bet you didn’t even RTFML. You’ve been unintentionally poisoning your kids and grandpa for decades. Or maybe intentionally. I won’t ask.

Silver Balls with warning (c) Karen Lopez

Lessons Learned

What does this teach us about data?

  1. Recipes make everything look easy. A lot of people see the recipe books and assume that making these cookies is very easy. And yet it’s difficult to get them right. The dough needs to be the right temperature and have the right ratio of ingredients to make the dough the right consistency. This requires not just a recipe, but a lot of practice.  It also requires good technique, the right tools and the right environmental factors.

    The same thing applies to data architecture. Sure, one can watch a 45-minute presentation on what all those boxes and lines are, but until they have applied the principles and then lived with the results of their practice designs, they won't really understand why one cannot just use melted butter or leave out the baking soda because it's easier. It takes a lot of experience to be a good architect. Just like it takes a lot of experience to make beautiful decorated cookies.

  2. Demos of data modeling and design tools make everything look a lot easier than they are in real life. Part of this is because demos take time to give and they have to deal with the easy case. Sure, you can migrate a database from Oracle to SQL Server by running a wizard. But you might not like the database or the data that comes out the other end. In fact, I can guarantee you won't. Migrating from one infrastructure to another always requires analysis, design, and implementation expertise. Decisions, even. Tools are never a substitute for design.
  3. If you are an amateur, you’re going to make a lot of mistakes. Heck, even professionals will make mistakes. But amateurs are going to make more.  It’s how it works.  You make mistakes, learn from them, get better. You’re going to burn a lot of data, and therefore users and ultimately customers.  You can read all the recipes in the world and watch all the episodes of Iron Chef, but living with the results of your design decisions is what helps you learn. It’s okay to make a lot of mistakes if you are learning in a class. Or are working on a development project iteration.

    Production, though, is like learning to cook your first meal for Christmas dinner for a close family of 20-30 people. It doesn’t scale well and you’ll just end up disappointing everyone in a big way. Heck, you might even kill some people with your bad design.  You might have some letters after your name, but until you get to the professional level, don’t call yourself a chef.  Well, you can, but your customers aren’t going to trust you after the second batch.

  4. You need to read and learn. Warning labels are a good start. The great thing about most data principles is that they haven't changed a lot. The technologies have, but not the foundations.  If you don't read and learn, you won't be in a position to deal with the change that is coming whether you want it or not.
  5. Some ingredients for data actually don't really help the data. Comma-delimited data in a column is fast. It allows people to go around the whole data governance process. Stuffing internal-only customer data into AddressLineFour is fine, right? Until someone prints that on the envelope and mails it to the customer. Sure, these cute workarounds are shiny and happy. You need to be able to see when people are proposing the equivalent of shiny silver balls. They are pretty, but not for use in real life. You can quote me on that.

There are probably a lot more lessons to be learned from Sugar Cookies, but I just wanted to cover the basics. Just like the ingredients for Sugar Cookies.

The Minimalist DBA: DBA Fundamentals

Nov 27, 2013   //   by Karen Lopez   //   Blog, Data, Database, Database Design, Events, Snark, Speaking, SQL Server  //  1 Comment

What do you think are the minimum skills a person should have before they are allowed to manage a database?  Does it matter whether or not it’s a production database?  Does it matter how much data is there? What kind of data? Is recovery a goal or a symptom?  Does it matter how old you are? Or how old the database is?  What is the meaning of all this anyway?

Thomas LaRock ( blog | @sqlrockstar ) and I will be talking about what a Minimalist DBA is, what skills we think they need, and how to ensure that they have them on 3 December at Noon EST for the DBA Fundamentals Virtual Chapter of PASS.

               "The best DBA is a lazy DBA…or at least a Minimalist DBA"

Every profession has a core set of responsibilities that are expected of every practitioner.  For anyone who has the letters "DBA" in their job description, their job function is a black box to anyone on the outside. "What do you do here?" is a common question for most DBAs.

Some DBAs are part-time data modelers, SAN admins, or VM admins. Sometimes they know all about security, or Active Directory, or .NET. It differs from one shop to another. Whether it is day one or day one hundred in your career as a DBA, you need to make certain you stay focused on your core duties. If you slip up, you will find out why DBA often stands for Default Blame Acceptor.

Attend this webinar to make sure that no matter what your level of efficiency and laziness, you are able to focus on the bare essentials (the minimum) necessary to be a Rockstar DBA.

Karen Lopez is a senior project manager and architect for InfoAdvisors. A frequent speaker at conferences and local user groups, she has 20+ years of experience in project and data management on large, multi-project programs. Karen is a chronic volunteer, a SQL Server MVP, and an active advocate for science, technology, engineering, and mathematics (STEM) education and data quality. She isn’t a DBA, but loves to talk and debate about the effectiveness of lazy DBAs.  She isn’t sure if the minimalist thing is a strength or an excuse. 

Thomas LaRock is a Microsoft Certified Master, a SQL Server MVP, a VMWare vExpert, and a Microsoft Certified Trainer with over 15 years’ experience in the IT industry in various roles such as programmer, developer, analyst, and database administrator. He is also the author of “DBA Survivor: Become a Rock Star DBA” (http://dbasurvivor.com) and has participated in the technical review of several other books.

Currently, Thomas is a Technical Evangelist for Confio Software. This role allows him to work with a variety of customers to help solve questions regarding database performance tuning and virtualization. Thomas also serves on the Board of Directors for PASS as Vice President of Marketing. You can find out more information about him at his blog: http://thomaslarock.com/resume/.

You can probably expect our usual level of snark, debate, levity and great info for this presentation.  Bring your ideas and snark, too.  I always ensure that the audience is part of the presentation, so expect more than a slew of bullet points and demos.  And even though this is hosted by a SQL Server organization, everything we will be talking about will be applicable to multiple platforms.  That's how real enterprise database systems are anyway, right?

You’ll need to register, but it’s free.  By the way, if you also register on that site, you’ll become a member of that chapter.  And that’s free, too.

Big Challenges in Data Modeling: Data Model Patterns–Webinar

Oct 23, 2013   //   by Karen Lopez   //   Blog, Data, Data Modeling, Database Design, Events  //  No Comments

Thursday, 24 October 2013, 2PM EDT http://www.dataversity.net/oct-24-webinar-big-challenges-in-data-modeling/

In this month’s Big Challenges in Data Modeling #BCDModeling webinar we’ll be tackling the issue of working with models purchased or borrowed from third parties. This includes standard models, modeling patterns from books, and models inherited with software packages.

Have you ever considered using pre-existing pattern models to jump start your database projects? Have you considered purchasing proprietary models? Did you know that there are hundreds of models available to you for free or for minimal cost? In this month's Data Modeling Challenges webinar, we'll discuss some of the benefits and gotchas of working with acquired models – industry standard models, patterns, and other universal model concepts.

We will chat about:

· The costs, benefits, and risks of working with industry standard data models

· The benefits of using industry standards in your package acquisition projects

· Choosing the right process

· Myths in working with pattern models

· What you should know before committing to project plans and estimates

· Lessons Learned

· Resources

· …and whatever you, attendees, want to chat about.  It’s a conversation, not a presentation!

My panelists will be Paul Agnew, co-author of The Data Model Resource Book, Vol 3 and David Hay, author of Data Model Patterns, and YOU, the attendees.  Unlike many other webinars, you can participate in the discussion by chatting with each other, as well as asking formal questions to the panelists.

While the formal part of the webinar begins at 2PM EDT, you can join early to start the chat while we go through some sound checks and pre-show rants.  Also, some of us will stay on for about 15-20 minutes “off the record”.  You can also ask questions on Twitter via the #BCDModeling hashtag.

Registration is free, but you do have to register to get into the webinar.

See you soon.

