
Deadlines and Data Architects: You Know the Date, Do You Know the Deliverable? #MemeMonday

Feb 6, 2012   //   by Karen Lopez   //   Blog, Data, Data Modeling, Database  //  7 Comments

It appears I’m an expert at deadline-driven accomplishments. I mean, it’s meme Monday and even though I’ve known about the deadline for today’s post for weeks, I’m writing it at lunch on said day. I’ve been thinking about what I’d write for all this time, so it’s not like I’m just starting…but still, the deadline is what’s making this post happen. I wish I could be the type of person who gets stuff done because it just needs to be done, but deadlines are what drive me.

The ultimate inspiration is the deadline.
-Nolan Bushnell

I’ve been thinking about some experiences I’ve had working with other data architects and working with deadlines. Usually, as part of a team, we don’t get to set our own deadlines; they are set by a project manager or, if we are lucky, via negotiation with the DBAs, developers, and other architects on the team. Our deliverables are typically the input into their deliverables, so their success depends on our getting our deliverables to them. In my experience, though, it’s too common for data architects and data modelers to forget which deliverable those teams are waiting for, or to fail to make it available in the right format.

In his post for #mememonday, Thomas LaRock (@sqlrockstar | blog) gives his best tip for working towards a deadline: work backwards.

What is Your Deliverable?

This is the most frustrating question. It should be very apparent to a data architect what deliverables the team needs from them. But time after time I see modelers focused 99.999% on the data models. Yes, your data models should be beautiful. They should be complete, correct, and pretty. Every single one of them should get a trophy. All the metadata you want to capture about them should be in the data model. Do you think, though, that your development DBA and developers are tapping their feet waiting for the PDFs of the data models? Or do you suppose that they just might want that alter script to run in their environment so that they can get to work on their deliverables?

Ideally, the team wants both.  They want the model to be created and they want the scripts with all the changes needed for their next deliverable.  I can assure you, though, that the scripts are what they want first.  I can also assure you that no one will love the data models more than you do, Mr. Data Architect.  So embrace the love you feel for the data models and funnel it into getting the "real" deliverable done:  DDL.  Sure, it hurts a bit that they don’t love your models as much as you do, but they will learn to love them because they produce beautiful databases.
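To make that concrete, here’s the sort of thing the team is tapping their feet for. A minimal sketch of a change script, with hypothetical table and column names and SQL Server syntax assumed (adjust for your target DBMS):

```sql
-- Release change script (hypothetical names; SQL Server syntax assumed; illustration only)
-- Adds the new e-mail attribute from this iteration's model changes.
ALTER TABLE dbo.Customer
    ADD CustomerEmail varchar(254) NULL;  -- nullable first; tighten after backfill
GO

-- Supporting index for the new lookup the developers asked for
CREATE NONCLUSTERED INDEX IX_Customer_CustomerEmail
    ON dbo.Customer (CustomerEmail);
GO
```

It isn’t beautiful, but it’s the deliverable that unblocks everyone else.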

On most of my projects (mostly development ones) I never lose sight of the fact that ultimately, the data models are a means to get to good quality databases and data.

What Makes for a Good Deliverable?

I worked with a data architect who was very passionate about her data models. She spent days making them beautiful. She filled them with all kinds of nifty metadata to help people understand their data better. She wrote definitions that were paragraphs long, then added more notes to them just to make them even better. She often missed review meetings because she was too busy making a more beautiful data model. She missed deadlines for delivering DDL because she wasn’t quite sure how to do that. She researched data modeling layout techniques to make models more readable. She demoed different layouts and took surveys about the alternatives to find just the right one. And she got further and further behind on delivering revisions to development, the only direct customer of her work. The team had gone from every-other-day deliveries of fixes and enhancements to the development databases to every-other-week ones.

Our modeler didn’t have time to learn the features of our target DBMS because she was busy making the model a perfect data model. Our sprints were failing because database changes were coming too slowly and were often done incorrectly. The DBAs and developers had to spend days cleaning up the DDL she had produced because she didn’t know how to use those features of our modeling tool. So not only did she miss the deadlines, she missed the deliverables. She didn’t understand this, though. In her mind, the data model was all anyone needed. In reality, what they really needed was working DDL scripts to apply to their environments. That was what good meant to the team.
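Part of what “working DDL” means is that a script can be applied safely to environments that may already have some of the changes. A sketch of that kind of guard, again with made-up names and SQL Server syntax assumed:

```sql
-- Guarded change: safe to re-run against a development database
-- (hypothetical names; SQL Server syntax assumed)
IF COL_LENGTH('dbo.Customer', 'LoyaltyTier') IS NULL
BEGIN
    ALTER TABLE dbo.Customer
        ADD LoyaltyTier tinyint NOT NULL
            CONSTRAINT DF_Customer_LoyaltyTier DEFAULT (0);  -- default populates existing rows
END
GO
```

A modeler who knows the target DBMS and the modeling tool can generate most of this from the tool’s compare-and-alter features instead of hand-writing it.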

What Format?

I worked with another data architect who would spend months working on the data model, then publish it as a PDF of a Word document. The actual data model file was not shared. No DDL was shared. Nothing was delivered that teams could actually import into their tools or apply to their local development environments. This also meant that it was nearly impossible to comment on the data model or easily provide corrections or feedback. The modeler wanted corrections to come to him in emails; he would make them and generate the Word documents and PDFs months later. Requests for the data model file itself were met with “You don’t need that, just retype the model in whatever tool you use”. We can’t build stuff on PDFs. Produce them, sure. But they aren’t the deliverables we are looking for. Your team (your customers) can tell you what format works best for them. Just ask.

Where Do You Deliver the Deliverables?

I work with many data architects who don’t use the same deliverable locations as the rest of the project team, and I think that’s a huge failure. Professional development teams use some sort of versioning or configuration control environment to share their work product. Data architects should use those, too. Yes, we have our repositories and model marts, but those are typically only accessible by people who use the modeling tools. They are my version control for the models, but I also deliver to the teams in the same place they expect to find all the components of their systems. Maybe it’s just a wiki, or an intranet location. Wherever it is, I’m delivering there.

I worked with a data modeler who refused to use or learn the team’s versioning system. So he just emailed around files with no version numbers and expected people to be able to search through their emails to find the scripts they needed to deploy to production environments. He seemed to randomly include and exclude people in the distribution list. He often just attached a bunch of files, then said “here they are” in the body of the message. Email is the worst versioning system in the world. Don’t use it for deliveries.

Another approach I find annoying is fairly new. I work with an architect who keeps all the files he works on for several clients on Dropbox. All in one giant folder. When he gets his work done, he sends out an email saying: it’s all on Dropbox. We as a team have to try to figure out what the files are and which is the right version, and try to ignore all the other stuff that’s sitting there in the folder. As far as I’m concerned, he might as well randomly generate his data models.

Work Backwards

If you have the answers to those questions above, you can work backwards to meet your deadlines.  My inefficient team members mentioned  above don’t do this.  They start with a generic to-do list of how to produce data models, start at step one and work their way as far as they can before they hit the deadline.  They often miss deadlines because they haven’t started at the end and worked backwards.  Don’t be that guy (or gal).  Know what you are expected to deliver, when, where, and in what format.

I start with what my deliverable is, what format I need to produce it in, and what measures I should use to ensure it’s good enough. Then I start with the tasks that will get me closest to that completion. Only when those are completed do I keep working backwards to fill in the other high-priority tasks. If I still have time left, I do more. I make it prettier. I make it more useful. I make it beautiful. I love it.

Not all data architecture is done on development projects.  I understand that.  If your duties include supporting development teams, though, you need to support them.  That means loving your data models, the data AND databases.  There’s no reason why you can’t have it all.  Just remember which parts you’re supposed to deliver first.

Have you worked with data architects or modelers who worked backwards and got the job done? What about people like the ones I’ve mentioned in this post? What do you wish they had done instead?

1980 Called and it Wants Its Deliverables Back

Feb 13, 2011   //   by Karen Lopez   //   Blog, Data, Data Modeling  //  7 Comments

 

I’ve been doing this data modeling stuff for a really long time.  So long that I just put "20+ years" on my slides…and I’m well past that 20.  However, having done this for a long time does not qualify me as an expert.  What qualifies me is that I know how to adapt my tools, methods and approaches as development tools and methods change.  What worked for projects in 1986 doesn’t necessarily work now. Not only do we have radically more advanced features in our tools, we have many more platforms to support.  We have a greater variety of development approaches.  Our architectures are much more distributed and much more complex than they were in the nineties.

Back in the mid-eighties I worked in the US Defense consulting business. The somewhat serious joke was that consultants were paid $1 million for each inch of documentation they delivered. It didn’t matter what was in the models or whether they were correct; it only mattered how many reams of paper were delivered. That sort of “all documentation is great” mindset still existed well into 1999, long past its usefulness. The world has changed, and we data architects need to find a replacement for the publication equivalent of shoulder pads, thongs, leggings, skinny ties, and lace fingerless gloves.

Those differences mean that the deliverables we produce need to be radically different from what they were in 1999. Our team members are no longer going to open up a 175-page prose document to find out how to use and understand the data model or database. They don’t want all that great metadata trapped in an MS Word document or, worse, buried in an image. They can’t easily search those, and they can’t import that knowledge into their own models and tools.

As much as I don’t want this to be true, no one wants to read or use massive narratives any longer. Sure, if they are really, really stuck a quick write-up might be helpful, but sometimes a quick video or screencast would be faster and easier to produce. If you are spending days or weeks writing big wordy documents, the sad truth is there is a high likelihood that absolutely no one except your mother is going to read or appreciate it…and she isn’t actually going to read it.

I’ve significantly changed what I produce when I release a data model.  I constantly monitor how these deliverables are used.  I look at server logs or general usage reports to continually verify that the time I’m spending on data-model related deliverables is adding value to the project and organization.  The main way I gauge usefulness of deliverables is by how urgently my team members start bugging me to get them up on the intranet where they are published. 

Here are my top 10 recommendations for approaching your data model deliverables:

  1. Get formal, lab-based, hands-on training for staff. Or use staff that are already trained in the tools and the version of the tools they are using. You may be missing out on features in the tools that make publishing data models much easier than the methods you are currently using. I had a client who was struggling to support an elaborate custom-developed application that they didn’t really know how to use or maintain. It used a deprecated data model format to build an HTML-based report of the data model. Sound familiar? Almost all tools provide a feature to generate such reports in seconds.
  2. Move away from very large, manual documentation. Think in terms of publishing data and models, not documents. Prose documents are costly to produce and maintain. They do more harm than no documentation at all when they are not maintained. They are difficult to search, share, and use. This is not how the vast majority of IT staff want to consume information. Team members want their data model data (metadata) in a format that is consumable, that can be used on a huge variety of platforms, and that is interactive, not locked only in a PDF.
  3. Know how long it takes to produce every deliverable.  Having this information makes it easier for you and your team to prioritize each deliverable.  I once had a project manager ask if cutting back the automatically generated reports could save time for getting data modeling completed. I could show her that the total time to put the documents on the intranet was only about 5 minutes.   My document production data also helps other modelers estimate how long a release will take to produce.
  4. Stop exporting data for features that can be done right in the tool. Move data model content that is locked in MS Word documents into the models, or stop producing it. Exporting diagrams as images and marking them up with more images means all that mark-up is unsearchable. It also means that every change to the data model, even a trivial one, triggers a new requirement to recreate all those images. Modern tools have drawing and mark-up features in them. The cost/benefit of exporting and annotating outside the modeling tool means you’ll always be spending more than you "earn". You’re creating a data model deficit.
  5. Stop producing deliverables that require a complete manual re-write every time there is a new release. Unless, of course, these sorts of things are HIGHLY valued by your team and you have evidence that they are used. I’m betting that while people will say they love them, they don’t love them enough to justify getting half as much data architecture work done.
  6. Focus on deliverables that are 100% generated from the tools.  Leave manually developed deliverables to short overviews and references to collaborative, user-driven content.   Wikis, knowledgebases, and FAQs are for capturing user-generated or user-initiated content.
  7. Focus on delivering documentation that can be used by implementers of the model. That would be Data Architects, DBAs, and Developers. Probably in reverse order of that list in priority. Yes, you want to ensure that there are good definitions and good material so that any data architect in your company can work with the model even after you’ve won the lottery, but the largest number of people who will work with the models are business users, developers, and DBAs. Think of them as your target audience.
  8. Automate the generation of those deliverables via features right in the tools – APIs, macros, etc. (see the sketch after this list). Challenge every deliverable that will cost more to produce once than the value it will deliver to every single team member.
  9. Move supporting content that is locked in MS Word documents (naming standards, modeling standards, and such) to a collaborative environment like a wiki or knowledgebase. Don’t delay releases just to deliver those items. These formal deliverables are important, but from a relative value point of view, they should not hold up delivering data models.
  10. Focus now on making a process that is more agile and less expensive to execute while also meeting the needs of the data model users. If it is taking you more time to publish your data architectures than to actually architect them, you are doing it wrong.
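As an example of number 8: even without a modeling tool’s API, you can generate a passable data dictionary straight from the database’s own metadata. A minimal sketch, assuming the standard INFORMATION_SCHEMA views (most major DBMSs provide them); this illustrates the generate-don’t-hand-write idea, not any particular tool’s feature:

```sql
-- Data dictionary extract: generated, never hand-maintained
SELECT  c.TABLE_SCHEMA,
        c.TABLE_NAME,
        c.COLUMN_NAME,
        c.DATA_TYPE,
        c.CHARACTER_MAXIMUM_LENGTH,   -- NULL for non-character types
        c.IS_NULLABLE
FROM    INFORMATION_SCHEMA.COLUMNS AS c
ORDER BY c.TABLE_SCHEMA, c.TABLE_NAME, c.ORDINAL_POSITION;
```

Schedule it, publish the output where the team already looks for their components, and this deliverable maintains itself.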

 

While data modeling standards have stayed relatively stable over the last 10-20 years, our methods and tools haven’t.  If you are still modeling like it’s 1980, you stopped delivering value sometime around 1999.
