1980 Called and it Wants Its Deliverables Back

Feb 13, 2011   //   by Karen Lopez   //   Blog, Data, Data Modeling  //  7 Comments

 

I’ve been doing this data modeling stuff for a really long time.  So long that I just put "20+ years" on my slides…and I’m well past that 20.  However, having done this for a long time does not qualify me as an expert.  What qualifies me is that I know how to adapt my tools, methods and approaches as development tools and methods change.  What worked for projects in 1986 doesn’t necessarily work now. Not only do we have radically more advanced features in our tools, we have many more platforms to support.  We have a greater variety of development approaches.  Our architectures are much more distributed and much more complex than they were in the nineties.

Back in the mid-eighties I worked in the US Defense consulting business.  The somewhat serious joke was that consultants were paid $1 million for each inch of documentation they delivered.  It didn’t matter what was in the models or whether they were correct; it only mattered how many reams of paper were delivered.  That sort of "all documentation is great" mindset persisted well past 1999, long after it had outlived its usefulness.  The world has changed, and we data architects need to find a replacement for the publication equivalents of shoulder pads, thongs, leggings, skinny ties, and lace fingerless gloves.

Those differences mean that the deliverables we produce need to be radically different from what they were in 1999.  Our team members are no longer going to open up a 175-page prose document to find out how to use and understand the data model or database.  They don’t want all that great metadata trapped in an MS Word document or, worse, buried in an image.  They can’t easily search those, and they can’t import that knowledge into their own models and tools.

As much as I don’t want this to be true, no one wants to read or use massive narratives any longer.  Sure, if they are really, really stuck, a quick write-up might be helpful, but sometimes a quick video or screencast would be faster and easier to produce. If you are spending days or weeks writing big wordy documents, the sad truth is there is a high likelihood that absolutely no one except your mother is going to read or appreciate it…and she isn’t actually going to read it.

I’ve significantly changed what I produce when I release a data model.  I constantly monitor how these deliverables are used.  I look at server logs or general usage reports to continually verify that the time I’m spending on data-model-related deliverables is adding value to the project and organization.  The main way I gauge the usefulness of a deliverable is how urgently my team members start bugging me to get it up on the intranet where it is published.
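For the curious, here is the flavor of that server-log check. This is a minimal sketch, assuming the published model reports live under a path like /models/ on an intranet web server that writes a standard access log; the log location and path prefix are made-up placeholders, not a prescription.

```python
from collections import Counter
import re

# Hypothetical locations -- adjust to your own intranet server and publish path.
ACCESS_LOG = "/var/log/httpd/access_log"
MODEL_PATH_PREFIX = "/models/"

# The request field in a common/combined log looks like: "GET /models/orders.html HTTP/1.1"
request_re = re.compile(r'"[A-Z]+ (?P<path>\S+) HTTP/[0-9.]+"')

hits = Counter()
with open(ACCESS_LOG, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = request_re.search(line)
        if match and match.group("path").startswith(MODEL_PATH_PREFIX):
            hits[match.group("path")] += 1

# Which published deliverables are actually being opened?
for path, count in hits.most_common(20):
    print(f"{count:6d}  {path}")
```

If the top of that list stays empty month after month, the deliverable behind it is a candidate for the chopping block.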

Here are my top 10 recommendations for approaching your data model deliverables:

  1. Get formal, lab-based hands-on training for staff.  Or use staff that are already trained in the tools and version of the tools they are using. You may be missing out on features in the tools that make publishing data models much easier than the methods you are currently using.  I had a client who was struggling to support an elaborate custom-developed application that they didn’t really know how to use or maintain.  It used a deprecated data model format to build an HTML-based report of the data model.  Sound familiar? Almost all tools provide a feature to generate such reports in seconds.
  2. Move away from very large, manual documentation.   Think in terms of publishing data and models, not documents. Prose documents are costly to produce and maintain.  They do more harm than no documentation at all when they are not maintained.  They are difficult to search, share, and use.  This is not how the vast majority of IT staff want to consume information.  Team members want their data model data (metadata) in a format that is consumable, that can be used on a huge variety of platforms, and that is interactive, not locked only in a PDF.
  3. Know how long it takes to produce every deliverable.  Having this information makes it easier for you and your team to prioritize each deliverable.  I once had a project manager ask if cutting back the automatically generated reports could save time for getting data modeling completed. I could show her that the total time to put the documents on the intranet was only about 5 minutes.   My document production data also helps other modelers estimate how long a release will take to produce.
  4. Stop exporting data for features that can be done right in the tool.  Move data model content that is locked in MS Word documents into the models or stop producing it.  Exporting diagrams as images and marking them up with more images means all that mark-up is unsearchable.  It also means that every change to the data model, even a trivial one, triggers a new requirement to recreate all those images.  Modern tools have drawing and mark-up features in them. The cost/benefit of exporting and annotating outside the modeling tool means you’ll always be spending more than you "earn".  You’re creating a data model deficit.
  5. Stop producing deliverables that require a complete manual re-write every time there is a new release.  Unless, of course, these sorts of things are HIGHLY valued by your team and you have evidence that they are used.  I’m betting that while people will say that they love them, they don’t love them enough to accept getting half as much data architecture work done in exchange.
  6. Focus on deliverables that are 100% generated from the tools.  Leave manually developed deliverables to short overviews and references to collaborative, user-driven content.   Wikis, knowledgebases, and FAQs are for capturing user-generated or user-initiated content.
  7. Focus on delivering documentation that can be used by implementers of the model.  That would be Data Architects, DBAs, and Developers.  Probably in reverse order of that list in priority.  Yes, you want to ensure that there are good definitions and good material so that any data architect in your company can work with the model even after you’ve won the lottery, but the largest number of people who will work with the models are business users, developers, and DBAs.  Think of them as your target audience.
  8. Automate the generation of those deliverables via features right in the tools – APIs, macros, etc. (see the sketch after this list). Challenge every deliverable that will cost more to produce once than the value it will deliver to every single team member.
  9. Move supporting content that is locked into MS Word documents (naming standards, modeling standards and such) to a collaborative environment like a wiki or knowledgebase.  Don’t delay releases just to deliver those items.  These formal deliverables are important, but from a relative value point of view, they should not hold up delivering data models.
  10. Focus now on making a process that is more agile and less expensive to execute while also meeting the needs of the data model users.   If it takes you more time to publish your data architectures than to actually architect them, you are doing it wrong.
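As one illustration of the "generate, don’t hand-write" idea behind items 2, 6, and 8, here is a minimal sketch that builds a bare-bones HTML data dictionary straight from a database’s own catalog. Your modeling tool almost certainly ships its own report generator or scripting API for this, so treat the SQLite connection, file names, and column choices below as illustrative assumptions chosen only to keep the sketch self-contained and runnable.

```python
import sqlite3  # stand-in driver; swap in pyodbc, psycopg2, etc. for your platform
import html

# Assumption: a local SQLite file, used here only so the sketch runs anywhere.
# On SQL Server or PostgreSQL you would query INFORMATION_SCHEMA.COLUMNS instead.
conn = sqlite3.connect("example.db")

tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]

rows_html = []
for table in tables:
    # PRAGMA table_info returns (cid, name, type, notnull, default, pk) per column.
    for cid, name, col_type, notnull, default, pk in conn.execute(
            f"PRAGMA table_info({table})"):
        rows_html.append(
            "<tr><td>{}</td><td>{}</td><td>{}</td><td>{}</td></tr>".format(
                html.escape(table), html.escape(name),
                html.escape(col_type or ""), "NO" if notnull else "YES"))

page = (
    "<html><body><h1>Data Dictionary</h1>"
    "<table border='1'>"
    "<tr><th>Table</th><th>Column</th><th>Type</th><th>Nullable</th></tr>"
    + "".join(rows_html) + "</table></body></html>")

with open("data_dictionary.html", "w", encoding="utf-8") as out:
    out.write(page)
print(f"Wrote data_dictionary.html covering {len(tables)} tables.")
```

Even this crude version is searchable, stays in sync with the catalog it was generated from, and takes seconds to regenerate after a model change, which is the whole point of generating rather than hand-writing.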

 

While data modeling standards have stayed relatively stable over the last 10-20 years, our methods and tools haven’t.  If you are still modeling like it’s 1980, you stopped delivering value sometime around 1999.

7 Comments

  • […] This post was mentioned on Twitter by Gary Bowers, Carlos Cunha. Carlos Cunha said: RT @datachick: 1980 Called and it Wants Its Deliverables Back http://bit.ly/fiXcLH […]

  • Karen,

    I agree with everything here – with a couple of reservations/qualifications. I should say I have some practical experience in doing what you are suggesting and have had moderate success.
    The important thing to remember is that the sorts of modelling tools we are discussing are primarily intended for technical types: architects, developers, analysts. Most of your business owners have no interest in, and little aptitude for, such tools. In addition, many of them will not understand the modelling conventions or languages you are using (e.g. most marketers don’t understand UML). So if you decide to use the tool as your primary “documentation” you should make an explicit decision about how you address these non-tool using/literate audiences. This may be as simple as creating high-level non-technical diagrams in the tool for exactly these audiences and exporting them into a non-technical document. Or it may require more labour-intensive methods such as handcrafting special diagrams for those audiences depending on how important they are. Aligning and communicating models to business stakeholders and their concerns can be the difference in your project succeeding or failing, so the importance of this shouldn’t be underestimated.
    Another key success factor is getting all of the technical staff access to the tool/models. There is little to be gained from modelling everything if your key audiences (e.g. developers) don’t have access to the tool. You will still have to create monolithic documents for them, which they won’t want to read.
    I am a real fan of the approach being suggested here – but in my roles, addressing those two concerns is key.

    Doug

  • Doug, all great points.

    For business users, I do think prose can be beneficial. The good thing, though, is that this level of documentation generally does not change with every data type change, additional column, or added index. A good overview of key business features of the model along with really great definitions and other materials can go a long way to ensure that they understand what is modelled and how to review it.

    I’m with you on having access to the tools. That’s why I prefer tools that offer concurrent licensing plans. Anyone in the organization can use the tools if they want to. I have some business users who use our modeling tools.

  • Nice list. Based on the Enterprise Architecture tools I reviewed last year, EAs understand this and the vendors are trying to support them.  Then there are the BAs and Solution Architects who are stuck in the mire of Visio-land. I’d say we DAs are somewhere in the middle, both in the DA tool support for quality artifacts easily published and in the ability to use them.

    • Most of the Data Architects I’ve worked with are still in the “write 100 pages for each release, annotate the data model outside the tool, and only do releases a couple of times a year” mode. That’s what drove me to write this post.

  • […] of your relationship, are you still doing all that now? Are you doing all this in a modern way, not just the way you did it in 1980? Do you just talk a good game, but fail when it comes to actively showing it […]

  • […] is sounding like old men yelling at the clouds when we insist on working and talking like it is 1980 all over again.  I do iterative data modeling. I’m agile. I know it’s more work for me. I’d love to […]
