
1980 Called and it Wants Its Deliverables Back

Feb 13, 2011   //   by Karen Lopez   //   Blog, Data, Data Modeling  //  7 Comments


I’ve been doing this data modeling stuff for a really long time. So long that I just put "20+ years" on my slides…and I’m well past that 20. However, having done this for a long time does not qualify me as an expert. What qualifies me is that I know how to adapt my tools, methods and approaches as development tools and methods change. What worked for projects in 1986 doesn’t necessarily work now. Not only do we have radically more advanced features in our tools, we have many more platforms to support. We have a greater variety of development approaches. Our architectures are much more distributed and much more complex than they were in the nineties.

Back in the mid-eighties I worked in the US Defense consulting business. The somewhat serious joke was that consultants were paid $1 million for each inch of documentation they delivered. It didn’t matter what was in the models or whether they were correct; it only mattered how many reams of paper were delivered. That sort of "all documentation is great" mindset persisted well into 1999, long past its usefulness. The world has changed, and we data architects need to find a replacement for the publication equivalents of shoulder pads, thongs, leggings, skinny ties, and lace fingerless gloves.

Those differences mean that the deliverables we produce need to be radically different from what they were in 1999. Our team members are no longer going to open up a 175-page prose document to find out how to use and understand the data model or database. They don’t want all that great metadata trapped in a MS Word document or, worse, buried in an image. They can’t easily search those, and they can’t import that knowledge into their own models and tools.

As much as I don’t want this to be true, no one wants to read or use massive narratives any longer. Sure, if they are really, really stuck, a quick write-up might be helpful, but sometimes a quick video or screencast would be faster and easier to produce. If you are spending days or weeks writing big wordy documents, the sad truth is that there is a high likelihood that absolutely no one except your mother is going to read or appreciate it…and she isn’t actually going to read it.

I’ve significantly changed what I produce when I release a data model. I constantly monitor how these deliverables are used. I look at server logs or general usage reports to continually verify that the time I’m spending on data-model-related deliverables is adding value to the project and organization. The main way I gauge the usefulness of deliverables is by how urgently my team members start bugging me to get them up on the intranet where they are published.
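If your intranet sits behind a web server, even a tiny script can tell you which deliverables are actually being opened. Here is a minimal sketch, assuming an Apache-style access log; the log file name and the "/datamodels/" URL prefix are invented for illustration, so substitute whatever your site actually uses:

    import re
    from collections import Counter

    # Count intranet hits on data-model deliverables in an Apache-style
    # access log. LOG_FILE and DELIVERABLE_PREFIX are hypothetical --
    # point them at your own server log and publishing path.
    LOG_FILE = "access.log"
    DELIVERABLE_PREFIX = "/datamodels/"

    # The request line in a combined log looks like: "GET /path HTTP/1.1"
    request_re = re.compile(r'"[A-Z]+ (\S+) HTTP/')

    hits = Counter()
    with open(LOG_FILE, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = request_re.search(line)
            if match and match.group(1).startswith(DELIVERABLE_PREFIX):
                hits[match.group(1)] += 1

    # Deliverables nobody requests are candidates for cutting next release.
    for path, count in hits.most_common():
        print(f"{count:6d}  {path}")

A report like this takes minutes to set up and gives you actual evidence, rather than opinions, about which deliverables earn their production cost.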

Here are my top 10 recommendations for approaching your data model deliverables:

  1. Get formal, lab-based, hands-on training for staff, or use staff who are already trained in the tools (and the versions of the tools) they are using. You may be missing out on features that make publishing data models much easier than the methods you are currently using. I had a client who was struggling to support an elaborate custom-developed application that they didn’t really know how to use or maintain. It used a deprecated data model format to build an HTML-based report of the data model. Sound familiar? Almost all tools provide a feature to generate such reports in seconds.
  2. Move away from very large, manual documentation. Think in terms of publishing data and models, not documents. Prose documents are costly to produce and maintain. When they are not maintained, they do more harm than no documentation at all. They are difficult to search, share, and use. This is not how the vast majority of IT staff want to consume information. Team members want their data model data (metadata) in a format that is consumable, that can be used on a huge variety of platforms, and that is interactive, not locked only in a PDF.
  3. Know how long it takes to produce every deliverable. Having this information makes it easier for you and your team to prioritize each deliverable. I once had a project manager ask whether cutting back the automatically generated reports could save time in getting the data modeling completed. I could show her that the total time to put the documents on the intranet was only about 5 minutes. My document production data also helps other modelers estimate how long a release will take to produce.
  4. Stop exporting data for features that can be done right in the tool. Move data model content that is locked in MS Word documents into the models, or stop producing it. Exporting diagrams as images and marking them up with more images means all that mark-up is unsearchable. It also means that every change to the data model, even a trivial one, triggers a new requirement to recreate all those images. Modern tools have drawing and mark-up features built in. The cost/benefit of exporting and annotating outside the modeling tool means you’ll always be spending more than you "earn". You’re creating a data model deficit.
  5. Stop producing deliverables that require a complete manual re-write every time there is a new release. Unless, of course, these sorts of things are HIGHLY valued by your team and you have evidence that they are used. I’m betting that while people will say they love them, they don’t love them enough to be worth getting half as much data architecture work done.
  6. Focus on deliverables that are 100% generated from the tools.  Leave manually developed deliverables to short overviews and references to collaborative, user-driven content.   Wikis, knowledgebases, and FAQs are for capturing user-generated or user-initiated content.
  7. Focus on delivering documentation that can be used by implementers of the model. That would be Data Architects, DBAs, and Developers, probably in reverse order of priority. Yes, you want to ensure that there are good definitions and good material so that any data architect in your company can work with the model even after you’ve won the lottery, but the largest number of people who will work with the models are business users, developers, and DBAs. Think of them as your target audience.
  8. Automate the generation of those deliverables via features right in the tools – APIs, macros, etc. (see the sketch after this list). Challenge every deliverable that will cost more to produce than the total value it will deliver to your team members.
  9. Move supporting content that is locked in MS Word documents (naming standards, modeling standards and such) to a collaborative environment like a wiki or knowledgebase. Don’t delay releases just to deliver those items. These formal deliverables are important, but from a relative value point of view, they should not hold up delivering data models.
  10. Focus now on making a process that is more agile and less expensive to execute while still meeting the needs of the data model users. If it is taking you more time to publish your data architectures than to actually architect them, you are doing it wrong.
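To make items 6 and 8 concrete, here is a minimal sketch of a "100% generated" deliverable: an HTML data dictionary emitted straight from a database catalog. SQLite stands in here for whatever catalog or modeling-tool API you actually have (erwin, ER/Studio, PowerDesigner, or INFORMATION_SCHEMA on your RDBMS), and the database file and sample table are invented for illustration:

    import sqlite3
    from html import escape

    # Emit an HTML data dictionary straight from the database catalog
    # instead of hand-writing a Word document. "example.db" and the
    # Customer table are hypothetical; swap in your own source.
    conn = sqlite3.connect("example.db")
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS Customer (
            CustomerID INTEGER PRIMARY KEY,
            FullName   TEXT NOT NULL,
            CreatedOn  TEXT
        );
    """)

    parts = ["<h1>Data Dictionary</h1>"]
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
    ).fetchall()
    for (table,) in tables:
        parts.append(f"<h2>{escape(table)}</h2><table border='1'>")
        parts.append("<tr><th>Column</th><th>Type</th><th>Nullable</th><th>PK</th></tr>")
        # PRAGMA table_info returns: cid, name, type, notnull, default, pk
        for _, name, ctype, notnull, _, pk in conn.execute(f"PRAGMA table_info({table})"):
            parts.append(
                f"<tr><td>{escape(name)}</td><td>{escape(ctype)}</td>"
                f"<td>{'no' if notnull else 'yes'}</td><td>{'yes' if pk else ''}</td></tr>"
            )
        parts.append("</table>")

    with open("data_dictionary.html", "w", encoding="utf-8") as out:
        out.write("\n".join(parts))
    print("Wrote data_dictionary.html")

Because the report is regenerated from the catalog on every release, it is never stale, it costs seconds instead of days, and nobody has to remember to update a document by hand.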


While data modeling standards have stayed relatively stable over the last 10-20 years, our methods and tools haven’t.  If you are still modeling like it’s 1980, you stopped delivering value sometime around 1999.
