Making The Hard Decisions On A Project – Lessons From NASA

May 2, 2011   //   by Rob Drysdale   //   Blog, Fun, Professional Development, Social Networking, Space, Travel  //  6 Comments
Image: Space Shuttle Endeavour STS-134 (201104290011HQ), by nasa hq photo via Flickr

A Right Turn Instead Of A Left Turn

Some time ago, Karen and I put our names in to attend the #NASATweetup scheduled for the last launch of Space Shuttle Endeavour (STS-134). Karen was chosen and went down last week and had a fabulous experience, but with less than three hours to go, the launch was scrubbed. That morning the team had already worked through a problem with a regulator and made up the time lost to a storm the previous day, and everything looked good for a launch. I was watching the tweets, and on NASA TV I saw the astronauts in the Astro Van heading to the launch pad when it turned right to go back instead of left, and we found out the launch was scrubbed. As of right now, a new launch date has not been set while they work on the problem and determine the next eligible launch date.

But We’re Going To Disappoint All These People

The launch delay got me thinking about how decisions like that get made, especially so close to the deadline, and how we could apply that thinking to our own projects. Think about it: the President was on his way, there were numerous dignitaries, 150 #NASATweetup attendees, and an estimated 700,000 others there to watch this historic launch, the last flight of Endeavour. Can you imagine being the one who has to say “not today”? Have you ever been on a project where the executives are saying, “Let’s just go ahead and implement it and we’ll fix it later”?

Your Decision Making Process Is Key And Must Be In Writing

While most of us don’t deal with the same risk factors NASA does, we still have to deal with problems and risk, and how we deal with them is key. As Karen detailed in her post #NASATweetup – It’s a GO! Readiness Reviews and Your Projects, this all works when you have everything documented beforehand and a formal process to follow. In essence, you have algorithms and decision trees that ensure you make the right choice and don’t let human emotion and behaviour get in the way. Don’t get me wrong: this was not an immediate decision, and I’m sure it was not an easy one. But if you have your options, decision trees, policies and procedures mapped out ahead of time, then the decision is based on those written policies and not subject to human emotion.

In the announcement of the delay, Shuttle Launch Director Mike Leinbach stated:

Today, the orbiter is not ready to fly…we will not fly before we’re ready.

This was not a decision taken lightly; it came only after thoroughly evaluating the problem and determining whether it could be fixed before launch or was something more serious. But with so little time left before launch, they had to make a firm decision, so they did. In my mind, it takes a lot of integrity and strength to stand up and say they can’t launch.

WWND

So the next time you have a problem on one of your projects, think about this: WWND – What Would NASA Do? Better yet, when you start a project, write down all the possible scenarios, risks and decisions and have a formal process so you can follow it when you need to.
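
To make that concrete, here is a minimal sketch (in Python) of what such a written go/no-go checklist might look like if you encoded it instead of leaving it in a binder. The criteria, field names and thresholds below are invented for illustration; they are not NASA’s launch commit criteria or anyone’s real release rules.

# A minimal go/no-go checklist sketch. All criteria and status fields
# below are hypothetical examples, written down before decision day.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Criterion:
    description: str                # the rule, stated in plain language
    check: Callable[[dict], bool]   # how to evaluate it against project status

# Agreed and documented before the project starts, not in the heat of the moment.
CRITERIA: List[Criterion] = [
    Criterion("All regression tests pass", lambda s: s["failed_tests"] == 0),
    Criterion("No open severity-1 defects", lambda s: s["sev1_defects"] == 0),
    Criterion("Data migration dry run verified", lambda s: s["migration_verified"]),
    Criterion("Rollback plan signed off", lambda s: s["rollback_signed_off"]),
]

def go_no_go(status: dict) -> bool:
    """Return True only if every written criterion is met."""
    failures = [c.description for c in CRITERIA if not c.check(status)]
    for description in failures:
        print(f"NO-GO: {description}")
    return not failures

if __name__ == "__main__":
    status = {  # example status report on decision day
        "failed_tests": 0,
        "sev1_defects": 1,
        "migration_verified": True,
        "rollback_signed_off": True,
    }
    print("GO" if go_no_go(status) else "Not today.")

The code itself isn’t the point. The point is that the criteria are agreed on and written down before decision day, so the answer follows from the checklist rather than from whoever happens to be in the room.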


6 Comments

  • Maybe WWND was based more on bitter experience than anything else – well-documented failures of project management contributed to some of the more tragic episodes in NASA’s history. Maybe you need to change the acronym to WWNDN with the final N standing for Now.

    Peter

    • Peter,
      Every organization, and even every profession, needs to learn from its failures and improve its processes and procedures. That’s why you see improvements in safety legislation, engineering standards, and so on. I agree that there have been failures in the past, but NASA still has a lot more rigor in its procedures than most organizations do, and I think it’s important to highlight that because we can learn from what they do.

      • Hi Rob,

        I agree with you – but I’m not sure that an organisation the size of NASA, carrying out projects of the complexity that it does, can ever be an exemplar for the more quotidian projects that most readers would be involved in.

        Maybe you have done so already, but I would encourage you to read Richard Feynman’s appendix to the Rogers Commission report on the Challenger disaster. One of the things it focused on, common to our world, was the manner in which software was developed and tested. Another was the general approach to estimating and aggregating risks of failure. There is some salutary reading in there for any IT professional, even those not involved in life-and-death situations, IMO.

        All the best

        Peter

        • Yeah, I’d love to see the look on my CIO’s face if I said she had to start implementing NASA procedures to move forward. I’m currently sitting in Florida, waiting for the cause of the scrubbed first launch attempt to be fixed. From an observer’s point of view, I just want to see them light that candle. I feel so much like an end user who has no idea why this is taking so long, why they can’t just run out to Radio Shack and get a new control module power unit and a heater and put everything into production…er…outer space.

          But I think Rob’s excellent point is that we often forget, in the heat of the moment, just as we all think we are ready for production, that we should outline some Go/No Go terms so that our emotional attachment to the release doesn’t overrule the logical arguments about why putting something into production prematurely might cause harm to others.

          Ultimately, our duty as professionals is to mitigate harm, even if it means professional or personal embarrassment to team members. Having some formal procedures is a good thing. Having them ahead of time, before the excitement of a launch, is even better.

          Yes, NASA changed their decisions based on some catastrophic failures that most of us in IT don’t have to face. But we still manage to put things into production that we knew were faulty, that we knew would cause harm. That’s the unacceptable part.

          I’m going to head off and check the weather predictions and the NASA.gov website for the 10 millionth time today…and I might just head over to Radio Shack to see if they have any spare power units hanging around.

  • It’s also important to remember that the decision procedures don’t need to be binders and binders of documentation. They can be as simple as a few guiding principles: all tests must pass, data integrity is never compromised, we don’t deploy anything to production that hasn’t been tested, and so on.

    • Hi John,

      I think that the idea of simple guiding principles is a sound one, but they also need to relate to something concrete and not simply be platitudinous. To pick on one thing, what does “data integrity is never compromised” mean in practice? It is a goal that most people would agree with, but what would people do differently on a day-to-day basis?

      It is a bit like the corporate slogans that are bandied about. If there is not a connection between the macro and micro levels then the people actually carrying out the work of an organisation are likely to simply shrug their shoulders and get on with whatever they were doing before.

      All the best

      Peter
