Not getting that promotion and handling failure

We talk a lot about the need to fail, and there are lots of great nuggets of wisdom like “A person who never made a mistake never tried anything new.” and “Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.”  But doesn’t that all sound like a bunch of hooey when failure visits you personally?

The best example of this phenomenon is when one doesn’t get a promotion.  As Amy Gallo puts it in her HBR blog post “Didn’t Get That Promotion?”:

Getting passed over for a promotion can be disheartening and even humiliating. Whether you thought you deserved the job or were promised it, no one likes hearing that they didn’t meet the mark.

It is a rejection that’s more painful than any save for unrequited or lost love.  One can brush off a failed project or presentation fairly easily… at least compared to hearing that one didn’t quite cut it. 

Gallo and her experts hit on familiar points up front: act (but don’t react), get some outside perspective, no whingeing.  However, I found the last two points the most valuable in my experience.  I would go even further: reframing the experience and reenergizing one’s network are essential to making the obvious work.  One can’t exercise patience, get “outside-in” feedback, and then take appropriate action without taking those two steps first.

When you choose not to decide, you still have made a choice

That snippet of Rush’s “Freewill” ran through my head after I read Michael Krigsman’s post on developers’ perspectives on IT failure.  What caused my earworm?  It was this section, dealing with IT priorities:

The survey breaks out IT quality priorities by role in the organization, and yields an interesting gap between the project managers and business stakeholders. As the following table shows, project managers prioritize budget and schedule while people in the business seek the best solution. 

More interesting to me were the portfolio and strategy implications of the answers.

  1. It didn’t seem like the respondents understood that these options would require trade-offs: every option received over 50 percent support. 
  2. Where are the resource trade-offs?  Resource constraints are much “harder” than cash budgets in my experience.
  3. I’m not sure how well thought through the survey was.  Consider “Shipping when the system is ready is more important than shipping on schedule” versus “Delivering high quality is more important than delivering on time and on budget.”  These are almost the same trade-off, even if the “high quality” question slips in “budget.”
  4. Left unexplored are the trade-offs within the portfolio at large.  It is great to say you’re willing to trade time and resources for quality or ROI.  However, that’s a single-point analysis that leaves out the opportunities foregone elsewhere in the portfolio. 

One of the reasons IT projects are under such time and resource pressure is that there’s a domino effect: if one project slips, the rest of the portfolio slips, both because you can’t simply plug in new resources and because of technical dependencies.  And what else slips?  The benefits from those future projects.
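
To make the domino effect concrete, here is a toy sketch in Python (the four-project portfolio, its durations, and its dependencies are all invented for illustration) that propagates a single project’s slip through the portfolio’s technical dependencies:

```python
from collections import defaultdict

# Hypothetical portfolio: planned finish (in weeks) for each project,
# plus the projects that cannot start until it delivers.
planned_finish = {"Core ERP": 10, "Warehouse": 16, "e-Commerce": 20, "Analytics": 24}
successors = {
    "Core ERP": ["Warehouse", "e-Commerce"],
    "Warehouse": ["Analytics"],
    "e-Commerce": ["Analytics"],
    "Analytics": [],
}

def propagate_slip(slipped, weeks):
    """Push one project's slip through its technical dependencies.

    Assumes each successor inherits the worst upstream slip in full
    (no slack, no ability to plug in new resources mid-stream).
    """
    delay = defaultdict(int)
    delay[slipped] = weeks
    stack = [slipped]
    while stack:
        project = stack.pop()
        for nxt in successors[project]:
            if delay[project] > delay[nxt]:
                delay[nxt] = delay[project]
                stack.append(nxt)
    return {p: planned_finish[p] + delay[p] for p in planned_finish}

# A 4-week slip in the first project pushes every dependent project,
# and the benefits attached to each, out by 4 weeks.
print(propagate_slip("Core ERP", 4))
# {'Core ERP': 14, 'Warehouse': 20, 'e-Commerce': 24, 'Analytics': 28}
```

The numbers are toy numbers, but the shape is real: every week of slip up front is a week of foregone benefits at the back of the portfolio.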

Three comments on Michael Krigsman’s “The devalued future of IT in a marketing world”

I had three quick points re: Michael Krigsman’s provocatively titled post “The Devalued Future of IT in a Marketing World”.  This situation is as much opportunity as it is threat, so be proactive when addressing it:

  1. Embrace your firm’s ability to shift funds from SG&A to revenue-generating functions.  That was the idea behind more efficient and effective IT, right?
  2. Ensure that marketing colleagues are choosing their spend, not just deciding it.  It is great that they want to decide spend, but not “all they can eat.”  In other words, drive portfolio prioritization of alternatives (don’t accept “all of the above”).
  3. Drive benefits measurement and realization.  This is probably the biggest gap in PMOs across functions.  A good first step is to insist on explicit value measurement plans in project plans.

Become Focused by Failure

Great WSJ article by Prof. Ken Bain that takes the Cub Scout motto of “Do Your Best” to the next level. 

It also hits home personally.  I was often praised for being “smart”, which is like being congratulated for being “lucky.”  The implication is that I didn’t have much to do with it.  That approach wasn’t too “smart,” it turns out.  As Prof. Bain notes, over the past 25 years or so social scientists have developed:

key insights into how successful people overcome their unsuccessful moments—and they have found that attitudes toward learning play a large role from a young age.

The most important attitude is a “growth mind-set”: the idea that knowledge comes from trying, learning, and, yes, failing at new things.

Prof. Bain also references research showing that our brains make more and stronger connections after exposure to novelty.  While he presents the research obliquely — as part of a psychology experiment about priming learning attitudes — my understanding is that there is real neuroscience to support this insight.

I wouldn’t rely solely on the priming approach.  If you believe in priming, whatever you do, don’t read this Nature article by Ed Yong on the problems with social science experimental design!

Survey: Best and Worst Project Names

You’ve asked for it…and here it is:

Let everyone know your best and worst project names.

Quote of the Day: Umberto Eco

Would it, too, go according to plan, or would it go according to The Plan, which now was no longer mine?

– Umberto Eco, Foucault’s Pendulum

Why personal behaviors impact testing

My last post used testing to illustrate the consequences of questionable personal behavior in a business situation.  Quality is susceptible to personal and professional gaps that interact to amplify each other’s effects.

Why is that so?  Let’s start with the examples I used.  Recall that business process owners simply copied the unit tests of the developers to serve as user acceptance tests.   I characterized this approach as a failure of accountability: the process owners didn’t believe it was their “real” job, even though they knew they would have to certify the system was fit for use.  Less charitably, one could have called it laziness.  More charitably, one could have called it efficiency. 

And indeed, an appeal to efficiency underlay the rationalizations of these owners: “Why should I create a new test when the developer — who knows the system better than I do — has already created one?”  How would you answer this question?  As a leader, do you know the snares such testing practices lay in your path?  Off the top…

  1. Perpetuating confirmation bias:  By the time someone presents work product for formal, published testing, he or she has strong incentives to conduct testing that proves the work product is complete.  After all, who doesn’t want his work to be accepted or her beliefs confirmed?  This issue is well known in the research field, so one should expect that even the most diligent developer will tend to select tests that confirm the work is complete.  An example is what we called on one project the “magic material number”, a material that was used by all supply chain testers to confirm their unit and integration tests.  And the process always worked… until we tried another part number.  (A minimal sketch after this list makes the trap concrete.)
  2. Misunderstanding replicability:  “Leveraging” test scripts can be made to sound like one is replicating the developer’s result.  I have had testers justify this shortcut by appealing to the concept of replicability.  Replicability is an important part of the scientific process.  However, it is a part that is often misunderstood or misapplied.  In the case of copied test plans, the error is simple.  One is indeed following the test procedure exactly — a good thing — but applying it to the same test subject (e.g., same part, same distribution center, etc.).  This technique means that the test is only applied against what may be “a convenient subset” of the data.
  3. Impeding falsifiability:  Tests that can’t fail sound like a good thing, but aren’t.  In fact, the truth of a theory — in this case, that the process as configured and coded conforms to requirements — is determined by its “ability to withstand rigorous falsification tests” (Uncontrolled, Jim Manzi, p. 18).  Recall the problem with engaging users in certain functions?  These users’ ability to falsify tests makes their disengagement a risk to projects.  Strong process experts, especially those who are not members of the project team, are often highly motivated to “shake down” the system.  Even weaker process players can find gaps when encouraged to “do their jobs” in the new technology using whatever parts, customers, vendors, etc. they see fit.
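
Here is the promised sketch of the “magic material number” trap, written as a pair of pytest-style tests.  The check_availability function and the part numbers are invented stand-ins for the configured process; the point is the testing pattern, not the code itself:

```python
import pytest

# Invented stand-in for the configured supply chain process: it happens
# to work only for the one material everyone used during unit testing.
def check_availability(material):
    return material == "MAT-1000"  # the "magic material number"

def test_copied_unit_test():
    # The "leveraged" script: same procedure, same subject, always green.
    assert check_availability("MAT-1000")

# Independent UAT varies the test subject, not just the procedure.
@pytest.mark.parametrize("material", ["MAT-1000", "MAT-2000", "MAT-3000"])
def test_representative_materials(material):
    # Fails for MAT-2000 and MAT-3000, exposing the untested gap.
    assert check_availability(material)
```

The first test “replicates” forever; the second fails twice.  That is the difference between repeating a procedure and testing against representative data.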

I hope this example shows how a personal failing damages one’s professional perspective.  No one in this example was ignorant of the scientific method; in fact, several had advanced hard science or engineering degrees.  Nonetheless, disagreement over who owned verifying fitness for use led to rationalizations of fundamental breaches in testing.

How personal shortcomings undermine recovery (Mini Case Part 2)

Unfortunately, our quality control processes didn’t fare so well.  We did get sufficient testing resources for the first rollout, but a couple of process owners delivered them only under protest.  You see, they believed that testing of their processes — even user acceptance testing (UAT) — was not their job.  To put it another way, they did not hold themselves accountable for ensuring that the technical solution conformed to their processes’ requirements.

This personal shortcoming — an unwillingness to be accountable — triggered a chain of events that put the program right back in a hole:

  • Because it wasn’t their “real” job, some process owners did not create their own user acceptance tests.  They simply copied the tests the developers used for unit or integration testing.  Therefore, UAT did not provide independent verification of the system’s fitness for use; it simply confirmed the results of the first test.
  • This approach also allowed process gaps to persist.  Missing functionality that would have been caught by test plans ensuring process coverage went unnoticed (a simple coverage check like the sketch after this list would have flagged it).
  • Resources for testing were provided only grudgingly and were often second-rate.  They often did not know enough about the system and the process to run the scripts, never mind verify the solution or notice process gaps.
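
As promised above, here is a minimal sketch of the kind of coverage check that would have caught the problem.  The requirement and test IDs are invented for illustration: trace each process requirement to the tests that claim to cover it, and flag anything untested (or tested only by developers) before sign-off.

```python
# Invented traceability data: process requirements mapped to the tests
# that claim to cover them. Copied unit tests leave obvious holes.
requirements = {
    "REQ-01 post goods receipt":         {"UT-07", "UAT-03"},
    "REQ-02 three-way invoice match":    {"UT-12"},   # no independent UAT
    "REQ-03 document approval workflow": set(),       # never tested at all
}

def uncovered(reqs):
    """Return requirements with no covering test of any kind."""
    return [req for req, tests in reqs.items() if not tests]

def missing_uat(reqs):
    """Return requirements covered only by developer (UT-*) tests."""
    return [req for req, tests in reqs.items()
            if tests and all(t.startswith("UT-") for t in tests)]

print("No coverage:", uncovered(requirements))
print("No independent UAT:", missing_uat(requirements))
```

Crude as it is, this is the discipline that separates a stack of signed acceptance documents from a process that is actually fit for use.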

To say it was a challenging cutover and start-up would be an understatement.  Yawning process gaps remained open because they had never been tested.  For sure, we had a stack of deliverable acceptance documents, all formally signed off.  What we didn’t have was a process that was enabled and fit for use.  One example: 

  • Documents remained stuck in limbo for weeks after go-live, all because a key approval workflow scenario had not even been developed.
  • And because it hadn’t been developed, the developers had never created and executed a test script for it.
  • And because the process owners were so focused on doing only their “real” job, they missed a gap that forced us to do business by hand for nearly two months.

Quote of the Day: P.G. Wodehouse

Of all sad words of tongue or pen, the saddest are these, ‘It might have been.’

– P.G. Wodehouse, Leave it to Psmith

Can personal shortcomings undermine recovery? (Mini Case Part 1)

I concede that projects can recover — at least for a time — without sustainable personal and professional behaviors in place. Heroic measures to catch up on accumulated technical debt, more testers to ensure all tests are completed, new resources that specialize in turnarounds can and do work… again, for a time.

But what happens when the “hero” team needs to take a week or three of down time? What happens when those additional testers go back to their “real” jobs? What happens when the turnaround team leaves? What happens is that the project risks a slide back into the abyss.

Even one gap can be problematic.  For example, I was on a troubled transformation program that needed all three of these approaches: extraordinary effort, additional testers, and experienced recovery resources.  And indeed, the heroic measures did create deliverables that were fit for use, the technical debt was repaid, and the development team was staffed up to support the remainder of the program.  The turnaround specialists put a set of program governance practices in place; even better, the program office continued to execute them effectively.  Quality assurance and testing were other matters entirely….
