Conclusions — Avoiding the Experience Trap

NOTE: Finally, I’ve gotten to the 12th (and last) post of a series on an HBR article by Prof. Kishore Sengupta, et al on The Experience Trap.  Below are the summarized conclusions with my comments:

  1. Learning on the job simply won’t work in any but the most basic environments: PMs do not have the time or perspective to learn from their mistakes when dealing with chaos.  On-the-job training is unrealistic outside of simple project management or team leadership roles.
  2. Managers can continue learning only if they’re given some formal training and decision support: This conclusion is especially important for PMOs, which must respond by adjusting training approaches from canned classroom lectures to interactive workshops and simulations specifically tailored to the challenges the PM community will face.
  3. Spending training dollars most heavily on entry-level hires should be challenged: This assumption needs revisiting given the evolving complexity of projects; regardless, more attention needs to be paid to senior managers.
  4. Importing project-planning tools wholesale from other companies or industries is risky: Plug-and-play fixes to complex problems just aren’t available — and worse, imported tools are often deceptively easy to implement but hard to make work.
  5. Senior recruits cannot be expected to hit the ground running: Unless recruits come from exactly the same environment — pretty rare if not impossible — it will be hard for them to jump right in without some substantive exposure via advanced training/on-boarding.

Develop project simulators — Avoiding the Experience Trap

NOTE: 11th post of a series on an HBR article by Prof. Kishore Sengupta, et al on The Experience Trap

I’ve gotten stacked up with life and work, so it’s time to close out this topic.  We’ve started to look closely at how to put together more effective training for seasoned managers.

It is, however, possible to construct artificial environments that can be managed so that complexity does not overwhelm learning….  Appropriately constructed “flight simulators” can play a similar role in project management, as virtual worlds for training and immersion.

The flight simulator metaphor gets beyond our current approach to complex learning: role play.  Role plays turn out to be too artificial; there is little pressure to perform because the scenarios are often contrived or hokey.  It’s hard to learn when you’re suppressing smirks and giggles.

The most effective approach I’ve seen to role play was at McDonald’s, where senior managers would step in and play the role of the aggrieved customer.  It put pressure on the participants, reinforced the seriousness of the exercise, and demonstrated the commitment of senior leaders to the training.

The article makes a final point about simulations: that they give one the capability to move managers around the organization with more confidence.

Since knowledge has a situation… or company-specific aspect, each time managers change companies or work contexts they need to learn about…which factors drive productivity or quality.

Set goals for behavior — Avoiding the Experience Trap

NOTE: 10th post of a series on an HBR article by Prof. Kishore Sengupta, et al on The Experience Trap.

The paper outlines some unexpected consequences of the way in which we typically estimate, especially on goals (I’ve quoted liberally from the paper, so I’ve split this post).  I never seriously considered the impact on PM behavior of the interrelationship between scope changes and original estimates:

Another weakness of estimation tools is that their projections are usually based on product size (for example, how many lines of code or function points), which is extremely difficult to predict in the planning stages. Moreover, product deliverables can change over time in ways that are difficult to anticipate. Thus, initial estimates don’t make good goals.
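A toy calculation shows why.  The power-law form below echoes COCOMO-style size models; the coefficients and sizes are illustrative assumptions on my part, not calibrated values:

```python
# Minimal sketch: effort as a power-law function of predicted size.
# Coefficients are illustrative assumptions, not calibrated values.

def effort_person_months(kloc, a=2.8, b=1.05):
    """Estimate effort (person-months) from size in thousands of lines."""
    return a * kloc ** b

planned_size, delivered_size = 50, 80   # size is a guess at planning time
planned = effort_person_months(planned_size)
actual = effort_person_months(delivered_size)
print(f"planned {planned:.0f} PM, actual {actual:.0f} PM "
      f"({actual / planned - 1:.0%} over)")   # a 60% size miss -> ~64% over
```

If the size guess is off — and it usually is — the target derived from it is off by at least as much, which is exactly why initial estimates make poor goals.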

And of course, what are our KPIs?  The baseline cost and schedule targets generated by the estimate that became the SOW…

[W]hen managers know they will be measured against targets based on unreliable estimates, they seek additional slack by opting for “safe” estimates and then proceed to squander the slack through make-work and by embellishing the project with unnecessary features. There is thus a strong case to be made for rethinking the way goals are set.

So far, this is pretty much expected.  What comes next are the money quotes:

Added new link collection “Complexity Set”

FYI, I’ve added a new page — Complexity Set — that will feature a set of links dealing with the emerging topic of project complexity.  I expect to add quite a few links as we go along.

The first iteration of the page collects my posts to date on “The Experience Trap,” the article by Prof. Kishore Sengupta, et al. in the Harvard Business Review.

Calibrate estimation tools — Avoiding the Experience Trap

NOTE: 9th post of a series on an HBR article by Prof. Kishore Sengupta, et al on The Experience Trap.

Some interesting observations on the use of PM tools, with a specific focus on estimation.  It is a basic tenet of multivariate analysis to identify the variables most correlated with the desired answer — which implies that one must develop an environment-specific model (e.g., by industry, solution, team skill).  But…

Many organizations, however, simply import project management forecasting tools from other contexts and other companies. One software company we studied had just adopted a tool from an aerospace company.

My experience is that the model issues are relatively easy to fix, but that data availability and reliability are much stickier. 

Organizations compound estimation problems by basing their model assumptions on data from past projects without scrubbing the data first (that is, without accounting for any unusual circumstances encountered by those projects).

The authors identify a “best process” that restores some credibility to the estimates, including

[normalizing] outcomes in a three-step process that identifies unusual events, roughly calculates their impact, and then deducts the impact from the results. The scrubbed values then go into the estimation models.
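To make that concrete, here is a minimal sketch of the three-step scrubbing in Python — the record layout and pre-tagged events are my own illustrative assumptions, not the authors’ implementation:

```python
# Sketch of the three-step scrubbing process: identify unusual events,
# roughly calculate their impact, deduct it from the recorded outcome.
# The record layout is an assumption for illustration.

def scrub_project_history(projects):
    """Normalize past-project outcomes before they feed an estimation model."""
    scrubbed = []
    for p in projects:
        # Step 1: identify unusual events (assumed pre-tagged by reviewers).
        events = p.get("unusual_events", [])
        # Step 2: roughly calculate their combined impact.
        impact = sum(e["impact_pm"] for e in events)
        # Step 3: deduct the impact; only scrubbed values go into the model.
        scrubbed.append({"name": p["name"], "effort_pm": p["effort_pm"] - impact})
    return scrubbed

history = [
    {"name": "billing-v2", "effort_pm": 48.0,
     "unusual_events": [{"label": "mid-project re-platform", "impact_pm": 9.0}]},
    {"name": "portal", "effort_pm": 30.0, "unusual_events": []},
]
print(scrub_project_history(history))
```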

Reducing SAP implementation time/cost

A case study (here) and press release (here) about a quick, clean SAP Business All-In-One implementation at TomoTherapy.  What stands out about what worked?

  • Using SAP Best Practices as the baseline.  It makes it very easy when one can leverage a fully documented and functional prototype.
  • Using a partner willing to leverage SAP Best Practices.  It is instructive that two partners wanted to build from the ground up.  itelligence is an SAP Best Practices partner (see list here) — the case study is a little misleading about their role in SAP Best Practices — so they were able to bring speed to the table.
  • Splitting the Blueprint SOW from the Execution SOW.  We are doing more and more T&M design SOWs, which is a great way to get “everything on the table,” as TomoTherapy’s IT director stated.

An aside: it is funny that while the implementation took only 16 weeks, it took the PR machine 14 months to get the word out.

Hiring/Staffing Tools and Guidelines — Avoiding the Experience Trap

NOTE: 8th post of a series on an HBR article by Prof. Kishore Sengupta, et al on The Experience Trap.

While this point is more general — recommending project-focused decision tools and guidelines — I think PMs need to focus on the specific example given: the impact of staffing decisions.

When a manager makes several hires, there is a hiring delay and an assimilation delay with each. Over time it becomes difficult for the manager to assess current and predict future team productivity, especially if the staff suffers attrition.

One of the most common mistakes I see — and one that I’ve made — is not accounting for leads and lags in hiring times.  It is much better to start filling the pipeline with appropriate candidates ASAP, rather than waiting to somehow “save budget” or “wait until the headcount is approved.”

But if the manager is provided with tools that can calculate the effects of additions and turnover for several periods, he will obtain a clearer picture of the expected cumulative impact on team productivity over the medium term.

I’m not sold on tools alone — how well do these tools actually work without the appropriate training, coaching, etc.?  As stated in earlier posts, PMs seemed to ignore this information even when it was provided.  A tool like this is a nice-to-have, but useless unless one knows how to use it.
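Still, to make the quoted idea concrete, here is a rough sketch of the kind of calculation such a tool performs — the hiring delay, assimilation ramp, and attrition rate are all illustrative assumptions on my part:

```python
# Sketch: effective team capacity per period with hiring and assimilation
# delays. All parameters are illustrative assumptions.

def project_capacity(periods, starting_team, hires_by_period,
                     hiring_delay=2, assimilation_delay=3, attrition_rate=0.02):
    """Return effective full-time-equivalent capacity for each period."""
    team = [{"start": 0, "seasoned": True} for _ in range(starting_team)]
    capacity = []
    for t in range(periods):
        # A hire requested in period p only starts in period p + hiring_delay.
        for _ in range(hires_by_period.get(t - hiring_delay, 0)):
            team.append({"start": t, "seasoned": False})
        # Attrition trims a small fraction of the team each period.
        team = team[:max(1, round(len(team) * (1 - attrition_rate)))]
        # New hires contribute half capacity until they assimilate.
        fte = sum(1.0 if m["seasoned"] or t - m["start"] >= assimilation_delay
                  else 0.5 for m in team)
        capacity.append(fte)
    return capacity

# Four hires requested in period 0 only pay off fully around period 5.
print(project_capacity(10, starting_team=6, hires_by_period={0: 4}))
```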

Provide Cognitive Feedback — Avoiding the Experience Trap

NOTE: 7th post of a series on an HBR article by Prof. Kishore Sengupta, et al on The Experience Trap

This starts the transition to answers — how to avoid the distorting effects of past experience on PMs who manage complex projects.  The first suggestion is to add reporting that focuses on insights into performance drivers.

Project environments are rich in information, particularly feedback on outcome, which is delivered through status reports. But in environments where cause-and-effect relationships are ambiguous… managers need insights into the relationships among important variables in the project environment, particularly as the project evolves.

The example in the article shows the relationship between the level of quality assurance and the rate at which defects are caught in the first 80 days of a project.  

[The project started] with a relatively low level of quality assurance and has increased it over time. The rate at which defects are caught increases correspondingly, but with a lag, and disproportionately because more effort is now devoted to detection. The rate then decreases, signaling that most of the defects are being detected, and the manager can now maintain quality assurance at this level or even reduce it.

Such approaches have demonstrated results — the article cites a >50% reduction in defects on large projects.  This highlights the value proposition of a proper PMIS and associated analytics, which too often gets honored only in the breach.
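For what it’s worth, a toy model makes the dynamic in the example easier to see — the lag, injection rate, and detection share below are my own assumptions, not the article’s data:

```python
# Toy model: defect catch rate responding to QA effort with a lag.
# Parameters are illustrative assumptions, not the article's data.

def catch_rate_series(qa_effort, lag=2, detection_share=0.3):
    """Defects caught per period, driven by QA effort applied `lag` periods ago."""
    backlog, caught = 40.0, []
    for t in range(len(qa_effort)):
        backlog += 5.0                        # new defects injected each period
        effort = qa_effort[max(0, t - lag)]   # detection responds with a lag
        found = backlog * detection_share * effort
        backlog -= found
        caught.append(round(found, 1))
    return caught

# QA effort ramps up early in the project; the catch rate rises with a
# lag, peaks, then falls as the backlog drains -- the signal to hold or
# reduce the QA level.
print(catch_rate_series([0.2, 0.4, 0.6, 0.8, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]))
```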

Performance Implications of the Experience Trap

NOTE: 6th post of a series on an HBR article by Prof. Kishore Sengupta, et al on The Experience Trap.  To summarize the conclusions based on their experiments:

  • Project managers find it difficult to move beyond the simple mental models based on the simple projects they ran.
  • They ignore complications or use simple heuristics that work only in noncomplex situations.
  • They don’t improve their models based on their complex project experience.

What does this mean for your PMO if it continues to rely on on-the-job training to develop PMs? (emphasis mine)

[I]mpressive backgrounds…have little bearing on their ability to manage complex projects. Many companies routinely find that replacing one veteran project manager with another has no impact.  Despite their experience with complex projects, both managers do not meaningfully change the mental models…formed in simpler and usually similar contexts.

If it makes little difference whom you put in charge, then managers will end up ascribing responsibility for failures not to their own decisions but to some other factor: overambitious planning or the demands of the finance department (or—as is often the case—a salesperson promising too much to the client and then setting unrealistic goals for the project). When that kind of belief takes hold, managers start to look in the wrong places for solutions to their performance problems.

Static Estimation Practices — The Experience Trap

NOTE: 4th post of a series on an HBR article by Prof. Kishore Sengupta, et al on The Experience Trap.

Re-planning the iron triangle — resources, time, and scope — is essential as a project progresses.  One of the more surprising mistakes experienced project managers make is to accept the original estimate as given.

In software development, initial estimates for a project shape the trajectory of decisions that a manager makes over its life….  The trouble is that initial estimates usually turn out to be wrong.

One of the most disappointing outcomes of this portion of the study is that project managers simply opted for conservative estimates, regardless of the evidence.  This behavior reinforces a negative stereotype about our profession: that we are risk-averse in the extreme.

The idea was to give all subjects identical status reports, so we could compare how people’s productivity estimates evolved over time. Our hypothesis was that people’s productivity estimates would converge (people starting with low estimates would raise them over time and those with high estimates would lower them)….  So what happened? The managers’ productivity estimates did not converge over time. What’s more, there was a clear bias toward conservativeness: All their estimates drifted downward….  We suspect that this conservatism can be explained by managers’ attempts to game the system to get more resources.
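For contrast, here is a sketch of the convergence the researchers expected — a simple smoothing update with an assumed weight, not the study’s actual model:

```python
# Sketch of the expected convergence: each period, blend the prior
# productivity estimate with the rate implied by the status report.
# The smoothing weight is an assumption.

def updated_estimates(initial_estimate, reported_rates, weight=0.4):
    """Exponentially smooth an estimate toward observed productivity."""
    est, trail = initial_estimate, []
    for observed in reported_rates:
        est = (1 - weight) * est + weight * observed
        trail.append(round(est, 1))
    return trail

reports = [10.0] * 6   # identical status reports shown to every subject
print(updated_estimates(6.0, reports))   # low starters should drift up...
print(updated_estimates(14.0, reports))  # ...and high starters drift down
# In the experiment neither happened: all estimates drifted downward.
```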
