Better Leadership + Business Skills = Better Projects

What drives project success? Research has consistently shown that it’s having an effective project manager. Results from PM College’s latest research, “Project Manager Skills Benchmark 2015,” confirm this, showing that organizations with highly skilled project managers get significantly better project results.

This result is hardly a surprise, but the magnitude of the outperformance is. Organizations with project managers at high skill levels outperform those with project managers at low skill levels by almost 50%. In addition, high-performing organizations appear to emphasize skills beyond project management tools and techniques. High performers’ project managers excelled at leadership skills, especially displaying integrity and honesty, building relationships, and building trust and respect.

[Graph: Leaders and PMs differ on skill improvement priorities]

There is a lot more insight in the report, but let me highlight one key finding. As one might expect, project managers in all organizations need to improve across all areas of the talent triangle: leadership, business, and project management skills. Their skills are good to excellent in 15% of organizations and inadequate to fair in 30%.

However, senior leaders are far more likely than project managers to see benefits realization, project alignment with strategy, and poor communication as challenges. This perception gap extends to skill improvement priorities (see graph). Note that the biggest gaps are in leadership, business, and strategy skills: project managers rank these improvement areas much lower than their senior leaders do.

The study is available for download now. Stay tuned for an invite to our upcoming webinar to review and discuss these results. Hold the date and time: 18 June (2 PM Eastern).

Why Project Management Expertise Isn’t Enough: Lessons Learned from Security Breaches

How many times have I heard that “a good project manager can manage any project?” Too often for my taste. My biggest issue with the claim is that it begs the question: the statement assumes we all agree that any project manager with a mastery of the profession’s tools and techniques can succeed anywhere.

We’ve finally learned better, and PMI has acknowledged this in its new requirements for PMP continuing education. As PMI itself puts it:

As the global business environment and project management profession evolves, the [certification] program must adapt to provide development of new employer-desired skills…. The ideal skill set — the PMI Talent Triangle — is a combination of technical, leadership, and strategic and business management expertise. (PMI 2015 Continuing Certification Requirements (CCR) Program Updates)

Our pending research on project skill gaps (stay tuned for a webinar invite) shows that executives and senior managers understand this much better than project practitioners. They emphasize strategy, business, and leadership improvements, while practitioners don’t.

Perhaps an example from the current headlines will help. As most of you know, security breaches have wreaked havoc on a number of prominent firms; Target, Home Depot, and Sony are simply the best-known. The sad thing is that the most famous failures could have been prevented.

One of my new favorite podcasts is from Andreessen Horowitz, the venture capital firm. My most recent listen was an interview with Orion Hindawi of Tanium. I recommend listening to the whole thing — it’s less than 30 minutes — as Orion provides some great color on the what, where, and why of security attacks and vulnerabilities. The summary hits his sobering message on the head:

The paradox of security is we pretty much know what we are supposed to do most of the time — but we don’t do it. If you examine all the recent high-profile attacks, somebody in the organization knew something was wrong before it happened. They just didn’t have the ability to escalate the problem, or the ability to raise a flag that people took seriously.

In other words, we don’t lack the technical understanding of security risks, or the tools and techniques to mitigate them. We lack the leadership and business savvy to confront the challenge of communicating the risks, then deploying and using our toolkit effectively. The last two sentences of the quote show how these skill gaps drive the root causes:

  • “Ability to escalate the problem” is a leadership challenge. This suggests that “somebody” wasn’t connected, articulate, or brave enough to get to decision makers.
  • “Ability to raise a flag that people took seriously” is a symptom of weak strategy and business skills. If the threat isn’t framed, articulated, and understood in terms serious leaders get, then such warnings are ignored…or even worse, viewed as counterproductive scaremongering.

Find Your Best Project Leaders

My last post noted that filling gaps, improving skill mastery, and driving behavior change are the improvements that organizations need. But how can you design these objectives into your talent improvement program? If you already have a program in place, how do you know you have the right mix? And how do you measure its impact on the organization?

Who are the truly competent initiative leaders in your organization? And how do you know?

Any competency improvement plan starts with identifying what the “truly competent” project or program manager looks like for the particular organization. We intuitively know that more competence pays for itself. And there is strong evidence for that intuition: it’s in our Building Project Manager Competency white paper (request here). But lasting improvement will only come from a structured and sustained competency improvement program. That structure has to begin with an assessment of the existing competency. Furthermore, the program must include clear measures of business value, so that every improvement in competence can be linked to improvements in key business measures.

My experience with such programs is that PMO and talent management groups approach the process in a way that muddles cause and effect. For example, a training program is often paired with PMO set-up. Fair enough. However, if the training design is put into place without a baseline of the current competence of your initiative leaders, then that design may perpetuate key skill or behavior gaps among your staff.  You may hit the target, but a scattershot strategy leans heavily on luck.

In addition, this approach will leave you guessing about which part of your training had business impact. You may see better business outcomes, but not have any better idea about which improved skills and behaviors drove them. Even worse, if your “hope-based” design and delivery is followed by little improvement, then your own initiative may well be doomed.

So how should you fix your program, or get it right from the start? We at PM College lay out a structured, five-step process for working through your competency improvement program.

  1. Define Roles and Competencies
  2. Assess Competencies
  3. Establish a Professional Development Program with Career Paths
  4. Execute Training Program
  5. Measure Competency and Project Delivery Outcomes Before and After Training

These steps were very useful for structuring my thinking, but they’re more of a checklist than a plan. For example, my PMOs almost always had something to work with in Steps 1 and 3. Even if I didn’t directly own roles and career paths, I had credibility and influence with my colleagues in human resources.  However, the condition of the training program was more of a mixed bag. Sometimes I would have something in place, sometimes I was starting “greenfield.”

The current state of the training program informs how I look at these steps.

  • Training program in place: My approach is to jump straight to Step 5, and drive for a competency and outcome assessment based on what went before. I treat Steps 1-4 as completed – even if they weren’t done explicitly – and position the assessment as something that validates the effectiveness of what came before. In other words, this strategy is a forcing function that stresses the whole competence program, without starting anew.
  • No training program in place: I use the formal assessment to drive change. As PMO head I have been able to use its results to explicitly drive the training program’s design. More significantly, these results are proof points driving better role and career path designs, even if HR formally owns those choices.

PM College has a unique and holistic competency assessment methodology that assesses knowledge, behaviors, and job performance across the project management roles in your organization. As always, if your organization would like to discuss our approach, and how it drives improved project and business outcomes, please contact me or use the contact form below. We’d love to hear from you.

FYI: For more reading on competency-based management, check out Optimizing Human Capital with A Strategic Project Office.

McKinsey: Simulation key to how effective organizations build staff capabilities

I’ve seen the impact of leadership development on organizations: it’s why I joined PM College. One of the challenges is to determine which methods work best to drive transformation, or accelerate improvements one has already reaped. Our firm has experience and research that pins this down, but it’s always nice to find a third-party that confirms what we know and believe.

McKinsey to the rescue, with a new survey on “Building Capabilities for Performance.” The survey refreshes data from a 2010 study and finds that:

… the responses to our latest survey on the topic suggest that organizations, to perform at their best, now focus on a different set of capabilities and different groups of employees to develop.

In other words, the best performers did personnel development differently.

What did they do? The first finding that struck me was the use — or disuse — of experiential learning; McKinsey cites model factories and simulations as examples. The most effective organizations used these methods more than four times more frequently than others. But even then, experiential learning was used sparingly, by just under a quarter of the top performers.

As long-time Crossderry readers know, I’m a big fan of simulations. We had great experience with them at SAP. As McKinsey notes, they are about the only way “to teach adults in an experimental, risk-free environment that fosters exploration and innovation.” To that end, several popular PM College offerings — Managing by Project, its construction-specific flavor, and Leadership in High Performance Teams — use simulations to bring project and leadership challenges alive…without risking real initiatives.

I’ll have more on other success factors — custom content and blended delivery — in following posts.

Crossderry Podcast #1 — 11 November 2014

Here is the first Crossderry podcast. I plan to do this roughly once a week. The topics:

  • The Apple Watch as a threat to the Swiss watch industry
  • A quick-hitter tweet review: team size, platform category errors, and salespeople who do not know anything about their customers

Enjoy!

The Allure of Doomsaying

I just finished this Grantland piece by Bryan Curtis on the imminent demise of baseball. If you’re a fan at all — or a fan of any long-standing pastime — you’ve probably read or heard complaints like this:

Somehow or other, they don’t play ball nowadays as they used to some eight or ten years ago. I don’t mean to say they don’t play it as well. … But I mean that they don’t play with the same kind of feelings or for the same objects they used to. … It appears to me that ball matches have come to be controlled by different parties and for different purposes …

The kicker is that this quote is from 1868, eight years before the founding of the National League. It turns out that there’s a long thread of end-times commentary stretching back to the beginning of the Major Leagues, and Curtis unspools it carefully and well.

These persistent predictions hint at one of the reasons that doomsayers will never want for work: all human institutions, no matter how long-lived, will wax and wane. Predicting an institution’s demise, as Curtis describes it:

…allows us to imagine we’re present at a turning point in history. We’re the lucky coroners who get to toe-tag the game of Babe Ruth, Ted Williams, and Kurt Bevacqua.

“We are not at a historic moment,” Thorn said. “The popularity of anything will be cyclical. There will be ups and downs. If you want to measure a current moment against a peak, you will perceive a decline. J.P. Morgan was asked, ‘What will the stock market do this year?’ His answer was: ‘Fluctuate.’”

One driver that Curtis doesn’t mention is the control that failure gives us. There’s a certain temperament — and I plead guilty — that is very comfortable with the dodge Richard Feynman mocks here:

All the time you’re saying to yourself, ‘I could do that, but I won’t,’–which is just another way of saying that you can’t.

Making a positive forecast about baseball, in this case, would put us in the uncomfortable position of predicting success for something we can’t control. It is hard to create and achieve success in this world, and nothing lasts forever. The sure bet is on the “can’t” in Henry Ford’s “Whether you think you can, or you think you can’t–you’re right.”

As everyone says, please read the whole thing.

The Apple 8.0.1 Debacle: Whom to blame?

Marc Andreessen drew my attention to a Bloomberg article that laid out purported “links” between this debacle and the failed Maps launch. @pmarca was properly skeptical of the article:

https://twitter.com/pmarca/status/515350114081075203

And indeed, the piece starts in on the leader of the quality assurance effort, noting that:

The same person at Apple was in charge of catching problems before both products were released. Josh Williams, the mid-level manager overseeing quality assurance for Apple’s iOS mobile-software group, was also in charge of quality control for maps, according to people familiar with Apple’s management structure.

If you didn’t read any further, you’d think the problem was solved. Some guy wasn’t doing his job. Case closed.

But are quality problems ever so simple? After all, isn’t quality supposed to be built into a product? If this guy was the problem, then why was Apple leaning so heavily on him to lead its bug-finding QA group?

Well, reading on is rewarding, for it becomes clear that the quality problems at Apple run deeper than a bad QA leader. For example, turf wars and secrecy within Apple make it so:

Another challenge is that the engineers who test the newest software versions often don’t get their hands on the latest iPhones until the same time that they arrive with customers, resulting in updates that may not get tested as much on the latest handsets. Cook has clamped down on the use of unreleased iPhones and only senior managers are allowed access to the products without special permission, two people said.

Even worse, integration testing is not routinely done before an OS feature gets to QA:

Teams responsible for testing cellular and Wi-Fi connectivity will sometimes sign off on a product release, then Williams’ team will discover later that it’s not compatible with another feature, the person said.

So all you Apple fans, just remember the joke we used to make late in a project: “What’s another name for the release milestone? User Acceptance Testing begins!”

Why personal behaviors impact testing

My last post used testing to illustrate the consequences of questionable personal behavior on a business situation.  Quality is susceptible to personal and professional gaps that interact to amplify each other’s effects.

Why is that so?  Let’s start with the examples I used.  Recall that business process owners simply copied the unit tests of the developers to serve as user acceptance tests.   I characterized this approach as a failure of accountability: the process owners didn’t believe it was their “real” job, even though they knew they would have to certify the system was fit for use.  Less charitably, one could have called it laziness.  More charitably, one could have called it efficiency. 

And indeed, an appeal to efficiency underlay the rationalizations of these owners: “Why should I create a new test when the developer — who knows the system better than I do — has already created one?”  How would you answer this question?  As a leader, do you know the snares such testing practices lay in your path?  Off the top…

  1. Perpetuating confirmation bias:  By the time someone presents work product for formal, published testing, he or she has strong incentives to conduct testing that proves that the work product is complete.  After all, who doesn’t want his work to be accepted or her beliefs confirmed?   This issue is well-known in the research field, so one should expect that even the most diligent developer will tend to select testing that confirms that belief.   An example is what on one project we called the “magic material number”, a material that was used by all supply chain testers to confirm their unit and integration tests.  And the process always worked…until we tried another part number.
  2. Misunderstanding replicability:  “Leveraging” test scripts can be made to sound like one is replicating the developer’s result.  I have had testers justify this shortcut by appealing to the concept of replicability.  Replicability is an important part of the scientific process.  However, it is a part that is often misunderstood or misapplied.  In the case of copied test plans, the error is simple.  One is indeed following the test procedure exactly — a good thing — but applying it to the same test subject (e.g., same part, same distribution center, etc.).  This means the test is only ever run against what may be “a convenient subset” of the data (see the sketch after this list).
  3. Impeding falsifiability: This sounds like a good thing, but isn’t.  In fact, the truth of a theory — in this case, that the process as configured and coded conforms to requirements — is determined by its “ability to withstand rigorous falsification tests” (Uncontrolled, Jim Manzi, p. 18).  Recall the problem with engaging users in certain functions?  These users’ ability to falsify tests makes their disengagement a risk to projects.  Strong process experts, especially those who are not members of the project team, are often highly motivated to “shake down” the system.  Even weaker process players can find gaps when encouraged to “do their jobs” using the new technology using whatever parts, customers, vendors, etc. they see fit.
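
To make the “magic material number” pitfall and its cure concrete, here is a minimal sketch in Python using pytest. The material numbers, plants, and the post_goods_issue stub are hypothetical, not anything from the actual project; the point is only that running the same test procedure over a varied sample of master data, rather than over the single value the developers always used, turns a confirmation exercise into a genuine attempt at falsification.

```python
import pytest

# Hypothetical stand-in for the system under test: post a goods issue for a
# material from a plant. On the real project this would call the ERP system;
# here it is a stub that only "works" for the one material the developers
# always tested with, mimicking a hidden configuration gap.
MAGIC_MATERIAL = "MAT-1000"


def post_goods_issue(material: str, plant: str, qty: int) -> bool:
    return material == MAGIC_MATERIAL and qty > 0


# Anti-pattern: copying the developer's unit test verbatim. It re-confirms
# the one case everyone already knows works.
def test_goods_issue_with_magic_material():
    assert post_goods_issue(MAGIC_MATERIAL, "PLANT-01", qty=5)


# Better: the same test procedure, applied to a varied sample of materials,
# plants, and quantities drawn from the business's own master data.
@pytest.mark.parametrize("material", ["MAT-1000", "MAT-2077", "MAT-3145"])
@pytest.mark.parametrize("plant", ["PLANT-01", "PLANT-02"])
@pytest.mark.parametrize("qty", [1, 5, 100])
def test_goods_issue_across_master_data(material, plant, qty):
    assert post_goods_issue(material, plant, qty)
```

Run under pytest, the parameterized version fails for every material other than MAT-1000, which is precisely the kind of gap the magic material number had been masking.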

I hope this example shows how a personal failing damages one’s professional perspective.  No one in this example was ignorant of the scientific method; in fact, several had advanced hard science or engineering degrees.  Nonetheless, disagreement over who owned verifying fitness for use led to rationalizations about fundamental breaches in testing.

How personal shortcomings undermine recovery (Mini Case Part 2)

Unfortunately, our quality control processes didn’t fare so well. We did get sufficient testing resources for the first rollout, but a couple of process owners only delivered under protest. For you see, they believed that testing of their processes — even user acceptance testing (UAT) — was not their job. To put it another way, they did not hold themselves accountable to ensure that the technical solution conformed to their processes’ requirements.

This personal shortcoming — an unwillingness to be accountable — triggered a chain of events that put the program right back in a hole:

  • Because it wasn’t their “real” job, some process owners did not create their own user acceptance tests. They simply copied the tests the developers used for unit or integration testing. Therefore, UAT did not provide an independent verification of the system’s fitness for use; it simply confirmed the results of the first test.
  • This approach also allowed process gaps to persist. Missing functionality went unnoticed because no test plan ensured process coverage (see the sketch at the end of this post).
  • Resources for testing were provided only grudgingly and were often second-rate. The testers often did not know enough about the system and processes to run the scripts, never mind verify the solution or notice process gaps.

To say it was a challenging cutover and start-up would be an understatement.  Yawning process gaps remained open because they had never been tested.  For sure, we had a stack of deliverable acceptance documents, all formally signed off.  What we didn’t have was a process that was enabled and fit for use.  One example: 

  • Documents remained stuck in limbo for weeks after go live, all because a key approval workflow scenario had not even been developed.
  • And because it hadn’t been developed, the developers hadn’t created and executed a test script.
  • And because the process owners were so focused on doing only their “real” job, they missed a gap that made us do business by hand for nearly two months.
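
To close, here is a minimal sketch of the alternative: deriving acceptance coverage from the process definition rather than from the developers’ tests. The scenario names and the uncovered_scenarios helper are hypothetical illustrations, not our actual test assets; the idea is simply to enumerate every scenario the process owner needs at go-live and flag any that no acceptance test has touched.

```python
# Hypothetical sketch: derive user acceptance coverage from the process
# definition itself, rather than inheriting whatever the developers built
# and unit-tested. All scenario names are illustrative.

# Scenarios the process owner says must work at go-live, including every
# approval-workflow path.
REQUIRED_SCENARIOS = {
    "document_create",
    "document_approve_single_level",
    "document_approve_multi_level",   # the workflow path that was never built
    "document_reject_and_rework",
}

# Scenarios for which a UAT script has actually been written and executed.
EXECUTED_UAT_SCENARIOS = {
    "document_create",
    "document_approve_single_level",
}


def uncovered_scenarios(required: set, executed: set) -> set:
    """Return the process scenarios no acceptance test has touched."""
    return required - executed


if __name__ == "__main__":
    gaps = uncovered_scenarios(REQUIRED_SCENARIOS, EXECUTED_UAT_SCENARIOS)
    if gaps:
        # These are exactly the gaps that copied unit tests can never reveal.
        print("No acceptance coverage for:", ", ".join(sorted(gaps)))
```

Even a checklist this crude would have flagged the never-built approval workflow path before cutover, instead of leaving it to be discovered while we ran the process by hand.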

Can personal shortcomings undermine recovery? (Mini Case Part 1)

I concede that projects can recover — at least for a time — without sustainable personal and professional behaviors in place. Heroic measures to catch up on accumulated technical debt, more testers to ensure all tests are completed, and new resources that specialize in turnarounds can and do work… again, for a time.

But what happens when the “hero” team needs to take a week or three of down time? What happens when those additional testers go back to their “real” jobs? What happens when the turnaround team leaves? What happens is that the project risks a slide back into the abyss.

Even one gap can be problematic. For example, I was on a troubled transformation program that needed to use all three of these approaches: extraordinary effort, additional testers, and experienced recovery resources. And indeed, the heroic measures did create deliverables that were fit for use, the technical debt was repaid, and the development team was staffed up to support the remainder of the program. The turnaround specialists put a set of program governance practices in place; even better, the program office continued to execute them effectively. Quality assurance and testing were other matters entirely….