We say we want faster horses

While I’m on a Henry Ford roll, here’s one about the dangers of simply taking orders from one’s customers.

If I had asked people what they wanted, they would have said faster horses.

We now mock Ford for “any color he wants, as long as it’s black”; but passively listening to the customer is no good either. Long ago Ford understood the pitfalls of just asking “what would you like?”

This quote came to mind as I reviewed the predictions in the Pew Research report on Killer Apps in the Gigabit Age. Full disclosure: I don’t buy such specific predictions. I’m with William Schrader’s take on page 2:

Gigabit bandwidth is one of the few real ‘build it and they will come’ moments for new killer apps. The fact that no one had imagined the other killer apps prior to seeing them grow rapidly implies that no one can imagine these new ones—including me.

Many of the guesses are entertaining and may well be true. In the end, what struck me was how derivative nearly every prediction was. Most involved augmentation of current functionality: a variant on the “faster horse” desire. Some, like one librarian, were hoping for features that already exist: e.g., seeing recipes in a heads-up display.

Paging KitchMe and Google Glass.

Our relocation project — capabilities vs. specs

As we planned our relocation, we started to search for houses using a set of “features and functions” that closely matched our current house. They were all pretty typical specs: bedrooms, baths, square feet, lot size, schools, age of house, etc.

This approach was useful for narrowing neighborhoods, but it didn’t surface the house we ended up choosing here in Evansville. The house we chose fulfilled the capabilities we wanted from our new house, even though it didn’t exactly match the specs we had used to benchmark houses.

For example, our new house’s lot is about half the size of our old lot. However, it does give us the “lot-driven” capabilities that we require — privacy (house faces into a small cul de sac and has nice mature trees around it) and a large, fenced yard space — even though it didn’t match some key yard requirements we thought we had.

For me, it was a useful reminder not to confuse the capabilities one expects from a project with product/solution specifications. In this case, what appeared to be a clear-cut, “must have” requirement of “lot must be over x square feet” would have been better expressed as “lot must be private and have a large, dog-friendly fenced area.”

More importantly, the square footage specification was just that, a specification, not the requirement or capability itself. Specs must serve the capability, not vice versa.

BI, Deliverables, and Change Control

This post may court the Business Week curse, but I’m highlighting this story on BI and the recession (here) as the jumping-off point for a couple of observations. Rob T.’s comment in the piece (sorry, but I can’t link directly) notes that:

Unlike ERP, BI can be implemented step-wise, first in targeted, strategic areas, and then using a broad brush once its value has been proven. A wise strategy for this economy is to start small, pick a problem or area where a quick win is possible and attack it with a 60-90 day effort.

I mentioned this opportunity for short, focused projects in my WSJ interview.  These short cycle times and the nature of BI work pose problems for traditional ERP change control and deliverables definition. 

For example, what exactly is “done” on a BI project?  The traditional ERP definition of deliverables — focused on processes that deliver tangible, measurable outcomes (e.g., Order to Cash, Purchase to Pay, Hire to Retire) — doesn’t work well for BI.  Reporting projects have typically focused on a count of explicitly defined reports.  Also, in analytics projects those reports or queries often spark more ideas for how data can be cross-tabbed, projected, etc.  Do you always want to be presenting change orders for new reports?

What has worked for you when defining deliverables and change control for BI initiatives?  If you’ve read this blog for a while, you can guess that I’m inclined to attack these issues with a capabilities-focused approach to deliverables.  This approach points one in the right direction when defining what “done” looks like.  However, many PMs find it hard to grok the requirements and required capabilities of analytics-savvy stakeholders and consultants.

Deliverables, work packages, and the schedule

The temptation to fix a schedule and get to work is constant in enterprise IT.  It is particularly alluring for any application tied to a SOX-compliant landscape — some governance models only allow two opportunities per year to deliver — where project durations strongly suggest themselves and time is always “a-wasting”.

Of course, as Glen Alleman reminds us here, starting with the schedule is wrong.  I won’t recapitulate his post here, but I’ll borrow from his comment on another post, which points out the fallacy in this kind of thinking:

[M]any…process improvement projects have failed, along with Enterprise IT, because the WHY of the effort is not established or well understood. The principles establish WHY we should be doing something. The practices of course tell us HOW.

This rush to “get working” short-circuits one of the most important functions of a WBS: stakeholder management.  Properly defined deliverables and work packages aren’t simply inputs to the schedule, budget, etc.  If nothing else, a WBS is the most accessible framework for a discussion with one’s stakeholders that ensures that the what of the project supports the why of the project.  Wouldn’t it be a good idea to make sure that why and what are elaborated and priorities agreed upon — even at just a couple of levels — before getting down to who, when, where, and how?

PM Quote of the Day — Roald Amundsen

“Adventure is just bad planning.”

I’ve found that the amount of adventure in a project is inversely proportional to the amount of proper planning that went into — and continued throughout — that project.  Glen hits that point (here) when he uses the 5 P’s — a long-ago Scoutmaster (a USMC sergeant) introduced them more saltily as the 6 P’s — to frame a discussion of emergence in project requirements. 

In his own way, Sergeant Martinez made emergence very clear to us tenderfeet many years ago.  When we whined about having to learn how to prevent and fix blisters, to pack correctly, to read a map, etc., he asked those very same “If -> What” questions Glen mentioned:

  • If you get blisters, what will you do (without knowing how to prevent and fix them)?
  • If it rains, what will you do (without a poncho)?
  • If you get off the trail, what do you need to get back on it (without a map or knowing how to read it)?

When we protested that we weren’t Marines he noted — after advising us that under no circumstances would the Corps want us anyway — that it was all the more important for inexperienced hikers to plan so that we didn’t get into trouble.  As he went on to note, on our hikes we were looking to have fun, learn a bit of woodcraft (actually “desert craft”), and otherwise enjoy our encounter with nature.  Fending for our health and lives wasn’t one of our hikes’ requirements…

PM Quote of the Day — Pauline Kael

[T]he critic is the only independent source of information. The rest is advertising.

This quote came to mind as we’ve been going through an internal program’s requirements.  One of my colleagues regularly refers to, and insists on using, the “Four Eyes” principle.  In other words, one should always involve a second set of eyes to verify and validate work product.  If the project team is the only arbiter of progress, then — in Kael’s terms — all its status reporting is advertising.

Too often, the four-eyes principle gets honored only in quality control — e.g., during post-build testing.  As my colleague’s insistence suggests, we find that an independent opinion is most valuable early in an initiative.  For example, wouldn’t it be useful to have independent validation during planning that the requirements and deliverables really represent (and will make real) the capabilities the project or program is intended to put in place?

It is tough enough to re-work a deliverable that doesn’t conform to requirements.  It is worse to have to re-build a deliverable that did conform to requirements, only to find that those requirements were never valid in the first place.

Changing from a “work” to “deliverable” mindset

Glen Alleman has been a roll at Herding Cats, provoking some excellent back-and-forth in recent posts and comment threads.  His post (here) on Deliverables Based Planning (which is a service mark of his firm, I believe) prompted some knowing nods on my part.  As Glen notes in a comment:

You cannot believe…or maybe you can…how uncomfortable this makes people. They want to plan tasks! They want to track tasks! They want to be in control! (Deliverables are just a detail to be de-scoped when necessary).

We have passed through this vale of tears ourselves.  Since I’m on vacation and feeling particularly lazy, I’ll simply cut-and-paste from my own comment (with a few edits for clarity):

[T]he mindset change from simply planning “work” and “effort” to focusing on well-defined deliverables has been tough.  However, once we got through driving that change, we found few problems making most SAP project deliverables “tangible or verifiable.”

This approach is especially effective when looking at the solution itself — most ERP-type deliverables should reflect enabling the execution of customer business processes (and the outcomes and benefits that ensue). This definition has proven quite tangible (the execution of the enabled processes themselves) and verifiable (tests of the increasingly complex models of the processes, e.g., unit, string, integrated, and business simulation tests).  Decently constructed tests should confirm whether or not the realized solution conforms to requirements. Furthermore, one can track the outcomes from these deliverables and trace the benefits — realized vs. expected.

Troubled Projects and Engaging Change Stakeholders

Glen at Herding Cats (here) points to a Center of Business Practices study (here) on the causes of troubled projects.  I’ve posted on some of our own findings about project success (here and here), but I haven’t elaborated on what we’ve found about the composition of change control boards.  Below is an extract from a comment I made on Glen’s post:

Our project debrief analyses have consistently found that the right level of executive presence on change control boards is essential to ensure change is managed, not simply documented. In fact, the lack of such a presence (or regular absences) marks the project as a potential escalation.

When a senior manager vets the prioritization of changes by focusing the project team on the project’s goals and intended outcomes, one should usually find scope, time, and resource changes easier to manage (with fewer, more salient change orders). It also keeps the business invested in the project. Many IT shops resist this measure, but it works wonders once they “get it”.

PM Quote of the Day — George S. Patton

A good plan, violently executed now, is better than a perfect plan next week.

This quote is a nice bookend to yesterday’s Eliot quote (here).  Most SAP projects have to be delivered within a tight time window and under strict resource constraints.  With two legs of the triple constraint almost fixed — and we can’t compromise on quality with mission-critical applications — our scope planning and prioritization must be ruthless.

While it is important to have patience with ourselves and others, dithering about features and functions doesn’t work in today’s project environment.  To slip in another quote: Say what you’ll do, do what you say… and don’t look back.

The problem w/ “gold-plating”

We had a discussion of gold-plating in project management during my leadership team meeting last week.  To refresh everyone’s memories, here’s a good, if informal, definition of the term (link here):

Gold plating is what we call it when the project team does work on the product to add features that the requirements didn’t call for, and that the stakeholder and customer didn’t ask for and don’t need. It’s called “gold plating” because of the tendency a lot of companies have to make a product more expensive by covering it in gold, without actually making any functional changes.

Andrew Stellman at Building Better Software has a post on why gold-plating is a bad name (post here).  It is an excellent discussion of what the right analogy is — he goes with a combination of gold-plated (a purely cosmetic veneer) and “gilded” (encrusted with useful, but largely-unused features).  Andrew’s bottom line is that such features are “just wasted effort, at least from the perspective of the person paying the programmer’s salary”.

I would say the bottom line is even more stark: gold-plating or gilding not only wastes money, it risks delivery of core deliverables.  Enterprise software projects usually must be delivered by a date certain.  I’ve found that development teams that get distracted by superfluous gee-gaws do so at the expense of priority features.  In other words, not only does gold-plating waste money, it diverts resources from ensuring that deliverables conform to requirements.
