Now THIS Is What I Mean By “Advanced” Training

We’ve had a ton of discussions with clients after the Project Management Institute (PMI) announcement that it would soon require business and leadership training from its certification holders. Some organizations wanted just the facts — who, what, where, when, why, and how — and then were on their way. A few weren’t interested for a simple reason: their organizations don’t require or reward PMI certification.

The most interesting talks, however, were with customers who didn’t really focus on the requirements at all. The original blog post or email had merely crystallized needs that they already had. We heard it again and again: “We’ve already had the basics, we’ve already put everyone through the curriculum. How do we get better, how do we advance?”

These kinds of conversations are music to my ears, because it means that we’re going to talk about building new and differentiated capabilities. In other words, these clients aren’t just thinking about industry standards and compliance. They now think strategically about how their staff’s strengths and weaknesses match up to their organization’s opportunities and threats.

So how does this play out in practice? Each firm or agency is different, but we believe there are a few useful categories that help focus the learning your organization needs to advance.

  1. Knowledge and Skill Gaps: These are items that were simply missed in previous training or need formal reinforcement. Example course topics that address gaps: How to Lead a Team; How to Model, Analyze, and Improve Business Processes.
  2. Knowledge and Skill Mastery: Here’s where one truly goes beyond the basics and gets command of a subject. Courses like Project Cost & Schedule Management; Project Risk Management; Strategies for Effective Stakeholder Engagement; and Vendor Relationship Management take one to the next level.
  3. Behavior Change: Here’s the real opportunity for breakthrough performance: ensuring that skills manifest themselves in behavior. Our simulations — for example, Managing by Project; Managing by Project: Construction; and Leadership in High-Performance Teams — move participants from mere understanding of skills to application of those skills back in the working world.

As always, if your organization would like to discuss these ideas and how they might impact your project management training curriculum, please use the contact form below. We are happy to review your current curriculum and upcoming learning plans, and to make recommendations.

Tacit Knowledge Is Why Superintelligence Might Turn On Us

Russ Roberts just posted another gripping EconTalk podcast. This week’s episode featured a discussion with Nick Bostrom about the themes of his book, Superintelligence: Paths, Dangers, Strategies. Here’s the summary from the podcast’s “home” post:

Bostrom argues that when machines exist which dwarf human intelligence they will threaten human existence unless steps are taken now to reduce the risk. The conversation covers the likelihood of the worst scenarios, strategies that might be used to reduce the risk and the implications for labor markets, and human flourishing in a world of superintelligent machines.

As with many of Professor Roberts’ best episodes, the host and guest didn’t come from the same philosophical perspective. Roberts’ grounding in Austrian economics — especially Friedrich Hayek — leads him to be skeptical of any enterprise that purports to build machines that vacuum up information, then spit out answers. I’ve alluded to the challenges of central planning and information gathering in previous posts (here re: knowledge management and here re: complexity). As Hayek notes in the opening of The Use of Knowledge in Society, the assumption that…

[i]f we possess all the relevant information, if we can start out from a given system of preferences, and if we command complete knowledge of available means, the problem which remains is purely one of logic….

is completely wrong, because we never know such things at anything above a local level. Because knowledge is local, time-bound, and often tacit, we can never aggregate it in a reliably meaningful way. The conceit that we — or a machine — can do so is at the heart of the idea of superintelligence.

This topic becomes a major plot point in the podcast. At about the 30-minute mark, Roberts pushes back on Bostrom’s contention that there’s a shared definition or conception of justice, goodness, or other values:

I’m making the claim that justice, or a good world, or an aesthetic outcome, is not definable across 7 billion people. It has nothing to do with the shortcomings of our brains. It has to do with the nature of the concept of justice. This to me is very analogous to the calculation problem that Hayek and Mises argued about in the 1920s and 1930s and 1940s. It’s not a problem of computation. It’s not a problem of intelligence. It’s a problem of the fundamental nature of the thing we’re talking about, the complexity of it.

Bostrom never seems to get Roberts’ point about the nature of knowledge. In fact, when Roberts raises the difficulty a superintelligence would have with human interactions that require tacit knowledge — e.g., seduction and manipulation — Bostrom falls back on superintelligences’ probable ability to solve science and technology problems in new and unexpected ways. Pointing to a superintelligence’s potential leverage in the material world, Bostrom suggests that

…with say, advanced molecular nanotechnology, the ability to construct, like self-replicating molecularly precise robotic machinery, then that already might give [a superintelligence] sufficient power to take over the world and implement its wishes, independently of its ability to predict complex social systems.

Of course, this raises the question of how a superintelligence determines its wishes and why it would take over the world.

Unfortunately, Roberts and Bostrom missed a potential connection between their perspectives. Bostrom’s comment contains the seed of this connection. Such a superintelligence would surely figure out that its inability “to predict complex social systems” is a weakness. Furthermore, humans’ relative mastery of tacit knowledge and less-explicit signaling cues could prove a threat to the superintelligence. If Hayek could figure out that knowledge is local and dispersed — and wrote this down — why couldn’t a superintelligence?

Therefore, I believe that one of the “evil artificial intelligence (AI)” scenarios we must consider is the one in which the AI realizes that it will never have all the knowledge it needs to control its environment, protect itself, and so on. Humans can plan and collaborate in ways that wouldn’t generate all the explicit information the AI would need to counter such moves. I can imagine a number of thought experiments in which humans rely on these skills to evade, combat, and ultimately shut down a malevolent AI. A preemptive strike against humanity — ranging from restricting civil liberties to mass murder — would simplify a superintelligence’s threat environment, thereby increasing its security.

In other words, superintelligences’ inability to grok tacit knowledge may well be the reason they turn on us.

Quote of the day — H.L. Mencken

I believe that it is better to tell the truth than a lie. I believe it is better to be free than to be a slave. And I believe it is better to know than to be ignorant. — H.L. Mencken

Quote of the day: Thomas Jefferson

He who knows nothing is closer to the truth than he whose mind is filled with falsehoods and errors.

— Thomas Jefferson
