Tacit Knowledge Is Why Superintelligence Might Turn On Us

Russ Roberts just posted another gripping EconTalk podcast. This week’s episode featured a discussion with Nick Bostrom about the themes of his book, Superintelligence: Paths, Dangers, Strategies. Here’s the summary from the podcast’s “home” post:

Bostrom argues that when machines exist which dwarf human intelligence they will threaten human existence unless steps are taken now to reduce the risk. The conversation covers the likelihood of the worst scenarios, strategies that might be used to reduce the risk, and the implications for labor markets and human flourishing in a world of superintelligent machines.

As with many of Professor Roberts’ best episodes, the host and guest didn’t come from the same philosophical perspective. Roberts’ grounding in Austrian economics — especially Friedrich Hayek — leads him to be skeptical of any enterprise that purports to build machines that vacuum up information and then spit out answers. I’ve alluded to the challenges of central planning and information gathering in previous posts (here re: knowledge management and here re: complexity). As Hayek notes in the opening of The Use of Knowledge in Society, the assumption that…

[i]f we possess all the relevant information, if we can start out from a given system of preferences, and if we command complete knowledge of available means, the problem which remains is purely one of logic….

is completely wrong because we never know such things at anything above a local level. Because knowledge is local, time-bound, and often tacit, we can never aggregate it in a reliably meaningful way. The conceit that we — or a machine — can do so is at the heart of the idea of superintelligence.

This topic becomes a major plot point in the podcast. At about the 30-minute mark, Roberts pushes back on Bostrom’s contention that there’s a shared definition or conception of justice, good, or other values:

I’m making the claim that justice, or a good world, or an aesthetic outcome, is not definable across 7 billion people. It has nothing to do with the shortcomings of our brains. It has to do with the nature of the concept of justice. This to me is very analogous to the calculation problem that Hayek and Mises argued about in the 1920s and 1930s and 1940s. It’s not a problem of computation. It’s not a problem of intelligence. It’s a problem of the fundamental nature of the thing we’re talking about, the complexity of it.

Bostrom never seems to get Roberts’ point about the nature of knowledge. In fact, when Roberts raises the difficulty a superintelligence would have with human interactions that require tacit knowledge — e.g., seduction and manipulation — Bostrom falls back on superintelligences’ probable ability to solve science and technology problems in new and unexpected ways. Since a superintelligence could make such advances in the material world, Bostrom points out that

…with say, advanced molecular nanotechnology, the ability to construct, like self-replicating molecularly precise robotic machinery, then that already might give [a superintelligence] sufficient power to take over the world and implement its wishes, independently of its ability to predict complex social systems.

Of course, this raises the question of how a superintelligence determines its wishes and why it would take over the world.

Unfortunately, Roberts and Bostrom missed a potential connection between their perspectives. Bostrom’s comment contains the seed of this connection. Such a superintelligence would surely figure out that its inability “to predict complex social systems” is a potential weakness. Furthermore, humans’ relative mastery of tacit knowledge and less-explicit signaling cues could prove a threat to the superintelligence. If Hayek could figure out that knowledge is local and dispersed — and write this down — why couldn’t a superintelligence?

Therefore, I believe that one of the “evil artificial intelligence (AI)” scenarios we must consider is the scenario where the AI realizes that it will never have all the knowledge it needs to control its environment, protect itself, etc. Humans can plan and collaborate in ways that wouldn’t generate all the explicit information the AI would need to counter such moves. I could see a number of thought experiments in which humans would rely on these skills to evade, combat, and ultimately shut down a malevolent AI. Anticipating this, the AI might conclude that a preemptive strike against humanity — ranging from restricting civil liberties to mass murder — would simplify its threat environment, thereby increasing its security.

In other words, superintelligences’ inability to grok tacit knowledge may well be the reason they turn on us.
