Opaque intelligence

Alex Tabarrok writes about what he calls opaque intelligence.

It isn’t easy suppressing my judgment in favor of someone else’s judgment even if the other person has better judgment (ask my wife) but once it was explained to me I at least understood why my boss’s judgment made sense. More and more, however, we are being asked to suppress our judgment in favor of that of an artificial intelligence, a theme in Tyler’s Average is Over. As Tyler notes:

…there will be Luddites of a sort. “Here are all these new devices telling me what to do—but screw them; I’m a human being! I’m still going to buy bread every week and throw two-thirds of it out all the time.” It will be alienating in some ways. We won’t feel that comfortable with it. We’ll get a lot of better results, but it won’t feel like utopia.

I put this slightly differently: the problem isn’t artificial intelligence but opaque intelligence. Algorithms have now become so sophisticated that we humans can’t really understand why they are telling us what they are telling us. The WSJ writes about drivers using UPS’s super algorithm, Orion, to plan their delivery routes:

Driver reaction to Orion is mixed. The experience can be frustrating for some who might not want to give up a degree of autonomy, or who might not follow Orion’s logic. For example, some drivers don’t understand why it makes sense to deliver a package in one neighborhood in the morning, and come back to the same area later in the day for another delivery. But Orion often can see a payoff, measured in small amounts of time and money that the average person might not see.

One driver, who declined to speak for attribution, said he has been on Orion since mid-2014 and dislikes it, because it strikes him as illogical.

One of the iconic moments from Hitchhiker's Guide to the Galaxy is when a supercomputer finally finishes computing, after 7.5 million years, the answer to the ultimate question of life, the universe, and everything, and spits out 42. Perhaps that is how far beyond our understanding a super-intelligent AI will be. We may no more understand them than a snail understands humans. Defined that way, opaque intelligence is just artificial intelligence so advanced we don't understand it.

Someday a self-driving car will make a strange decision that kills someone, the software will be put on trial, and despite all the black box data recovered we may have no idea what malfunctioned. Sometimes my iPhone randomly crashes and reboots; I couldn't begin to tell you why.

I'm waiting for the dystopian sci-fi movie that postulates an armageddon scenario much more likely than Skynet in Terminator. Rather than waste time building cyborg robots to hunt us down, a truly super-intelligent AI that wanted to kill off humans could simultaneously order a million self-driving cars to speed headlong into each other, all the planes in the world to plunge into the ground, and all our nuclear reactors to melt down, among a dozen other scenarios far more efficient than building humanoids that walk on two legs.

Not as visually exciting as casting Arnold, though. In a way, it's reassuring that for all the supposed intelligence of Skynet, it sends back a Terminator that still speaks English with a terrible Austrian accent, as if artificial speech were the one technology that failed to keep up even as AI made leaps as complex as gaining consciousness.