When will Moore’s Law expire? It’s the question that Mike Mayberry, the chief technology officer of Intel Corp., gets more than any other. The 1965 prediction, one of the most prescient in tech, stated that the number of electronic components on an integrated circuit would double every year for a decade—a formula, tweaked over the years, that continues to produce faster, smaller, more affordable gadgets. Gordon Moore, the engineer behind the prediction, co-founded Intel a few years later.
Mayberry, age 61, joined Intel in 1984 as a process integration engineer. He believes that new materials and approaches to fabricating chips will ensure that Moore’s Law endures. As the head of Intel Labs, the Silicon Valley chipmaker’s product-research division, Mayberry is working on a “neuromorphic” chip. Like the human brain, the chip would make decisions in response to learned patterns and associations. A chip in a self-driving car, for example, could recognize objects and react to them. In 2017, Intel bet big on driverless technology with its $15.3 billion purchase of Mobileye, an Israeli startup developing hardware and software for autonomous vehicles. Intel is also racing other tech giants, including Microsoft, Alphabet Inc.’s Google, and International Business Machines, to develop quantum computers—powerful machines that could crack complex encryption and help engineer new drugs.
Mayberry spoke with The Future of Everything about the shortcomings of today’s AI, the timeline for quantum computing, and the questions holding back autonomous vehicles.
The Delay in Driverless Cars: Humans, Not Tech
The future where the autonomous vehicle is cheap enough and valuable enough that everybody wants to own one seems to be quite a ways off. But then that becomes less of a technology prediction and more of an economic prediction, a behavioral prediction. When I was young, I couldn’t wait to get my driver’s license, and then, surprisingly, my kids were not as quick to want to do that, and perhaps their kids will not want to drive at all. The majority of cars will probably be driven by people for at least the next decade. Now, there’s a trust factor, too. You still have to deal with the problem of how you are going to regulate. How are you going to ensure safety? How are you going to have people accept driverless vehicles? Different regions, different governments around the world may take different approaches to it.
Driverless Trucks as a First Step
We may have long-range transportation automated before we have the local, congested piece automated. [Driverless trucks are] possible. But what do you do when you get to the [destination], and you’ve got to back up the truck into a loading dock? Do you have a human driver take over at that point or not?
Copying Evolution Will Give Us Better Computers
An interesting thing about how your brain works is you store information in a multi-layered manner. If I said, “How was your commute to work?” you’d give me one answer. If I say, “Were there any crazy drivers this morning?” you would give a different answer. You don’t have to go back and essentially run the tape all the way through your journey to figure out these things. Today’s [machine-learning] systems have to do that. That’s partly because of the way that we’ve constructed them. It’s partly because we don’t necessarily understand how human beings store information well enough to copy that kind of stuff. If we can build a neuromorphic system that models the brain, then not only do we help understand the brain, but we can then possibly find better applications for the neuromorphic system because we’ve managed to essentially copy the way that evolution has figured out how to do something.
The very first inklings of AI were really rule-based. That turns out to be a perfect match for conventional computing, where you process one instruction at a time. But eventually, you run out of the ability to tackle problems because, frankly, our behavior is not completely rule-based. And current AI has essentially no notion of time. There’s nothing that would be the equivalent of an event and a trigger. Neuromorphic [computing] is an example of going one step further and saying, “Alright, what if I build time into the processor?” That notion of time, interaction, and feedback is an important advance beyond what we’ve been able to do today. A neuromorphic computing system might be able to simulate a chaotic system more readily than conventional computing. Many things are chaotic at some level. Traffic is an example. Weather is an example. There are countless decisions you make every day where you are balancing multiple constraints and maybe don’t find the perfect answer, but you find a good enough answer quickly. Well, that’s exactly the kind of thing that we think we could do with these kinds of systems—that we can essentially pose the problem of navigating through obstacles and finding a good enough route more quickly, with less computational effort, than an exhaustive trial of different combinations.
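To make the "build time into the processor" idea concrete, here is a minimal, hypothetical sketch of an event-driven ("spiking") neuron in Python. It is not Intel's neuromorphic design; the class name, constants, and input stream are invented for illustration, meant only to show how internal state decays over time and how output fires only when a trigger threshold is crossed.

```python
# Minimal, illustrative sketch of an event-driven ("spiking") neuron update.
# This is NOT Intel's neuromorphic design; names and constants are made up
# purely to show how time, events, and triggers can live inside the model.

from dataclasses import dataclass, field
from typing import List

@dataclass
class LeakyNeuron:
    threshold: float = 1.0      # membrane potential that triggers a spike
    leak: float = 0.9           # per-step decay, so old inputs fade with time
    potential: float = 0.0      # internal state carried between time steps
    spike_times: List[int] = field(default_factory=list)

    def step(self, t: int, input_current: float) -> bool:
        """Advance one time step; return True if the neuron fires (an event)."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.spike_times.append(t)   # the "event and trigger"
            self.potential = 0.0         # reset after firing
            return True
        return False

# A short, noisy input stream: the neuron only fires when recent inputs
# accumulate quickly enough, so timing matters, not just magnitude.
neuron = LeakyNeuron()
inputs = [0.2, 0.1, 0.6, 0.5, 0.0, 0.0, 0.9, 0.4]
for t, current in enumerate(inputs):
    if neuron.step(t, current):
        print(f"spike at t={t}")
```

Because output is produced only when the threshold event occurs, downstream computation can react to spikes rather than to every clock tick, which is one simple way of modeling the time, event, and trigger behavior Mayberry describes.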
Quantum Computers Are a Decade Away (With One Big Caveat)
We say [we’re] about 10 years away [from widespread quantum computing use]. When we started three years ago, we said about a dozen years with a pretty big error bar. There will be people who will say, “Well, we’ll have systems sooner than that,” and I would agree with that. But those systems are not necessarily big enough to solve the kinds of problems that people are really interested in doing. We intend to build an engineering-scale system in the next few years, maybe in the 500- to 1,000-qubit range. If you want to simulate a drug molecule, you’re going to need qubits in the millions. Obviously, it’s difficult to predict that a better way to simulate a drug will make a better drug. Today, the best way to figure out if a drug works is to make the drug and test it. When we’re successful with quantum computing, we can simulate things that we can only do experiments on today. That will accelerate the rate at which we can look through different materials. That doesn’t guarantee we’ll have a better material, but it helps the front end of the process.
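For a rough sense of why molecule-scale simulation outruns conventional machines, here is a back-of-the-envelope sketch in Python. The arithmetic is mine, not a figure from the interview: a brute-force classical state-vector simulation of n qubits must track 2^n complex amplitudes, so memory needs grow exponentially with qubit count.

```python
# Back-of-the-envelope sketch (my own arithmetic, not from the interview):
# a classical state-vector simulation of n qubits stores 2**n complex
# amplitudes, which is why molecule-scale problems outrun classical machines.

BYTES_PER_AMPLITUDE = 16  # one complex number in double precision

for n_qubits in (30, 50, 100):
    amplitudes = 2 ** n_qubits
    bytes_needed = amplitudes * BYTES_PER_AMPLITUDE
    print(f"{n_qubits:>3} qubits -> {amplitudes:.3e} amplitudes, "
          f"~{bytes_needed / 1e9:.3e} GB of memory")
```

At around 50 qubits the state vector already runs to petabytes, which is why problems such as simulating drug molecules are expected to need a quantum machine rather than ever-larger classical ones.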