Looking into the Future

01:00 AM Nov 09, 2023 IST | GK READINGS DESK

“I definitely think that people should try to develop Artificial General Intelligence with all due care. In this case, all due care means much more scrupulous caution than would be necessary for dealing with Ebola or plutonium.”

Michael Vassar is a trim, compact man of about thirty. He holds degrees in biochemistry and business, and is fluent in assessments of human annihilation, so words like “Ebola” and “plutonium” come out of his mouth without hesitation or irony. One wall of his high-rise condo is a floor-to-ceiling window, and it frames a red suspension bridge that links San Francisco to Oakland, California. This isn’t the elegant Golden Gate—that’s across town. This one has been called its ugly stepsister. Vassar told me people bent on committing suicide have been known to drive over this bridge to get to the nice one.

Vassar has devoted his life to thwarting suicide on a larger scale. He’s the president of the Machine Intelligence Research Institute, a San Francisco–based think tank established to fight the extinction of the human race at the hands, or bytes, of artificial intelligence.

On its Web site, MIRI posts thoughtful papers on dangerous aspects of AI, and once a year it organizes the influential Singularity Summit. At the two-day conference, programmers, neuroscientists, academics, entrepreneurs, ethicists, and inventors hash out advances and setbacks in the ongoing AI revolution. MIRI invites talks from believers and nonbelievers alike, people who don’t think the Singularity will ever happen, and people who think MIRI is an apocalyptic techno cult.

Vassar smiled at the cult idea. “People who come to work for MIRI are the opposite of joiners. Usually they realize AI’s dangers before they even know MIRI exists.”

I didn’t know MIRI existed until after I’d heard about the AI-Box Experiment. A friend had told me about it, but in the telling he got a lot wrong about the lone genius and his millionaire opponents. I tracked the story to a MIRI Web site, and discovered that the experiment’s creator, Eliezer Yudkowsky, had cofounded MIRI (then called the Singularity Institute for Artificial Intelligence) with entrepreneurs Brian and Sabine Atkins. Despite his reputed reticence, Yudkowsky and I exchanged e-mails and he gave me the straight dope about the experiment.

The bets placed between the AI, played by Yudkowsky, and the Gatekeeper assigned to rein him in were at most thousands of dollars, not millions. The game had been held just five times, and the AI in the box won three of those rounds. In other words, the AI usually got out of the box, but it wasn’t a blowout.

Some parts of the AI-Box rumor had been true—Yudkowsky was reclusive, stingy with his time, and secretive about where he lived. I had invited myself to Michael Vassar’s home because I was pleased and amazed that a nonprofit had been founded to combat the dangers of AI, and that young, intelligent people were devoting their lives to the problem. And I hoped my conversation with Vassar would smooth my final steps to Yudkowsky’s front door.

Before jumping feet first into AI danger advocacy, Vassar had earned an MBA and made money cofounding Sir Groovy, an online music-licensing firm. Sir Groovy pairs independent music labels with TV and film producers to provide fresh soundtracks from lesser-known and hence cheaper artists. Until 2003, Vassar had been toying with the idea of applying himself to the dangers of nanotechnology. That year he met Eliezer Yudkowsky, whose work he had been reading online for years. He learned about MIRI, and about a threat more imminent and dangerous than nanotechnology: artificial intelligence.

Excerpt From: James Barrat. “Our Final Invention: Artificial Intelligence and the End of the Human Era.”
