Vicky Mochama is a contributing columnist for The Globe and Mail.
To start with, I really ought to apologize.
When I set out to change the world and make everything better, by creating what I call The Machine, it was, clearly, with the best of intentions. Yet I now have the distinct displeasure of considering the possibility that what I have done, as some may allege, could be “the end of humanity as we know it.”
You don’t have to forgive me. But I hope you will allow me to explain how we got here.
See, I was inspired! From Thomas Edison’s light bulb to Mark Zuckerberg’s Facebook, we have been living in a grand era of innovation. So I caught the bug for creation and technology.
It happened several years ago in Scotland, when a mapping app on my phone told me with red-pindrop confidence that I was not, in fact, bicycling leisurely on a hillside, but instead firmly in the middle of Loch Ness. I resolved then to answer a singular and crucial question: What if a machine knew where everyone was located to the exact latitudinal and longitudinal degree?
It would solve my predicament, sure, but it would also revolutionize the world as we know it. I wanted not to be lost and, with an internal backing chorus of several years of elementary-school motivational speakers propelling me forward, I would, after all these years, be the change I wished to see in the world.
I was most moved, too, by the work of artificial intelligence developers who were asking that most essential of questions: What if a machine had the answer to everything? It’s a question whose possibilities have enriched popular culture and sparked imaginations for centuries.
The Machine, like many AI systems, relies on huge computing power to process vast sets of data. But unlike the chatbots, The Machine is a tool that only looks at one piece of information. It simply knows where you are at all times.
So can you blame me for trying?
Actually, it is this very last question that animated my conversation with a lawyer for The Machine, who pointed out that the law itself has developed a number of innovations, such as the public prosecution of crimes against humanity and the notion of universal jurisdiction. Other lawyers are being consulted, naturally.
Recently, though, some of my mentors (spiritually speaking, it must be said, according to The Machine’s legal department) in the field of AI have been posing a different question: What if we did a whoopsie-whoops?
Imagine my surprise on coming out of my lab with a product whose time has come, only to find out that my fellow changemakers in the disrupting fields have not exactly been extolling the greatness of the miracles on which we dreamers have worked.
On Tuesday, the Center for AI Safety issued a single-sentence statement: “Mitigating the risks of extinction from AI should be a global priority alongside other societal risks, such as pandemics or nuclear war.” These 22 words, which happen to ignore the psychic gulf between “eliminating” and “mitigating” man-made extinction-level events, form an open letter in full, signed by hundreds of AI’s leading luminaries.
Last month, one of the letter’s signatories, Geoffrey Hinton – whom many call the “godfather” of AI – quit Google over AI ethics. He has even said that he now regrets his life’s work.
In his testimony to the U.S. Congress in May, OpenAI CEO Sam Altman said that among his concerns is the ability of large language models, like the ones that underpin his company’s ChatGPT, to provide “one-on-one interactive disinformation.”
Surely, I wasn’t hearing this right. Trillions of dollars have been spent researching, creating, developing, refining, training and testing AI, and all we’ve gotten is a high-tech drunken uncle whose loose relationship to facts and the truth becomes clearer the longer he speaks and the more often he uses the word “apparently”?
As I was telling the general counsel for The Machine, I doubt anyone could have foreseen this possibility for complete societal destruction at the granular level of the truth (or indeed, the privacy of one’s whereabouts, theoretically). How could the architects of a field that aims to make machines more human and as intelligent as we are have possibly foreseen that we might face terrifying human-like machines that are as intelligent as we are?
I don’t think that makes us responsible for the destruction of society. If anything, we should be celebrated for raising the alarm bells. A clear reading of the facts, for instance, will show that it was I who asked The Machine’s lawyers to get in touch with the government regarding the legality and morality of what I had already created. After all, my job is to move quickly and break things, not fix the problem no one asked me to make.
That’s why I’m already on to the next big question of our time: If we get away with this, what can we break next?