The early promise of social media was as a force for good, a way to build communities. You could share pictures of your kids, wish a cousin happy birthday or reconnect with a high-school buddy.
It was just that, for a while. Then, over the years, a dark side came into focus.
The development of social media over the past two decades has taught us a lot about the promises and dangers of modern technology. As artificial intelligence develops at a breakneck pace, it’s incumbent on policy-makers to learn the right lessons so we can head off problems.
First is the harm to individuals. Social media have been a powerful vector for misinformation, helping to fuel the political chain of events that led to the Jan. 6 insurrection in Washington (and whatever comes next). They are disconcertingly effective tools for cyberbullying and for harming teenagers’ mental health. Whistleblowers have alleged that the companies knew about these downsides and were unable to deal with them, or chose not to.
Policy-makers in Canada and elsewhere have generally been slow to respond. Ottawa’s long-promised bill to crack down on online hate speech and child sexual exploitation has still not materialized.
AI can, unfortunately, supercharge these concerns. It provides the machinery for bad actors to build factories of false stories and fake people that can dupe unwitting victims over text, voice or video.
The federal government introduced a bill last year to start bringing in some AI oversight. This space has supported the bill in principle – with some improvements – though truly addressing the issues goes beyond a single bill.
One example is the new field of AI-generated pornography featuring the likenesses of nonconsenting people. In a parliamentary debate in October, Member of Parliament Michelle Rempel Garner pointed out the inadequacies of a bill to update the sex offenders’ registry. The bill defines an “intimate image” as a visual recording, leaving a loophole as to whether the definition covers pictures made by AI, which can look real.
Ms. Rempel Garner argued unsuccessfully for the loophole to be closed.
Beyond the very real danger to individuals, there is also the big-picture threat to economic prosperity.
The leader of this phase of the AI revolution is OpenAI, makers of the ChatGPT language generator. The company is in turmoil this week after its board (temporarily) ousted high-profile chief executive officer Sam Altman. The reasons are not exactly clear, but the board’s concerns about the risks posed by AI may have played a part.
What is clearer is the role of Microsoft, which has invested heavily in OpenAI and worked successfully to reinstall Mr. Altman after his dismissal.
And this is what should make regulators wary. The past 30 years of tech innovations and lax antitrust enforcement have created corporate behemoths that can stare down large countries.
Companies such as Google and Amazon have amassed monopolies through acquisitions and predatory practices, according to recently filed lawsuits from the U.S. Federal Trade Commission. Meta – the parent company of Facebook – has retained its top position in social media by buying emerging rivals such as Instagram. Before those recent suits, the last major U.S. antitrust enforcement action of comparable scale had been against Microsoft itself, over its dominance of the desktop-computer market in the 1990s.
AI holds so much promise as a technology that regulators should be pro-active in making sure there is robust competition for the tools. Competition-killing acquisitions and contracts that stifle access should be examined carefully and blocked if needed.
Of course, all this isn’t to say AI is only about risk. If developed responsibly, it can have enormous upsides, too. Some early-adopter companies say using generative AI for tasks such as customer service has already delivered cost savings and generated more revenue. Canada in particular could badly use an upgrade to its productivity.
A common tenet of science fiction is that small actions in the past can have enormous effects on the future. Policy-makers should act now to create the future we want and head off an AI dark side.