Gus Carlson is a U.S.-based columnist for The Globe and Mail.

As public relations hustles go, Meta’s new Instagram Teen Account program is a pretty slick one.

The program, introduced this week in the U.S., Canada, Britain and Australia, promises new safeguards to limit young Instagram users’ exposure to online creeps and provocative content. Parents also get more options to manage their kids’ use of the popular social-media platform, which has more than two billion monthly active users.

Sounds pretty good, even if Meta admits many of the program’s features are easily circumvented by the very users they are meant to protect, potentially rendering the changes more placebo than prevention.

Those pesky details aside, don’t be fooled by the tech monolith’s good-guy persona. The new program is a carefully orchestrated PR buffer against looming U.S. legislative regulation on platform design and lawsuits from dozens of states based on parental fears about the damaging impact of Meta’s Instagram and Facebook platforms.

Those lawsuits allege Meta is harming young people and contributing to the youth mental-health crisis by purposely designing features in its popular social-media platforms that addict children to content.

The pushback has also been strong in Canada. Nine school boards and two private schools have sued the companies that own Facebook, Instagram, Snapchat and TikTok, accusing them of designing unsafe and addictive products that harm the mental health of students and disrupt learning.

The timing of the U.S. announcement is no coincidence, especially considering the profile of social-media issues in the U.S. Congress and the urgency to address them. The Kids Online Safety Act recently passed the Senate and went to the House this week.

The bill seeks to establish guidelines to protect minors from harmful material on social-media platforms and would require platforms to disable addictive features and to design and implement safeguards that prevent and mitigate harm to minors, such as content promoting sexual exploitation, eating disorders or suicide.

From a distance, the new Meta program seems aligned with the spirit of the proposed law. But on closer examination, there is a big difference: Meta places the bulk of the responsibility for security on the users – both parents and their kids – with various opt-in and opt-out options, rather than on itself. The bill would place the onus on companies such as Meta to incorporate safeguards into their platform designs.

In simple terms, the new Meta program provides that anyone under 18 who signs up for Instagram will get a restricted teen account, and existing accounts held by minors will be migrated to the new format within the next 60 days. Teen accounts in the European Union will migrate later this year.

Teen users can receive private messages only from people they follow or are already connected to. What Meta considers to be sensitive content, such as videos of people fighting or those promoting plastic surgery, will be limited.

Teenage users will also be notified when they have been on Instagram for more than 60 minutes. A sleep-mode feature will turn off notifications and send auto-replies to messages from 10 p.m. to 7 a.m.

Meta acknowledges that teen users may lie about their ages and says it will require verification in multiple instances, including when users try to create new accounts using adult birth dates.

The company also said it will actively search for instances where teens pose as adults and automatically place them in restricted teen accounts. These settings will be activated for all teens, but 16- and 17-year-olds will be able to turn them off. Those under 16 will need parental permission to do so.

As with any program like this, the real test lies in its commercial impact. Meta wouldn’t say how the changes would affect its business, suggesting only that there may be a short-term dip in Instagram use by young users.

Some suggest the opposite may be true. If parents believe their kids are more protected, they may be more willing to allow them to use the platform.

Others predict that if the safeguards make Instagram too boring, teens may increase their use of platforms such as TikTok. That would be a supreme irony, considering the U.S. government’s current efforts to shut down TikTok if the platform doesn’t jettison its Chinese owners over Washington’s national-security concerns.

It doesn’t take much critical thinking to see the obvious problem of having Meta self-regulate Instagram. It’s akin to having the fox design a new security system for the henhouse, and then take credit for it.

Meta, however, is as smart as it is slippery. By riding the white horse of altruism and demonstrating compassion and contrition, the company is seeking to fend off the independent solutions that pending legislation and multiple lawsuits may impose.

In the end, it comes down to whether you trust Big Tech to do the right thing. Ultimately, it’s unlikely Meta would make any real changes to Instagram that would have a material effect on revenue, even if they were in the best interests of its young users.
