
Most Canadians who use generative artificial intelligence for work-related purposes don’t always check its accuracy, and many have passed off its creations as their own, practices that could have serious ramifications for their employers.

That’s according to a recent study by consulting firm KPMG, which also found 20 per cent of Canadians have used generative AI tools such as ChatGPT to assist in their work or studies, with the vast majority saying the technology has enhanced the quality of their work. At the same time, more than two-thirds of those users have claimed AI-generated content as their own, and nearly a quarter say they do so “all of the time.”

“Disclosure is always encouraged, and is quite necessary, especially when you venture into something as ground-breaking as generative AI,” says Ven Adamov, a partner in KPMG’s generative AI practice. “Claiming something that was generated by a machine as your own is just as unethical as presenting something that another human generated as your own.”

Spencer Knibutat, an associate lawyer at Filion Wakely Thorup Angeletti LLP, a Toronto-based employment law firm, says generative AI users could find themselves in legal jeopardy if products or services they create are largely built using content generated by the software.

“Employers may have some difficulty, for example, claiming copyright protections with respect to work product that was solely created by generative AI tools,” he says. “This can also create risks for employers based on the lack of reliability.”


Mr. Knibutat says these tools are prone to bias and to producing factually inaccurate content. ChatGPT, for example, is limited to data that was published on the internet and made available to the public before September of 2021. If an HR professional uses ChatGPT to generate a job description, Mr. Knibutat says, it might not reflect regulatory changes that have come into effect since.

KPMG’s Mr. Adamov says the technology is poised to change the workplace as much as, or more than, the personal computer and the internet did before it, but warns it’s still early days and there are plenty of other hazards to watch for, including security and confidentiality concerns.

The KPMG survey found about a quarter of Canadian generative AI users have entered company information into the software, 15 per cent have entered proprietary company data, 13 per cent admit to feeding in data about customers and clients without their names, and 10 per cent have entered private company financial data. Mr. Adamov says generative AI software providers are free to use any of the data entered into their tools to train future iterations.

“If they pasted a contract into an AI chatbot to get a summary of its content, that would be a violation of confidential information requirements, because now the entire confidential contract with their supplier is out there,” he says. “There’s no control over the information and what’s done with it. That’s where the concern comes from.”

In April, ChatGPT maker OpenAI announced a forthcoming subscription-based version of the software, ChatGPT Business, that will allow enterprise customers to opt out of having their employees’ prompts used to train future models. But such restrictions don’t necessarily protect users from getting into hot water with their employers.

“Just because OpenAI has announced these policies doesn’t necessarily mean their data management practices will be compliant with potential privacy laws or, from an employee’s perspective, ensure that they are compliant with their confidentiality obligations to their employers,” warns Mr. Knibutat.

Canadians use generative AI for a range of work-related functions — from brainstorming and drafting internal communications to generating legal contracts — and there is no universal standard for determining the point at which a disclosure becomes necessary.

“If you copy verbatim, then attribution is necessary, but it’s a sliding scale,” says Dr. Stephen Thomas, a professor of management analytics at the Smith School of Business at Queen’s University in Kingston. “People use lots of tools to generate ideas — you might Google stuff, read some books, watch some TV, then get an idea — and in the past we haven’t said you have to cite everything you’ve ever done that led to this idea.”


As a result, Dr. Thomas says it’s important for organizations to develop clear policies around how and when generative AI can be used in the workplace, including details on when a disclosure is necessary, based on the nature of the work.

The Globe and Mail newsroom, for example, has a policy for generative AI tools that limits the cases in which they can be used in the course of journalists’ day-to-day work, and requires that their use be disclosed to readers whenever they are.

“If it’s clear what’s allowed and what’s not, the risks go way down,” says Dr. Thomas. “It’s a bit of the wild west out there — where each individual or each department is figuring it out for themselves and learning the hard way — so even a little training would go a long way.”

