Federal unions watching AI use in the workplace closely
CRA, National Defence among departments exploring ChatGPT and other technologies
The use of artificial intelligence (AI) technology is now authorized in certain contexts for federal government workers, but some experts and unions are calling for caution in how it's deployed.
Camille Awada, president of the Canadian Association of Professional Employees (CAPE), the third-largest union for federal workers, said the union is monitoring the situation.
"CAPE understands that there is no avoiding AI entirely, but we advise the government to see how it can assist federal public servants in doing their jobs more efficiently without the loss of quality and ensuring employment rights are protected through the process of any AI implementation," Awada said.
Awada said the union has been looking into the issue and has a committee to ensure the protection of "every public servant out there."
A dozen departments contacted by Radio-Canada indicated that they follow the "directive on automated decision-making," a document developed by the federal Treasury Board Secretariat as a guideline for public servants' use of AI.
The federal government website says the directive was made "to ensure that automated decision systems are deployed in a manner that reduces risks to clients, federal institutions and Canadian society."
According to the directive's criteria, any automated decision-making must undergo an "algorithmic impact assessment" which determines the level of risk.
Items assessed include:
- The significance of the decision to the person affected.
- The duration of the potential impact.
- The data used, including its collection method and type.
- The nature of the algorithm used and its role in the decision-making process.
The results of these assessments must then be made public online.
Karen Eltis, a law professor at the University of Ottawa and an expert in cyber security, said the rise of AI is inevitable and government should apply caution and transparency in its use.
Eltis likened the emergence of AI to the industrial and digital revolutions, saying the federal government has an opportunity to weigh the implications early, before the technology develops beyond its control.
She said she's heartened to see the government taking strides at this stage, but compared the task of regulating "cyber law" to meditating in Times Square.
"We really need to think very deeply about these questions," Eltis said, to ensure "that government [regulates AI] in a way that fosters citizen confidence and addresses all the issues."
Union doesn't want Phoenix repeat
The Canada Revenue Agency confirmed by email that some employees are exploring generative AI technologies like ChatGPT, including its ability to write correspondence. The agency is evaluating the wider use of generative AI tools for common tasks.
Employees at Innovation, Science and Economic Development Canada say they are using AI to improve processes, including those related to patents.
Some employees at the Department of National Defence also make limited use of ChatGPT.
"This is done in small-scale administrative settings, in a controlled and ethical manner," the department said in a statement in French.
Certain departments and agencies, such as Global Affairs Canada, National Defence, Employment and Social Development Canada, Health Canada, the Canada Revenue Agency and the Canada Border Services Agency, say they are preparing their own strategies to study the repercussions of AI use.
No department or agency has yet reported instances of employees misusing AI for work.
Jennifer Carr, president of the Professional Institute of the Public Service of Canada, which has 70,000 members working in various levels of government, said federal workers have been burned by technology replacing human oversight before.
"When we make an over-reliance on technology, AI in particular, to make decisions, we think that it's the panacea and that we can cut workforce," Carr said.
"When we switched over to Phoenix, we got rid of our pay and compensation advisers because the system would do more and more decisions automated, but that didn't work out and you know we are still paying for it seven years later."
Corrections
- A previous version of this story used a wrong pronoun for Camille Awada. (Aug 04, 2023 10:38 AM ET)
With files from Radio-Canada's Patrick Foucault