AI could have catastrophic consequences — is Canada ready? | 24CA News
Nations, Canada included, are running out of time to design and implement comprehensive safeguards on the development and deployment of advanced artificial intelligence systems, a leading AI safety company warned this week.
In a worst-case scenario, power-seeking superhuman AI systems could escape their creators' control and pose an "extinction-level" threat to humanity, AI researchers wrote in a report commissioned by the U.S. Department of State entitled Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI.
The department insists the views the authors expressed in the report do not reflect the views of the U.S. government.
But the report's message is bringing the Canadian government's actions to date on AI safety and regulation back into the spotlight, and one Conservative MP is warning that the government's proposed Artificial Intelligence and Data Act is already out of date.
AI vs. everyone
The U.S.-based company Gladstone AI, which advocates for the responsible development of safe artificial intelligence, produced the report. Its warnings fall into two main categories.
The first concerns the risk of AI developers losing control of an artificial general intelligence (AGI) system. The authors define AGI as an AI system that can outperform humans across all economic and strategically relevant domains.
While no AGI systems exist to date, many AI researchers believe they are not far off.
"There is evidence to suggest that as advanced AI approaches AGI-like levels of human and superhuman general capability, it may become effectively uncontrollable. Specifically, in the absence of countermeasures, a highly capable AI system may engage in so-called power seeking behaviours," the authors wrote, adding that these behaviours could include strategies to prevent the AI itself from being shut off or having its goals modified.
In a worst-case scenario, the authors warn that such a loss of control "could pose an extinction-level threat to the human species."
"There's this risk that these systems start to get essentially dangerously creative. They're able to invent dangerously creative strategies that achieve their programmed objectives while having very harmful side effects. So that's kind of the risk we're looking at with loss of control," Gladstone AI CEO Jeremie Harris, one of the authors of the report, said Thursday in an interview with CBC's Power & Politics.
A new report is warning the U.S. government that if artificial intelligence laboratories lose control of superhuman AI systems, it could pose an extinction-level threat to the human species. Gladstone AI CEO Jeremie Harris, who co-authored the report, joined Power & Politics to discuss the perils of rapidly advancing AI systems.
The second category of catastrophic risk cited in the report is the potential use of advanced AI systems as weapons.
"One example is cyber risk," Harris told P&P host David Cochrane. "We're already seeing, for example, autonomous agents. You can go to one of these systems now and ask, … 'Hey, I want you to build an app for me, right?' That's an amazing thing. It's basically automating software engineering. This entire industry. That's a wicked good thing.
"But imagine the same system … you're asking it to carry out a massive distributed denial of service attack or some other cyber attack. The barrier to entry for some of these very powerful optimization applications drops, and the destructive footprint of malicious actors who use these systems increases rapidly as they get more powerful."
Harris warned that the misuse of advanced AI systems could extend into the realm of weapons of mass destruction, including biological and chemical weapons.
The report proposes a series of urgent actions nations, beginning with the U.S., should take to safeguard against these catastrophic risks, including export controls, regulations and responsible AI development laws.
Is Canada's legislation already obsolete?
Canada currently has no regulatory framework in place that is specific to AI.
The government introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27 in November of 2021. It’s intended to set a foundation for the responsible design, development and deployment of AI systems in Canada.
The bill has passed second reading in the House of Commons and is currently being studied by the industry and technology committee.
The federal government also introduced in 2023 the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, a code designed to temporarily provide Canadian companies with common standards until AIDA comes into effect.
At a press conference on Friday, Industry Minister François-Philippe Champagne was asked why — given the severity of the warnings in the Gladstone AI report — he remains confident that the government’s proposed AI bill is equipped to regulate the rapidly advancing technology.
"Everyone is praising C-27," said Champagne. "I had the chance to talk to my G7 colleagues and … they see Canada at the forefront of AI, you know, to build trust and responsible AI."

In an interview with 24CA News, Conservative MP Michelle Rempel Garner said Champagne’s characterization of Bill C-27 was nonsense.
"That's not what the experts have been saying in testimony at committee and it's just not reality," said Rempel Garner, who co-chairs the Parliamentary Caucus on Emerging Technology and has been writing about the need for government to act faster on AI.
"C-27 is so outdated."
AIDA was introduced before OpenAI, one of the world’s leading AI companies, unveiled ChatGPT in 2022. The AI chatbot represented a stunning evolution in AI technology.
"The fact that the government has not substantively addressed the fact that they put forward this bill before a fundamental change in technology came out … it's kind of like trying to regulate scribes after the printing press has gone into widespread distribution," said Rempel Garner. "The government probably needs to go back to the drawing board."

In December 2023, Gladstone AI’s Harris told the House of Commons industry and technology committee that AIDA needs to be amended.
"By the time AIDA comes into force, the year will be 2026. Frontier AI systems will have been scaled hundreds to thousands of times beyond what we see today," Harris told MPs. "AIDA needs to be designed with that level of risk in mind."
Harris told the committee that AIDA needs to explicitly ban systems that introduce extreme risks, address open source development of dangerously powerful AI models, and ensure that AI developers bear responsibility for ensuring the safe development of their systems — by, among other things, preventing their theft by state and non-state actors.
"AIDA is an improvement over the status quo, but it requires significant amendments to meet the full challenge likely to come from near-future AI capabilities," Harris told MPs.
