
How one Ontario city is blazing the trail for public sector AI use
Artificial intelligence is spreading beyond private industry and into the public sector, with one city in Ontario making a name for itself with its unique application of AI: in its fight against homelessness.
Several experts say the City of London’s approach has been responsible so far, but argue that the unregulated use of AI in the public sector makes standards and guardrails an urgent need.
“Any use of AI that can make vulnerable people’s lives better or more comfortable is great,” said Marcel O’Gorman, professor and university research chair at the University of Waterloo and director of the Critical Media Lab.
“As long as it’s being done in a way that’s not putting them at some current or future risk, or exposing them to some kind of heavy-handed policing.”
The Office of the Information and Privacy Commissioner of Ontario (IPC) and the Ontario Human Rights Commission (OHRC) say it’s difficult to determine how widespread the use of AI is within municipalities, as they aren’t aware of any statutory reporting requirements.
“Without (appropriate guardrails), AI technologies risk crossing the lines beyond what Ontarians consider legally, socially and ethically acceptable,” the two bodies said in a joint email to Global News.

Using AI to predict chronic homelessness
The City of London’s Chronic Homelessness Artificial Intelligence, or CHAI, is meant to predict the likelihood of someone becoming chronically homeless within the next six months. For this purpose, chronic homelessness is defined as spending 180 days or more per year in a public shelter.
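To make that definition concrete, here is a minimal sketch in Python of how such a label could be derived from shelter-stay records. The table layout, column names and threshold logic are illustrative assumptions for this article, not the actual HIFIS schema or the CHAI code.

```python
import pandas as pd

# Hypothetical shelter-stay records: one row per person per night spent in a
# public shelter. Column names are illustrative, not the real HIFIS schema.
stays = pd.DataFrame({
    "person_id": [101, 101, 101, 202],
    "night": pd.to_datetime(["2020-01-05", "2020-01-06", "2020-01-07", "2020-03-01"]),
})

WINDOW_DAYS = 365  # look-back period (one year)
THRESHOLD = 180    # nights in shelter that count as "chronically homeless"

def is_chronically_homeless(person_nights: pd.Series, as_of: pd.Timestamp) -> bool:
    """True if the person spent THRESHOLD or more distinct nights in shelter
    during the WINDOW_DAYS ending on the `as_of` date."""
    window_start = as_of - pd.Timedelta(days=WINDOW_DAYS)
    in_window = person_nights[(person_nights > window_start) & (person_nights <= as_of)]
    return in_window.nunique() >= THRESHOLD

as_of = pd.Timestamp("2020-06-30")
labels = stays.groupby("person_id")["night"].apply(is_chronically_homeless, as_of=as_of)
print(labels)  # person_id -> True/False label a model could be trained to predict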
“From what I’ve read, there may be some initiatives like this happening in the U.S., but I haven’t read any others like this happening in Canada for sure,” said O’Gorman.
He added that municipalities using AI are typically doing so to track energy consumption and supply. Other examples of AI in the public sector include traffic management in New Westminster, B.C., and a variety of applications from building safety codes and inspections to fire rescue in Edmonton, Alta., according to GovLaunch, which bills itself as an international wiki for “local government innovation.”
However, London’s CHAI tool could easily expand to other municipalities.
“What’s interesting is that the developers are making this an open-source system so that anyone can use the same system. It’s not proprietary. They’re not trying to sell it.”
CHAI went live in August 2020, at a time when the municipality was prioritizing its COVID-19 response. It’s been running in the background since that time, according to Kevin Dickens, deputy city manager of Social and Health Development with the City of London.
“(From) August 2020 to May of 2023, we’ve seen the number of individuals experiencing homelessness double in our community, and we’re starting to see a large population that is experiencing chronic homelessness and unsheltered homelessness that simply did not exist when this tool was conceived.”
Dickens says those defined as chronically homeless make up about four per cent of shelter users, but they use roughly a quarter of shelter resources.
Mat Daley, director of Information Technology and Services with the City of London, said CHAI can be used to predict heavy shelter use, which gives staff the opportunity to enhance “resource allocation and operations.”
The City of London has invested roughly $57,000 in CHAI so far, with ongoing monthly costs of roughly $1,100, said Daley.

The AI tool uses data from the Homeless Individuals and Families Information System (HIFIS), a “comprehensive data collection and case management system” that is a federal standard, Daley explained.
“HIFIS would receive the data from the 20 organizations across the City of London who are supporting homelessness,” he said.
Currently, CHAI is using “anonymized information from HIFIS and it is running the machine learning models to provide that intelligence to caseworkers in homelessness,” said Daley.
O’Gorman has described the development of CHAI as “respectable,” noting the developers chose to follow the guidelines of the General Data Protection Regulation (GDPR), established by the European Union.
“The guidelines help ensure that the data is used fairly and equitably, and it’s based on a model of consent so that a person has to give consent to enter into this project. And they are able to leave at any time.”
O’Gorman said the developers also followed the principles of “explainable AI,” which means they can trace how the AI came to its conclusions.
“It will tell you like, ‘This is how I figured out why this person is susceptible to chronic homelessness.’ And that’s good. I mean, it allows you to look at those decisions and say, ‘Is there any bias happening here? Can we trust this?’”
Furthermore, CHAI doesn’t make decisions; it provides information that caseworkers (who “have a better grasp of the full human context of homelessness”) can use to make decisions.
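As a rough illustration of that kind of explainability, here is a minimal sketch using an invented logistic-regression model in Python. The features, data and model here are assumptions made for illustration; they are not the CHAI model itself, which is described in the city’s pre-print.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented example features; the real CHAI feature set is derived from HIFIS data.
feature_names = ["shelter_nights_last_90d", "age", "prior_shelter_episodes"]
X = np.array([
    [10, 45, 1],
    [80, 52, 4],
    [ 2, 30, 0],
    [60, 61, 3],
])
y = np.array([0, 1, 0, 1])  # 1 = became chronically homeless within six months

model = LogisticRegression().fit(X, y)

# For a linear model, each feature's contribution to the log-odds (relative to
# an "average" person) is simply coefficient * (value - mean). Listing these
# per person is what lets a caseworker see why a risk score is high or low.
person = X[1]
contributions = model.coef_[0] * (person - X.mean(axis=0))
for name, c in zip(feature_names, contributions):
    print(f"{name}: {c:+.3f}")
print("predicted risk:", round(model.predict_proba([person])[0, 1], 3))
```

The output is a short list of signed contributions a caseworker could scan before deciding whether to trust or override a prediction.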
A flow chart showing how data from HIFIS is used by the AI.
via CHAI pre-print, Github.com/Sept. 2020
As mentioned, it’s also available through an open-source licence, meaning the city is not trying to sell the tool.
“Anybody can use it, build on it and improve it. This is truly a ‘we’re all in it together’ type of undertaking,” Daley told Global News on May 23.
“Next week, I’m meeting with an individual from London, England, who’s interested in what we’re doing with the CHAI model and the potential for it to be implemented in England.”
The project was governed by the homelessness sector through a homelessness sector-led committee, underwent bias and privacy checks, and all of the results were reviewed by privacy experts in the city clerk’s office as well as a third-party data scientist, Daley added.
“And as part of a review for an article by Reuters, two unaffiliated computer science experts and a privacy lawyer found that ‘the program seems to take the necessary steps to protect users’ personal information.’”
AI is only as good as the data it’s based on
However, even though several experts believe London’s approach to developing CHAI was responsible, O’Gorman notes that ultimately, the model is still one of surveillance.
“It doesn’t necessarily mean it’s all bad or evil, but we have to call a spade a spade.”
He added that often the public perception is that tech innovations impact privileged populations first, but technologies associated with surveillance tend to impact “those who are already in vulnerable situations and subject to policing and surveillance.”
Jacqueline Thompson is the executive director of Life Spin, a charity supporting low-income families in London, and believes CHAI reflects an existing bias in London’s homelessness response.
“That bias excludes our aging population. It excludes women with children. It excludes new immigrant families. It excludes Indigenous families living off reserve in the city. It also discriminates against folks who have mental health challenges.”
Thompson said that because the data for CHAI comes from HIFIS, it doesn’t include information from those using the so-called “private shelter system,” for example, people couch-surfing or living with multiple families in one small home.
“There’s forgotten thousands that do not use and will not use public shelter spaces or take their children to live on the street,” she explained.
“I had a woman show me a picture on her phone of the space where her children sleep, and it was a living room space with sleeping bags lining the walls. It breaks your heart. She’s like, ‘This is where I live. I need a home.’ And they don’t count in the London system as it stands because they’re only drawing data from places that serve public shelters.”

A letter from Life Spin received by the Community and Protective Services Committee in May stated that the charity has seen “a marked increase in families sharing the rentals of homes and apartments to keep their children under a roof.”
“We have seen a similar increase in numerous family members sharing small apartments, such as a family of six, living in a one-bedroom apartment,” the letter reads.
“Just because they are not sleeping in the doorway of your workplace when you arrive for work does not mean that they should be discriminated against.”
O’Gorman said that bias resulting from missing data is a known issue for artificial intelligence.
“Even if you look at the reports currently about what the primary demographic is for homelessness, I believe it was a white man, approximately 52 years of age, single, no children, jobless. And that then becomes your profile for risk,” he explained.
“You have to ask people, what was that profile based on and what is the quality of the data that went into arriving at that profile? Who’s being left out? Who’s missing?”
A graph showing different feature explanations and their associated contribution to the probability of chronic homelessness.
via CHAI pre-print, Github.com/Sept. 2020
Daley agrees that CHAI doesn’t include everyone experiencing homelessness, but he said CHAI isn’t meant to address all homelessness.
“The purpose of that model was to identify individuals at a higher risk of homelessness as measured in the shelter system.”
Dickens stressed that CHAI is simply one tool, and not “the” tool, in addressing homelessness in London, and was critical of suggestions that the city is ignoring segments of the homeless population.
“What is our urgent crisis right now? Our urgent crisis is that there’s 38 active encampments in London and people are at significant risk of unnecessary death… What’s also a crisis is we have a severe shortage of housing and we have next to zero high-supportive housing,” he said.
“Are we also focusing on people that currently have a roof over their heads, are precariously housed, and might be in dangerous or risky situations? Absolutely. On a daily basis. And we’re not taking money away from those programs, those benefits, those supports to address the other. We’re trying to do both.”
“Responsible” AI
While expert accounts point to a responsible approach from the City of London, O’Gorman suggested that it’s irresponsible to assume that will always be the case.
“What could be done with this system if it was put into the wrong hands or into a different government context, different bureaucratic context, that’s more driven by policing or by cutting funding to social programs? It might make use of the data and the AI that is being used to process that data and analyze the data.”
The federal government is working on legislation that aims, in part, to “ensure the development of responsible AI in Canada” and “prohibit reckless and malicious uses of AI,” but Innovation, Science and Economic Development Canada (ISED) says the earliest the legislation could come into effect is 2025. Global News is awaiting comment from ISED.
Provincially, a spokesperson for the Ministry of Public and Business Service Delivery says, “We continue to have conversations with our partners, including many municipalities, on how to best take advantage of emerging technologies like AI.”
When asked if there were any current reporting requirements for municipalities on their use of AI, the ministry said that the province is in the process of developing its Trustworthy AI Framework.
The Office of the Information and Privacy Commissioner of Ontario (IPC) and the Ontario Human Rights Commission (OHRC) recently released a joint statement urging the government to take proactive measures to address AI in the public sector.
In an email to Global News, the human rights commission and privacy commissioner said AI technologies can be used for good – fast-tracking the delivery of government services or solving major public health issues, for example – but guardrails are essential.
AI technologies “often rely on personal information or de-identified data and it is imperative that this information be lawfully collected and properly protected.” It is also important that Ontarians are protected from “unjustifiable or unnecessary surveillance,” they add.
The two also pointed to the potential for biases in the technology and a lack of accountability or transparency as additional risks.
Even beyond the public sector, the IPC and OHRC say Ontarians impacted by AI technologies “should be able to challenge both inputs that are collected without justification as well as outputs that they believe to be unfair or discriminatory.” As well, any use of AI should be disclosed to the public.
Authorities worldwide are racing to rein in artificial intelligence, including in the European Union, where groundbreaking legislation passed a key hurdle this week with lawmakers agreeing to changes in draft rules proposed by the European Commission. However, it could still be years before any rules take effect.
In the roughly six months since the launch of ChatGPT, popularity of and interest in AI have surged, bringing with them a growing chorus of concern. The World Health Organization warned of the risks of misuse of AI in health care, while the head of the United Nations backed calls for the creation of an international AI watchdog body.
Outside of artificial intelligence, there is also growing interest in increased regulation of the tech sector at large.
In Canada, a Senate committee is studying a bill that would make Meta and Google pay for Canadian journalism that helps the companies generate revenue. Prime Minister Justin Trudeau says the bill is meant to help prevent those companies from weakening Canada’s democracy by threatening its domestic media industry.
In 2021, a Facebook whistleblower raised concerns about the platform’s impact on children and politics.
O’Gorman added that external regulation is necessary because self-regulation simply doesn’t work.
“(Big technology companies) can’t be trusted to do that, we’ve seen over and over again, in part because they don’t really fully understand the implications of the technologies they’re developing.”
However, external bodies tend to move more slowly than technology can develop, making regulation difficult.
The focus on technological fixes over increased investment in human supports also concerns O’Gorman, who views the development of something like CHAI as an example of “really backward priorities in our society and economy,” with AI developers making “much better salaries than the people on the ground who are engaged in the actual care.”
“There’s something sexy about finding a techno-fix to homelessness,” said O’Gorman.
“But we can’t let that allow us to lose sight that this real problem is about human beings on the ground trying to survive and the people who are trying to care for them and help them survive.”
– with files from The Associated Press’ Kelvin Chan, The Canadian Press’ Mickey Djuric and Reuters’ Michelle Nichols
