For years, Stacey Wales tried to brainstorm what she would say at the sentencing of her brother’s killer.
“I wanted to yell,” Wales told the Star in an interview on Friday. “I would have these thoughts bubble up, while I was driving or in the shower, often of anger or frustration, and just read them into my phone.”
In 2021, Wales’ brother, Christopher Pelkey, was fatally shot while at a red light in Chandler, Arizona. His killer, Gabriel Horcasitas, first faced a jury in 2023, but the case ended in a mistrial. After a retrial in March, he was found guilty of manslaughter.
When it came time for Wales to put pen to paper, all she could hear was Pelkey’s voice. So, she began to write in his words. It worked.
Then, with the help of her husband, who has experience using generative artificial intelligence, Wales set out to create a video of her brother’s likeness reading the statement in his own voice.
The video was the last of 10 statements read out at the May 1 sentencing hearing.
“To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day in those circumstances,” Pelkey’s facsimile, wearing a grey baseball cap, said in court. “In another life, we probably could have been friends.”
“I believe in forgiveness, and a God who forgives. I always have and I still do.”
A man killed in a road rage incident in 2021 directly addressed his killer in an AI-generated video statement during a sentencing hearing in an Arizona county court earlier this month. Credit: Submitted by Stacey Wales
It wasn’t a perfect likeness. The recreation of Pelkey jolts unnaturally throughout the nearly four-minute video. But it seemed to leave a favourable impression on Maricopa County Superior Court Judge Todd Lang, who described it as “genuine.”
“I loved that AI,” Lang said. “Thank you for that. And as angry as you are, and justifiably angry as the family is, I heard the forgiveness.”
Horcasitas received just over 10.5 years’ jail time.
The case joins a growing list of U.S. court proceedings in which parties have reached for generative artificial intelligence.
In a high-profile example from 2023, former lawyer for President Trump, Michael Cohen, claimed he’d unwittingly passed his own lawyer fake case citations generated by an AI chatbot. More recently, a plaintiff in a New York court tried to employ an AI-generated avatar to argue his case — an attempt that was quickly swatted down by the judge.
For Ryan Fritsch, policy counsel of the Law Commission of Ontario, the rise in use “speaks to the interest and enthusiasm out there for new forms of efficiencies in the criminal justice system.”
“There are some considerable promises,” Fritsch told the Star on Friday. “But at the same time, concerns should arise if there are not sufficient rules, guardrails or governance models in place.”
How is AI used in the Canadian criminal justice system?
As it stands, the use of AI in the criminal justice system is more commonly found in policing, often controversially, where services across the country have employed technologies such as facial recognition systems and automatic licence plate readers.
In Canadian courts, AI has been less prevalent – though Fritsch says he’s starting to see upticks in its use. Just this week, the conduct of an Ontario lawyer was called into question after a judge suspected ChatGPT had been used to craft a factum submitted in civil proceedings. She has since been ordered to attend a hearing with the judge to explain the discrepancies.
Where it’s becoming most common, he says, is in cases where people are self-represented.
“Right now, what we’re mostly seeing is an increasing number of self- and un-represented people relying on generalist AI tools like ChatGPT to make their case for them,” he said. “And the consequence is that they’re actually spending more time disavowing the errors than reaping any benefits.”
Are there laws on the use of AI in Canadian courts?
There are currently no laws specific to the use of artificial intelligence in the Canadian justice system. In the absence of that framework, whether AI-generated material is permitted into a legal case often falls on the individual judge or justice.
As a result, some individual courts, police services and legal associations have started to come up with policies. Toronto police, for example, were the first service in Canada to introduce such a policy, in 2022.
A patchwork of policies, however, can open the court up to unnecessary litigation, says Fritsch, and worsen backlogs and delays.
“Without a framework, there’s going to be a lot of struggle for courts, cops and Crowns to interpret how our existing laws, and our civil rights, are going to apply to the use of AI,” Fritsch said. “And there’s going to be a lot of varying opinions on that.”
Amending laws to regulate AI will take time, plus there’s the “long leg” problem that court cases come months or years after new technology develops, Fritsch said. “There could be years of misuse in the meantime,” he added.
What are the risks?
One of the most significant concerns for Fritsch is whether AI technologies can effectively understand and uphold Canadian standards of law.
“We know that AI is prone to bias,” Fritsch said. “So if it’s going to be used, we really need to make sure we’re interpreting its use through the lens of the Charter of Rights and Freedoms and procedural fairness.”
For example, in the U.S., algorithms have long been used to assess risk in bail and release decisions, but Fritsch says they’ve been known to miss the mark.
“What we’ve seen from a couple of cases in the U.S. is some really, really harsh recommendations about people who are facing first offences, or who are doing time for minor offences.”
As a result, the need for human oversight remains, whether through the due diligence of staff or the discretion of a judge.
Are there any potential benefits?
For most, the criminal justice system is unfamiliar, and navigating its nuances can be a daunting task. For older citizens or otherwise vulnerable populations, AI, if used properly and transparently, “could actually increase access and justice for a lot of people,” Fritsch said.
The most common case for the use of AI in the public sector is efficiency – something the courts are not known for – says Shion Guha, assistant professor at the University of Toronto’s Faculty of Information.
“A lot of public sector agencies are basically looking towards generative AI as a way to reduce administrative overhead,” Guha told the Star Friday. “The idea is that this will increase human efficiency and reduce costs.”
Those promises, he says, have not been properly vetted, though.
“There hasn’t been any formal, finished research on whether or not this evaluative statement is true.”
Could generative AI be allowed to craft victim impact statements in Canadian courts?
In the absence of laws governing AI use, it’s hard to say — it would come down to the presiding judge or justice, says Fritsch.
In the Arizona case, he said, the judge likely admitted the video on the basis it served as an expression of the family’s feelings, not as a statement from Pelkey.
“I think the court, in their generosity, likely admitted it as almost a courtesy, and it might not be given a whole lot of weight.”
While Wales wrote the script for her brother’s video, Fritsch pointed out that AI could also be used to generate the statements read out by a person’s likeness, further complicating the issue.
“AI that can be trained on the sum total of all the comments a person may have made on social media or in emails or texts over years, and then used to simulate the person,” Fritsch said.
“There’s no doubt it would not be admitted for the truth of its contents — because it’s all made up — but might it be allowed for, say, compassionate reasons only, and with no bearing on the sentencing?” he asked. “Who knows?”