
With fewer volunteers knocking on doors and staffing campaign offices, political parties are turning to artificial intelligence to help engage the voting public.

“AI is definitely starting to be used for voter contact,” said Cameron Bonesso, president of the campaign management firm Constituent Manager Solutions.

“And I know for a fact that major political parties across the country are looking at it to supplement the type of calling they’re doing.”

Experts say the technology creates new opportunities and risks for campaigns and voters, but also has its limits.

“At the end of the day, authenticity is really going to be what’s gold here,” said veteran political strategist Jeff Ballingall. “People are still going to want emotional ties and a human element to politics.”

Finding efficiencies

Bonesso, who managed telecommunications for Peter MacKay’s 2020 Conservative Party leadership bid, refers to AI voter contact as a “very new space.” 

At his own firm, they have been testing different use cases for AI calling. “[I]t’s probably something we’re going to be rolling out in the next few months,” he said.

In his view, AI has the potential to make campaigns far better at connecting with constituents.

“We see this a lot with … ethnic outreach, where it’s often more effective to have somebody of that ethnic background call them, [somebody] that speaks the language they may speak in addition to English,” he said.

“So if we’re calling a Chinese person that speaks Mandarin, we can have an AI that sounds Chinese over the phone and can speak Mandarin.”

But AI will not be used for all types of voter outreach, Bonesso notes. 

“I think you’ll pretty well need that human element [for fundraising calls]. But for those standard voter ID calls where you’re calling through thousands upon thousands of people to try to get those responses, that’s where it’s going to supplement a lot of those [outreach] efforts,” he added.

Ballingall says AI can also be useful for generating different iterations of fundraising and campaign messages to see what is most effective. 

“I think AI will be very useful for data analytics, for segmenting audiences, for that sort of data-[focused] work,” said Ballingall, who is today president of the digital and campaign strategy firm Mobilize Media Group. 

“And I think it’ll allow people to do … better message targeting and create more A/B testing … lots more variant testing.”

Chris Tenove, a public policy researcher at the University of British Columbia, notes iterative messaging has long been a standard marketing practice — and is also not a silver bullet. 

“There are real limits to whether further micro-targeting will really lead to more persuasive political messages,” he told Canadian Affairs in an email. “There are diminishing returns in trying to use more of people’s characteristics to target them.

“Furthermore, people consume tons of competing political messages, particularly during elections, meaning that it’s unlikely for any party or corporation to generate super-sticky, irresistible messages.”

Fragmented feeds

Ballingall and Tenove both noted AI risks further fragmenting the political information landscape.

“I don’t think [Canadians] understand how siloed people are becoming, and the death of the monoculture,” Ballingall said. 

Even people living in the same household may now inhabit very different political realities online.

“A wife and a husband will see completely different things on their social media,” Ballingall said. “Siblings will see different things. You and your best friends will see completely different things.”

For Tenove, a related risk is the growing volume of low-quality or misleading content — what he called “information pollution” — and rising public skepticism toward political information that conflicts with existing beliefs.

He also raises concerns about the increasing reliance on large language models like ChatGPT or Elon Musk’s Grok for political information, noting users may gravitate toward systems that reinforce their preferred viewpoints.

He says this will be a challenge that policymakers will ultimately need to confront.

“It is unrealistic to expect people to stop using social media, chatbots, online shopping platforms, and so on, simply because they know that algorithms nudge their choices,” he said. 

“The more realistic response is to develop and enforce standards that limit manipulation and harm, while also investing in healthier, more public-serving digital tools and spaces.”

Ground rules

Another risk in the political information sphere is deepfakes. Deepfakes are audio, video or images created with AI to convincingly depict a real person saying or doing something they did not do or say. 

In 2024, the Canadian Internet Registration Authority published a report showing half of Canadians believe deepfakes pose a threat to elections. The authority manages Canada’s “.ca” internet domain and conducts research on online trust and cybersecurity.

“Two-in-ten Canadians say they have encountered deepfakes online in the past year, and one quarter don’t know whether they have,” said its report.

In a written statement to Canadian Affairs, Elections Canada said the Canada Elections Act “does not have any provisions that explicitly speak to the use of artificial intelligence.” 

But the act does have rules governing fraud and impersonation. These rules include prohibitions on publishing material that falsely claims to be authorized by Elections Canada, a political party or a candidate.

Elections Canada has previously recommended Parliament change its laws to address emerging risks associated with AI and deepfakes. In 2024, Ottawa introduced legislation to modernize aspects of the Canada Elections Act, but Parliament was dissolved before these changes were passed. 

Some provinces have moved more decisively.

B.C. recently amended its provincial election law to strengthen rules around false statements and misrepresentation during election periods. And in November, Manitoba passed election integrity legislation, which prohibits publishing false information, including deepfakes of candidates or other election participants, during election periods. 

Legislative changes that force greater transparency from tech companies would strengthen Canadian democracy, says Tenove. But he also says part of the problem lies in the lack of real-world conversations.

“We need more civil, face-to-face political engagement,” he said.

Sam Forster is an Edmonton-based journalist whose writing has appeared in The Spectator, the National Post, UnHerd and other outlets. He is the author of Americosis: A Nation's Dysfunction Observed from...
