QUICK TAKE
- 85% of Canadians want government AI regulation, with 57% strongly supporting oversight
- 78% worry AI threatens jobs despite 70% of Gen Z seeing productivity gains
- Deepfakes targeting politicians create immediate brand protection risks for businesses
- 75% satisfaction rate among current AI users suggests quality implementation drives success
- Federal focus shifts from regulation to economic maximization, creating self-governance opportunities
A striking majority of Canadians are demanding artificial intelligence regulation as deepfake threats escalate across the business landscape, according to a new Leger poll revealing stark generational divides and immediate risks to corporate leadership.
The Leger poll shows 85% of Canadians want governments to regulate artificial intelligence, with 57% strongly supporting oversight amid rising deepfake threats. The findings expose critical tensions between innovation demands and public safety concerns that businesses must navigate carefully.
Public Trust Creates Strategic Business Divide
Canadians draw clear distinctions on AI applications that businesses must approach thoughtfully. While 64% trust AI for household tasks and learning support, only 36% trust AI for medical advice, 31% for legal advice, and just 18% believe AI can replace teachers.
Jennifer McLeod Macey, senior vice-president of public affairs at Leger, notes: “Public opinion around different types of AI varies in terms of how much we trust AI or how concerned we are with it. It won’t be a one-size-fits-all; it’s really quite nuanced.”
This trust gap presents both challenges and opportunities for businesses deploying AI solutions. Companies need to demonstrate responsible implementation while addressing varying comfort levels across different AI applications.
Generational Workforce Split Demands Strategic Response
Perceptions of AI’s productivity benefits reveal a sharp generational divide that shapes hiring and training strategies. An Ipsos poll for TD Bank found 70% of Gen Z workers believe AI enhances productivity, compared with 50% of Gen X and 38% of Baby Boomers.
However, 78% of Canadians express concern that AI threatens human jobs, creating tension between productivity improvements and employment security. Forward-looking companies are tailoring AI rollouts to workforce demographics while addressing job displacement worries.
Deepfake Threats Demand Immediate Executive Protection
The misuse of AI is already spreading across Canada’s corporate landscape. Steve DiPaola, a professor at Simon Fraser University, warns that deepfakes pose immediate risks to corporate leadership and brand integrity.
DiPaola explains: “Regulating deep fakes, surely taking someone’s persona, and we’re seeing more and more of this in social media where there are celebrities or even politicians who appear to be in front of you like a TV commercial selling something that in fact they’ve never approved.”
Saskatchewan Premier Scott Moe’s government is monitoring deepfakes targeting public figures, including federal leaders. The Canadian Centre for Cyber Security warns that threat actors use AI-generated messages to impersonate senior officials, targeting financial resources and sensitive information.
Regulatory Gap Creates Self-Governance Opportunities
Despite public demand for regulation, the federal government has shifted focus toward economic maximization rather than oversight. The Minister of Innovation, Science and Industry currently manages AI policy, leaving regulatory gaps that businesses can address through self-governance.
This gap offers a competitive advantage to companies that build trust through transparent policies and ethical AI deployment. Early adopters who address public concerns with strategic governance will establish sustainable market positions.
The office of Minister François-Philippe Champagne confirmed Ottawa’s investments in secure infrastructure and support for institutions like the AI Safety Institute to identify risks early. Parliament is scheduled to receive updates upon resumption in September.
Market Reality Shows Divided Consumer Landscape
The polling data carries clear implications for business strategy: with 34% of Canadians viewing AI as beneficial and 36% seeing it as harmful, companies face a divided market that demands nuanced approaches.
Leaders must align AI tools with public sentiment and organizational values. Widespread concern about privacy (83%) and worry about cognitive decline (46%) set clear parameters for responsible AI deployment.
Current Usage Patterns Guide Investment Priorities
According to Leger’s poll, 57% of Canadians already use AI, pointing to a substantial market despite regulatory uncertainty. Among those users, 75% rate their AI tools as excellent or good, suggesting that quality implementation drives positive experiences.
Chatbots such as ChatGPT lead usage at 73%, while AI-powered search engines see 53% adoption. These usage patterns inform strategic investments for businesses entering the AI sector.
Strategic Implementation Framework
Businesses have the opportunity to pioneer ethical AI deployment while competitors hesitate. Success hinges on addressing privacy concerns and employment fears through transparent communication and responsible governance.
Canada’s AI development reflects global tensions between innovation and ethics. Companies that responsibly deploy AI and address public concerns will influence future regulations and build consumer trust amid a divided landscape.