Ottawa is reviving controversial legislation to regulate online content, raising alarms over government overreach and potential Charter violations

Canada’s federal government is once again attempting to regulate online speech through sweeping legislation that critics warn could fundamentally alter the country’s approach to free expression. While framed as protecting children and vulnerable individuals from harmful content, the proposal’s broad language and enforcement mechanisms have reignited fierce debate about the line between digital safety and state censorship.

The Ghost of Bill C-36

The current push represents an evolution of Bill C-36, legislation that died on the Order Paper when Parliament was dissolved for the 2021 federal election. That earlier attempt drew sharp condemnation from civil liberties advocates and academics, with researchers at the University of Toronto’s Citizen Lab describing the government’s approach as “aggressive,” “punitive,” and “disturbing.” The bill was never reintroduced in its original form, but its core components have returned under a new guise.

Heritage Minister Steven Guilbeault is now championing a revised framework that maintains many of the original bill’s controversial elements while attempting to address concerns about Charter compliance. Speaking before a House of Commons committee, Guilbeault insisted the legislation targets only “clearly harmful content” and is “designed to comply with the Charter of Rights and Freedoms.”

What’s Actually in the Proposal

The legislation would establish a Digital Safety Commission with extensive regulatory powers over online platforms. This body would have the authority to compel tech companies to remove certain categories of flagged content, including intimate images shared without consent and child exploitation material, within 24 hours.

Beyond these universally condemned categories, however, the proposal ventures into more contested territory. It includes provisions to significantly expand Criminal Code penalties for “hate propaganda,” including life imprisonment for promoting genocide. The bill would also create a new “hate crime” offense with enhanced sentencing.

Perhaps most controversially, the legislation would allow judges to issue “peace bonds” restricting someone’s freedom based on predictions of future hate-based offenses—essentially limiting speech that hasn’t yet occurred based on anticipated harm.

The proposal also seeks to amend the Canadian Human Rights Act, enabling individuals to file complaints over online expression deemed to constitute “detestation or vilification,” the standard the Supreme Court articulated in its 2013 Whatcott decision. While Guilbeault maintains this standard excludes merely offensive speech, critics argue the subjective nature of these definitions creates dangerous ambiguity.

Additionally, the government wants to criminalize non-consensual deepfake pornography and impose stricter penalties for sharing intimate images without permission—measures that have garnered broader support but remain embedded within the larger controversial framework.

The Enforcement Question

The proposed Digital Safety Commission would represent a significant expansion of government power over online expression. Platforms would face mandatory takedown timelines, with both content creators and complainants given an opportunity to respond before the commission issues a final decision.

This creates a system where a state-backed body makes binding determinations about permissible speech—a structure that civil liberties advocates warn could have profound chilling effects on legitimate discourse, particularly around controversial political and social issues.

The “peace bond” provision raises particularly thorny questions about preemptive speech restrictions. By allowing courts to limit someone’s freedom based on predicted future offenses rather than actual criminal conduct, the legislation ventures into preventive law that many legal scholars consider fundamentally incompatible with traditional principles of justice and free expression.

Charter Concerns and Vague Language

Despite Guilbeault’s assurances about Charter compliance, serious constitutional questions remain. Section 2(b) of the Canadian Charter of Rights and Freedoms guarantees freedom of expression, though Section 1 makes that guarantee subject “only to such reasonable limits prescribed by law as can be demonstrably justified in a free and democratic society.”

The government’s challenge lies in demonstrating that its broadly worded restrictions meet this justification standard. Terms like “detestation,” “vilification,” and “hate propaganda” have been interpreted by courts, but their application to the vast and varied landscape of online speech creates unavoidable gray areas.

Critics point out that what constitutes “harmful content” beyond clearly illegal material like child exploitation often depends heavily on subjective judgment and context. Speech that some find offensive or even hateful may constitute legitimate political or religious expression to others. Handing a government commission the power to make these determinations raises fundamental questions about who decides the boundaries of acceptable discourse in a democracy.

The University of Toronto researchers who condemned the original bill were particularly concerned about the legislation’s potential to capture legitimate expression. Their criticism suggests that even with revised language, the fundamental structure of government-mandated content regulation poses inherent risks to free speech.

International Context

Canada’s approach exists within a broader international trend toward online speech regulation. The European Union’s Digital Services Act, Germany’s earlier NetzDG law, and similar efforts in other democracies have attempted to balance platform accountability with speech protections, with mixed results. Critics of these initiatives point to documented cases of over-removal, where platforms err on the side of caution by taking down content that is lawful but controversial.

The 24-hour removal timeline contemplated in the Canadian proposal could exacerbate these tendencies, leaving platforms little time for a nuanced assessment of context and creating an incentive to rely on automated moderation systems that may struggle with the complexities of human communication.

What Comes Next

The government has not yet introduced formal legislation, and the exact scope of the new proposals remains undetermined. Guilbeault has indicated that elements may be split across multiple bills, with Minister Fraser’s Bill C-9 addressing some aspects while other components await separate introduction.

This piecemeal approach may be strategic, allowing the government to pass less controversial elements—like deepfake criminalization—while continuing to refine language around hate speech and platform regulation that drew the most criticism.

Civil liberties organizations, legal scholars, and free speech advocates are preparing for renewed battles over the legislation’s constitutional validity and practical implications. The outcome will likely determine not just what Canadians can say online, but who gets to make that determination—a question that strikes at the heart of democratic governance in the digital age.

For now, the skeleton of Bill C-36 has indeed returned, dressed in new language but animated by the same fundamental tension between protecting vulnerable individuals and preserving the robust exchange of ideas that democratic societies require.

As this legislation develops, Canadians face a fundamental choice about the proper role of government in regulating online expression—and whether the promised protections are worth the potential costs to free speech.