Canadian government's proposed online harms legislation threatens our human rights
No other liberal democracy in the world has been willing to accept these restrictions, writes Ilan Kogan
This column is an opinion from Ilan Kogan, a Canadian JD/MBA student at Yale Law School and Harvard Business School. For more information about CBC's Opinion section, please see the FAQ.
The Canadian government is considering new rules to regulate how social media platforms moderate potentially harmful user-generated content. The proposed legislation has already been criticized by internet scholars across the political spectrum as among the worst in the world.
Oddly, the proposal reads like a list of the world's most widely condemned policy ideas. Elsewhere, these ideas have been vigorously protested by human rights organizations and struck down as unconstitutional. There is no doubt that the federal government's proposed legislation presents a serious threat to human rights in Canada.
The government's intentions are noble. The purpose of the legislation is to reduce five types of harmful content online: child sexual exploitation content, terrorist content, content that incites violence, hate speech, and non-consensual sharing of intimate images.
Even though this content is already largely illegal, further reducing its proliferation is a worthy goal. Governments around the world, and particularly in Europe, have introduced legislation to combat these harms. The problem is not the government's intention. The problem is the government's solution.
Serious privacy issues
The legislation is simple. First, online platforms would be required to proactively monitor all user speech and evaluate its potential for harm. Online communication service providers would need to take "all reasonable measures," including the use of automated systems, to identify harmful content and restrict its visibility.
Second, any individual would be able to flag content as harmful. The social media platform would then have 24 hours from initial flagging to evaluate whether the content was in fact harmful. Failure to remove harmful content within this period could trigger a stiff penalty: up to three per cent of the service provider's gross global revenue or $10 million, whichever is higher. For Facebook, that could mean a penalty of up to $2.6 billion per post.
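As a rough back-of-the-envelope check, the penalty formula is simply the greater of the two amounts. The sketch below assumes gross global revenue of roughly $86 billion US, approximately Facebook's reported 2020 figure; that input is an assumption, not a number from the proposal itself:

```python
# Sketch of the proposed penalty formula: the greater of 3 per cent of
# gross global revenue or $10 million, per instance of non-compliance.

def max_penalty(gross_global_revenue: float) -> float:
    """Return the maximum penalty under the proposed formula."""
    return max(0.03 * gross_global_revenue, 10_000_000)

# Assumed figure: roughly Facebook's 2020 gross global revenue (~$86B US).
facebook_revenue = 86_000_000_000

print(f"${max_penalty(facebook_revenue):,.0f}")  # about $2,580,000,000
```

Three per cent of that assumed revenue works out to roughly $2.6 billion, which is where the figure cited above comes from.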
Proactive monitoring of user speech presents serious privacy issues. Without restrictions on proactive monitoring, national governments would be able to significantly increase their surveillance powers.
The Canadian Charter of Rights and Freedoms protects all Canadians from unreasonable searches. But under the proposed legislation, a reasonable suspicion of illegal activity would not be necessary for a service provider, acting on the government's behalf, to conduct a search. All content posted online would be searched. Potentially harmful content would be stored by the service provider and transmitted — in secret — to the government for criminal prosecution.
Canadians who have nothing to hide still have something to fear. Social media platforms process billions of pieces of content every day. Proactive monitoring is only possible with an automated system. Yet automated systems are notoriously inaccurate. Even Facebook's manual content moderation accuracy has been reported to be below 90 per cent.
Social media companies are not like newspapers; accurately reviewing every piece of content is operationally impossible. The outcome is uncomfortable: Many innocent Canadians will be referred for criminal prosecution under the proposed legislation.
But it gets worse. If an online communication service provider determined within the tight 24-hour review period that your content was not harmful, and the government later decided otherwise, the provider could lose up to three per cent of its gross global revenue. Accordingly, any rational platform would censor far more content than is strictly illegal. Human rights scholars call this troubling phenomenon "collateral censorship."
Identifying illegal content is difficult, and therefore the risk of collateral censorship is high. Hate speech restrictions may best illustrate the problem. The proposal expects platforms to apply the Supreme Court of Canada's hate speech jurisprudence. Identifying hate speech is difficult for courts, let alone algorithms or low-paid content moderators who must make decisions in mere seconds.

Although speech that merely offends is not hate speech, platforms are likely to remove anything that has even the slightest potential to upset. Ironically, the very minority groups the legislation seeks to protect are some of the most likely to be harmed. That's why so many Canadian anti-racism groups have opposed the legislation.
We must demand better
So, what to do about online harms? One step in the right direction is to recognize that not all harmful content is the same. For instance, it is far easier to identify child pornography than hate speech. Accordingly, the timelines for removing the former should be shorter than for the latter.
And while it might be appropriate to remove revenge pornography solely upon a victim's request, offensive speech might require input from the poster and an independent agency or court before the law compels its removal. Other jurisdictions draw these distinctions. Canada should too.
Regulating online harms is a serious issue that the Canadian government, like all others, must tackle to protect its citizens. Child pornography, terrorist content, incitement, hate speech, and revenge pornography have no place in Canada. More can be done to limit their prevalence online.
But the proposed legislation creates far more problems than it solves. It reads as a collection of the worst policy ideas introduced around the world in the past decade. No other liberal democracy has been willing to accept these restrictions.
The threats to privacy and freedom of expression are obvious. Canadians must demand better.