
Online disinformation should not solely be the responsibility of digital platforms to address. Policymakers ought to take an active role in tackling the issue.

In 1835, readers of the American newspaper The Sun were captivated by a series of articles titled "Extraordinary lunar findings," which claimed that an astronomer had built the largest telescope ever constructed and observed hordes of winged half-men and unicorns on the moon. It took a...


In the digital age, promoting responsible Internet use and combating disinformation have become a shared endeavour among tech companies, policymakers, and a range of other organizations.

Recent investments in digital literacy initiatives by tech giants such as Facebook, Google, Amazon, and Microsoft demonstrate a commitment to equipping the public with the skills needed to navigate the online world. These initiatives include interactive lessons, videos, and online classes designed for young people, as well as partnerships with organizations such as the World Economic Forum to improve digital skills in Southeast Asia.

Disinformation, however, is not a new problem. Policymakers are urged to focus on proven solutions, such as increasing digital and news literacy among the public. This approach empowers individuals to distinguish reliable information from misleading content, thereby reducing the impact of disinformation.

While some forms of disinformation, such as political propaganda, may be undesirable, they are not necessarily illegal in Western democracies. This complexity calls for a nuanced approach, with policymakers holding those who produce disinformation accountable for their content, much as in the offline world.

In the face of state-backed foreign actors interfering in elections, international sanctions and diplomatic responses may be necessary. When it comes to content moderation on online platforms, however, a balance must be struck: threatening to penalize platforms for inadvertently allowing impermissible content could push them toward overly stringent moderation rules that diminish free speech online.

Critics, including former Member of the European Parliament Marietje Schaake, have accused tech companies of creating the problem of disinformation. Schaake has collaborated with numerous NGOs to promote digital and media literacy and advocate for policies that strengthen justice, democracy, human rights, and sustainability in digital contexts.

During the COVID-19 pandemic, several platforms, including Facebook and Twitter, took action against misleading news and conspiracy theories. Google banned ads on websites spreading misinformation about the virus, and YouTube prohibited anti-vaccination content.

Yet no moderation system is perfect: some permissible content may be wrongly removed, while some impermissible content may slip through. It is therefore crucial for platforms to detail their content policies, enforcement methods, and user reporting procedures, and to publish transparency reports on their content moderation decisions.

Finland offers an inspiring example: a series of classes that teach critical thinking online and how to detect and counter fake news. The classes are available to residents, students, journalists, and politicians, underscoring the importance of education in combating disinformation.

Lastly, governments themselves must be mindful of their own role in spreading disinformation. Policymakers should not order companies to remove online content that would be lawful offline, thereby preserving the integrity of free speech in the digital realm.
