Demand-side Measures for Tackling Electoral Disinformation and Misinformation on Social Media

By Kamesh Shekar

Legislative Assembly Elections in Tamil Nadu are around the corner, and information on various issues related to the State, local political parties and politicians is surging on social media platforms almost every day. While the availability of information is useful, a lack of understanding about the integrity of that information is problematic, especially during elections, as misinformation and disinformation spread faster than the truth.

While disinformation is intentional, misinformation happens predominantly for two reasons: (a) users’ inability to differentiate between authentic and false information, and (b) users’ pre-existing biases. A survey report by the Reuters Institute at Oxford University suggests that 39% of Indian respondents express trust in the information they read on social media (a higher share than in many other countries), while 28% of Indian respondents consume news through social media. While there can be various reasons for trusting a piece of information, users must also check its integrity before trusting it.

Therefore, we need robust demand-side measures to ensure users are aware of and equipped to check the integrity of information. While demand-side measures are necessary, it is also essential to be mindful that the responsibility for this issue should not rest on users alone.

Tackling users’ inability to differentiate

To equip users to discern, awareness must be created through both familiarisation and education.

To familiarise users with what disinformation and misinformation look like, it is important that social media platforms start flagging/labelling them. For instance, social media platforms did flag/label disinformation and misinformation during the 2020 US Presidential Election as part of their policy measures. But it is important that platforms extend the same practice beyond the US as part of election-related measures. A study by the NYU Tandon School of Engineering also shows that flagging could help slow down the spread of false information.

But a recent research study by Rensselaer Polytechnic Institute shows that AI-driven flagging of misinformation proves ineffective when it concerns beliefs users have already established; for new topics, there is still scope. Therefore, social media platforms should act quickly and consistently when flagging/labelling election-related disinformation and misinformation, so that the intervention lands before beliefs harden and get in the way.

As disinformation and misinformation could fall through the cracks of flagging/labelling, it is important to educate users to discern through various pedagogical programs. Such programs should not stop at differentiation; they should also encourage users to use that knowledge to correct their networks with fact-checked counterviews. While correcting their networks, users must choose their battles wisely so that they do not confuse others and put them in a state of cognitive dissonance.

But research has shown that in the Indian context, pedagogical interventions for recognising disinformation and misinformation have limited impact. This research points out that a lower literacy rate is one characteristic that makes users vulnerable to misinformation; the study was conducted in the context of elections in Bihar, which ranks 35th among Indian states in literacy. This also signals that the response to pedagogical programs could differ state by state according to literacy rate; Tamil Nadu’s literacy rate stands at 80.33%. Hence, governmental and non-governmental organisations, field workers, and activists need to engage in strategic pedagogical programs on disinformation and misinformation by setting up camps, organising focused group discussions, etc.

In addition to this, as users, we must cultivate the habit of reading fact-checked news to: (a) get familiar with false information before even being exposed to it, and (b) spread the same fact-checked news extensively within our networks to warn and alert others about false information in circulation.

Tackling the issue of users’ pre-existing biases

To combat pre-existing beliefs, as a rule of thumb, we as users should introspect on whether the information we are about to share conforms to our own ideology. If it does, we have to take a step back and cross-check the integrity of the information against multiple credible sources.

Even before applying this rule of thumb, it is fundamentally crucial for us to be aware of our biases and ideologies. Confirmation bias (one of the cognitive biases) is inevitable, but confronting it helps us work our way through it. To make users aware of their biases, pedagogical programs could also conduct an implicit association test and use the results to customise the program accordingly.

However, while demand-side measures are incredibly crucial to tackling disinformation and misinformation, there is a slippery slope: the onus of the issue could shift entirely onto users, who, unlike social media platforms, will always suffer from information asymmetry. Hence, in addition to demand-side measures, it is also important to have robust regulatory mandates and supply-side measures to tackle this issue.

The author is a tech policy enthusiast. He is currently pursuing the PGP in Public Policy at the Takshashila Institution. Views are personal and do not represent the Takshashila Institution’s policy recommendations. The author can be reached at kameshsshekar@gmail.com
