Access to technology has created negligent users, propagandists and alarmists. The coronavirus pandemic has, in addition, strained our trust in the information ecosystem just as much as it has our public health infrastructure. The deployment of artificial intelligence by vested parties, both to peddle fake news and to stem its flow, has drastically complicated and undermined the quintessence of truth. The absence of targeted technological and legal instruments, especially in India, to combat the issue is extremely concerning.
The paper argues that the volume of AI-generated misinformation, particularly that piggybacking off the pandemic, can only be curbed through skepticism, behavioral changes, responsible technological use, and appropriate policy-making and implementation. The methodology highlights instances in which AI has been used, and avenues through which it can be used, to perpetuate falsehoods; asserts the key involvement of human fact-checkers; and analyses the methods and drawbacks of utilizing machine-learning-capable AI to reduce user vulnerability to deception by combating fabrications.
Furthermore, international approaches to tackling fake news have been juxtaposed with the contemporary Indian legal paradigm to ascertain prevailing legislative gaps and to understand intermediary responsibilities. The conclusion and recommendations are meant to instill responsibility in all stakeholders; ensure progressive decision- and law-making; and guarantee transparency, non-partisanship and a balance between human and automated intervention. The infodemic must not be allowed to fundamentally damage the flow of verified information.
Keywords: Artificial intelligence, coronavirus, misinformation, propaganda, human intervention, user vulnerability, intermediaries.
The circulation of misinformation, especially on social media platforms, had been a pandemic long before the coronavirus outbreak crippled nations. It has given birth to inauthentic claims that Vitamin C, colloidal silver and vaping of organic oregano oil can be used as cures even while no vaccine is available, caused the hoarding of masks and sanitizers, 3 and led to racial discrimination 4 and Islamophobia. 5 The adverse impacts of technological democratization have been, inter alia, the enabling of government agencies, entities and netizens to successfully peddle their own rhetoric, the hastening of information dissemination without adequate accuracy detection, and the creation of panic amongst the masses.
It is therefore imperative to address the potential for fake news to cause extraordinary harm, impact global development, and implicitly mould public opinion. COVID-19 has only exposed our frailties as a society in terms of our law and order system, healthcare 6 and social psychology. The integration of artificial intelligence (AI) was aimed at combating today's paradigm of yellow journalism by reducing human intervention and increasing adherence to the truth, but it has instead been weaponized into a tool for fabrication. We are now at a critical juncture where the damage caused by the pandemic must not be accelerated by the multi-modal, voluminous nature of AI-generated misinformation, but mitigated through skepticism, behavioral changes, responsible technological use and appropriate policy-making and implementation.
User Vulnerability To Misinformation And Deception
As Sadhguru opines, the globalization of gossip has caused disruptions. 5 With Facebook, WhatsApp, Twitter, YouTube, Telegram, WeChat and TikTok collectively boasting billions of users, social networks have become the new, convenient and preferred sources of relevant information, 6 irrespective of their inadequacy or the extent of their manipulation. People are more interested in being entertained or gaining attention than in the reliability of the information; they are neither under the impression that sharing fake news has real consequences, nor do they feel any obligation to verify the veracity of the claims they perpetuate. Furthermore, many netizens have ideological allegiances that translate into selective sharing, and in doing so, they conveniently omit key information.
Opportunistic private parties have capitalized on the fear surrounding the pandemic by creating dangerous domains that exploit unsuspecting users through phishing, malware attacks and the hacking of their bank accounts, email and social media. Over 4,000 fraud portals related to the virus were recently created, 7 and even fake UPI IDs were generated to mislead donors who intended to give money to the PM-CARES Fund. 8 Even though the fraudsters can be booked under the Indian Penal Code, 1860 for cheating, and under the Information Technology Act, 2000 for identity theft, early detection of misinformation is crucial in preventing continued real-time impact, which can be disastrous. Two months ago, a man took his own life after watching multiple videos on the virus, falsely believing that he had contracted it and was a threat to society. 9 What exacerbates the problem is AI being used to sow paranoia, which leads to a paralysis of the democratic system.
Fake information can take the form of rumors, which are pieces of circulating information whose accuracy is not yet verified; disinformation, which is inaccurate information that is intentionally false and deliberately spread; hoaxes, which are fabricated falsehoods masquerading as the truth; fake news, which is a verifiably false news article; and misinformation, which is fake information spread unintentionally, among others. 11 While this paper uses misinformation, disinformation and fake news largely interchangeably, they differ in classification in terms of motive, deception and damage. Rumors and fake news, when left undebunked for too long, become disinformation or misinformation, which can concretize into a hoax. A user's vulnerability increases the longer a falsehood circulates in the information ecosystem.
Therefore, fact-checkers are required to quell paranoia, elevate trustworthy sources to increase their accessibility and visibility, cut short the life cycle of misinformation and provide reliable guidance in times of need. Non-partisan, ethical journalism and sound methodology are integral to the debunking of unsubstantiated reports. Boom Live, NewsMobile, AltNews.in and Factly shoulder the burden of fact-checking in India; composed primarily of stakeholders from various fields, they follow a similar debunking methodology: selection of a claim, research, evaluation, writing of a fact-check and its continuous updation. 12 Boom Live also runs a WhatsApp helpline number to which users can send viral forwards for verification. 13 The system is not airtight: the information ecosystem is built on trust, and if it is infiltrated on a massive scale, it desensitizes people to events that truly demand attention. Fact-checkers are, moreover, woefully under-equipped to deal with the volume of misinformation that continues to be generated.
Role of Artificial Intelligence in Disseminating Misinformation
Cambridge Analytica’s systematic social engineering of targets by leveraging bots, A/B testing and dark posts on Facebook, keeping them on an emotional leash without ever letting go, found resounding success during the 2016 US Elections and the Brexit campaign. Computational propaganda, though initiated by a mixture of dumb bots controlled by humans using psychographic hacks, has evolved into machine-learning-driven AI that has access to personality profiles and uses emotional manipulation and brute force to implement counter-measures and disable defences. The growth has been exponential and a huge cause for concern: from generating noise and silently altering online conversations, to becoming functionally conversational, intuitive, adaptive and capable of conducting phishing and denial-of-service attacks without human intervention.
Primarily, AI can be used for propaganda distribution, targeting and content creation, like deep-fakes, self-help blogs and literature. 16 One study measured the extent of AI’s ability to produce credible-sounding texts by evaluating the human ability to detect AI-generated information, the credibility distribution of three different AI models trained on increasingly large datasets to create variations in power, and the likelihood of a partisan believing politically congenial stories even in the presence of disclaimers. The study found that although individuals could identify some hallmarks of AI-generated text, such as logical inconsistencies and grammatical errors, readers generally perceived it to be a credible, human-written article; that the credibility of the synthetic text improved only marginally with increased model size; and that partisan bias barely influenced perceived media credibility and did not alter attitudes about contentious issues. Confusion between synthetic and authentic texts was treated as a satisfactory result even where persuasion failed.
While the ‘weaponized AI propaganda machine’ has so far been employed mostly in political scenarios, it can just as easily be used to push falsehoods regarding disaster management and pandemic response. In times like these, fact-checking is key to stemming the flow of falsehoods.
Role of Artificial Intelligence in Combating Fabrications
Machine learning continuously improves AI. Based on current efforts to detect misinformation, such systems have to be trained on the following four categories of features:
- Content-based methods that rely on textual or visual features extracted from the information, like videos or pictures.
- Social context-based methods that involve interaction characteristics like commenting, posting etc.
- Feature fusion methods that combine social context and content-based features, and
- Deep-learning based methods that aim to extract representations of abstract misinformation data (Chinese whispers) automatically.
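As an illustrative sketch of the first category, a content-based method can be approximated by a tiny naive Bayes classifier over word counts. This is a hedged toy example, not any production fact-checker's implementation; the training phrases and their "fake"/"real" labels are invented for demonstration.

```python
from collections import Counter
import math

def tokenize(text):
    # Lowercase and split on whitespace; a real system would use a proper tokenizer.
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs. Returns per-label word counts and document totals."""
    counts = {}          # label -> Counter of word occurrences
    totals = Counter()   # label -> number of training documents
    for text, label in examples:
        counts.setdefault(label, Counter()).update(tokenize(text))
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Naive Bayes with Laplace smoothing: pick the label with the highest log-probability."""
    vocab = {w for c in counts.values() for w in c}
    best_label, best_score = None, float("-inf")
    for label, word_counts in counts.items():
        score = math.log(totals[label] / sum(totals.values()))  # log prior
        denom = sum(word_counts.values()) + len(vocab)
        for word in tokenize(text):
            score += math.log((word_counts[word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented, illustrative training data.
training = [
    ("miracle cure guaranteed no doctors needed", "fake"),
    ("shocking secret remedy cures virus overnight", "fake"),
    ("health ministry publishes updated vaccination guidelines", "real"),
    ("researchers report peer reviewed trial results", "real"),
]
counts, totals = train(training)
print(classify("secret miracle remedy cures virus", counts, totals))  # → fake
```

Social-context, fusion and deep-learning methods layer interaction signals and learned representations on top of this kind of content signal; the core idea of scoring a claim against labelled examples is the same.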
The nexus between the above features and authenticity is that they are presently the hallmarks of human and linguistic interaction on the internet, and AI’s attempts to mimic them presently only flatter to deceive.
AI was inducted into the fight against misinformation to detect it in real time and to reduce manpower, and thereby human intervention, freeing people to make value judgments through evaluation. Delegating fact-checking to AI, at least the underlying grunt work such as research and updation, can help tackle the volume. 20 Its rapid, continuous improvement would ensure better results, and local vernaculars could be included as well. When all relevant stakeholders – journalists, researchers, computer scientists – contribute to its development and deployment, it can provide a holistic picture of the veracity of claims. Additionally, if the datasets used to train the AI are inclusive and unbiased, moderators can be protected from traumatizing content and their biases can be bypassed. Couching AI in blockchain technology would further the credibility of its algorithm and output, and hasten its learning through the sharing of data and models. 21 Lastly, its API integration into different websites and social media networks could prevent misinformation from reaching a critical mass of users, 22 rather than acting post-facto, and provide reasoning as to how and why a piece of information is deemed trustworthy or not.
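To illustrate what such pre-publication integration might look like, the sketch below shows a hypothetical moderation hook that blocks or flags a post before it is published, attaching its reasoning. Both `moderate_post` and `toy_check` are invented stand-ins; a real deployment would call an actual credibility classifier or fact-checking API.

```python
def moderate_post(text, credibility_check):
    """Run a post through a (hypothetical) credibility check before publishing.
    credibility_check(text) returns a score in [0, 1] and a short reason string."""
    score, reason = credibility_check(text)
    if score < 0.3:
        # Clearly suspect content is held back, with the reasoning exposed to the user.
        return {"published": False, "reason": reason}
    post = {"published": True, "text": text}
    if score < 0.7:
        # Borderline content is published with a visible disclaimer rather than blocked.
        post["disclaimer"] = f"Unverified claim: {reason}"
    return post

def toy_check(text):
    """Invented stand-in for a real classifier: penalize known scammy buzzwords."""
    suspicious = {"miracle", "cure", "secret", "guaranteed"}
    hits = sum(1 for w in text.lower().split() if w in suspicious)
    return max(0.0, 1.0 - 0.35 * hits), f"{hits} suspicious term(s)"

print(moderate_post("Ministry releases new guidelines", toy_check))
print(moderate_post("miracle cure secret guaranteed", toy_check))
```

The three-tier outcome (publish, publish with disclaimer, block) mirrors the paper's point that intervention before a critical mass of users is reached, with reasoning attached, is preferable to post-facto takedowns.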
There are presently multiple fact-checking entities, like Factmata, Metafact, Blackbird.AI, Full Fact and The Duke Tech & Check Cooperative by the Reporters’ Lab, that seek to automate the process, create tools for fact-checkers to verify claims and prevent the general populace from being misled. Factmata uses nine different parameters or ‘signals’ to evaluate truthfulness. Considering the use-case of the coronavirus pandemic, Blackbird produced two volumes of the COVID-19 Disinformation Report analysing millions of inorganic tweets. It created the Blackbird Manipulation Index (BBM), quantifying the percentage of inorganic posts to assess the degree of manipulation of the information ecosystem. After evaluating narratives such as media delegitimization, bio-weapons conspiracies, the delegitimizing of Chinese culture and the Democratic Party, religious anti-meat scams and the Tencent numbers that arose from the coronavirus, Blackbird concluded that, on average, there was a high degree of manipulation.
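Blackbird's exact BBM formula is proprietary. As a hedged illustration only, treating the index simply as the share of inorganic posts in a sample (with invented figures) gives:

```python
def manipulation_index(inorganic_posts, total_posts):
    """Share of inorganic (bot-like or coordinated) posts, as a percentage.
    A simplified stand-in for Blackbird's proprietary BBM formula."""
    if total_posts <= 0:
        raise ValueError("total_posts must be positive")
    return 100.0 * inorganic_posts / total_posts

# Hypothetical sample: 428,000 inorganic posts out of 1,000,000 analysed.
print(manipulation_index(428_000, 1_000_000))  # → 42.8
```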
Admittedly, AI-powered fact-checkers are neither completely foolproof nor fully trustworthy, and they run the risk of false positives. While natural language processing is utilized by Metafact to “detect and monitor the fake data cacophony online”, regional languages and fake audio, videos and photos pose a challenge that is yet to be overcome. The Centre for Media Studies concluded that video was “the most powerful modality in spreading fake news in India”, 25 which means that the most prevalent source of misinformation is not being tackled adequately. The inability to tackle vernacular misinformation, the possibility of the machine-learning algorithm being poisoned even before it is deployed, 26 short stretches of text-sharing and budgetary constraints that could limit the quality of the underlying algorithm are all additional problems that have to be considered.
Additionally, AI’s reliance on big datasets creates a blind spot in terms of data outside its purview. Inherent bias in such datasets, and a government’s propensity to favor datasets that color it positively, would impact the trustworthiness of AI’s debunking abilities. Furthermore, while journalists are capable of stating whether a story is partly true, partly false or simply misleading, AI would not be able to identify such nuances, as it would mostly only be able to recognize whether something is true or false. 27 This translates to humans having to help the AI process preliminary findings, just as it helps them. There is also presently no metric for the AI to effectively balance its core objectives against the Fundamental Right to Free Speech. Lastly, India’s overreliance on WhatsApp, owing to its user-friendly nature, results in faster transmission of misinformation than through other channels, and its end-to-end encryption further impedes the traceability of manipulation.
Using AI to tackle human- or AI-generated problematic material has flaws, but the prevailing legal regime must seek to empower such AI so that disinformation does not continue to subsist unchecked, but is actively identified.
Nations have adopted a multi-pronged approach to tackling misinformation, with task-forces, committees, bills, policies, awareness campaigns and legislation all working to limit it. France’s Law against the Manipulation of Information defines criteria, mostly applicable during election periods, for establishing that news is fake and is being disseminated deliberately on a large scale. Germany’s NetzDG dictates that content classified as fake news, hate speech or otherwise illegal must be taken down by social networks within a day of receiving the direction, failing which they face fines of up to 50 million euros.
Ironically, China has banned companies from relying on AI to produce fake news through deep fakes, and failure to comply is a criminal offence. 30 Singapore’s Protection from Online Falsehoods and Manipulation Act made it illegal “to spread false statements of fact under circumstances in which that information is deemed prejudicial to Singapore’s security, public safety, public tranquility, or to the friendly relations of Singapore with other countries.” Technology companies, if so ordered, can block sites that peddle misinformation or accounts that share such falsehoods. 31 While some of these lessons should be adopted by India, others must either be disregarded given their propensity for abuse, or altered to suit our unique socio-cultural circumstances.
Present Legal Paradigm in India
Digital disruption has led to mass technological proliferation, with Cisco estimating the presence of around 1.5 billion internet-connected devices in India. Concurrently, there has been a 150% increase in organized social media manipulation campaigns internationally over the last two years. 33 It is therefore imperative that governments and companies regulate proactively and consolidate the scattered provisions dealing with the subject.
Section 505(1) of the Indian Penal Code imposes a prison sentence of up to three years, or fine, or both, on any individual who publishes, creates and/or circulates reports, statements or rumors that cause, or may cause, fear or alarm to the public or a section of the public. Additionally, Section 415, read with Section 420 of the IPC, penalizes cheating, which includes any act or omission that induces another person into an act or omission likely to result in damage to the body, mind or property of an individual, with imprisonment of up to seven years and fine.
Section 66D of the IT Act, 2000 states that any person who uses a computer resource or communication device to cheat by personation would be punished with up to 3 years of imprisonment and fine of up to 1 lakh rupees.
While the Information Technology Rules of 2011 prescribe ‘best practices’ to ensure the accuracy and security of stored data, they are grossly inadequate in isolation. The Information Technology [Intermediary Guidelines (Amendment) Rules] 2018, though yet to be notified, were therefore meant to impose duties upon intermediaries as defined under Section 2(1)(w) of the IT Act. However, the Rules go too far: they shift the burden of responsibility to intermediaries, impose unreasonable due diligence requirements, include too many entities within their scope, violate the free speech and privacy of individuals, make no reference to AI-created content and are intentionally vague so as to allow arbitrariness in governmental action. They are also ambiguous in failing to clarify whether online media portals are included within their purview.
A concurrent reading of Rules 2(f), 2(j) and 3 imposes obligations on intermediaries to tackle information sharing that threatens health and safety. Providing assistance to trace the originator of a particular piece of information on their platforms is logistically impossible and violates privacy, and the automated takedown of ‘unlawful content’ can be abused to stifle speech and dissent. Moreover, there is no metric identifying how the five-million-user requirement is to be calculated, and it would increase business costs for, and reduce investor confidence in, firms meeting that threshold.
The chilling effect of overbreadth was illustrated conclusively in Shreya Singhal vs. Union of India. While the courts championed the Right to Free Speech and Expression by striking down Section 66A of the IT Act and defining “actual knowledge” by ISPs as a court or governmental agency order for content takedown, a re-reading of that section alongside the fundamental concepts of discussion, advocacy and incitement needs to be considered in light of today’s compromised state of information flows. Misinformation neither attracts the requirements of Section 66A, nor falls squarely under the motives of discussion, advocacy or incitement espoused in the judgement. Free speech should not shield fake information, given that its potential for damage is immense.
Intermediaries cannot shirk their responsibilities, even after receiving “actual knowledge”, as they are the gate-keepers of information. Section 79 of the IT Act ensures that they are exempt from liability only if they are diligent, the transmission or storage is temporary and they are not a party to the violation. To provide punitive effect, it is read with the following:
- Section 54 of the Disaster Management Act, 2005, under which an individual may be imprisoned for up to a year, with fine, if he/she causes panic regarding a disaster’s magnitude or severity by making or circulating a false alarm or warning, or
- Section 3 of the Epidemic Diseases Act, 1897, under which misinformation sharing that obstructs the discharge of government duty under Section 2 or 2A is penalized, further read with Section 188 of the IPC, which penalizes disobedience of governmental orders.
Conclusion and Recommendations
Over 90% of fellows surveyed from the American Association of Artificial Intelligence believed that super-intelligence was “beyond the foreseeable horizon.” AI, on its own, does not possess the means to squash misinformation, especially when other AI-generated content is added to the mix. While there is no simple fix for computational propaganda online, companies need to widen their myopic focus, set aside their arrogance and naïveté and put netizens’ interests first. Lawmakers also have a responsibility to make stringent laws that are effective in meeting the challenges of the techno-revolution. They must provide proper mechanisms that police social media through positive reinforcement, rather than the negative reinforcement methods presently employed. They must strike a balance between respecting user rights, avoiding undue burdens on companies and limiting fake news in a non-partisan manner, even where they stand to benefit from misinformation during election cycles. It is imperative that users use technology responsibly, implement fundamental behavioural changes and actively try to ascertain the accuracy of information.
- Media Pluralism and Literacy Initiatives: Diversity of expression allows informed choice. Apps and social networks must have integrated, contemporaneous accurate information hubs that aren’t buried, available in multiple languages and objective. User awareness is of utmost importance.
- Transparent Source Indicators: Instead of simply de-prioritizing misinformation by altering the algorithm, users should be told how search results or their feeds are influenced using digestible parameters.
- Immediate Relevant Legislation Implementation: India must put forth, after consultation, a cybersecurity policy, data protection legislation, bi-partisan anti-fake-news laws and rules, and responsible-AI-use policies to provide clarity on the subject. These should alleviate the burden on companies and protect user rights without compromising the flow of information. Governments must act responsibly, placing safeguards to prevent systemic surveillance and abuse of anti-defamation laws.
- Differentiated Penalties: Media houses which broadcast information, and politicians and eminent figures whose words can sway the populace, must be penalized differently from the common man. The former’s licence must be revoked if adequate measures are not taken to verify the accuracy of information, and the latter must be fined or barred from campaigning if they spread misinformation.
- Independent Appeal and Audit: Platforms dealing with user data and sharing of information must be audited by an independent committee that also considers the role automation plays. It must consist of multiple competent stakeholders to represent their interests.
- Extent of Human Intervention in Automation: Utilization of AI for moderation purposes must keep humans in the position of making value judgements, with users able to appeal those judgements and understand their basis.
- Integration of Verification Mechanisms: Apps, services and networks must have verification hubs that either proactively limit sharing of misinformation or allow a user to independently check the veracity of a claim, or both. The reasoning and associated disclaimers must also be attached.
- Research Centres: Since the area is still in its infancy, research is required to shed light on some of the complexities and increase transparency vis-a-vis AI, so as to better amend the laws when technology inevitably undergoes change.
- Corporate Social Responsibility: Service providers must also join hands in combating misinformation, and make it an integral part of their CSR protocols.
- Setting up of Nodal Agency to Tackle Misinformation: A governmental agency must be set-up to assist intermediaries, social media networks and other information providers on reducing the scourge of fake news.
Also Published In: Lawtechreview.in