From AI Algorithm Biases to Deepfakes: The Need to Regulate Cyberspace

Battling the Dark Side of AI: The Urgent Need for Regulation to Curb AI Algorithm Biases

After reading this comment, many may presume that the journal has an anti-AI or a pro-digital-censorship editorial stance.  This is not the case.  We have all benefitted from AI applications and innovations: the cover of this issue of CQ has been generated with the help of an AI application.  However, there is another side (related to AI algorithm biases, deepfakes, etc.), the toxicity of which is affecting millions, and this must be addressed (without getting apocalyptic).

The potential of social media as a catalyst for mass movements was witnessed in 2010-2011 through the Arab Spring.  The movement began in December 2010 as the “Jasmine Revolution” in Tunisia, after the self-immolation of Mohamed Bouazizi.  It succeeded in removing President Zine al-Abidine Ben Ali, who fled the country in January 2011.  Egypt was next in line and removed President Hosni Mubarak from power by February 2011.  The success in these two countries unleashed a wave of movements and protests in Yemen, Libya, Bahrain and Syria (https://www.britannica.com/event/Arab-Spring).

Social media was used to organize, disseminate content, create awareness and gather people; however, branding the Arab Spring a “Facebook Revolution” was an overstatement. Social media served as a campaigning, management, organizational and broadcasting tool, but it was not responsible for what preceded or followed the movement, i.e. its cause and its result.  Simply put, when reactions to injustices and inequalities embedded in the societal hierarchy were already in play, the new media provided a platform to manage and propagate these grievances effectively.

Things have changed since the Arab Spring. The exponential growth of “computational processing power” (Moore’s Law:  Gordon Moore, the cofounder of Intel, predicted that the number of transistors in a computer chip, in other words the computational processing power, would double approximately every two years—this prediction has remained accurate for nearly 60 years), advances in connectivity and big data have made Artificial Intelligence (AI) ubiquitous.
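The doubling described by Moore’s Law compounds dramatically over time; a minimal sketch (the starting transistor count for Intel’s 1971 chip is a widely cited figure, used here purely for illustration) shows the arithmetic:

```python
def transistors(n0, years, doubling_period=2):
    """Projected transistor count after `years`, assuming the count
    doubles once every `doubling_period` years (Moore's Law)."""
    return n0 * 2 ** (years / doubling_period)

# Intel's 4004 (1971) had roughly 2,300 transistors.  Fifty years of
# biennial doubling (25 doublings) projects to about 77 billion --
# the order of magnitude of today's largest chips.
print(f"{transistors(2300, 50):,.0f}")
```

The point of the sketch is the exponent: 25 doublings multiply the starting count by over 33 million, which is why regulation that lags even a few doubling periods behind the technology falls hopelessly out of date.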

The widespread use of Generative Artificial Intelligence, along with the reliance on social media as the primary source of information and news, especially among the youth, has, amongst other impacts (beneficial and harmful), augmented biases and amplified the sway and effectiveness of disinformation and misinformation. This, in turn, is aggravating preexisting grievances and extending the digital world’s ‘sphere of influence’ towards the causes and consequences of agitations around the world.

Augmenting Biases

An integral subset of AI is machine learning: the use of algorithms that learn from data.  Whether the learning is supervised or unsupervised (and whether or not it employs deep neural networks), it requires (at least for now) human input which, inadvertently or advertently, may carry biases.  These biases can affect AI applications designed for search engines, information extraction, social networks, etc.

Furthermore, AI algorithms generate search results in accordance with the user’s profile or online history.  For instance, if a person searches for articles written by John J. Mearsheimer (an American scholar of international relations known for his theory of offensive realism), search engine algorithms will most likely filter future searches on international relations to reflect the realist perspective.

In The Age of AI: And Our Human Future, Henry Kissinger et al. wrote, “… in cyberspace, filtration is self-reinforcing.  When the algorithmic logic that personalizes searching and streaming begins to personalize the consumption of news, books, or other sources of information, it amplifies some subjects and sources and, as a practical necessity, omits others completely.  The consequence of de facto omission is twofold: it can create personal echo chambers, and it can foment discordance between them.”
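The self-reinforcing filtration described above can be sketched as a toy re-ranker (the data and function names are hypothetical; real recommender systems are vastly more complex, but the feedback loop is the same):

```python
from collections import Counter

def personalize(results, history):
    """Toy personalization: rank results whose topic the user has
    clicked before above everything else.  Each click strengthens
    the bias, so the filter reinforces itself over time."""
    clicks = Counter(item["topic"] for item in history)
    return sorted(results, key=lambda r: clicks[r["topic"]], reverse=True)

# A user who has only ever clicked realist articles...
history = [{"topic": "realism"}, {"topic": "realism"}]
results = [
    {"title": "Liberal institutionalism primer", "topic": "liberalism"},
    {"title": "Offensive realism explained", "topic": "realism"},
]
# ...is shown the realist result first, making the next click
# (and the next round of re-ranking) even more lopsided.
for r in personalize(results, history):
    print(r["title"])
```

Because the ranking feeds the next round of clicks, perspectives the user never engages with are, in practice, omitted entirely: the “de facto omission” of the passage above.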

Deepfakes, Disinformation and Misinformation

To confound matters, deepfakes are boosting the already pervasive presence of disinformation and misinformation in cyberspace.   Deepfakes use AI and deep learning (neural networks trained on large data sets) to generate fake images, videos and audio. While beneficial to some industries, for instance entertainment, their nefarious applications have created discord, radicalization and polarization, especially when they reinforce preconceived biases.

The most effective deepfakes take a particular authentic incident, or a decontextualized quote or image, and create layers of lies around it.  The references used to generate these deepfakes are real and, therefore, credible to the masses, particularly those already inclined towards similar biases. The next step is circulation, or ‘making it viral’, and that is done through the efficient use of social media platforms.

As a result, an industry that was meant to unify humankind through online social interaction is now inundated with disinformation, misinformation and deepfakes that encourage tribalism, polarization and, in many cases, “toxic polarization”.

Addressing Toxic Polarization in Pakistan

Polarization, to a certain extent, is healthy for democracy. Diverse points of view and opposing arguments encourage deep introspection of ideas and policies.  However, one of the reasons for the degradation of polarization to toxic levels is the reinforcement of biases generated online.

There is a need to regulate digital space.  The state mechanism in Pakistan has installed a firewall to monitor and filter cyberspace activity.  Public backlash has been confined primarily to the effect it has had on internet accessibility and speed, hampering the burgeoning digital industry, with a peripheral concern pertaining to the suppression of freedom of speech.

The merits and demerits of arguments from either side should be premised on a holistic analysis of the current state of affairs in the country.  “Existential” is a word frequently used to denote the level of threat and fragility that the state must overcome.  The term “existential threat” has been in play since independence, yet the state has persevered. The challenges Pakistan has hitherto faced have been severe; whether they were ‘existential’ in nature, however, is debatable.

The present scenario in the country is different.  The mass circulation of misinformation and disinformation has created disharmony, chaos and an ever-deepening polarization between state institutions and the people. Polarization is a global phenomenon.  In the case of Pakistan, however, its ramifications can be overwhelming, as the state is already enmeshed in a polycrisis shaped by political instability, economic insecurity, climate change devastation and the reemergence of terrorism. Perhaps, in this scenario, the word “existential” may justifiably be prefixed to “threat”.

The term “digital terrorism” was coined to help the public relate to the intensity of the virtual battle being waged.  Since 9/11, a body of literature has been published on the causes of ‘conventional terrorism’ and the successful manipulation of grievances by a lunatic fringe.  Digital terrorism uses the same strategy, manipulating grievances through deepfakes and the circulation of misinformation to radicalize and polarize society.

In both forms of terrorism, the common denominator allowing manipulation to be effective is longstanding grievances that have not been addressed.  The causes of polarization may be unique to a region or country.  For instance, some regions may be facing a backlash from the presumed effects of globalization and the concomitant impact on migration and job losses.  In Pakistan’s case, successive governments’ catering to the political, economic and social elite while ignoring the plight of the masses is fueling resentment, radicalization and the search for alternatives.

Building a firewall for monitoring and censoring may be effective, considering the alternative of chaos and anarchy augmented by deepfakes and misinformation. It should, however, be used only as a stopgap arrangement, giving the state the space and time to address the genuine grievances of the masses and to shift from an elitist to an egalitarian mindset. Unless the core issues are addressed, the agitation will inevitably reemerge in another shape or form.

Regulating at the “Speed of Moore’s Law”

National and global regulators need to acquire the skills and technology to keep pace with developments in cyberspace. Policies that are relevant today may be redundant within months.  In his book, ‘Thank You for Being Late,’ Thomas L. Friedman—a three-time Pulitzer Prize winner and a columnist for the New York Times—wrote, “Every institution, whether it is the patent office… or any other major regulatory body, has to keep getting more agile—it has to be willing to experiment quickly and learn from mistakes.  Rather than expecting regulations to last for decades, it should continuously reevaluate the ways in which they serve society.  Universities are now experimenting with turning over their curriculum much faster and more often to keep up with the change in pace—putting a “use-by date” on certain courses.  Government regulators need to take a similar approach.  They need to be as innovative as the innovators.  They need to operate at the speed of Moore’s law.”

In addition, while regulating the misuse of AI in digital space, regulators must be cautious not to impede AI’s potential for uplifting humanity in science, medicine, universal access to education and healthcare, etc. In uncharted territory, there is a tendency to suppress more knowledge and information than is necessary, especially by fragile states.

The critique either for or against AI is as flexible as its definition.  Considering recent trends, the need for regulation is irrefutable.  Stephen Hawking summed up the challenge succinctly (albeit for different reasons): “The rise of powerful AI will either be the best or the worst thing ever to happen to humanity.  We do not yet know which.”
