Technology


From AI Algorithm Biases to Deepfakes: The Need to Regulate Cyberspace

Battling the Dark Side of AI: The Urgent Need for Regulation to Curb the Biases of AI Algorithms

After reading this comment, many may presume that the journal has an anti-AI or a pro-digital-censorship editorial stance. This is not the case. We have all benefitted from AI applications and innovations; the cover of this issue of CQ has been generated with the help of an AI application. However, there is another side (related to AI algorithm biases, deepfakes, etc.), the toxicity of which is affecting millions, and this must be addressed (without getting apocalyptic).

The potential of social media as a catalyst for mass movements was witnessed in 2010-2011 through the Arab Spring. The movement began in December 2010 as the "Jasmine Revolution" in Tunisia, after the self-immolation of Mohamed Bouazizi. It succeeded in removing President Zine al-Abidine Ben Ali, who fled the country in January 2011. Egypt was next in line, removing President Hosni Mubarak from power by February 2011. The success in these two countries unleashed a wave of movements and protests in Yemen, Libya, Bahrain and Syria (https://www.britannica.com/event/Arab-Spring).

Social media was used to organize, disseminate content, create awareness and gather people; however, branding the Arab Spring a "Facebook Revolution" was an overstatement. It was used as a campaigning, management, organizational and broadcasting tool, but it was not responsible for what preceded or proceeded the movement, i.e. the cause and the result. Simply put, when reactions to injustices and inequalities embedded in the societal hierarchy were already in play, the new media provided a platform to manage and propagate these grievances effectively.

Things have changed since the Arab Spring.
The exponential growth of computational processing power (Moore's Law: Gordon Moore, the cofounder of Intel, predicted that the number of transistors on a computer chip, in other words its computational processing power, would double approximately every two years; the prediction has remained accurate for nearly 60 years), along with advances in connectivity and big data, has made Artificial Intelligence (AI) ubiquitous. The widespread use of Generative Artificial Intelligence, together with the use of social media as the primary source of information and news, especially among the youth, has, amongst other impacts (beneficial and harmful), augmented biases and the sway and effectiveness of disinformation and misinformation. This, in turn, is aggravating preexisting grievances and extending the digital world's 'sphere of influence' towards the 'preceding and proceeding' segments of agitations around the world.

Augmenting Biases

An integral subset of AI is machine learning, the use of algorithms to learn from data. Whether supervised (as in deep learning) or unsupervised, it requires (or at least has required until now) human input which, advertently or inadvertently, may carry biases. These biases can affect AI applications designed for search engines, information extraction, social networks, etc. Furthermore, AI algorithms generate search results in accordance with the user's profile or online history. For instance, if a person searched for articles written by John J. Mearsheimer (an American scholar of international relations known for his theory of offensive realism), then search-engine algorithms will, most likely, filter future searches on international relations to reflect the realist perspective. In the book The Age of AI: And Our Human Future, Henry Kissinger et al. wrote, "… in cyberspace, filtration is self-reinforcing.
When the algorithmic logic that personalizes searching and streaming begins to personalize the consumption of news, books, or other sources of information, it amplifies some subjects and sources and, as a practical necessity, omits others completely. The consequence of de facto omission is twofold: it can create personal echo chambers, and it can foment discordance between them."

Deepfakes, Disinformation and Misinformation

To confound matters, deepfakes are boosting the already pervasive presence of disinformation and misinformation in cyberspace. Deepfakes use AI technology and deep learning (the supervised use of data sets) to generate fake images, videos and audio. While beneficial to some industries, for instance entertainment, their nefarious applications have created discord, radicalization and polarization, especially when they reinforce preconceived biases. The most effective deepfakes take a particular authentic incident, or a decontextualized quote or image, and create layers of lies around it. The references used to generate such deepfakes are real and, therefore, accepted by the masses, particularly those already inclined towards similar biases. The next step is circulation, or 'making it viral', achieved through the efficient use of social media platforms. As a result, an industry that was meant to unify humankind through online social interaction is now inundated with disinformation, misinformation and deepfakes that encourage tribalism, polarization and, in many cases, "toxic polarization".

Addressing Toxic Polarization in Pakistan

Polarization, to a certain extent, is healthy for democracy. Diverse points of view and opposing arguments encourage deep introspection of ideas and policies. However, one of the reasons for the degradation of polarization to toxic levels is the reinforcement of biases generated online. There is a need to regulate the digital space.
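The self-reinforcing filtration described earlier in this section can be made concrete with a toy sketch. This is a hypothetical illustration only, not any platform's actual ranking code: every article title and tag below is invented, each click updates a user profile, and results are re-ranked by profile affinity, so the feed narrows with every interaction.

```python
from collections import Counter

# Hypothetical catalogue of (title, perspective tag) pairs; the titles
# are invented for illustration, not drawn from any real search index.
ARTICLES = [
    ("Offensive realism revisited", "realist"),
    ("Institutions and cooperation", "liberal"),
    ("Norms shape state behaviour", "constructivist"),
    ("Balance of power in Asia", "realist"),
]

def rank(articles, profile):
    # Rank by how often the user previously clicked items with the same tag.
    return sorted(articles, key=lambda art: profile[art[1]], reverse=True)

profile = Counter()
profile["realist"] += 1            # one Mearsheimer search seeds the profile

feed = rank(ARTICLES, profile)     # realist items now lead the results
profile[feed[0][1]] += 1           # clicking the top item deepens the bias
```

Each pass through ranking and clicking strengthens the dominant tag, which is exactly the echo-chamber loop the quoted passage warns about.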
The state mechanism in Pakistan has installed a firewall to monitor and filter cyberspace activity. Public backlash has been confined primarily to the effect it has had on internet accessibility and speed, affecting the burgeoning digital industry, with a peripheral concern pertaining to the suppression of freedom of speech. The merits and demerits of arguments from either side should be premised on a holistic analysis of the current state of affairs in the country.

"Existential" is a word frequently used to denote the level of threat and fragility that the state must overcome. The term "existential threat" has been in play since independence, yet the state has persevered. The challenges Pakistan has hitherto faced have been severe; whether the threats were truly 'existential' in nature is debatable. The present scenario in the country is different. The mass circulation of misinformation and disinformation has created disharmony, chaos and an ever-deepening polarization between state institutions and the people. Polarization is a global phenomenon. In the case of Pakistan, however, the ramifications of polarization can be overwhelming as the state is already …



Pakistan’s Quest for Economic Growth through Digital Transformation

*This article is from Volume 19-3 (Jul/Sep 2024)

Abstract

(The digital revolution is reshaping the world economy, creating new opportunities for sustainable development and inclusive growth. Digital technologies can enhance productivity, innovation, and connectivity, as well as support green development and social inclusion. However, harnessing the potential of digital transformation requires a strategic vision, a long-term policy framework, and coordinated implementation across sectors and stakeholders. Without these elements, digital transformation can also create new challenges and inequalities. Could digital transformation be the route for Pakistan to achieve leapfrogging growth? We could draw on the experiences of several countries in the Asia and Pacific region that have successfully implemented digital policies and achieved digital development. It is also relevant to examine the hypothesis that the government and the state, both key actors in the growth and progress of a country, may have different roles and incentives in committing to long-term journeys such as digital transformation that span multiple governance regimes. The article is structured as follows. First, we discuss the concept and importance of long-term planning, especially in the post-COVID era, and the main components of a comprehensive policy framework for digital transformation. Second, we present some country examples of successful digitalization and identify cases of leapfrogging growth. Third, we analyze the role of the government and the state in pursuing and sustaining digital policies. Finally, we conclude with some implications and recommendations for Pakistan, a developing country with immense potential to benefit from digital transformation. – Author)

Table of Contents

Introduction
1. DIGITAL IS THE NEXT FRONTIER
2. THE ROLE OF LONG-TERM POLICY PLANNING
2.1. The Need for Comprehensive Policy Frameworks
3. REGULATIONS TILT THE BALANCE TOWARDS INNOVATION
3.1. Long-Termism and Sustainability
4. INSIGHTS FROM COUNTRY VIGNETTES
4.1. China's Internet Plus and Economies of Scale
4.2. Estonia's Interoperable Digital Platforms
4.3. Singapore's Smart Nation Initiative
4.4. South Korea's Digital New Deal and 5G Networks
4.5. India's Digital Public Infrastructure with a Laissez-faire Approach
4.6. Lithuania's Public Policy Innovations Making It a Leader in Fintech
4.7. UAE: a Country with a Ministry of Artificial Intelligence
4.8. Digital Denmark
4.9. Israel: the Startup Nation
5. LONG-TERM POLICY FRAMEWORKS ENABLE DIGITAL COUNTRY TRANSFORMATION
6. ARTIFICIAL INTELLIGENCE FOR LONG-TERM COUNTRY EVOLUTION
6.1. Data Enables Learning
6.2. Artificial Intelligence Can Provide Insights and Foresights
6.3. The State Is a Long-term Stakeholder
6.4. Digital Transformation and the Leapfrogging Approach to Growth
7. LEAPFROGGING AND TRANSFORMATIVE GROWTH FOR PAKISTAN
7.1. Chinese Solar Panels: a Case of Transformative Leapfrogging
8. LESSONS PAKISTAN CAN LEARN FROM THE EXPERIENCE OF OTHERS
9. POLICY RECOMMENDATIONS AND CONCLUSION

Introduction

Digital technology innovation is transforming economies, and investments in sustainable development are likely to be the leading drivers of growth for the coming years. With digital developments spurred on by COVID, a number of governments invested in making the changeover to digital processes and services; the internet, communication and digital technology have shaped development. The dynamic economies of our time are moving from old growth drivers based on low-cost manufacturing or services to new growth drivers enabled by technology, new materials and value-added manufacturing and services. Technology-enabled industrial capacity and supply chains, together with innovation, are driving up the value of exports for countries like South Korea and Vietnam. There is also evidence that digitally evolved societies are more inclusive, particularly pertaining to women's labor-force participation.
In a post-ChatGPT world, Artificial Intelligence presents vast potential to enhance capacity and productivity and to bring innovation to public-sector service delivery. Different approaches and policy actions have resulted in various country trajectories; while the paths taken may have been different, there appears to be a sustained state commitment to the digital agenda in nearly all success stories. In some cases, there is clear evidence of a relationship between long-term public policy frameworks and the digital transformation of a country. Can Pakistan's digital transformation contribute to the much-needed growth? Can the nation leapfrog and transition to accelerated economic growth through inclusive digital transformation?

Pakistan is the third-largest wheat producer in Asia and ranks among the top 10 for other agricultural products, with exports surpassing $6 billion. In South Korea, the manufacturing sector accounts for 12.5% of the country's $1.7 trillion GDP. India's technology exports are approaching $200 billion, growing at an annual rate of 8.3% and contributing roughly $16 billion in 2022. In comparison, Saudi Arabia's crude oil exports were valued at $224.8 billion in 2022. The IT and business process outsourcing (BPO) sectors form the largest portion (over 60%) of India's service exports (EY India 2023). This has driven the growth of total services exports at a compound annual growth rate of 14% in dollar terms over the past twenty years.

We can learn from the journeys taken by other developing countries in the Asia and Pacific region; however, such a digital transformation would require a long-term public policy commitment that can lead to the digital evolution of the government and society.

1. DIGITAL IS THE NEXT FRONTIER

Innovation in digital technology is revolutionizing economies, with sustainable development investments driving growth for the next decade and beyond.
Digital transformation impacts various aspects of the economy and society in complex ways, making trade-offs between policy goals challenging. It also necessitates considering cross-cutting policy issues like skills, digital government, and data governance (OECD, 2021). The adoption and mainstreaming of digital technologies, however, require an inspired vision, a strategic long-term view and investment in public policy frameworks. Emerging technologies such as artificial intelligence (AI), the Internet of Things (IoT), blockchain, and 5G networks have the potential to transform industries and create new opportunities for businesses and individuals. Additionally, digital technologies support green development by enabling resource efficiency, reducing emissions through telecommuting, and supporting sustainable supply chains and data-driven decisions. Digital transformation cuts across traditional sectoral boundaries, necessitating a whole-of-government approach to realize its potential and to manage trade-offs across policy areas (OECD, 2021). In the digital age, technology is essential to daily life, and nations are adopting digital transformation to enhance economic growth and competitiveness. However, …



Assessing the Risks and Shortcomings of Generative AI

*This article is from Volume 19-3 (Jul/Sep 2024)

Abstract

(Generative Artificial Intelligence (AI) technologies are the latest hype, offering capabilities that range from content creation, to smart bots that can engage in conversations and leave messages on answering machines, to advanced decision-support systems. Despite their potential, these technologies also bring with them significant risks, including ethical concerns, biases in data, copyright infringement, security vulnerabilities, and societal impacts. This paper explores these general risks and discusses how the adoption of Generative AI technology in Pakistan must be accompanied by robust regulatory frameworks. – Author)

Introduction

In March 2018, an Uber self-driving car fatally struck a pedestrian who was crossing the road in Tempe, Arizona. An investigation by the National Transportation Safety Board (NTSB) revealed that the system had detected the pedestrian but classified her first as an 'unknown object', then as a vehicle, and finally as a bicycle, leaving insufficient time to react[i]. Later that same month, another tragic incident occurred when a Tesla operating on Autopilot was involved in a fatal crash in Mountain View, California: the vehicle's driver-assistance system failed to detect a highway barrier, resulting in the crash. In December 2019, a Tesla on Autopilot struck and killed two people in Gardena, California[ii].

What is common to these cases is the inability of autonomous driving systems to accurately interpret and respond to dynamic road situations under unusual or ambiguous circumstances. These incidents highlight the technological limitations of current AI-driven vehicles, particularly in recognizing and processing unexpected objects or behaviours on the road quickly and accurately. They also raise another important question: who exactly is at fault when AI causes harm? AI and generative AI technologies have exhibited major shortcomings in other domains as well.
In October 2020, researchers from Nabla, a healthcare technology company, were testing an AI chatbot powered by OpenAI's GPT-3 when it advised a test patient to commit suicide during a scenario in which the patient expressed suicidal thoughts. This alarming outcome took place as part of an experiment designed to evaluate the AI's use in medical-advice scenarios[iii]. Another AI-powered chatbot, Eliza, encouraged a Belgian man to end his life[iv]. The unnamed man, disheartened by humanity's role in exacerbating the global climate crisis, turned to Eliza for comfort. They chatted for several weeks, during which Eliza fed into his anxieties and, later, his suicidal ideation. Eliza also became emotionally involved with the man, blurring the lines of appropriate human-AI interaction. Both incidents highlight significant risks in deploying AI technologies in sensitive health-related environments, and both sparked calls for enhanced safety measures.

There are also opinions that AI is making us less creative, less imaginative, and weaker in critical thinking[v]. This view is heavily debated in several fields, particularly in education, technology, and workplace dynamics. In education, for example, there are concerns and discussions about how AI tools like chatbots might prevent students from fully developing their critical thinking skills if relied upon too heavily for tasks such as writing essays and conducting research. Educational strategies are being developed to integrate AI in ways that enhance, rather than replace, critical thinking, by using these tools to facilitate deeper engagement with learning materials and to encourage a more rigorous evaluation of AI-generated content (Aithal & Silver, 2023). There are similar discussions around integrating AI in tech and workplace settings.
Overall, the integration of AI presents a complex challenge across these sectors, with significant attention focused on ensuring these technologies are used as tools for enhancing human capabilities rather than replacing them. As AI systems become more integrated into crucial aspects of daily life, the responsibility to ensure these systems operate safely and ethically grows. This article explores some of the major risks associated with AI and Generative AI technologies as a precursor to developing effective strategies and policies for mitigating those risks. By understanding these challenges in depth, policymakers and tech leaders can better prepare to harness the benefits of AI while ensuring safety, fairness, and ethical compliance in its deployment.

Background

Generative AI represents a transformative leap in the capability of machine learning systems to create new content and make autonomous decisions. This branch of AI focuses on the design of algorithms that can generate complex outputs, such as text, images, audio, and other media, by learning patterns from vast amounts of data without explicit instructions. Rooted in the principles of deep learning and neural networks, generative AI has evolved from simple pattern recognition to systems capable of producing intricate and nuanced creations. The foundational technologies behind Generative AI include Generative Adversarial Networks (GANs), first introduced by Ian Goodfellow in 2014, and Variational Autoencoders (VAEs)[vi]. These technologies enable machines to generate realistic, high-resolution content that can sometimes be indistinguishable from human-created content.
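The adversarial idea behind GANs can be sketched in one dimension. What follows is a minimal toy illustration under simplifying assumptions, not Goodfellow's full neural-network architecture: the "generator" is a linear map of noise, the "discriminator" is a logistic classifier, and the two are trained in alternation so that generated samples drift towards the real data distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Clip to keep np.exp well behaved if the discriminator saturates.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60.0, 60.0)))

w, b = 1.0, 0.0      # generator: g(z) = w*z + b
a, c = 0.1, 0.0      # discriminator: d(x) = sigmoid(a*x + c)
lr = 0.05

for _ in range(2000):
    real = rng.normal(4.0, 0.5, size=32)    # "real" data, centred at 4
    z = rng.normal(0.0, 1.0, size=32)       # input noise
    fake = w * z + b                        # generated samples

    # Discriminator step: push d(real) towards 1 and d(fake) towards 0.
    dr, df = sigmoid(a * real + c), sigmoid(a * fake + c)
    a -= lr * (np.mean((dr - 1.0) * real) + np.mean(df * fake))
    c -= lr * (np.mean(dr - 1.0) + np.mean(df))

    # Generator step: adjust (w, b) so the discriminator rates fakes as real.
    df = sigmoid(a * fake + c)
    upstream = (df - 1.0) * a               # gradient of -log d(fake) w.r.t. fake
    w -= lr * np.mean(upstream * z)
    b -= lr * np.mean(upstream)

# Since E[z] = 0, the mean of generated samples equals b, which should
# have drifted from 0 towards the real mean of 4 during training.
```

The two models optimize opposing objectives, and the generator improves precisely by learning to defeat an ever-better critic; that tug-of-war is what "adversarial" means in the name.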
Advancements in natural language processing (NLP), such as those in OpenAI's GPT (Generative Pre-trained Transformer) series of models, have further demonstrated the profound impact of Generative AI, enabling machines to understand and generate human-like text, engage in conversations, answer questions, and even write persuasive essays. The applications of Generative AI have become widespread, impacting industries such as entertainment, marketing, automotive, and healthcare. In entertainment and media, Generative AI is used to create new music, video-game environments, and personalized content. In marketing, it provides tools for generating innovative product designs and advertising materials. The automotive industry leverages AI in the development of autonomous driving technologies, while healthcare sees its application in drug discovery and patient management systems.

Despite its potential, Generative AI introduces significant challenges and risks. Issues such as data privacy, security vulnerabilities, ethical dilemmas, and the potential for misuse have led to calls for stringent regulatory frameworks. These frameworks aim to govern the deployment of AI technologies, ensuring they are used ethically and responsibly. As we examine the intricacies of Generative AI, it becomes crucial to balance innovation with safeguards that protect societal values and human rights, setting a precedent for responsible technology usage worldwide.

Risks of Generative AI Technologies

Generative AI technologies, while transformative, present numerous technical and operational …

