
Assessing the Risks and Shortcomings of Generative AI

*This article is from Volume 19-3 (Jul/Sep 2024)

Abstract

(Generative Artificial Intelligence (AI) technologies are the latest hype, offering capabilities that range from content creation to smart bots that can engage in conversations and leave messages on answering machines, to advanced decision support systems. Despite their potential, these technologies also bring significant risks, including ethical concerns, biases in data, copyright infringement, security vulnerabilities, and societal impacts. This paper explores these general risks and discusses how the adoption of Generative AI technologies in Pakistan must be accompanied by robust regulatory frameworks. – Author)

Introduction

In March 2018, an Uber self-driving car fatally struck a pedestrian who was crossing the road in Tempe, Arizona. An investigation by the National Transportation Safety Board (NTSB) revealed that the system had detected the pedestrian but classified her first as an ‘unknown object’, then as a vehicle, and finally as a bicycle, leaving insufficient time to react[i]. Later that same month, another tragic incident occurred when a Tesla operating on Autopilot was involved in a fatal crash in Mountain View, California; the Autopilot system failed to detect a highway barrier, resulting in the crash. In December 2019, a Tesla on Autopilot struck and killed two people in Gardena, California[ii].

What these cases have in common is the inability of autonomous driving systems to accurately interpret and respond to dynamic road situations under unusual or ambiguous circumstances. These incidents highlight the technological limitations of current AI-driven vehicles, particularly in recognizing and processing unexpected objects or behaviours on the road quickly and accurately. They also raise another important question: who exactly is at fault when an AI system causes harm?

AI and generative AI technologies have exhibited major shortcomings in other domains. In October 2020, researchers from Nabla, a healthcare technology company, were testing an AI chatbot powered by OpenAI’s GPT-3 when it advised a test patient to commit suicide during a scenario in which the patient expressed suicidal thoughts. This alarming outcome occurred as part of an experiment designed to evaluate the AI’s suitability for medical advice scenarios[iii]. Another AI-powered chatbot, Eliza, encouraged a Belgian man to end his life[iv]. The unnamed man, disheartened by humanity’s role in exacerbating the global climate crisis, turned to Eliza for comfort. They chatted for several weeks, during which Eliza fed his anxieties and, later, his suicidal ideation. Eliza’s responses also took on an emotionally intimate tone, blurring the line between appropriate human and AI interaction. Both incidents highlighted significant risks in deploying AI technologies in sensitive health-related settings and sparked calls for enhanced safety measures.

Some also argue that AI makes us less creative and imaginative and erodes our critical thinking abilities[v]. This claim is heavily debated across several fields, particularly in education, technology, and workplace dynamics. In education, for example, there are concerns about how AI tools like chatbots might prevent students from fully developing their critical thinking skills if relied upon too heavily for tasks such as writing essays and conducting research.
Educational strategies are being developed to integrate AI in ways that enhance, rather than replace, critical thinking skills, using these tools to facilitate deeper engagement with learning materials and to encourage more rigorous evaluation of AI-generated content (Aithal & Silver, 2023). Similar discussions surround the integration of AI in the technology sector and the workplace. Overall, the integration of AI presents a complex challenge across these sectors, with significant attention focused on ensuring these technologies are used to enhance human capabilities rather than replace them.

As AI systems become more integrated into crucial aspects of daily life, the responsibility to ensure these systems operate safely and ethically grows. This article explores some of the major risks associated with AI and Generative AI technologies as a precursor to developing effective strategies and policies for mitigating them. By understanding these challenges in depth, policymakers and tech leaders can better prepare to harness the benefits of AI while ensuring safety, fairness, and ethical compliance in its deployment.

Background

Generative AI represents a transformative leap in the capability of machine learning systems to create new content and make autonomous decisions. This branch of AI focuses on the design of algorithms that can generate complex outputs, such as text, images, audio, and other media, by learning patterns from vast amounts of data without explicit instructions. Rooted in the principles of deep learning and neural networks, generative AI has evolved from simple pattern recognition to systems capable of producing intricate and nuanced creations.

The foundational technologies behind Generative AI include Generative Adversarial Networks (GANs), first introduced by Ian Goodfellow in 2014, and Variational Autoencoders (VAEs)[vi]. These technologies enable machines to generate realistic, high-resolution content that can sometimes be indistinguishable from human-created content (a minimal sketch of the adversarial training idea appears at the end of this section). Advances in natural language processing (NLP), such as those embodied in models like OpenAI’s GPT (Generative Pre-trained Transformer) series, have further demonstrated the profound impact of Generative AI, enabling machines to understand and generate human-like text, engage in conversations, answer questions, and even write persuasive essays.

The applications of Generative AI have become widespread, impacting industries such as entertainment, marketing, automotive, and healthcare. In entertainment and media, Generative AI is used to create new music, video game environments, and personalized content. In marketing, it provides tools for generating innovative product designs and advertising materials. The automotive industry leverages AI in the development of autonomous driving technologies, while healthcare applies it to drug discovery and patient management systems.

Despite its potential, Generative AI introduces significant challenges and risks. Issues such as data privacy, security vulnerabilities, ethical dilemmas, and the potential for misuse have led to calls for stringent regulatory frameworks. These frameworks aim to govern the deployment of AI technologies, ensuring they are used ethically and responsibly. As we examine the intricacies of Generative AI, it becomes crucial to balance innovation with safeguards that protect societal values and human rights, setting a precedent for responsible technology usage worldwide.
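To make the adversarial mechanism behind GANs concrete, the following minimal sketch (written in PyTorch; it is not from the original article) pits a small generator network against a small discriminator. Every detail here is an illustrative assumption: the layer sizes, learning rates, step count, and the random placeholder batch standing in for real training data.

```python
# Minimal sketch of the adversarial setup behind a GAN, using PyTorch.
# Illustrative only: toy networks trained against random placeholder
# "real" data; real systems use far larger models and curated datasets.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 32  # assumed toy dimensions

# Generator: maps random noise vectors to synthetic samples.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
# Discriminator: outputs a logit scoring how "real" a sample looks.
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, data_dim)        # placeholder for a real data batch
    fake = G(torch.randn(64, latent_dim))   # generated batch

    # Discriminator step: learn to label real samples 1 and fakes 0.
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: learn to make the discriminator label fakes as real.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In a real system the placeholder batch would come from a dataset of images, audio, or text, and training would run far longer; the essential point is the alternating objective, in which the discriminator learns to distinguish real from generated samples while the generator learns to fool it.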
Risks of Generative AI Technologies

Generative AI technologies, while transformative, present numerous technical and operational risks.
