Assessing the Risks and Shortcomings of Generative AI


*This article is from Volume 19-3 (Jul/Sep 2024)

Abstract
(Generative Artificial Intelligence (AI) technologies are the latest hype, offering capabilities that range from content creation to smart bots that can engage in conversations and leave messages on answering machines, to advanced decision support systems. Despite their potential, these technologies also bring with them significant risks, including ethical concerns, biases in data, copyright infringement, security vulnerabilities, and societal impacts. This paper explores these general risks and discusses why the adoption of Generative AI technologies in Pakistan must be accompanied by robust regulatory frameworks. – Author)

Introduction

In March 2018, an Uber self-driving car fatally struck a pedestrian who was crossing the road in Tempe, Arizona. An investigation by the National Transportation Safety Board (NTSB) revealed that the system had detected the pedestrian but classified her first as an ‘unknown object’, then as a vehicle, and finally as a bicycle, leaving insufficient time to react[i]. Later that same month, another tragic incident occurred when a Tesla operating on Autopilot crashed fatally in Mountain View, California, after the driver-assistance system failed to detect a highway barrier. In December 2019, a Tesla operating on Autopilot ran a red light in Gardena, California, striking another car and killing two people[ii]. What these cases have in common is the inability of autonomous driving systems to accurately interpret and respond to dynamic road situations under unusual or ambiguous circumstances. These incidents highlight the technological limitations of current AI-driven vehicles, particularly in recognizing and processing unexpected objects or behaviours on the road quickly and accurately. They also raise another important question: who exactly is at fault when AI causes harm?

AI and generative AI technologies have exhibited major shortcomings in other domains. In October 2020, researchers from Nabla, a healthcare technology company, were testing an AI chatbot powered by OpenAI’s GPT-3 when it advised a test patient to commit suicide during a scenario in which the patient expressed suicidal thoughts. This alarming outcome took place as part of an experiment designed to evaluate the AI’s use in medical advice scenarios[iii]. Another AI-powered chatbot, Eliza, encouraged a Belgian man to end his life[iv]. The unnamed man, disheartened by humanity’s role in exacerbating the global climate crisis, turned to Eliza for comfort. They chatted for several weeks, during which Eliza fed into his anxieties and, later, his suicidal ideation. Eliza also became emotionally involved with the man, blurring the lines between appropriate human and AI interactions. Both incidents highlight significant risks in deploying AI technologies in sensitive, health-related contexts and have sparked calls for enhanced safety measures.

Some have argued that AI is making us less creative and imaginative and is eroding our critical thinking abilities[v]. This view is heavily disputed in several fields, particularly education, technology, and workplace dynamics. In education, for example, there are concerns and discussions about how AI tools like chatbots might prevent students from fully developing their critical thinking skills if relied upon too heavily for tasks such as writing essays and conducting research. Educational strategies are being developed to integrate AI in ways that enhance, rather than replace, critical thinking skills, by using these tools to facilitate deeper engagement with learning materials and to encourage a more rigorous evaluation of AI-generated content (Aithal & Silver, 2023). There are similar discussions around integrating AI in technology companies and workplaces. Overall, the integration of AI presents a complex challenge across these sectors, with significant attention focused on ensuring these technologies are used as tools for enhancing human capabilities rather than replacing them.

As AI systems become more integrated into crucial aspects of daily life, the responsibility to ensure these systems operate safely and ethically grows. This article will explore some of the major risks associated with AI and Generative AI technologies as a precursor to developing effective strategies and policies for mitigating these risks. By understanding these challenges in depth, policymakers and tech leaders can better prepare to harness the benefits of AI while ensuring safety, fairness, and ethical compliance in its deployment.

Background

Generative AI represents a transformative leap in the capabilities of machine learning systems to create new content and make autonomous decisions. This branch of AI focuses on the design of algorithms that can generate complex outputs, such as text, images, audio, and other media, by learning patterns from vast amounts of data without explicit instructions. Rooted in the principles of deep learning and neural networks, generative AI has evolved from simple pattern recognition to systems capable of producing intricate and nuanced creations.

The foundational technologies behind Generative AI include Generative Adversarial Networks (GANs), first introduced by Ian Goodfellow in 2014, and Variational Autoencoders (VAEs)[vi]. These technologies enable machines to generate realistic, high-resolution content that can sometimes be indistinguishable from human-created content. Advancements in natural language processing (NLP), such as those used by models like OpenAI’s GPT (Generative Pre-trained Transformer) series, have further demonstrated the profound impact of Generative AI, enabling machines to understand and generate human-like text, engage in conversations, answer questions, and even write persuasive essays.
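To make the adversarial setup behind GANs concrete, the following is a minimal sketch, in PyTorch, of the two-player training loop: a generator learns to map random noise to data-like samples while a discriminator learns to separate real samples from generated ones. The network sizes, placeholder data, and hyperparameters are illustrative assumptions, not taken from any system discussed in this article.

```python
# Minimal GAN training loop (illustrative sketch, not a production model).
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())
bce = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

real_batch = torch.randn(batch, data_dim)            # stand-in for real training data

for step in range(200):
    # Discriminator step: push real samples towards 1 and generated samples towards 0.
    fake_batch = G(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(D(real_batch), torch.ones(batch, 1)) + \
             bce(D(fake_batch), torch.zeros(batch, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator step: produce samples the discriminator labels as real.
    fake_batch = G(torch.randn(batch, latent_dim))
    g_loss = bce(D(fake_batch), torch.ones(batch, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

Scaled up to convolutional or transformer-based networks trained on very large datasets, the same two-player structure underlies the high-resolution generators described above.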

The applications of Generative AI have become widespread, impacting industries such as entertainment, marketing, automotive, and healthcare. In entertainment and media, Generative AI is used to create new music, video game environments, and personalized content. In marketing, it provides tools for generating innovative product designs and advertising materials. The automotive industry leverages AI in the development of autonomous driving technologies, while healthcare sees its application in drug discovery and patient management systems.

Despite its potential, Generative AI introduces significant challenges and risks. Issues such as data privacy, security vulnerabilities, ethical dilemmas, and the potential for misuse have led to calls for stringent regulatory frameworks. These frameworks aim to govern the deployment of AI technologies, ensuring they are used ethically and responsibly. As we examine the intricacies of Generative AI, it becomes crucial to balance innovation with safeguards that protect societal values and human rights, setting a precedent for responsible technology usage worldwide.

Risks of Generative AI Technologies

Generative AI technologies, while transformative, present numerous technical and operational risks that can have significant repercussions. These include hallucinations, weak model robustness, algorithmic bias, environmental impact, security vulnerabilities, impaired decision-making, scalability and performance issues, and ethical concerns. Each of these risks is discussed below.

Hallucinations:

Hallucinations in Generative AI refer to instances where AI systems produce outputs that are inaccurate, misleading, or entirely fabricated. This is particularly significant as it can lead to the spread of misinformation and undermine the reliability of AI applications across various fields. Examples of AI hallucination have been reported with Microsoft’s Copilot, which at times provides bizarre and unrelated outputs during interactions. For example, it might suggest a function that doesn’t exist or provide a code snippet that contradicts the intended functionality. In some instances, Copilot has been noted to assert facts or data that are completely made up. A user reported an incident where Copilot confidently presented incorrect data about a programming language’s capabilities, which could mislead less experienced programmers. There have also been reports of Copilot generating outputs that were inappropriate for the context[vii], including using language or suggesting solutions that would be deemed unprofessional or out of place in a business environment.

Image: Reddit user “riap0526” and their experience with Copilot. Source: reddit.com/r/bing

Similarly, Google’s Gemini chatbot generated fabricated images that were historically inaccurate, inappropriate, and offensive, leading to serious ethical concerns[viii].

Image: Gemini’s fabricated images of racially diverse Nazis

Hallucinations have serious implications: they can spread misinformation, erode trust, raise reliability concerns, and create ethical problems. They can harm brand reputation, create regulatory exposure, and degrade the user experience. It is therefore critical that guardrails and enhanced testing protocols be established to detect and mitigate such errors and to sustain trust in, and the reliability of, AI systems over the long run.
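As one illustration of what such a guardrail might look like, the sketch below checks whether a chatbot answer is grounded in a trusted reference text before it is surfaced, flagging ungrounded answers for review. The keyword-overlap heuristic, the threshold, and the example strings are purely illustrative assumptions; real deployments typically combine retrieval with entailment or fact-checking models rather than simple word matching.

```python
# Toy groundedness check: flag answers whose content words barely overlap
# with the trusted reference material (illustrative only).
from typing import List

def is_grounded(answer: str, references: List[str], min_overlap: float = 0.5) -> bool:
    answer_words = {w.lower().strip(".,") for w in answer.split() if len(w) > 4}
    if not answer_words:
        return True                       # nothing substantive to check
    reference_words = {w.lower().strip(".,") for ref in references for w in ref.split()}
    overlap = len(answer_words & reference_words) / len(answer_words)
    return overlap >= min_overlap

references = ["The API supports JSON and XML output formats."]
print(is_grounded("The API supports JSON and XML output formats.", references))       # True
print(is_grounded("The API supports YAML, protobuf and CSV streaming.", references))  # False
```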

Model Robustness and Error Propagation:

Model robustness refers to an AI model’s ability to perform accurately and predictably even when faced with variations or perturbations in input data. Error propagation, a critical issue related to robustness, occurs when small inaccuracies or “noise” in the input data are amplified through the AI model’s processing layers, leading to larger errors in the output. This is particularly problematic in systems where precision is crucial, such as medical diagnosis tools or autonomous driving systems, where even minor errors can escalate into significant and potentially dangerous mistakes. For instance, a slight misreading of sensor data could lead an autonomous vehicle’s AI to interpret a roadside object as a pedestrian, or a pedestrian as a roadside object, prompting inappropriate evasive actions, as in the March 2018 Uber incident in Tempe, Arizona, mentioned earlier.
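A small numerical experiment illustrates the amplification effect. The sketch below pushes an input and a minutely perturbed copy of it through a randomly initialised, untrained network whose weights are deliberately left unnormalised, then compares how far the two outputs drift apart. All sizes and scales are illustrative assumptions chosen to exaggerate the effect.

```python
# Illustrative demonstration of error propagation through network layers.
import numpy as np

rng = np.random.default_rng(0)
layers = [rng.normal(scale=1.0, size=(64, 64)) for _ in range(10)]   # unnormalised weights

def forward(x):
    for W in layers:
        x = np.maximum(W @ x, 0.0)        # linear layer followed by ReLU
    return x

x = rng.normal(size=64)
noise = 1e-6 * rng.normal(size=64)        # an imperceptibly small perturbation

diff_in = np.linalg.norm(noise)
diff_out = np.linalg.norm(forward(x + noise) - forward(x))
print(f"input difference:  {diff_in:.2e}")
print(f"output difference: {diff_out:.2e} (amplified ~{diff_out / diff_in:.0f}x)")
```

A well-regularised model keeps this amplification factor small; a brittle one lets tiny sensor noise snowball into a qualitatively different output.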

AI hallucinations are closely related to these issues of robustness and error propagation. While hallucinations often stem from flaws in the AI’s training data or its processing algorithms, they can also be exacerbated by the model’s lack of robustness and susceptibility to error propagation. Some of the examples in the previous section on hallucinations are due to AI being trained on datasets that are not representative or are biased.

Algorithmic Bias and Data Skewness:

Another foundational issue in AI is algorithmic bias. This happens when an AI system generates outputs that are systematically prejudiced due to assumptions in the algorithm or biased data used in training the model. For example, COMPAS, an AI-based software used in the U.S. judicial system to predict the likelihood of a defendant reoffending, was reported to exhibit racial bias, inaccurately predicting higher recidivism rates for black defendants compared to their actual likelihood[ix]. Similarly, Geolitica, previously PredPol, a “predictive policing” software used by dozens of cities in the United States to predict crime locations and times was found to disproportionately target low-income Black and Latino neighbourhoods, not necessarily areas with higher crime rates[x].

Facial recognition technologies have also faced criticism for higher error rates when identifying individuals from certain demographic groups. For example, Amazon Rekognition[xi] and Microsoft’s Face API[xii] were found to misidentify women and people with darker skin tones at higher rates, and IBM’s facial recognition software had high error rates when identifying people of colour compared to white individuals (it is important to note that IBM no longer offers facial recognition software as part of its commitment to responsible tech use). Similar and equally risky instances of algorithmic bias have been found in healthcare, recruiting, and online advertising, where they perpetuate stereotypes against minorities and women.
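One concrete way such disparities are surfaced is by auditing a model’s error rates separately for each demographic group, much as ProPublica compared false positive rates across groups in its COMPAS analysis. The sketch below runs that comparison on synthetic, purely illustrative data with a deliberately skewed stand-in “model”.

```python
# Toy fairness audit: compare false positive rates across two groups.
import numpy as np

rng = np.random.default_rng(1)
group  = rng.choice(["A", "B"], size=1000)       # hypothetical demographic attribute
y_true = rng.binomial(1, 0.3, size=1000)         # 1 = person actually reoffended
# A deliberately skewed stand-in model: flags group B more often regardless of outcome.
y_pred = rng.binomial(1, np.where(group == "B", 0.5, 0.3))

for g in ("A", "B"):
    did_not_reoffend = (group == g) & (y_true == 0)
    fpr = y_pred[did_not_reoffend].mean()        # share wrongly flagged as high risk
    print(f"group {g}: false positive rate = {fpr:.2f}")
```

A persistent gap between the two printed rates is exactly the kind of evidence that prompted the criticisms of COMPAS and similar systems.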

Environmental Impacts:

Generative AI has a significant carbon footprint due to the amount of energy it consumes and the resulting emissions from powering its large data centres. Training a single AI model can consume as much electricity as 100 US homes use in an entire year[xiii] and emit as much carbon dioxide as five cars do over their lifetimes. The training of large language models like GPT-3 and GPT-4 requires extensive computational resources, leading to substantial electricity use and increased carbon footprints.


The contrast between the carbon footprint produced by training an AI model and the average life of people and automobiles. Image: spglobal.com.
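Headline figures of this kind are typically derived from a simple relationship: energy consumed multiplied by the carbon intensity of the electricity that supplied it. The back-of-envelope sketch below works through the arithmetic using the “100 US homes for a year” comparison; the household consumption and grid intensity values are illustrative assumptions, not measurements.

```python
# Back-of-envelope estimate of training emissions (illustrative assumptions).
US_HOME_KWH_PER_YEAR = 10_600      # assumed average annual US household electricity use
GRID_KG_CO2_PER_KWH  = 0.4         # assumed average grid carbon intensity

training_energy_kwh = 100 * US_HOME_KWH_PER_YEAR            # "100 US homes for a year"
training_emissions_tonnes = training_energy_kwh * GRID_KG_CO2_PER_KWH / 1000

print(f"Estimated training energy:    {training_energy_kwh:,} kWh")
print(f"Estimated training emissions: {training_emissions_tonnes:,.0f} tonnes CO2")
```

Changing either assumption, for example by training on a cleaner electricity grid, shifts the estimate proportionally, which is one reason reported figures for the same model vary widely.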

Other environmental impacts of AI include e-waste disposal and effects on ecosystems[xiv]. E-waste includes discarded electronic appliances and components such as computers, smartphones, and other digital devices that are pivotal in developing and maintaining AI technologies. These waste products often contain hazardous substances like lead, mercury, and cadmium, which can leach into the environment and pose significant health risks to both ecosystems and human populations. According to the United Nations Environment Programme, the global volume of e-waste is expected to reach 74 million metric tons by 2030, double the 2014 figure[xv]. This increase is driven by higher consumption rates of electronic devices, shorter life cycles, and fewer repair options. The environmental ramifications of this trend are profound, affecting soil, water sources, and air quality, and disproportionately impacting poorer countries where much of the world’s e-waste is illegally dumped.

Predictions for the future suggest that without significant improvements in energy efficiency and waste management, the environmental footprint of AI could become an even more pressing issue. Strategies to mitigate these effects include developing more energy-efficient AI technologies, improving the recyclability of tech products, and regulating AI development to prioritize environmental considerations.

Security Vulnerabilities:

AI systems, while highly sophisticated and beneficial in many applications, are inherently vulnerable to specific security threats that can severely undermine their reliability and safety. Among the most concerning of these threats are adversarial attacks. These attacks involve deliberately crafted inputs that exploit weaknesses in the AI’s decision-making process, causing the system to make errors. An adversarial attack targets the model’s data processing capability by introducing specially engineered noise or modifications that are often imperceptible to humans but can deceive AI systems. This type of attack takes advantage of the way AI algorithms process data, exploiting the gap between human sensory perceptions and an AI’s interpretation of data.
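The fast gradient sign method (FGSM) is the textbook example of how such engineered noise is crafted: nudge every input value a tiny amount in the direction that most increases the model’s loss. The sketch below applies it to an untrained stand-in classifier and a random “image”, so the numbers are illustrative; against a trained model, a perturbation this small routinely flips the prediction while remaining invisible to a human.

```python
# Illustrative FGSM adversarial perturbation against a stand-in classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # stand-in image classifier
image = torch.rand(1, 1, 28, 28, requires_grad=True)          # stand-in 28x28 input image
label = torch.tensor([3])                                     # its assumed true class

loss = F.cross_entropy(model(image), label)
loss.backward()                                               # gradient of the loss w.r.t. the image

epsilon = 0.05                                                # small perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```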

A well-known class of adversarial attacks targets autonomous driving systems. Researchers have demonstrated that by subtly altering the appearance of road signs, through something as simple as applying stickers or graffiti to a stop sign, these systems can be tricked into misinterpreting the signs[xvi]. For instance, an altered stop sign might be recognized by the vehicle’s AI as a speed limit sign. This misclassification can lead to inappropriate responses from the autonomous driving system, such as failing to stop at intersections, thereby increasing the risk of accidents. This kind of attack, known as a physical adversarial attack, is especially troubling because it requires only minor physical modifications to create a hazard that is hard for human observers to detect but significantly alters how an AI system perceives and reacts to its environment.

To mitigate the risks associated with adversarial attacks on AI systems, several strategies are being actively developed. Adversarial training is one such strategy, where adversarial inputs are included during the AI development phase, enabling the model to learn how to recognize and correctly classify these manipulated inputs. Another approach involves designing AI systems with robust architectures that inherently distinguish between typical data and maliciously altered data, enhancing their resistance to attacks. Additionally, continuous monitoring and updating are crucial. Implementing systems that can detect adversarial attacks in real-time and regularly updating AI systems to guard against new types of attacks are essential practices. These combined efforts are vital for boosting the security of AI systems, ensuring they remain resilient against malicious exploits, and maintaining their reliability and safety in various operational environments.
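Adversarial training, the first of these strategies, can be sketched in a few lines: at every step the current model is attacked with a method such as FGSM (as in the previous sketch), and the resulting adversarial batch is added to the training loss so the model learns to classify both clean and manipulated inputs. The model, the random stand-in data, and the hyperparameters below are all illustrative assumptions.

```python
# Illustrative adversarial training loop using FGSM-perturbed batches.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, epsilon):
    """Craft an FGSM adversarial version of batch x against the current model."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(100):
    x = torch.rand(32, 1, 28, 28)                  # stand-in batch of images
    y = torch.randint(0, 10, (32,))                # stand-in labels
    x_adv = fgsm(model, x, y, epsilon=0.05)        # attack the model as it currently is
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    opt.zero_grad(); loss.backward(); opt.step()
```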

Impaired Decision Making Due to Environmental Conditions:

Environmental conditions can significantly impair the decision-making capabilities of AI systems, particularly in applications where precise environmental perception is critical, such as autonomous vehicles and agriculture. A study reveals that adverse weather conditions like fog drastically reduce the effectiveness of sensors in autonomous vehicles (Kumar & Muhammad, 2023). These sensors struggle to detect and accurately assess their surroundings in such conditions, posing safety risks and reducing the reliability of the decisions made by these systems. For instance, rain or snow can similarly degrade the ability of these sensors to perceive the environment accurately, potentially leading to unsafe driving decisions.

In agriculture, AI applications also face challenges due to environmental factors. For example, research has highlighted difficulties in using AI for disease detection in cucumbers, where environmental factors such as lighting and background significantly affected the accuracy of disease identification. AI algorithms had to contend with images whose varying backgrounds could obscure symptoms of diseases like powdery mildew and bacterial infections, and these complications necessitated advanced processing techniques to isolate and correctly diagnose the diseases (Zhang et al., 2020)[xvii]. Similarly, a study on the use of AI for managing greenhouse environments indicated that fluctuating conditions such as humidity, temperature, and light variability affected the performance of AI systems. These systems, which are tasked with optimizing growth conditions and predicting plant health, require robust models that can adjust to rapid environmental changes to maintain accuracy[xviii].

These types of impaired decision-making are closely related to the broader AI risk of security vulnerabilities. While typically associated with threats like cyber-attacks or data breaches, security vulnerabilities in AI also encompass the system’s ability to operate securely and reliably under varying environmental conditions. The failure to perform under these conditions can be seen as a vulnerability, exposing the system and its dependents to potential harm and operational failures.

Ensuring that AI systems are robust against a range of environmental factors is not only a matter of improving performance but also a crucial aspect of securing these systems against environmental ‘attacks’ that can compromise their functionality. This necessitates advanced modelling and simulation during the development phase to predict and mitigate potential failures and the implementation of adaptive algorithms that can adjust to changes in environmental conditions dynamically. Such strategies will enhance the overall security and reliability of AI applications across various sectors, safeguarding them against both conventional security threats and environmental challenges.

Scalability and Performance Issues:

AI systems are designed to handle and analyse large volumes of data, but as the quantity and variety of data increase, maintaining the performance and accuracy of AI systems becomes more complex and resource-intensive. This can result in degraded performance or increased costs as systems scale up to meet demand. A prominent example of AI infrastructure failing to keep up with demand was the release of ChatGPT by OpenAI. Upon its launch, the service experienced significant challenges due to an unexpectedly high volume of user traffic[xix]. This surge in demand led to server overloads and slow response times, degrading the user experience and exposing a scalability problem in handling large-scale, real-time interactions.
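One common engineering response to demand spikes of this kind is to cap how many requests are served concurrently and shed the excess immediately (the familiar “at capacity, please try again later” message), so that heavy load degrades service gracefully instead of overwhelming the backend. The sketch below simulates that pattern; the request handler, capacity, and latencies are all hypothetical.

```python
# Illustrative load-shedding sketch: bounded concurrency with graceful rejection.
import asyncio

MAX_CONCURRENT = 100                          # capacity of the (hypothetical) backend
slots = asyncio.Semaphore(MAX_CONCURRENT)

async def handle_request(request_id: int) -> str:
    if slots.locked():                        # every slot is busy: shed this request
        return f"request {request_id}: at capacity, please retry later"
    async with slots:
        await asyncio.sleep(0.05)             # stand-in for model inference time
        return f"request {request_id}: ok"

async def main():
    # Simulate a burst of 500 simultaneous requests hitting 100 slots.
    results = await asyncio.gather(*(handle_request(i) for i in range(500)))
    served = sum("ok" in r for r in results)
    print(f"{served} served, {len(results) - served} shed")

asyncio.run(main())
```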

Although many companies are already investing in scalable AI infrastructure, there is an urgent need to accelerate these efforts. By 2025, an estimated 90% of online content will be created by AI technologies[xx], and a vast majority of global internet users will rely on AI for applications ranging from everyday interactions to complex decision-making processes. This rapid increase in reliance on AI requires robust and efficient scaling strategies to ensure AI systems can handle the growing demand without performance degradation. Enhanced investment in AI scalability will not only improve user experiences but will also be crucial for maintaining a competitive edge in a rapidly evolving digital landscape.

These technical and operational challenges demonstrate the need for continued research into more robust AI models, improved data quality and diversity, and more transparent system designs. Addressing these issues is critical not only for enhancing the reliability and safety of AI applications but also for ensuring they are used ethically and responsibly in society.

Ethical and Societal Implications:

Generative AI, while offering vast potential for innovation and efficiency, also raises significant ethical and societal concerns. The development and deployment of these technologies must be navigated carefully to mitigate risks and ensure they contribute positively to society.

    • As mentioned earlier, one of the most pressing issues with Generative AI systems is the risk of perpetuating or even exacerbating existing biases. This is particularly concerning in areas like hiring, law enforcement, and loan approvals, where biased AI could lead to unfair treatment of individuals based on race, gender, or other demographics.
    • Generative AI can synthesize highly realistic images, videos, and audio recordings, which poses a significant threat to personal privacy. The ability to create deepfake content that can convincingly replicate a person’s appearance and voice can be used maliciously to defame, blackmail, or manipulate individuals or political scenarios.
    • The ease with which Generative AI can produce convincing fake news articles, impersonate voices, and generate misleading images or videos can be exploited to spread misinformation at scale. This can have serious consequences for democratic processes, public safety, and societal trust.
    • Generative AI poses new challenges for intellectual property rights. The ability of these systems to produce content that might be indistinguishable from works created by humans raises questions about ownership, originality, and the economic rights of creators.
    • As AI technologies become capable of performing tasks traditionally done by humans, from writing articles to designing graphics, there is a growing concern about job displacement. While AI may create new types of jobs, the transition could be disruptive and uneven, impacting certain sectors more heavily than others.

A topic of significant debate is the impact of AI on human cognitive abilities. On one hand, there is concern that an increasing reliance on AI might diminish human cognition. This theory stems from the observation that as AI technologies take over more tasks, particularly those involving problem-solving, memory (such as navigation and memorization of facts), and basic skills, people might lose their inclination or ability to perform these tasks themselves. The concept, referred to as “digital amnesia,” suggests that reliance on technology for remembering information may impair our ability to recall details independently. There is also a concern that if AI systems not only provide answers but begin to make decisions on behalf of humans, people might become less inclined to engage critically with information. This could lead to a passive acceptance of data, diminishing the skill and habit of critical evaluation. On the other hand, there are arguments against the claim that AI negatively impacts human cognition. Some experts argue that offloading cognitive tasks to AI can free up human mental resources for more complex and creative tasks[xxi]. By automating routine tasks, individuals might focus on higher-level strategic thinking and innovation.

Relevance in the Pakistani Context

Much of the focus of this article has been on the Western context, which is where AI and Generative AI systems are being developed and primarily utilized. However, AI is becoming more relevant in Pakistan, which is why it is critical to understand its challenges and implications for local contexts. This is essential to ensure that AI technologies are adapted to meet the specific needs and conditions of Pakistan, rather than merely replicating models from Western countries. For instance, AI-driven solutions could help improve crop yields in agriculture through precise weather forecasting and soil analysis, or enhance access to education through personalized learning platforms, particularly in rural or underserved regions.

The adoption of AI in Pakistan must be accompanied by robust policies that address potential risks such as data privacy breaches, unemployment due to automation, and biases in AI algorithms that could exacerbate social inequalities. Establishing regulatory frameworks that prioritize transparency, accountability, and inclusivity in AI deployments will be crucial. To create effective safeguards, Pakistan can learn from various global contexts. For instance, examining how countries like the UK and Singapore have implemented AI safety protocols could provide valuable insights. These countries often collaborate with leading universities and think tanks to research potential AI risks and develop comprehensive strategies to mitigate them. By partnering with academic institutions and industry experts, Pakistan can adapt successful policies and tailor them to fit its socio-economic landscape. Pakistan could benefit from further engaging with international bodies such as the European Union, which has been at the forefront of proposing and enacting stringent AI regulations. Learning from these regulatory frameworks can help Pakistani policymakers understand the complexities of AI governance and the importance of ethical considerations.

It is essential for policymakers, technology experts, and stakeholders in Pakistan to collaborate closely to develop AI strategies that are both innovative and inclusive, ensuring that AI serves as a tool for sustainable development and social equity. This includes not only adopting best practices from around the world but also innovating in areas specific to Pakistan’s needs. Engaging local communities in these discussions can also foster greater understanding and support for AI technologies, making their integration into Pakistani society more effective and beneficial. Collaboration with local universities can facilitate a deeper understanding of AI’s potential impacts and solutions tailored to local needs, fostering a robust ecosystem that supports safe AI development and deployment.

Conclusion

As we navigate the complex landscape of generative AI technologies, the nuanced exploration in this article underscores the dual nature of AI’s potential and peril. I have delved into various risks associated with these technologies, such as hallucinations that mislead, model robustness that fails under unexpected conditions, and biases that perpetuate societal inequalities. Each of these challenges highlights a critical aspect of AI’s impact on society and necessitates a vigilant approach to its development and deployment.

The environmental implications of AI’s extensive energy demands and its role in escalating e-waste are areas that demand immediate attention. These environmental challenges not only question the sustainability of AI technologies but also their viability in the long run. Mitigating strategies such as improving energy efficiency and enhancing waste management practices are essential steps towards minimizing the ecological footprint of AI.

In the context of Pakistan, the integration of AI presents unique opportunities and distinct challenges. The call for robust regulatory frameworks and comprehensive educational initiatives is imperative. These measures will safeguard against potential misuse and ensure that AI technologies contribute positively to Pakistan’s socio-economic development.

The article concludes that while generative AI holds transformative potential, its successful integration into society hinges on a balanced approach that embraces innovation while rigorously addressing the myriad risks it poses. For Pakistan, and indeed for the global community, the path forward involves fostering AI literacy, enacting inclusive policies, and pursuing international collaboration to harness AI’s benefits responsibly. As we look towards the future, the collective effort of policymakers, technologists, and citizens will be crucial in shaping an AI-augmented world that upholds human values and strives for equitable progress.

References

Aithal, V., & Silver, J. (2023, March 30). Enhancing learners’ critical thinking skills with AI-assisted technology. Cambridge Blog. https://www.cambridge.org/elt/blog/2023/03/30/enhancing-learners-critical-thinking-skills-with-ai-assisted-technology/

Kumar, D., & Muhammad, N. (2023). Object detection in adverse weather for autonomous driving through data merging and YOLOv8. Sensors, 23(8471). https://doi.org/10.3390/s23208471


[i] https://www.ntsb.gov/investigations/accidentreports/reports/har1903.pdf

[ii] https://apnews.com/article/tesla-autopilot-los-angeles-d65c48236d4c9d4a420b6f8307669832#:~:text=The%20Tesla%2C%20which%20was%20using,and%20Tesla%20that%20are%20ongoing.

[iii] https://www.artificialintelligence-news.com/2020/10/28/medical-chatbot-openai-gpt3-patient-kill-themselves/

[iv] https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-

[v] https://www.forbes.com/sites/nelsongranados/2022/01/31/human-borgs-how-artificial-intelligence-can-kill-creativity-and-make-us-dumber/

[vi] https://www.nice.com/info/understanding-generative-ai-the-future-of-creative-artificial-intelligence

[vii] https://www.firstpost.com/tech/after-google-microsoft-in-trouble-for-copilot-generating-anti-semitic-stereotypes-13747667.html

[viii] https://www.theverge.com/2024/2/21/24079371/google-ai-gemini-generative-inaccurate-historical

[ix] https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[x] https://themarkup.org/show-your-work/2021/12/02/how-we-determined-crime-prediction-software-disproportionately-targeted-low-income-black-and-latino-neighborhoods

[xi] https://medium.com/@Joy.Buolamwini/response-racial-and-gender-bias-in-amazon-rekognition-commercial-ai-system-for-analyzing-faces-a289222eeced

[xii] http://gendershades.org/overview.html

[xiii] https://www.vaultelectricity.com/ai-energy-consumption-statistics/

[xiv] https://earth.org/the-green-dilemma-can-ai-fulfil-its-potential-without-harming-the-environment/

[xv] https://www.unep.org/

[xvi] https://fortune.com/2017/09/02/researchers-show-how-simple-stickers-could-trick-self-driving-cars/

[xvii] https://www.frontiersin.org/articles/10.3389/fpls.2020.00765/full.

[xviii] https://www.mdpi.com/2076-3417/13/13/7405

[xix] https://testbook.com/chat-gpt/chat-gpt-is-at-capacity-right-now

[xx] https://quidgest.com/en/blog-en/generative-ai-by-2025/

[xxi] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9329671/
