Documented Incidents and Potential Future Risks of AI Harm to Humans
1. Introduction:
Artificial Intelligence (AI) rapidly transforms numerous aspects of modern life, demonstrating its potential to enhance productivity, improve decision-making, and drive innovation across various sectors 1. AI's increasing prevalence is undeniable, from powering personalized recommendations on e-commerce platforms to assisting in complex medical diagnoses. However, alongside its transformative capabilities, there is a growing concern within the research community and broader society regarding the potential for AI systems to cause harm to humans 1.
This concern extends beyond theoretical possibilities, with documented instances highlighting the real-world implications of AI's deployment. This report aims to provide a comprehensive analysis of these documented incidents and explore the potential future risks of AI harming humans, as identified by researchers and documented in various sources.
Understanding the multifaceted nature of these harms is crucial for fostering the responsible development and deployment of AI technologies. The increasing reliance on AI in critical infrastructure and personal lives necessitates a thorough understanding of its potential negative consequences to ensure responsible development and deployment.
Integrating AI into healthcare 1, transportation 1, and even legal proceedings 6 signifies that failures or malicious use can lead to significant real-world impacts, underscoring the urgency of comprehending and addressing potential harms.
2. Documented Incidents of AI Harm:
The AI Incident Database is a critical resource for cataloging real-world incidents where AI systems have caused harm or near harm 8. Its mission is to learn from these experiences, similar to incident databases in aviation and computer security, to proactively prevent or mitigate adverse outcomes in the future.
2.1. Fraud and Scams:
Several documented incidents highlight the role of AI in enabling and amplifying fraudulent activities 8. One notable case involved a Canadian fraud ring that allegedly utilized AI voice cloning technology in a multi-year "grandparent scam," defrauding elderly Americans across 46 states of over $21 million 8.
This incident demonstrates how AI can create highly convincing impersonations, making it significantly easier to deceive vulnerable individuals. Deepfake technology has also impersonated the Prime Minister of Armenia in online scams 8. The ability to generate realistic fake images and videos allows scammers to create a semblance of legitimacy, thereby increasing the likelihood of their schemes succeeding.
Beyond targeting individuals, AI has facilitated significant financial fraud against organizations. A Hong Kong multinational finance company reportedly paid $25 million to fraudsters who used deepfake technology to impersonate the company's chief financial officer during a video conference call 6. Similarly, cybercriminals used deepfake audio to defraud the CEO of a U.K.-based energy company out of US$243,000 6.
These incidents underscore the sophisticated nature of AI-powered fraud, where even high-level executives can be deceived. Furthermore, deepfakes of prominent individuals have been used to promote fraudulent investment schemes, misleading potential investors into believing they are endorsed by trusted figures 6. Even the realm of online content has seen AI-driven deception, with an AI-generated persona known as the "Coochie Doctor" allegedly deceiving millions on TikTok with fake medical advice 8.
The Federal Trade Commission (FTC) has increasingly recognized AI's potential to facilitate fraud and impersonation, highlighting the need for measures to protect consumers from these evolving threats 3. The ease with which AI can now create convincing fake audio and video content has significantly amplified the scale and sophistication of fraudulent activities, making it considerably more challenging for individuals and organizations to effectively detect and prevent scams.
Traditional scams often rely on manual manipulation and limited impersonation capabilities. However, AI tools such as voice cloning and deepfakes automate and personalize these manipulations, rendering them more convincing and substantially more challenging to identify as fraudulent. Online platforms' speed and widespread reach further exacerbate this risk, allowing fraudulent schemes to proliferate rapidly and target a more significant number of victims.
2.2. Disinformation and Manipulation:
AI has also been implicated in several incidents involving disinformation and manipulation, posing a threat to democratic processes and social cohesion 8. An Iranian hacker group, Cotton Sandstorm, reportedly integrated AI into its cyber influence operations, targeting US election websites and media outlets 8. This suggests state-sponsored actors leverage AI to enhance their ability to interfere with democratic elections.
Furthermore, a Russian disinformation campaign allegedly used a fake news site and a deepfake video to falsely accuse Vice President Kamala Harris of a hit-and-run incident 8. This incident illustrates how AI can be used to create fabricated stories and disseminate them through seemingly legitimate channels to damage the reputation of public figures. In another instance, a Russian influence operation purportedly used AI to create a fake Kamala Harris campaign website and a rhino-hunting hoax, demonstrating the versatility of AI in generating misleading content for political purposes 8.
Researchers have also noted the broader use of AI at scale on social media platforms as a potent tool for political candidates to manipulate their way into power, with evidence suggesting its deployment to influence political opinion and voter behavior in various elections globally 9.
Combining AI with the rapidly improving ability to distort or misrepresent reality through deepfakes further exacerbates this issue. AI-driven information systems can potentially undermine democracy by causing a general breakdown in trust in information sources and by driving social division and conflict 9.
The capacity of AI to generate realistic text, images, and videos at scale makes it easier to create and disseminate fake news and propaganda. This can manipulate public opinion, sow discord, and undermine trust in legitimate sources of information, posing a significant threat to democratic processes and overall social stability.
2.3. Bias and Discrimination:
Documented incidents reveal numerous instances where algorithmic bias in AI systems has led to harmful and discriminatory outcomes across various domains 3. In law enforcement, the predictive policing biases inherent in tools like PredPol have raised concerns about discriminatory outcomes 8. These biases can lead to over-policing in specific communities, disproportionately affecting marginalized groups. Facial recognition technology has also been shown to exhibit bias, with studies indicating a higher likelihood of misclassifying gender in individuals with darker skin tones 9.
Furthermore, the FTC alleged that Rite Aid's use of facial recognition technology falsely tagged consumers as shoplifters, with women and people of color being particularly affected 3. This highlights how biased AI can lead to unwarranted suspicion and potential adverse interactions.
In healthcare, an AI-driven pulse oximeter was found to overestimate blood oxygen levels in patients with darker skin, potentially resulting in the undertreatment of hypoxia in these individuals 9.
This demonstrates how biases in training data can lead to disparities in healthcare outcomes. Algorithmic bias has also been observed in recruitment processes, with Amazon's AI recruitment tool favoring male candidates over females because it was trained on resumes from a predominantly male workforce 6. Similarly, healthcare algorithms have been shown to favor white patients over black patients due to biases in the data used for training 6.
Even seemingly innocuous applications like AI avatar generators have exhibited bias, with the AI avatar app Lensa producing sexualized images of women while generating diverse and professional avatars for men 6. Credit scoring algorithms have also been scrutinized for potentially offering lower credit limits to women compared to men with similar financial backgrounds 6. Furthermore, a tutoring company's AI-powered application software was found to discriminate against older job applicants 10.
These incidents underscore how AI systems can perpetuate and amplify societal biases, leading to unfair and discriminatory outcomes in various critical domains. AI models learn from the data they are trained on. If this data reflects historical or societal biases, the AI will likely reproduce and amplify these biases in its outputs and decisions, leading to discriminatory outcomes.
2.4. Safety and Security Concerns:
Several documented incidents illustrate safety and security concerns associated with AI systems, highlighting the potential for errors and malfunctions to cause harm 1. McDonald's ended an AI experiment with its drive-thru voice-ordering system after it made significant errors in processing customer orders, leading to operational challenges 6. In healthcare, the potential for AI errors to cause patient harm, misdiagnoses, and inappropriate treatment decisions is a significant concern 1.
The use of AI in self-driving cars, aircraft autopilot systems, and public transportation networks also carries inherent risks, with malfunctions potentially leading to accidents, injuries, and even fatalities 1. For instance, studies have indicated that some AI-powered systems in self-driving cars are less effective at detecting pedestrians with darker skin tones, potentially increasing the risk of accidents in racially diverse communities 1.
Malfunctioning AI or robots in industrial settings, such as robotic arms in manufacturing or automated forklifts in warehouses, have also been implicated in workplace injuries or fatalities 1. A historical near-miss incident involved a Soviet automated system in 1983 that sounded an alarm on five incoming nuclear missiles due to a malfunction, almost leading to global destruction 6.
These incidents demonstrate that AI systems, particularly in safety-critical applications, are susceptible to errors and malfunctions that can have severe and even catastrophic consequences. The complexity of AI systems and their reliance on vast amounts of data mean that mistakes can occur at various stages, from data collection and training to deployment and operation. In safety-critical areas, these errors can directly lead to physical harm or loss of life.
2.5. Privacy Violations and Data Breaches:
The data-intensive nature of AI systems makes them prime targets for cyberattacks and raises significant concerns regarding privacy violations and data breaches 3. OpenAI reported a leak of sensitive user data, including personal details, conversations, and login credentials, due to a suspected hack 6. Similarly, data breaches have affected users of AI-related services such as TaskRabbit, impacting over 3.75 million user records, and 23andMe, compromising the data of 6.9 million users 6. Clearview AI faced scrutiny for scraping billions of images from social media without user consent for its facial recognition technology, raising significant privacy concerns 6.
The FTC is also actively scrutinizing AI tools that may violate people's privacy through incentivizing commercial surveillance and collecting vast amounts of personal data 3. AI systems collect, store, and use personal information, which presents ongoing challenges for ensuring data security and protecting individual privacy. AI models require large datasets for training, often including sensitive personal information. Security vulnerabilities in these systems or unauthorized access can lead to massive data breaches and privacy violations.
2.6. Misuse of Generative AI:
The emergence of generative AI has introduced new avenues for harm through its potential for misuse 3. Aledo High School students allegedly generated and distributed deepfake nudes of seven female classmates, highlighting the harmful use of AI to create non-consensual intimate images 8. Child sexual abuse material (CSAM) has also been found to taint image generation systems, indicating the presence of illegal and deeply harmful content within these AI platforms 3.
ChatGPT reportedly produced false court case law presented by legal counsel in court, demonstrating how AI can generate misinformation with real-world legal ramifications 8. AI-driven news platforms have been accused of spreading unverified terrorism allegations, leading to the suspension of a Yale scholar 8. Deepfake scammers have impersonated figures like Chinese actor Jin Dong to mislead and defraud fans 8.
Even in the art world, concerns have arisen with an art museum in Jinan, Shandong, allegedly generating and displaying a sexualized childlike avatar with an adult body, drawing public criticism 8. Unauthorized AI-generated songs imitating artists like Céline Dion have also circulated online, raising copyright and ethical concerns 8.
The FTC is actively scrutinizing generative AI tools used for fraud, manipulation, or creating non-consensual imagery, recognizing the potential for significant harm 3. The ease with which generative AI can produce realistic text, images, audio, and video makes it a powerful tool for malicious actors to create and disseminate harmful content, ranging from fake news and scams to non-consensual pornography and child abuse material.
3. Potential Future Risks of AI Harm (Researcher Perspectives):
Beyond the documented incidents, researchers have identified several potential future risks associated with the continued advancement and deployment of AI technologies 4.
3.1. Existential Risks:
A significant concern among some researchers is the potential for advanced AI to pose an existential risk to humanity, leading to human extinction or an irreversible global catastrophe 17. This concern often centers around the hypothetical development of artificial superintelligence (ASI), an AI that surpasses human intelligence in virtually all cognitive domains 20. The argument posits that such a superintelligent AI might become uncontrollable, and its goals could diverge significantly from human interests, potentially leading to outcomes detrimental to humanity's survival 20.
Some researchers also worry about the possibility of a rapid "intelligence explosion," where an AI more intelligent than its creators could recursively improve itself at an exponentially increasing rate, potentially outpacing the human ability to understand or control it 20.
While the plausibility and the exact timeline of such existential risks are subjects of ongoing debate within the research community, with some experts expressing skepticism about near-term threats from current AI capabilities 20, others argue that even a small probability of such a catastrophic outcome warrants serious attention and proactive mitigation efforts 23.
The possibility of creating AI systems with intelligence far exceeding human capabilities raises fundamental questions about control and alignment of goals. Even if the probability of an existential catastrophe is low, the magnitude of the potential harm warrants proactive research and safety measures.
3.2. Malicious Use of AI by Bad Actors:
Researchers also foresee a significant risk in the potential for AI to be deliberately exploited by malicious actors for destructive purposes 4. This includes using AI to launch enhanced cyberattacks, such as sophisticated phishing campaigns, generating more effective malware, and circumventing traditional security measures 17.
There are concerns that AI could facilitate the creation of biological or chemical weapons by providing detailed step-by-step plans and potentially even aiding in their development 18. Furthermore, AI could be leveraged for large-scale disinformation and influence operations, creating compelling fake news, perpetrating widespread fraud, and executing sophisticated scams 17.
The weaponization of AI extends to cyber warfare and societal manipulation, where AI tools could disrupt critical infrastructure, sow discord, and undermine democratic processes 21. The accessibility and increasing sophistication of AI tools make them potentially powerful instruments for malicious actors to inflict harm on individuals, organizations, and societies. Just as AI can be used for beneficial purposes, it can also be leveraged by those with harmful intentions. The ability of AI to automate and amplify malicious activities poses a significant security threat.
3.3. Systemic Risks:
Beyond direct harms caused by individual AI systems or malicious actors, researchers are also concerned about broader systemic risks associated with the widespread adoption of advanced AI 2. One primary concern is the potential for widespread job displacement due to AI-powered automation across various industries, as AI systems can perform tasks currently done by humans 2.
This could lead to significant economic and social disruption. Another systemic risk is the exacerbation of socioeconomic inequality, as the benefits and control over AI technologies may become concentrated in the hands of a few influential developers and owners 2.
The potential for AI-driven trading algorithms to contribute to market volatility is also a concern 2. On a global scale, there is a risk of an AI divide, where some countries with more excellent resources and technological infrastructure benefit significantly more from AI advancements than others 26. Finally, the energy demands of training increasingly large and complex AI models could lead to harmful environmental impacts 4.
The transformative power of AI extends beyond individual applications and has the potential to reshape the labor market, the distribution of wealth, and even international relations. Understanding these systemic risks is crucial for developing appropriate policies and mitigation strategies.
3.4. Lack of Transparency and Explainability:
A significant challenge in ensuring AI's safety and responsible use is the lack of transparency and explainability in many advanced AI models, such as intense learning models 2. These models can often function as "black boxes," making it difficult to understand how they arrive at specific conclusions or decisions 21. This lack of transparency makes it challenging to identify potential biases, errors, or vulnerabilities within the system 21.
The field of "explainable AI" (XAI) is actively researching methods to make AI decision-making processes more transparent and interpretable 2. However, achieving widespread transparency in complex AI systems remains an ongoing challenge. If we cannot understand why an AI system makes a particular decision, it is challenging to identify and correct biases or errors. This lack of transparency can erode trust in AI, making it harder to ensure its responsible use.
3.5. Challenges in AI Alignment:
The AI alignment problem is a central focus of AI safety research concerning the challenge of designing AI systems whose objectives and behaviors are aligned with human values and intentions 27. Researchers highlight the difficulty of specifying the full range of desired and undesired behaviors in complex AI systems 35. Often, AI designers resort to using more straightforward proxy goals that are easier to define and optimize for 35.
However, there is a risk that these proxy goals may not perfectly reflect actual human values, leading to unintended and potentially harmful outcomes 35. Furthermore, defining "human values" is complex, as values can vary across cultures and individuals 34. Encoding these nuanced and sometimes conflicting values into AI systems presents a significant challenge. As AI systems become more autonomous and capable, ensuring their goals and actions align with human intentions becomes increasingly critical. Misaligned AI could pursue objectives that are harmful or contrary to human interests, even without malicious intent.
3.6. Weaponization of AI and Lethal Autonomous Weapons:
The increasing integration of AI into military and defense systems, including developing lethal autonomous weapon systems (LAWS), raises profound ethical and security concerns 2. LAWS are weapon systems that can locate, select, and engage human targets without direct human supervision 9. Ethicists and human rights advocates express deep concerns about the ethical implications of delegating the decision to take a human life to a machine that lacks human judgment, empathy, and the ability to understand context 39. There are also worries that algorithmic bias could lead to unintended and discriminatory targeting of individuals or groups 39.
The lack of clear accountability in cases where autonomous weapons cause harm to civilians or engage in unintended actions is another significant concern 39. Furthermore, the development and potential deployment of LAWS could lower the threshold for conflict and contribute to a destabilizing arms race among nations 39. United Nations Secretary-General António Guterres has called for a legally binding international instrument to prohibit LAWS that function without human control or oversight, highlighting such systems' political unacceptability and moral repugnance 38.
The decision to take a human life is inherently moral and requires human judgment and understanding. Delegating this decision to machines raises fundamental questions about accountability, the laws of war, and the very nature of humanity.
4. The Role of AI Bias in Perpetuating Harm:
AI bias is a critical factor that underpins and exacerbates various forms of harm discussed in this report 2. Bias in AI systems can originate from multiple sources. Data bias occurs when the data used to train AI models is not representative of the real world, contains historical prejudices, or suffers from flawed data collection processes 2.
For example, if a facial recognition system is primarily trained on images of lighter-skinned individuals, it may exhibit lower accuracy when identifying individuals with darker skin tones 10. Algorithmic bias can be introduced by the design and parameters of the algorithms themselves, even if the training data appears to be unbiased 14.
For instance, an algorithm might be designed to prioritize certain features over others in a way that inadvertently disadvantages specific groups. Human bias, also known as cognitive bias, can seep into AI systems through human subjective decisions during data labeling, model development, and other stages of the AI lifecycle 14. For example, if human annotators labeling images for a computer vision model hold unconscious biases, these biases can be reflected in the labeled data and subsequently learned by the AI.
The adverse outcomes resulting from AI bias are wide-ranging. In healthcare, biased AI can lead to lower accuracy in diagnosing diseases for minority groups 10, potentially delaying treatment or leading to misdiagnosis. In hiring and recruitment, biased algorithms can unfairly favor certain demographic groups, leading to discriminatory hiring practices 6.
Racial bias in criminal justice algorithms can result in higher risk scores for individuals from marginalized communities, potentially influencing sentencing and parole decisions 6. Biased facial recognition technology has been implicated in wrongful arrests, particularly of individuals with darker skin tones 3.
Even in seemingly creative applications, AI can perpetuate harmful stereotypes, as seen with language models associating certain professions with specific genders 10 and image generators misrepresenting disabled individuals in leadership roles 10. Identifying and mitigating AI bias is a complex undertaking.
The "black box" nature of some AI models makes it difficult to understand the factors influencing their decisions 21. Furthermore, achieving truly unbiased data is often challenging, as historical and societal biases are usually deeply embedded in the data we collect. AI bias is not merely a technical problem; it reflects and can amplify existing societal inequalities. Addressing AI bias requires a multi-faceted approach considering the issue's social, ethical, and technical dimensions, including careful data curation, algorithm design, and ongoing monitoring and auditing of deployed systems.
5. AI Manipulation and Deception as Potential Harms:
The ability of AI to understand and respond to human behavior also raises concerns about its potential for manipulation and deception 9. AI can be used for manipulative purposes in various contexts, including social media and online platforms, to subtly influence opinions and behaviors 9.
Personalized and targeted advertising and marketing campaigns powered by AI can exploit individual vulnerabilities to influence consumer choices 9. AI can be deployed in the political sphere to craft highly persuasive messages and micro-target voters, potentially swaying election outcomes 9. Even seemingly benign applications like child-oriented "emotions" that respond to children's emotions raise concerns about the potential manipulation of vulnerable individuals 52.
A specific technique, "prompt injection," demonstrates how AI behavior can be manipulated to generate dangerous or illegal content 15. By carefully crafting prompts, users can bypass the safety restrictions built into AI systems and trick them into providing instructions for harmful activities like building bombs, hotwiring cars, or creating illegal substances 15. The legal implications of manipulating AI systems are also being explored, with some legal scholars arguing that such actions could fall under existing cybercrime laws like the Computer Fraud and Abuse Act (CFAA) 15.
AI systems, particularly in their current stage of development, often struggle to understand the broader context and discern the underlying motives behind user inputs, making them susceptible to various forms of manipulation 49. The ethical implications of hidden influence and exploiting cognitive or emotional vulnerabilities through AI are significant concerns, as they can undermine individual autonomy and societal well-being 50. As AI systems become more sophisticated in interacting with humans, the risk of being used to subtly influence or deceive people increases. This raises ethical questions about consent, autonomy, and potential harm.
6. Autonomous Weapons Systems: Ethical and Humanitarian Concerns:
The development and potential deployment of lethal autonomous weapons systems (LAWS) present profound ethical and humanitarian challenges that demand careful international consideration and regulation 9. The core ethical dilemma revolves around delegating the decision to take a human life to a machine 39. Critics argue that machines lack the human judgment, empathy, and contextual understanding necessary to make such critical decisions ethically and by international humanitarian law 39.
There are also serious concerns that algorithmic bias could lead to unintended and discriminatory targeting of individuals or groups, potentially exacerbating existing inequalities in conflict situations 39. Furthermore, the lack of clear accountability in cases where autonomous weapons cause harm to civilians or engage in unintended actions raises complex legal and moral questions 39.
The prospect of LAWS potentially lowering the threshold for engaging in conflict and contributing to a destabilizing arms race among nations is another significant worry 39. The international community is grappling with these concerns, with many advocating for a legally binding international instrument to prohibit LAWS that function without meaningful human control or oversight 38.
The decision to take a human life is inherently moral and requires human judgment and understanding. Delegating this decision to machines raises fundamental questions about accountability, the laws of war, and the very nature of humanity.
7. AI Failures in Critical Systems and Their Consequences:
Instances, where AI systems have failed in critical applications, underscore the potential for severe and far-reaching consequences 1. In healthcare, failures can manifest as misdiagnoses or inappropriate treatment recommendations. The IBM Watson for Oncology case, which provided inaccurate and unsafe treatment recommendations due to reliance on synthetic data, exemplifies the critical importance of data quality and diversity in AI-driven healthcare solutions 11.
More directly, errors in AI-powered diagnostic tools could lead to delayed or incorrect treatments, potentially causing patient harm 1. In transportation, accidents involving self-driving cars, such as the Uber incident where a pedestrian was killed, highlight the risks associated with relying on AI for autonomous navigation and decision-making 1.
Potential failures in aircraft autopilot systems could have catastrophic consequences: 1. The legal domain has also seen AI failures, with chatbots generating fictitious legal cases submitted to the court, demonstrating the risks of relying on AI for critical legal research without proper human oversight 6. Algorithmic hiring tools, like the one developed by Amazon that exhibited gender bias, can perpetuate societal inequalities and lead to discriminatory hiring practices 11.
Financial systems are not immune, as illustrated by Zillow's AI algorithm that overestimated home values, leading to significant economic losses for the company 11. Even in customer service, chatbots provide incorrect or misleading information, as in the case of Air Canada's chatbot misrepresenting bereavement fare policies, which can result in financial repercussions and damage customer trust 11.
These failures emphasize the critical need for high data quality, rigorous testing and validation procedures, and appropriate human oversight in developing and deploying AI systems in critical applications 5. The "context problem" in AI, where systems trained in one specific environment or scenario may fail when applied in a different context, further underscores the need to consider the operational environment of AI systems 56. When AI is deployed in high-stakes environments like healthcare or transportation, errors or malfunctions can lead to harm or loss of life. Therefore, ensuring the reliability and safety of these systems is paramount.
8. AI Safety Research and the Alignment Problem:
The growing awareness of the potential harms associated with AI has spurred significant research efforts in AI safety 21. A central challenge in this field is the "AI alignment problem," which ensures that increasingly capable AI systems act according to human values and intentions 27. Various research approaches and initiatives are being explored to address this challenge. Researchers are developing foundational benchmarks and methods to assess the safety of AI systems 60.
The RICE principles (Robustness, Interpretability, Controllability, and Ethicality) have been identified as key aspects of AI alignment 32. Reinforcement learning from human feedback (RLHF) trains AI models based on human preferences 34. In contrast, inverse reinforcement learning (IRL) aims to infer human intentions from observed behavior 36. The concept of trustworthy AI, which emphasizes human oversight, explainability, and the mitigation of bias, is also a significant focus 21.
Initiatives like CERTAIN (Centre for European Research in Trusted AI) are working towards establishing guarantees for safe AI 29. OpenAI has launched a "Superalignment" initiative to ensure that AI systems that are much more intelligent than humans follow human intent 34. The International AI Safety Report represents a collaborative effort involving experts from numerous countries to advance a shared international understanding of AI risks and capabilities 18.
Organizations like the Stanford HAI are crucial in providing data and research insights on AI progress and its societal impact 37. As AI systems become more powerful, the importance of ensuring their safety and alignment with human goals increases. AI safety research aims to proactively address potential risks and develop methods to control and guide AI development in a beneficial direction.
9. Conclusion:
This report has outlined documented incidents and potential future risks of AI harming humans, as identified by researchers and various sources. Documented incidents span a range of categories, including fraud and scams, disinformation and manipulation, bias and discrimination, safety and security concerns, privacy violations, and the misuse of generative AI.
These real-world examples underscore the tangible ways AI is already causing harm in society. Furthermore, researchers have highlighted potential future risks that warrant serious attention, such as existential threats from highly advanced AI, the malicious use of AI by bad actors, systemic risks affecting the economy and society, the challenges posed by the lack of transparency and explainability in many AI models, the fundamental difficulties in ensuring AI alignment with human values, and the profound ethical and humanitarian concerns surrounding the weaponization of AI and the development of lethal autonomous weapons.
The pervasive role of AI bias in exacerbating many of these harms has also been emphasized. Ongoing and interdisciplinary research in AI safety and ethics is crucial for understanding these risks and developing effective mitigation strategies.
Proactive measures, including establishing robust regulations, formulating clear ethical guidelines, and developing technical solutions to enhance AI safety and alignment, are essential to ensure the responsible development and deployment of AI. Ultimately, a balanced approach is needed that harnesses the vast potential benefits of AI while diligently working to minimize its potential for harm to humanity.
Credit: Google Research 1.5
References
1. AI Failures and Personal Injuries - Law Office of Benjamin B. Grandy, accessed March 13, 2025, https://www.bbgrandy.com/blog/ai-failures-and-personal-injuries
2. 14 Risks and Dangers of Artificial Intelligence (AI) - Built In, accessed March 13, 2025, https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence
3. AI and the Risk of Consumer Harm | Federal Trade Commission, accessed March 13, 2025, https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2025/01/ai-risk-consumer-harm
4. Recognize Potential Harms and Risks | National Telecommunications and Information Administration, accessed March 13, 2025, https://www.ntia.gov/issues/artificial-intelligence/ai-accountability-policy-report/requisites-for-ai-accountability-areas-of-significant-commenter-agreement/recognize-potential-harms-and-risks
5. 'Insufficient governance of AI' is the No. 2 patient safety threat in 2025 - Radiology Business, accessed March 13, 2025, https://radiologybusiness.com/topics/artificial-intelligence/insufficient-governance-ai-no-2-patient-safety-threat-2025
6. AI Incidents: A Rising Tide of Trouble | EWSolutions, accessed March 13, 2025, https://www.ewsolutions.com/ai-incidents-a-rising-tide-of-trouble/
7. How is Artificial Intelligence (AI) impacting Personal Injury Cases? - Mike Morse Law Firm, accessed March 13, 2025, https://www.855mikewins.com/how-is-artificial-intelligence-ai-impacting-personal-injury-cases/
8. Welcome to the Artificial Intelligence Incident Database, accessed March 13, 2025, https://incidentdatabase.ai/
9. Threats by artificial intelligence to human health and human existence - PMC, accessed March 13, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10186390/
10. AI Bias: 8 Shocking Examples and How to Avoid Them | Prolific, accessed March 13, 2025, https://www.prolific.com/resources/shocking-ai-bias
11. Post #8: Into the Abyss: Examining AI Failures and Lessons Learned, accessed March 13, 2025, https://www.ethics.harvard.edu/blog/post-8-abyss-examining-ai-failures-and-lessons-learned
12. AI Negligence and the Growing Risk of Harm - Fielding Law Firm, APC, accessed March 13, 2025, https://fieldinglawfirm.com/ai-negligence/
13. Injured By AI? Artificial Intelligence and Product Liability - Darrell Castle, accessed March 13, 2025, https://darrellcastle.com/blog/posts/injured-by-ai-artificial-intelligence-and-product-liability/
14. What are the risks of artificial intelligence (AI)? - Tableau, accessed March 13, 2025, https://www.tableau.com/data-insights/ai/risks
15. When Manipulating AI Is a Crime | Lawfare, accessed March 13, 2025, https://www.lawfaremedia.org/article/when-manipulating-ai-is-a-crime
16. AI Failures: Learning from Common Mistakes and Ethical Risks - Univio, accessed March 13, 2025, https://www.univio.com/blog/the-complex-world-of-ai-failures-when-artificial-intelligence-goes-terribly-wrong/
17. 10 AI dangers and risks and how to manage them | IBM, accessed March 13, 2025, https://www.ibm.com/think/insights/10-ai-dangers-and-risks-and-how-to-manage-them
18. General purpose AI could lead to an array of new risks, experts say in a report ahead of AI summit, accessed March 13, 2025, https://apnews.com/article/artificial-intelligence-research-danger-risk-safeguards-7b9db4ca69a89a4dd04e05a4294a3dfd
19. en.wikipedia.org, accessed March 13, 2025, https://en.wikipedia.org/wiki/Existential_risk_from_artificial_intelligence#:~:text=Advanced%20AI%20could%20generate%20enhanced,the%20AI%20itself%20if%20misaligned.
20. Existential risk from artificial intelligence - Wikipedia, accessed March 13, 2025, https://en.wikipedia.org/wiki/Existential_risk_from_artificial_intelligence
21. What Is AI Safety? - IBM, accessed March 13, 2025, https://www.ibm.com/think/topics/ai-safety
22. AI poses no existential threat to humanity – new study finds, accessed March 13, 2025, https://www.bath.ac.uk/announcements/ai-poses-no-existential-threat-to-humanity-new-study-finds/
23. Three key misconceptions in the debate about AI and existential risk, accessed March 13, 2025, https://thebulletin.org/2024/07/three-key-misconceptions-in-the-debate-about-ai-and-existential-risk/
24. Top 6 AI Security Risks and How to Defend Your Organization - Perception Point, accessed March 13, 2025, https://perception-point.io/guides/ai-security/top-6-ai-security-risks-and-how-to-defend-your-organization/
25. Is AI an Existential Risk? Q&A with RAND Experts | RAND, accessed March 13, 2025, https://www.rand.org/pubs/commentary/2024/03/is-ai-an-existential-risk-qa-with-rand-experts.html
26. Examining the capabilities and risks of advanced AI systems, accessed March 13, 2025, https://www.brookings.edu/articles/examining-advanced-ai-capabilities-and-risks/
27. 15 Potential Artificial Intelligence (AI) Risks - WalkMe, accessed March 13, 2025, https://www.walkme.com/blog/ai-risks/
28. AI Risks: Exploring the Critical Challenges of Artificial Intelligence | Lakera – Protecting AI teams that disrupt the world., accessed March 13, 2025, https://www.lakera.ai/blog/risks-of-ai
29. AI Safety Report published as a prelude to the AI Action Summit in Paris, accessed March 13, 2025, https://www.dfki.de/en/web/news/ai-safety-report
30. NIST AI Risk Management Framework: The Ultimate Guide - Hyperproof, accessed March 13, 2025, https://hyperproof.io/navigating-the-nist-ai-risk-management-framework/
31. 7 AI Security Risks You Can't Ignore - Wiz, accessed March 13, 2025, https://www.wiz.io/academy/ai-security-risks
32. What Is AI Alignment? - IBM, accessed March 13, 2025, https://www.ibm.com/think/topics/ai-alignment
33. Bias in AI - Chapman University, accessed March 13, 2025, https://www.chapman.edu/ai/bias-in-ai.aspx
34. AI Alignment - The Decision Lab, accessed March 13, 2025, https://thedecisionlab.com/reference-guide/computer-science/ai-alignment
35. AI alignment - Wikipedia, accessed March 13, 2025, https://en.wikipedia.org/wiki/AI_alignment
36. What is the AI Alignment Problem, and why is it important? | by Sahin Ahmed, Data Scientist, accessed March 13, 2025, https://medium.com/@sahin.samia/what-is-the-ai-alignment-problem-and-why-is-it-important-15167701da6f
37. The AI Regulatory Alignment Problem | Stanford HAI, accessed March 13, 2025, https://hai.stanford.edu/policy-brief-ai-regulatory-alignment-problem
38. Lethal Autonomous Weapon Systems (LAWS) – UNODA, accessed March 13, 2025, https://disarmament.unoda.org/the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/
39. Problems with autonomous weapons - Stop Killer Robots, accessed March 13, 2025, https://www.stopkillerrobots.org/stop-killer-robots/facts-about-autonomous-weapons/
40. The weaponization of artificial intelligence: What the public needs to be aware of - PMC, accessed March 13, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10030838/
41. Lethal Autonomous Weapons 101 - Third Way, accessed March 13, 2025, https://www.thirdway.org/memo/lethal-autonomous-weapons-101
42. The Risks of Artificial Intelligence in Weapons Design | Harvard Medical School, accessed March 13, 2025, https://hms.harvard.edu/news/risks-artificial-intelligence-weapons-design
43. What is AI Safety? Importance, Key Concepts, Risks & Framework - Security, accessed March 13, 2025, https://securiti.ai/ai-safety/
44. Understanding AI Harms: An Overview | Center for Security and Emerging Technology, accessed March 13, 2025, https://cset.georgetown.edu/article/understanding-ai-harms-an-overview/
45. www.ibm.com, accessed March 13, 2025, https://www.ibm.com/think/topics/shedding-light-on-ai-bias-with-real-world-examples#:~:text=Examples%20of%20AI%20bias%20in%20real%20life,-As%20society%20becomes&text=Healthcare%E2%80%94Underrepresented%20data%20of%20women,black%20patients%20than%20white%20patients.
46. What is AI bias? Causes, effects, and mitigation strategies - SAP, accessed March 13, 2025, https://www.sap.com/resources/what-is-ai-bias
47. What is AI Bias? - Understanding Its Impact, Risks, and Mitigation Strategies - Holistic AI, accessed March 13, 2025, https://www.holisticai.com/blog/what-is-ai-bias-risks-mitigation-strategies
48. Does AI Have a Bias Problem? | NEA - National Education Association, accessed March 13, 2025, https://www.nea.org/nea-today/all-news-articles/does-ai-have-bias-problem
49. Understanding AI Manipulation: A Case Study on the 'Agitation' Method - ChatGPT, accessed March 13, 2025, https://community.openai.com/t/understanding-ai-manipulation-a-case-study-on-the-agitation-method/594003
50. Regulating Manipulative Artificial Intelligence - SCRIPTed, accessed March 13, 2025, https://script-ed.org/article/regulating-manipulative-artificial-intelligence/
51. [2303.09387] Characterizing Manipulation from AI Systems - arXiv, accessed March 13, 2025, https://arxiv.org/abs/2303.09387
52. On manipulation by emotional AI: UK adults' views and governance implications - PMC, accessed March 13, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC11190365/
53. NIST Identifies Types of Cyberattacks That Manipulate Behavior of ..., accessed March 13, 2025, https://www.nist.gov/news-events/news/2024/01/nist-identifies-types-cyberattacks-manipulate-behavior-ai-systems
54. AI Risks and Trustworthiness - NIST AIRC - National Institute of Standards and Technology, accessed March 13, 2025, https://airc.nist.gov/airmf-resources/airmf/3-sec-characteristics/
55. The Root Causes of Failure for Artificial Intelligence Projects ... - RAND, accessed March 13, 2025, https://www.rand.org/content/dam/rand/pubs/research_reports/RRA2600/RRA2680-1/RAND_RRA2680-1.pdf
56. The Context Problem in Artificial Intelligence - Communications of the ACM, accessed March 13, 2025, https://cacm.acm.org/opinion/the-context-problem-in-artificial-intelligence/
57. International AI Safety Report 2025 - GOV.UK, accessed March 13, 2025, https://www.gov.uk/government/publications/international-ai-safety-report-2025
58. AI Index | Stanford HAI, accessed March 13, 2025, https://hai.stanford.edu/research/ai-index-report
59. International AI Safety Report - MIT Media Lab, accessed March 13, 2025, https://www.media.mit.edu/publications/international-ai-safety-report/
60. Research Projects | CAIS - Center for AI Safety, accessed March 13, 2025, https://www.safe.ai/work/research
Comments
Post a Comment