AI’s Trust Problem
Twelve persistent risks of AI that are driving skepticism.

May 03, 2024

Illustration by Gabriel Corbera

Summary. As AI becomes more powerful, it faces a major trust problem. Consider 12 leading concerns: disinformation, safety and security, the black box problem, ethical concerns, bias, instability, hallucinations in LLMs, unknown unknowns, potential job losses and social inequalities, environmental impact, industry concentration, and state overreach. Each of these issues is complex and not easy to solve. But there is one consistent approach to addressing the trust gap: training, empowering, and including humans to manage AI tools.

With tens of billions invested in AI last year and leading players such as OpenAI looking for trillions more, the tech industry is racing to add to the pileup of generative AI models. The goal is to steadily demonstrate better performance and, in doing so, close the gap between what humans can do and what can be accomplished with AI.