The report, an analysis of AI’s progress from 2016 to 2021, is the latest installment from AI100, a century-long study of artificial intelligence hosted by the Stanford University Institute for Human-Centered Artificial Intelligence. AI has demonstrated remarkable advances in language processing, computer vision, and pattern recognition, enhancing many aspects of daily life, from entertainment recommendations to medical diagnostics. These advances, however, bring a critical need to address potential negative impacts, such as algorithmic bias and the misuse of AI for deception.
Chaired by Michael Littman, a professor at Brown University, the panel, comprising computer scientists, public policy experts, and social scientists, has called for collaborative efforts to ensure that AI development remains aligned with societal welfare. The report addresses 14 key questions related to AI’s progress, challenges, risks, societal impacts, public perception, and the field’s future, as formulated by the AI100 standing committee, a group of prominent AI leaders.
The report underscores significant progress in AI subfields, driven by machine learning and deep learning technologies, leading to practical applications that once seemed unattainable. Notably, AI systems are contributing to medical diagnostics with precision comparable to that of human experts, enhancing research in genomics, and accelerating pharmaceutical discoveries. AI is also integrated into everyday consumer technologies, from voice assistants to advanced driver-assistance systems in vehicles.
The panel acknowledges less visible yet substantial advances in AI, such as the technology behind background image processing in video conferences, a feature widely adopted during the COVID-19 pandemic. In addressing AI’s risks, the report eschews dystopian scenarios, focusing instead on subtle yet serious issues already present today. The misuse of AI to create deepfakes, manipulate public opinion, or reinforce societal biases through decision-making systems is among the concerns highlighted. Particularly in sensitive areas like law enforcement and healthcare, the perception of AI as objective and neutral can perpetuate historical biases and discrimination.
The report emphasizes the importance of interdisciplinary collaboration in mitigating these risks, with contributions from psychology, public policy, and beyond. It illustrates a significant shift in the AI field, where expertise is no longer confined to computer science but increasingly draws on the insights of social scientists.
Looking ahead, the panel calls for a collective effort from governments, academia, and industry to guide AI’s development toward benefiting society as a whole.