In a significant development for U.S. policymakers, a committee of leaders and scholars from the Massachusetts Institute of Technology (MIT) has released a set of policy briefs offering a comprehensive framework for the governance of artificial intelligence (AI). The committee proposes extending existing regulatory and liability measures, aiming for a practical approach to overseeing AI that also promotes U.S. leadership in the sector. The primary policy paper, titled "A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector," suggests leveraging existing U.S. government entities to regulate AI tools and emphasizes aligning regulations with the specific purposes of AI applications.
The committee, led by Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, and Asu Ozdaglar, deputy dean of academics in the same college, seeks to address the challenges posed by AI technologies by balancing their potential societal benefits against the need to mitigate harm. The initiative, which includes multiple additional policy papers, responds to heightened interest in AI at a time of substantial industry investment. Notably, the European Union is also actively pursuing AI regulations, highlighting the global importance of addressing issues related to general and specific AI tools, misinformation, deepfakes, and surveillance.
Navigating the Regulatory Landscape: Defining Purpose, Intent, and Guardrails

The main policy brief advocates extending current policies to cover AI, emphasizing the use of existing regulatory agencies and legal liability frameworks. Drawing parallels with strict licensing laws in fields such as medicine, the brief argues that AI should be subject to regulations aligned with the intended purpose of the technology. It calls on providers to clearly define the purpose and intent of AI applications, so that the relevant regulations and regulators can be identified. The brief also acknowledges that AI systems are complex and exist at multiple levels, emphasizing the need for accountability across the entire "stack" of AI technologies.

The framework proposes establishing public standards for auditing new AI tools, whether such audits are initiated by the government, driven by users, or prompted by legal liability claims. It also contemplates the creation of a government-approved "self-regulatory organization" (SRO) agency, akin to FINRA, to ensure responsiveness and flexibility in regulating the rapidly evolving AI industry.
Promoting Ethical AI and Societal Benefits

Beyond regulatory measures, the policy framework encourages research on making AI beneficial to society. Various policy papers delve into specific regulatory issues, such as labeling AI-generated content and examining large language models. The committee aims to bridge the gap between excitement and concern about AI, advocating for adequate governance and oversight to accompany technological advancements. The MIT committee emphasizes the importance of academic institutions in offering expertise on the intersection of technology and society, underscoring their role in shaping effective and ethical AI governance for the nation and the world.