Democratizing AI Governance: OpenAI Launches Initiative for Crowdsourced Model Guidelines

January 17, 2024 · Sayana Chandran

OpenAI has announced the creation of a Collective Alignment team of researchers and engineers, with the goal of incorporating public input into the governance of its future artificial intelligence (AI) models. The startup aims to ensure that its AI models align with the values of humanity by developing a systematic approach to collecting public feedback on model behavior and encoding it into OpenAI products and services. In a blog post, OpenAI stated its commitment to collaborating with external advisors and grant teams, and to actively recruiting research engineers from diverse technical backgrounds to contribute to the initiative.

This move extends OpenAI's public grant program, launched in May of the previous year, which funded individuals and organizations exploring democratic processes for determining the rules that govern AI systems. OpenAI showcased the diverse projects funded under the program, ranging from video chat interfaces to platforms facilitating crowdsourced audits of AI models. The code used in these projects has been made public, along with brief summaries of each proposal, underscoring OpenAI's stated commitment to transparency in its AI endeavors.

Despite OpenAI positioning these initiatives as separate from its commercial interests, the startup faces criticism from rivals, including Meta, alleging an attempt to influence AI industry regulation. OpenAI, led by CEO Sam Altman, maintains that the fast pace of AI innovation necessitates a collaborative approach to governance. The startup is currently under increased regulatory scrutiny, including a U.K. probe into its relationship with Microsoft, and has recently taken steps to mitigate regulatory risks in the EU concerning data privacy.

In a parallel effort to address concerns about potential misuse, OpenAI has announced collaborations with organizations to limit the ways its technology could be used to negatively influence elections. The startup is actively working on features to make AI-generated images easily distinguishable and on techniques to identify manipulated content, demonstrating its commitment to responsible AI use.