December 19, 2023 | Neha DP
OpenAI's Safety Shield: A Comprehensive Framework for Responsible AI

In a strategic move to address safety concerns around its cutting-edge AI models, OpenAI, the Microsoft-backed artificial intelligence company, unveiled a comprehensive framework on Monday. The framework includes a provision allowing the company's board to reverse safety decisions. OpenAI says it will deploy its latest technology only after confirming it is safe in high-risk areas such as cybersecurity and nuclear threats. The company is also establishing an advisory group tasked with reviewing safety reports and forwarding them to both executives and the board.

While ultimate decision-making authority lies with the executives, the board retains the power to overturn those decisions.

Since the launch of ChatGPT a year ago, the potential risks of AI have been at the forefront of discussions among AI researchers and the general public. While generative AI has captivated users with its ability to produce poetry and essays, it has also raised safety concerns, including the spread of disinformation and the manipulation of human behavior.

These concerns prompted a group of AI industry leaders and experts to call for a six-month pause in developing systems more powerful than OpenAI's GPT-4, citing potential societal risks. Reflecting that cautionary sentiment, a Reuters/Ipsos poll conducted in May found that more than two-thirds of Americans are concerned about the potential adverse effects of AI, and 61% believe it could pose a threat to civilization. In response to these apprehensions, OpenAI's new framework aims to establish a robust safety process, underscoring its commitment to responsible AI development and deployment.