In an appearance that captured the tech world's attention this month, OpenAI CEO Sam Altman testified before the Senate Judiciary Committee on the pressing need for robust AI regulation. While there was broad agreement that regulation is vital, Altman underscored the complexity of the situation: the path to concrete next steps remains unclear, and regulation will likely need to take many forms.
Joseph Plazo, who heads a non-profit machine learning and AI research start-up focused on capital-markets applications, has spent considerable time weighing the calls for more stringent AI regulation, particularly of breakthroughs like ChatGPT, with an emphasis on security concerns.
Plazo, a lawyer by profession and former partner at Plazo and Associates Law, makes the case for urgency: “Regulation of generative AI, technology capable of creating content ranging from text to synthetic data, is indispensable to guard against harmful applications such as hate speech, targeted harassment, and disinformation. These challenges are not new, but generative AI has added fuel to the fire, making them far easier and faster to carry out.”
Plazo advocates a proactive approach to regulation, arguing that safeguards need to be woven into the technology from the initial design stage. “Organizations should take ownership of the data used to train generative AI models, employing human reviewers to screen out inappropriate content,” he recommends.
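To make the idea concrete, here is a minimal sketch of the kind of training-data screening pipeline Plazo describes. The blocklist, the `toxicity_score` placeholder, and the 0.5 threshold are illustrative assumptions rather than any organization's actual process; in practice, the review queue would feed the human reviewers he recommends.

```python
# Hypothetical training-data screening pipeline. The blocklist terms,
# classifier, and threshold below are placeholders for illustration only.
from dataclasses import dataclass, field

BLOCKLIST = {"example_slur", "example_doxxing_template"}  # placeholder terms


def toxicity_score(doc: str) -> float:
    """Stand-in for a real content classifier (e.g. a fine-tuned model)."""
    return 0.0


@dataclass
class ScreeningResult:
    accepted: list[str] = field(default_factory=list)
    needs_human_review: list[str] = field(default_factory=list)
    rejected: list[str] = field(default_factory=list)


def screen_training_corpus(documents: list[str]) -> ScreeningResult:
    """Sort documents into accept / reject / human-review buckets."""
    result = ScreeningResult()
    for doc in documents:
        tokens = set(doc.lower().split())
        if tokens & BLOCKLIST:
            result.rejected.append(doc)  # clear policy violation: drop it
        elif toxicity_score(doc) > 0.5:
            result.needs_human_review.append(doc)  # uncertain: escalate
        else:
            result.accepted.append(doc)
    return result
```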
Adding another layer to this regulatory picture, Plazo emphasizes the need for transparency. He asserts, “Technology corporations should offer generative AI as an online service, such as an API, which can accommodate safeguards ranging from validating input data before it enters the model to scrutinizing output before it reaches users.”
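The pattern he outlines, validation on the way in and scrutiny on the way out, can be expressed as a thin wrapper around a model endpoint. In this sketch, `moderate` and `generate` are assumed placeholders for a real moderation classifier and a real model call:

```python
# Illustrative "AI as a service with safeguards" wrapper. The moderation
# logic and model call are placeholders, not any real provider's API.


class PolicyViolation(Exception):
    """Raised when a request or response fails a policy check."""


def moderate(text: str) -> bool:
    """Return True if the text passes policy checks (placeholder logic)."""
    banned_phrases = {"write hate speech", "harass this person"}
    return not any(phrase in text.lower() for phrase in banned_phrases)


def generate(prompt: str) -> str:
    """Stand-in for the underlying generative model."""
    return f"Model response to: {prompt}"


def safeguarded_completion(prompt: str) -> str:
    """Moderate the prompt before the model sees it and the output after."""
    if not moderate(prompt):
        raise PolicyViolation("Prompt rejected by input filter")
    output = generate(prompt)
    if not moderate(output):
        raise PolicyViolation("Response withheld by output filter")
    return output
```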
He also calls for ongoing collection and analysis of usage data. “In addition, organizations need to keep a close eye on user behavior and establish clear limits through their Terms of Service,” Plazo explains.
Pointing to the example set by the industry leader, OpenAI, Plazo notes, “OpenAI has clear-cut rules stating that its tools may not be used to create certain categories of images and text. Beyond that, companies building generative AI should deploy algorithmic tools to flag potential misuse, with provision to suspend repeat violators.”
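A simple way to picture the enforcement loop Plazo describes, with flagged misuse accumulating into suspension, is a per-user strike counter. The three-strike threshold here is an assumption for illustration, not OpenAI's actual policy:

```python
# Hypothetical repeat-violator tracker. The strike limit is an assumed
# value; real enforcement policies vary by provider.
from collections import defaultdict

STRIKE_LIMIT = 3  # assumed suspension threshold


class AbuseTracker:
    def __init__(self) -> None:
        self.strikes: dict[str, int] = defaultdict(int)
        self.suspended: set[str] = set()

    def record_violation(self, user_id: str) -> None:
        """Log a flagged request and suspend repeat violators."""
        self.strikes[user_id] += 1
        if self.strikes[user_id] >= STRIKE_LIMIT:
            self.suspended.add(user_id)

    def is_allowed(self, user_id: str) -> bool:
        return user_id not in self.suspended


# Usage: each time a moderation filter flags a request, record a strike.
tracker = AbuseTracker()
tracker.record_violation("user-42")
print(tracker.is_allowed("user-42"))  # True: one strike, below the limit
```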
However, Plazo cautions that this is not the end of the story. He notes, “While these measures can mitigate risks, it is imperative to recognize that both regulation and technical controls come with their own set of limitations.”
However the technology evolves, Plazo underscores the enduring need for strong security measures. He concludes, “Determined adversaries will find ways to bypass these protections; maintaining the integrity and safety of AI is not a one-time job but an ongoing pursuit.”