OpenAI putting “shiny products” above safety, says departing researcher
18 May 2024 at 05:44
Jan Leike, a key safety researcher at the firm behind ChatGPT, quit days after the launch of its latest AI model, GPT-4o
A former senior employee at OpenAI has said the company behind ChatGPT is prioritising “shiny products” over safety, revealing that he quit after a disagreement over key aims reached “breaking point”.
Jan Leike was a key safety researcher at OpenAI as its co-head of superalignment, a role focused on ensuring that powerful artificial intelligence systems adhere to human values and aims. His intervention comes before a global artificial intelligence summit in Seoul next week, where politicians, experts and tech executives will discuss oversight of the technology.