TLDR
- OpenAI experienced a security breach in early 2023 where a hacker gained access to internal messaging systems
- The hacker stole details about the design of OpenAI’s AI technologies from employee discussions, but not the core AI code
- OpenAI did not publicly disclose the hack or report it to law enforcement, believing it wasn’t a national security threat
- The incident raised concerns among some employees about potential vulnerabilities to foreign adversaries like China
- The breach has reignited debates about AI security, transparency, and potential national security implications
It has come to light that OpenAI, the creator of ChatGPT, suffered a significant security breach in early 2023.
According to a report by the New York Times, a hacker gained access to the company’s internal messaging systems and stole details about the design of OpenAI’s artificial intelligence technologies.
The breach, which occurred in April 2023, allowed the intruder to access an online forum where employees discussed OpenAI’s latest technologies.
While the hacker did not penetrate the systems where the company houses and builds its AI, they were able to lift sensitive details from internal discussions. Importantly, the core AI code, OpenAI’s most prized asset, remained secure.
OpenAI executives disclosed the incident to employees during an all-hands meeting at the company’s San Francisco offices and informed the board of directors.
However, they decided against making the news public or reporting it to law enforcement agencies such as the FBI. The company’s rationale was that no information about customers or partners had been stolen and that the hacker appeared to be a private individual with no known ties to foreign governments.
This decision has raised questions about transparency and security practices in the rapidly evolving field of artificial intelligence. The incident has also reignited concerns about the potential vulnerabilities of AI companies to foreign adversaries, particularly China.
Leopold Aschenbrenner, a former OpenAI technical program manager, sent a memo to the company’s board following the breach.
He argued that OpenAI was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets. Aschenbrenner, who was later dismissed from the company for what it described as unrelated reasons, worried that OpenAI’s security measures might not be robust enough to stop foreign actors from stealing key secrets if they infiltrated the company.
OpenAI has disputed these claims. Liz Bourgeois, an OpenAI spokesperson, stated, “We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his separation.” She added, “While we share his commitment to building safe AGI, we disagree with many of the claims he has since made about our work. This includes his characterizations of our security, notably this incident, which we addressed and shared with our board before he joined the company.”
The incident highlights the delicate balance AI companies must strike between openness and security.
While some companies, like Meta, freely share their AI designs as open-source software, others are taking a more cautious approach. OpenAI, along with competitors such as Anthropic and Google, has been adding guardrails to its AI applications before offering them to the public, aiming to prevent misuse and other potential problems.
Matt Knight, OpenAI’s head of security, emphasized the company’s commitment to security:
“We started investing in security years before ChatGPT. We’re on a journey not only to understand the risks and stay ahead of them, but also to deepen our resilience.”
The breach has also brought attention to the broader issue of AI’s potential impact on national security. While current AI systems are primarily used as work and research tools, there are concerns about future applications that could pose more significant risks.
Some researchers and national security leaders argue that even if the mathematical algorithms at the heart of current AI systems are not dangerous today, they could become so in the future.
Susan Rice, President Biden’s former domestic policy adviser and national security adviser under President Barack Obama, highlighted the importance of taking potential risks seriously:
“Even if the worst-case scenarios are relatively low probability, if they are high impact then it is our responsibility to take them seriously. I do not think it is science fiction, as many like to claim.”
In response to growing concerns, OpenAI has recently created a Safety and Security Committee to explore how it should handle the risks posed by future technologies. The committee includes Paul Nakasone, a former Army general who led the National Security Agency and Cyber Command.