Security issues plague OpenAI as scrutiny increases
OpenAI is in an unwelcome spotlight this week after a significant security flaw in its ChatGPT app was discovered and a previously undisclosed attack came to light.
The security flaw was first exposed on July 2, when engineer Pedro José Pereira Vieito pointed out on Twitter/X that the Mac version of the ChatGPT app was storing users' conversations in plain text rather than encrypting them, meaning anyone with access to the machine could read them with no effort. The app can be downloaded only from OpenAI's website, meaning it does not have to go through Apple's security protocols, which would have prevented this.
The company has since patched the app, encrypting the conversations.
Meanwhile, on Thursday, the New York Times reported that the company had been the victim of a hack in which the attacker gained access to OpenAI's internal messaging systems, allowing them to obtain details about the company's technologies. OpenAI had not previously disclosed the breach publicly, but did inform employees in April 2023.
The hacker was not believed to be associated with a foreign government, and the company did not alert the FBI or other law enforcement agencies.
The breach led Leopold Aschenbrenner, an OpenAI technical program manager, to send a memo to the board of directors expressing concern that the company was not doing enough to prevent foreign governmental adversaries from stealing its secrets. Aschenbrenner was fired from OpenAI this spring.
The company told the Times that the dismissal was not connected to the memo; according to The Information, he was fired for allegedly leaking information to journalists.
The revelation of these security gaps comes as OpenAI's dominance in the world of AI continues to grow. The company is heavily backed by Microsoft and has signed a growing number of deals with media companies to incorporate their content into its large language models.