OpenAI is on the lookout for an "insider risk investigator" to beef up its defenses against internal security threats, as revealed in a company job listing first noticed by MS Power User. This sleuth's mission? To help shield the organization's assets by scrutinizing unusual activities, fostering a secure culture, and collaborating with different departments to squash risks. According to the Wayback Machine, the job has been posted since mid-January.
The listing claims the detective's expertise will be a linchpin in protecting OpenAI from internal risks, ultimately contributing to the broader societal benefits of artificial intelligence.
Essentially, it appears OpenAI is fed up with the spate of high-profile leaks about its contentious tech, covering everything from major business decisions to internal squabbles and, oh yes, the occasional customer data spill.
Take, for instance, the rollercoaster ride of CEO Sam Altman's dismissal and rehiring last year, a turbulent period marked by insider sources dishing out revelations and glimpses into OpenAI's highly peculiar company culture. Board drama unfolded, and the bombshell dropped that Microsoft, OpenAI's main investor, was blindsided by Altman's ousting. And that's not even delving into vivid accounts of OpenAI's chief scientist Ilya Sutskever burning effigies and leading peculiar chants at the company.
The most cringe-worthy moment was the leak of details about an experimental and hush-hush AI project dubbed "Q*." In November, Reuters and The Information reported that OpenAI leaders might have freaked out over the project, possibly contributing to Altman's dismissal. Despite its name, OpenAI is in reality a for-profit enterprise that has historically kept the workings of its key products under tight wraps.
However, given the flood of insider insights into the company, it's evident that OpenAI is dealing with some leakage issues internally. The new risk investigator is likely tasked with putting a lid on this culture of openness.
Here's the catch, though: Some of the most eyebrow-raising revelations about OpenAI haven't sprung from anonymous leakers but from the company's own leadership. Altman is a constant source of outlandish claims, from predicting imminent human-tier AI to casually pondering his fascination with the Terminator. Sutskever is no different; remember back in 2022 when he made headlines by suggesting that some neural networks might already be "slightly conscious"?
In essence, OpenAI's freshly recruited in-house detective won't just be managing the rank and file; they might find themselves in some serious heart-to-heart discussions with their own leadership as well.