- OpenAI’s Sam Altman went quiet when asked about data privacy
- Privacy officers’ risk management duty trickier in AI era
All eyes turned to Sam Altman.
It had been days of workshops and panels at IAPP’s privacy summit, where professionals talk about managing personal information responsibly. Now, Altman, the CEO of OpenAI, faced a pointed question about data privacy.
Altman, who was attending virtually, hesitated briefly as he loomed over the audience from a giant video screen at Washington’s convention center. “I would be too shy to say that in this room,” he finally said.
The exchange in late April summed up the tensions at play as AI technology evolves rapidly, clashing with global data privacy principles with each growth spurt. Companies have been grappling with this friction, which has sparked global investigations, enforcement actions and fines, halted the release of certain products, and transformed in-house data governance teams.
The keynote took place days before Tools for Humanity Corp., another Altman venture, started to roll out thousands of its iris-scanning orb devices across the US, beginning in San Francisco, Atlanta, Los Angeles and other cities. The futuristic device, which looks like a basketball, is advertised as a tool to verify people’s identities while protecting their privacy.
Both of Altman’s businesses, OpenAI and Tools for Humanity, have faced intense regulatory scrutiny in the European Union over how they collect and use customers’ data.
Companies ought to be thinking about “data bombs” that could trip them up later, said Annmarie Giblin, a partner at Norton Rose Fulbright US LLP, where she advises clients on data governance, privacy, and emerging technologies.
“The worst thing that could happen is now you realize, ‘Oh my God, I wasn’t allowed to use this data,’” she said.
A Sales Pitch?
Altman’s non-answer led to posts and comments on LinkedIn, with some bashing the keynote as a “sales pitch” that failed to meaningfully address data governance concerns.
“They threw their weapon of Sam at the crowd. And usually Sam works, right?” Julie Saslow Schroeder, former head of legal and compliance at companies including Health Gorilla, Thematically, and Higg, said in an interview.
The IAPP conference’s closing keynote featured Altman and Alex Blania, the co-founder of Tools for Humanity. The session was, in part, organized by Tools for Humanity’s chief privacy and legal officer Damien Kieran, who told Bloomberg Law he felt it was important to have both Altman and Blania address privacy concerns in front of perhaps their “hardest audience.”
As the online criticism mounted after the conference, Kieran, formerly at companies including BeReal and Twitter, pushed back in a LinkedIn post.
“Yes, it’s really important to ask questions,” he wrote, “but it’s equally important to research and understand these technologies fully and make sure we’re sharing accurate information about how they work, what they can be used for and what they can’t be used for!”
Tools for Humanity declined to answer further questions. OpenAI didn’t immediately reply to a request for comment.
A Seat at the Table
Kieran’s efforts to bring Altman and Blania into the privacy conversation, as well as his own move to address privacy concerns online, are part of a broader reputational risk management duty that in-house teams increasingly face.
But addressing mounting fears around AI-powered technologies has proven to be difficult.
Earlier this year, LinkedIn faced a lawsuit over its AI training practices. Wright, speaking for the company, noted that the suit, which was ultimately dropped, “falsely alleged that LinkedIn shared private member messages with third parties for AI training purposes. We never did that.”
Privacy experts, specifically, often have a seat “at very important strategy tables inside organizations,” said IAPP’s president and CEO J. Trevor Hughes. Amid the current regulatory landscape, businesses that don’t get data governance right won’t “get very far.”
“For any executive to show up on our stage is a suggestion that they’re paying attention to privacy. People might not like what they say,” Hughes told Bloomberg Law, “but they’re showing up. We think that’s important.”
Shifting Roles
Data privacy professionals’ roles have expanded to meet a more complex regulatory environment that seeks to ensure AI develops safely and responsibly. As part of that expansion, some are taking on new tasks, such as engaging with the media or speaking more frequently at industry conferences like IAPP’s.
“The role starts shifting from being a role that’s focused primarily on legal requirements, to being a role that then takes those requirements and melds it with the technical requirements as well as the business requirements,” said Ojas Rege, senior vice president and general manager of privacy and data governance at OneTrust, a software company that helps businesses manage data and privacy issues.
Whether the strategy will prove successful, especially as technology continues to rapidly develop under increased regulatory scrutiny, is unclear.
“The top question on every company’s mind should be, how do we make sure that these AI systems that we’re building are using data responsibly?” Rege said. “If this isn’t a core principle of the system... then that system will not be able to deliver long-term business value.”