Anthropic’s Daniela Amodei Believes the Market Will Reward Safe AI



The Trump administration may think regulation is crippling the AI industry, but one of the industry’s biggest players doesn’t agree.

At WIRED’s Big Interview event on Thursday, Anthropic president and cofounder Daniela Amodei told editor at large Steven Levy that even though Trump’s AI and crypto czar David Sacks may have tweeted that her company is “running a sophisticated regulatory capture strategy based on fear-mongering,” she’s convinced her company’s commitment to calling out the potential dangers of AI is making the industry stronger.

“We were very vocal from day one that we felt there was this incredible potential [for AI],” Amodei said. “We really want to be able to have the entire world realize the potential, the positive benefits, and the upside that can come from AI and in order to do that, we have to get the tough things right. We have to make the risks manageable. And that’s why we talk about it so much.”

Over 300,000 startups, developers, and companies use some version of Anthropic’s Claude model, and Amodei said the company’s dealings with those customers have taught her that while they want their AI to be able to do great things, they also want it to be reliable and safe.

“No one says ‘we want a less safe product,’” Amodei said, likening Anthropic’s reporting of its models’ limits and jailbreaks to a car company releasing crash-test studies to show how it has addressed safety concerns. It might seem shocking to see a crash-test dummy flying through a car window in a video, but learning that an automaker updated its vehicles’ safety features as a result of that test could sell a buyer on a car. Amodei said the same goes for companies using Anthropic’s AI products, making for a market that is somewhat self-regulating.

“We’re setting what you can almost think of as minimum safety standards just by what we’re putting into the economy,” she said. “[Companies] are now building many workflows and day-to-day tooling tasks around AI, and they’re like, ‘Well, we know that this product doesn’t hallucinate as much, it doesn’t produce harmful content, and it doesn’t do all of these bad things.’ Why would you go with a competitor that is going to score lower on that?”

Photograph: Annie Noelker


Ariel Shapiro
Uncovering the latest of tech and business.

