OpenAI Is Investing in Deepfake Cybersecurity Startup Adaptive Security


A futuristic cybersecurity command center with digital monitors showing AI-generated faces flagged as deepfake threats. A small screen displays live data streams, phishing alerts, and global cyberattack maps. The room is dimly lit in shades of red and blue, with experts working intently, emphasizing the urgency and innovation behind AI-driven cybersecurity.

Adaptive Security, a cybersecurity company focused on defense against AI-powered attacks, has raised $43 million in a funding round co-led by OpenAI and Andreessen Horowitz. The investment, announced Wednesday, marks OpenAI's first investment in a cybersecurity firm.

Adaptive Security specializes in simulating sophisticated phishing and deepfake attacks, preparing organizations for threats that use artificial intelligence to mimic real individuals using voices, likenesses, and personal details scraped from public sources.

Appearing on CNBC’s Squawk Box, CEO Brian Long said: “It’s not just voice and likeness. It’s trained on all the open source information about you.”

According to Long, the use of AI in social engineering attacks has increased significantly over the past year. He called the rise of AI-powered threats “one of the most urgent cybersecurity threats of our time.”

In addition to OpenAI and Andreessen Horowitz, the funding round included participation from:

  • Abstract Ventures

  • Eniac Ventures

  • CrossBeam Ventures

  • K5 Ventures

  • Executives from Google, Workday, Shopify, and Plaid

What does Adaptive Security do?

The company uses AI to simulate real-world attacks that go beyond simple spoofing. These simulations help companies train their teams to recognize and defend against evolving threats.

“Adaptive is building exactly what the industry needs. It’s an AI-native defense platform that evolves as quickly as attackers,” said Ian Hathaway, partner at the OpenAI Startup Fund.

Customers and future plans

Adaptive Security's clients include the Dallas Mavericks, First State Bank, and BMC. The company said the new funding will accelerate engineering development and help businesses and their employees counter increasingly complex cyberattacks.

OpenAI’s investment in Adaptive Security demonstrates a strategic expansion into AI defense beyond AI development. As the creator of powerful generative models like ChatGPT, OpenAI now acknowledges the risks these tools can pose in the wrong hands, particularly in deepfake and social-engineering attacks.

For OpenAI, the move could mark the beginning of a broader effort to support a safer AI ecosystem, pairing innovation with investment in protective technology. It also strengthens the company’s commitment to responsible AI deployment, particularly as regulators and the public scrutinize the ways AI can be used maliciously.

For OpenAI users, especially those who integrate AI tools into their business operations, this investment could lead to improved security capabilities, awareness training, and even partnerships with platforms such as Adaptive Security. OpenAI is not only advancing AI capabilities; it is also actively working to mitigate the new wave of risks those capabilities introduce.

In a broader sense, this partnership highlights the growing need for AI-native cybersecurity solutions: tools built from the ground up to address an evolving, AI-driven threat landscape.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, imagery, and idea-generation support from the AI assistant ChatGPT. The final perspective and editorial choices, however, are solely Alicia Shapiro’s. Thank you to ChatGPT for research and editorial support in writing this article.


