Currently in private beta, the GPT-5-powered security agent scans, reasons, and patches software like a real researcher, aiming to embed AI-driven defense into the development workflow.

OpenAI has unveiled Aardvark, a GPT-5-powered autonomous agent designed to act like a human security researcher capable of scanning, understanding, and patching code with the reasoning skills of a professional vulnerability analyst.
Announced on Thursday and currently available in private beta, Aardvark is being positioned as a major leap toward AI-driven software security.
Unlike conventional scanners that mechanically flag suspicious code, Aardvark attempts to analyze how and why code behaves the way it does. “OpenAI Aardvark is different as it mimics a human security researcher,” said Pareekh Jain, CEO at EIIRTrend. “It uses LLM-powered reasoning to understand code semantics and behavior, reading and analyzing code the way a human security researcher would.”
By embedding itself directly into the development pipeline, Aardvark aims to turn security from a post-development concern into a continuous safeguard that will evolve with the software itself, Jain added.
From code semantics to validated patches
What makes Aardvark unique, OpenAI noted, is its combination of reasoning, automation, and verification. Rather than simply highlighting potential vulnerabilities, the agent promises multi-stage analysis, starting by mapping an entire repository and building a contextual threat model around it. From there, it continuously monitors new commits, checking whether each change introduces risk or violates existing security patterns.
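OpenAI has not published an API for Aardvark, but the workflow it describes maps naturally onto a two-stage pipeline. The sketch below is purely illustrative: every name in it (ThreatModel, build_threat_model, analyze_commit, and so on) is a hypothetical stand-in for the stages OpenAI describes, not real Aardvark code, and a toy string check stands in for the LLM reasoning step.

```python
# Hypothetical sketch of the multi-stage workflow OpenAI describes for
# Aardvark. None of these names come from a real Aardvark API; they are
# invented stand-ins for the stages named in the announcement.
from dataclasses import dataclass, field


@dataclass
class ThreatModel:
    """Contextual model built from a full-repository scan (stage 1)."""
    repo: str
    entry_points: list[str] = field(default_factory=list)
    trust_boundaries: list[str] = field(default_factory=list)


@dataclass
class Finding:
    commit: str
    description: str
    validated: bool = False  # set True only after sandbox confirmation


def build_threat_model(repo: str) -> ThreatModel:
    # Stage 1: map the whole repository and model how data flows through it.
    # A real agent would read the code here; this stub hard-codes an example.
    return ThreatModel(repo=repo,
                       entry_points=["http_handler"],
                       trust_boundaries=["database"])


def analyze_commit(model: ThreatModel, commit: str, diff: str) -> list[Finding]:
    # Stage 2: check each new commit against the threat model for changes
    # that introduce risk or violate existing security patterns. A toy
    # substring match stands in for the LLM reasoning step.
    findings: list[Finding] = []
    if "os.system(" in diff:
        findings.append(Finding(commit, "possible command injection"))
    return findings


if __name__ == "__main__":
    model = build_threat_model("example/repo")
    for finding in analyze_commit(model, "abc123", 'os.system("rm " + name)'):
        print(finding)
```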
Additionally, upon identifying a potential issue, Aardvark attempts to validate the exploitability of the finding in a sandboxed environment before flagging it.
This validation step could prove transformative. Traditional static analysis tools often overwhelm developers with false alarms: issues that may look risky but aren’t truly exploitable. “The biggest advantage is that it will reduce false positives significantly,” noted Jain. “It’s helpful in open source codes and as part of the development pipeline.”
Once a vulnerability is confirmed, Aardvark integrates with Codex to propose a patch, then re-analyzes the fix to ensure it doesn’t introduce new problems. OpenAI claims that in benchmark tests, the system identified 92 percent of known and synthetically introduced vulnerabilities across test repositories, a promising indication that AI may soon shoulder part of the burden of modern code auditing.
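Continuing the hypothetical sketch above, the validate-then-patch loop OpenAI describes might look like the following. Again, sandbox_exploit, propose_patch, and analyze are invented names standing in for the sandbox validation, the Codex patch-generation step, and the re-analysis pass; none of this is actual Aardvark code.

```python
# Hypothetical continuation: validate a finding in a sandbox before flagging
# it, then propose a patch and re-analyze the patched code. All function
# names here are invented for illustration.


def sandbox_exploit(description: str, code: str) -> bool:
    # Try the suspected exploit in an isolated sandbox; only findings that
    # actually trigger get reported, which is what cuts false positives.
    return "user_input" in code  # toy check standing in for real execution


def propose_patch(code: str) -> str:
    # Stand-in for the Codex step: generate a candidate fix.
    return code.replace('os.system("rm " + user_input)',
                        'subprocess.run(["rm", user_input], check=True)')


def analyze(code: str) -> list[str]:
    # Stand-in for the re-analysis pass over the patched code.
    return ["possible command injection"] if "os.system(" in code else []


if __name__ == "__main__":
    code = 'os.system("rm " + user_input)'
    if sandbox_exploit("command injection", code):
        patched = propose_patch(code)
        # Re-run the analysis so the fix itself can't introduce new issues.
        assert analyze(patched) == []
        print("patch validated:\n" + patched)
```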
Securing open source and shifting security left
Aardvark’s role extends beyond enterprise environments. OpenAI has already deployed it across open-source repositories, where it claims to have discovered multiple real-world vulnerabilities, ten of which have received official CVE identifiers. The LLM giant said it plans to provide pro bono scanning for selected non-commercial open-source projects, under a coordinated disclosure framework that gives maintainers time to address the flaws before public reporting.
This approach aligns with a growing recognition that software security isn’t just a private-sector problem, but a shared ecosystem responsibility. “As security is becoming increasingly important and sophisticated, these autonomous security agents will be helpful to both big and small enterprises,” Jain added.
OpenAI’s announcement also reflects a broader industry concept known as “shifting security left”: embedding security checks directly into development rather than treating them as end-of-cycle testing. With over 40,000 CVE-listed vulnerabilities reported annually and the global software supply chain under constant attack, integrating AI into the developer workflow could help balance velocity with vigilance, the company added.
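As a concrete (and again hypothetical) illustration of what shifting left looks like in practice, a team might gate merges on an automated security pass that fails the build when a validated finding appears. The aardvark_scan function below is an invented placeholder for whatever interface OpenAI eventually exposes; the script assumes it runs inside a git checkout in CI with an origin/main branch available.

```python
# Hypothetical pre-merge security gate: scan files changed on this branch
# and fail the CI job on any validated finding. aardvark_scan is an invented
# placeholder; OpenAI has not published a public API for Aardvark.
import subprocess
import sys


def changed_files() -> list[str]:
    # Assumes a git checkout with origin/main fetched, as in a typical CI job.
    out = subprocess.run(
        ["git", "diff", "--name-only", "origin/main...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]


def aardvark_scan(path: str) -> list[str]:
    # Placeholder for the real scan; would return validated findings only.
    return []


if __name__ == "__main__":
    findings = [f"{p}: {msg}" for p in changed_files() for msg in aardvark_scan(p)]
    if findings:
        print("\n".join(findings))
        sys.exit(1)  # block the merge so the issue is fixed before shipping
    print("security gate passed")
```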
Shweta has been writing about enterprise technology since 2017, most recently reporting on cybersecurity for CSO Online. She breaks down complex topics, from ransomware to zero trust architecture, for both experts and everyday readers. She has a postgraduate diploma in journalism from the Asian College of Journalism, and enjoys reading fiction, watching movies, and experimenting with new recipes when she’s not busy decoding cyber threats.