Microsoft says it’s developed a prototype AI program that can reverse engineer malware, automating a task usually reserved for expert human security researchers.
The prototype, dubbed Project Ire, was designed to tackle one of the toughest assignments in security research: “Fully reverse engineering a software file without any clues about its origin or purpose,” the company said in a Tuesday blog post.
In one Microsoft test, Project Ire was able to correctly identify 90% of malicious Windows driver files. In addition, the AI program flagged only 2% of benign files as dangerous. “This low false-positive rate suggests clear potential for deployment in security operations, alongside expert reverse engineering reviews,” the company says.
Project Ire stands out from traditional antivirus engines, which often work by scanning files and programs for strings of computer code, known patterns, or certain behaviors tied to past malware detections. The problem is that hackers are constantly evolving their techniques to conceal malicious functions, making new attacks harder to catch. This might include using built-in functions in legitimate software to download malicious modules at a later time.
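At its simplest, that kind of signature-based scanning amounts to matching file contents against a database of known indicators. The sketch below is a minimal, hypothetical Python illustration of the idea; the signature strings, detection names, and file name are made up for demonstration and are not drawn from any real antivirus engine.

```python
# Minimal, hypothetical sketch of signature-based scanning:
# flag a file if it contains any byte pattern from a list of known indicators.
# All signatures and names below are invented for illustration only.

KNOWN_SIGNATURES = {
    b"example-dropper-marker-v1": "Trojan.Downloader.Example",
    b"example-keylogger-string": "Spyware.Keylogger.Example",
}

def scan_file(path: str) -> list[str]:
    """Return the detection names of any known signatures found in the file."""
    with open(path, "rb") as f:
        data = f.read()
    return [name for pattern, name in KNOWN_SIGNATURES.items() if pattern in data]

if __name__ == "__main__":
    hits = scan_file("sample.bin")  # hypothetical file path
    print("Flagged:", hits) if hits else print("Clean")
```

The weakness Microsoft points to follows directly from this design: if the malicious code never matches a stored pattern, or only fetches its payload later through legitimate functionality, a scanner like this has nothing to match against.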
The IT security industry has long tapped AI, including machine learning, to improve malware detection. With Project Ire, however, Microsoft joins other companies in leveraging large language models to investigate and flag potential security threats.
“Project Ire attempts to address these challenges by acting as an autonomous system that uses specialized tools to reverse engineer software. The system’s architecture allows for reasoning at multiple levels, from low-level binary analysis to control flow reconstruction and high-level interpretation of code behavior,” Redmond added.
In its blog post, Microsoft said the AI program was able to identify the key features of a Windows-based rootkit and of another malware sample designed to deactivate antivirus software. Project Ire was also smart enough to “author a conviction case, a detection strong enough to justify automatic blocking,” prompting Microsoft to flag and block a malware sample tied to an elite hacking group.
While the rise of AI has sparked concerns about machines replacing people, Microsoft is positioning Project Ire as a tool to assist overburdened security researchers and IT staff. The company plans on deploying the AI within the team that develops Microsoft Defender as a “Binary Analyzer for threat detection and software classification.”
“Our goal is to scale the system’s speed and accuracy so that it can correctly classify files from any source, even on first encounter,” the company added.
Still, the AI program remains a prototype and faces limitations. In another Microsoft test involving nearly 4,000 files slated for manual review, the company found Project Ire “achieved a high precision score of 0.89,” meaning nearly 9 out of 10 files flagged as malicious were correctly identified. However, Project Ire appeared to detect only “roughly a quarter of all actual malware” within the scanned files.
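For context on those two figures, precision measures how many flagged files are truly malicious, while recall measures how much of the total malware gets caught. The short Python sketch below uses hypothetical counts, chosen only to reproduce the ratios Microsoft cites rather than its actual evaluation data, to show how a precision of 0.89 can coexist with a recall of roughly 25%.

```python
# Hypothetical confusion-matrix counts, invented only to illustrate the metrics;
# they are not the real numbers from Microsoft's Project Ire evaluation.
true_positives = 89    # malicious files correctly flagged
false_positives = 11   # benign files incorrectly flagged
false_negatives = 267  # malicious files the system missed

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)

print(f"precision = {precision:.2f}")  # 0.89: ~9 of 10 flagged files are truly malicious
print(f"recall    = {recall:.2f}")     # 0.25: roughly a quarter of all malware detected
```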
Even so, Microsoft noted: “While overall performance was moderate, this combination of accuracy and a low error rate suggests real potential for future deployment.”
