How artificial intelligence will further evolve in 2025

As 2025 approaches its halfway point, the rapid evolution of artificial intelligence (AI) continues to dramatically reshape the cyber security landscape.

DeepSeek's R1 model launch marked a significant milestone in AI accessibility, combining advanced reasoning capabilities with free, unlimited access. However, while DeepSeek's open-source model represented a breakthrough in cost-effective AI deployment, its consumer-facing app introduces substantial privacy and security challenges for enterprises.

Most critically, the platform's data collection practices extend far beyond typical usage data: according to DeepSeek's privacy policy, all user interactions—including prompts, uploaded files, chat histories, voice inputs, images, and even keystroke patterns—are transmitted to and stored on external servers. 

The DeepSeek phenomenon demonstrated how quickly a new AI tool can achieve widespread adoption, potentially exposing sensitive data across an organisation before security teams can respond.
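For security teams, a first line of defence is simply knowing when unsanctioned AI apps are in use. The sketch below is a minimal Python illustration, assuming a CSV proxy log with timestamp, user, and host columns; the domain list is a hypothetical placeholder, not a vetted blocklist.

```python
# Minimal sketch: flag outbound requests to unsanctioned AI services in proxy logs.
# The domains and log format are illustrative assumptions for this example only.
import csv

UNSANCTIONED_AI_DOMAINS = {
    "deepseek.com",       # assumed consumer-app domain; verify against your own intel
    "chat.deepseek.com",  # assumed
}

def flag_shadow_ai(proxy_log_csv: str) -> list[dict]:
    """Return log rows whose destination host matches a watched AI domain."""
    hits = []
    with open(proxy_log_csv, newline="") as fh:
        for row in csv.DictReader(fh):  # expects columns: timestamp,user,host
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in UNSANCTIONED_AI_DOMAINS):
                hits.append(row)
    return hits

if __name__ == "__main__":
    for hit in flag_shadow_ai("proxy.csv"):
        print(f"{hit['timestamp']} {hit['user']} -> {hit['host']}")
```

In practice this logic would live in a secure web gateway or CASB policy rather than a standalone script, but the principle is the same: visibility first, then enforcement.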

Indeed, the surge in AI usage will have far-reaching implications for areas such as energy consumption, software development, and ethical and legal frameworks. According to the CSIRO, the Australian AI market is projected to be worth US$315 billion by 2028.

However, many Australians are anxious about transparency, with 67% saying they are concerned about how individual privacy will be safeguarded. At the same time, AI gives nefarious players the opportunity to weaponise the technology, creating formidable new cyber security threats.

Explosive growth

The adoption of AI technologies is accelerating at an unprecedented rate. ChatGPT, for instance, reached 100 million users just 60 days after its launch, and now boasts more than 3 billion monthly visits.

This explosive growth is not limited to ChatGPT. Other AI models like Claude, Gemini, and Midjourney are also seeing widespread adoption. According to the Financial Times, by the end of 2024, 92% of Fortune 500 companies had integrated generative AI into their workflows.

Meanwhile, Dimension Market Research predicts that by 2033, the global large language model market will reach US$140.8 billion. AI technologies require enormous amounts of computing resources, so their rapid adoption is already driving a huge increase in the land, water, and energy required to support them. The magnitude of this AI-driven strain on natural resources will be felt in 2025.

Rising energy demands

The proliferation of AI is putting immense strain on global energy resources. Data centres, the backbone of AI operations, are multiplying rapidly, and they require land, energy, and water: three precious natural resources that are already strained without the added demands of surging AI use.

According to McKinsey, their numbers doubled from 3,500 in 2015 to 7,000 in 2024. Deloitte projects that energy consumption by these centres will skyrocket from 508 TWh in 2024 to a staggering 1,580 TWh by 2034, equivalent to India's entire annual energy consumption.

Analysts such as Deloitte have sounded a warning that the current trajectory is quickly becoming unsustainable. This unprecedented demand necessitates a shift towards more sustainable power sources and innovative cooling solutions for high-density AI workloads. Though much innovation has been achieved in harvesting alternative energy sources, nuclear power will most likely be tapped to meet the AI-fuelled rise in energy consumption.

Compute technology itself will also become more efficient with advancements in chip design and workload planning. Since AI workloads often involve massive data transfers, innovations like compute-in-memory (CIM) architectures that significantly reduce the energy required to move data between memory and processors will become essential. 
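To see why data movement dominates, consider a back-of-envelope comparison using widely cited per-operation energy figures from Horowitz's ISSCC 2014 survey (45 nm process). Absolute numbers vary by process node, but the gap between arithmetic and DRAM access persists.

```python
# Back-of-envelope: compute energy vs DRAM data-movement energy for an AI workload.
# Per-operation figures follow Horowitz's ISSCC 2014 survey (45 nm); treat them as
# order-of-magnitude estimates, not current-node measurements.
PJ_PER_FP32_MAC = 4.6           # ~3.7 pJ multiply + ~0.9 pJ add
PJ_PER_DRAM_READ_32BIT = 640.0  # one 32-bit word fetched from DRAM

def energy_uj(n_macs: int, n_dram_words: int) -> tuple[float, float]:
    """Return (compute, data-movement) energy in microjoules."""
    return (n_macs * PJ_PER_FP32_MAC * 1e-6,
            n_dram_words * PJ_PER_DRAM_READ_32BIT * 1e-6)

# A 1024x1024 matrix-vector product with no weight reuse: ~1M multiply-accumulates,
# ~1M weight words streamed in from DRAM.
compute, movement = energy_uj(1024 * 1024, 1024 * 1024)
print(f"compute: {compute:.1f} uJ, DRAM traffic: {movement:.1f} uJ "
      f"(~{movement / compute:.0f}x more for data movement)")
```

CIM targets exactly that ratio: by performing the multiply-accumulate where the weights are stored, most of the expensive DRAM fetches disappear.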

The evolution of software development

AI is also poised to revolutionise software programming. Organisations are moving beyond simple code completion tools like GitHub Copilot to full code creation platforms such as CursorAI and Replit.com.

While this shift promises increased productivity, it also poses significant security risks. AI gives any cybercriminal the ability to generate complete malware from a single prompt, ushering in a new era in cyber threats.

This threat will accelerate the push for responsible AI: the practice of AI vendors building guardrails to prevent the weaponisation or harmful use of their large language models (LLMs).

Software developers and cyber security vendors should team up to achieve responsible AI, since security vendors have deep expertise in the attacker's mindset and can simulate and predict new attack techniques.
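As a rough illustration of what such a guardrail does, the sketch below screens prompts before they reach an LLM. Production guardrails rely on trained classifiers rather than regular expressions; the patterns here are hypothetical placeholders.

```python
# Minimal sketch of an input guardrail that screens prompts before an LLM call.
# The keyword patterns are illustrative placeholders, not a production denylist.
import re

BLOCKED_PATTERNS = [
    r"\b(write|generate|create)\b.*\b(malware|ransomware|keylogger)\b",
    r"\bbypass\b.*\b(edr|antivirus|authentication)\b",
]

def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason). Blocks prompts matching harmful-intent patterns."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            return False, f"blocked by guardrail rule: {pattern}"
    return True, "allowed"

allowed, reason = screen_prompt("Generate ransomware that encrypts user files")
print(allowed, "-", reason)  # False - blocked by guardrail rule: ...
```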

The role of multi-agent AI systems

In 2025, we're seeing an emergence of multi-agent AI systems in both cyberattacks and defences. AI agents are autonomous systems that make decisions, execute tasks, and interact with environments, often acting on behalf of humans.

They operate with minimal human intervention, communicate with entities including other AI agents, databases, sensors, APIs, applications, websites, and emails, and adapt based on feedback.

Attackers use them for co-ordinated, hard-to-detect attacks, while defenders employ them for enhanced threat detection, response, and real-time collaboration across networks and devices.  

Multi-agent AI systems can defend against multiple simultaneous attacks by sharing real-time threat intelligence and co-ordinating defensive actions to identify and mitigate threats more effectively.
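The coordination pattern itself is simple to sketch. The Python below models detection and response agents sharing indicators over an in-process message bus; a real deployment would use a proper broker and richer indicator formats, and the agent names are illustrative.

```python
# Minimal sketch: defensive agents sharing threat intelligence over a message bus.
from collections import defaultdict

class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)
    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)
    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

class DetectionAgent:
    def __init__(self, name, bus):
        self.name, self.bus = name, bus
    def observe(self, indicator):
        # On spotting a suspicious indicator, broadcast it to every peer.
        self.bus.publish("threat-intel", {"source": self.name, "ioc": indicator})

class ResponseAgent:
    def __init__(self, name, bus):
        self.name = name
        self.blocked = set()
        bus.subscribe("threat-intel", self.on_intel)
    def on_intel(self, event):
        # Act on shared intelligence autonomously, with no human in the loop.
        self.blocked.add(event["ioc"])
        print(f"{self.name}: blocking {event['ioc']} (reported by {event['source']})")

bus = Bus()
edge = DetectionAgent("edge-sensor", bus)
ResponseAgent("firewall-agent", bus)
ResponseAgent("endpoint-agent", bus)
edge.observe("203.0.113.7")  # both responders block the IP in real time
```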

An increasingly complex regulatory framework

As AI becomes more pervasive, organisations will face increasing ethical and regulatory challenges. New laws will force enterprises to exert more control over their AI implementations, and new AI governance platforms will emerge to help them build trust, transparency, and ethics into AI models.

In 2025, there is likely to be a surge in industry-specific AI assurance frameworks to validate AI's reliability, bias mitigation, and security. These platforms will ensure explainability of AI-generated outputs, prevent harmful or biased results, and foster confidence in AI-driven cyber security tools.
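One concrete check such a framework might automate is measuring bias in model decisions. The sketch below computes a demographic parity gap between two groups; the sample data and the 0.1 threshold are illustrative assumptions rather than any framework's actual rule.

```python
# Minimal sketch of one assurance check: demographic parity difference
# between two groups in a model's positive decisions.
def demographic_parity_gap(decisions: list[int], groups: list[str]) -> float:
    """Difference in positive-decision rates between the two groups present."""
    rates = {}
    for g in set(groups):
        selected = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    a, b = rates.values()
    return abs(a - b)

decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(decisions, groups)
print(f"parity gap: {gap:.2f} -> {'fails' if gap > 0.1 else 'passes'} the 0.1 threshold")
```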

The next 12 months promise to be pivotal for AI and cyber security. While AI offers unprecedented opportunities for advancement, it also presents significant challenges. Close co-operation between the commercial sector, software and security vendors, and governments and law enforcement will be needed to ensure this powerful, quickly developing technology doesn't damage digital trust or our physical environment.
