Ingram Micro warns MSPs on AI-era information risks
Ingram Micro Australia has warned that managed service providers (MSPs) are under growing strain as generative AI shifts security risk from endpoints and networks to the information layer.
Many MSPs have long focused on hardening devices, monitoring networks, and tightening identity controls. That model has delivered repeatable, infrastructure-based services with predictable security tooling. Hybrid work has already stretched those assumptions, and widespread generative AI use has introduced new blind spots.
According to the Australian Signals Directorate's Australian Cyber Security Centre (ACSC), more than 84,700 cybercrime reports were received in the 2024–25 financial year, while the ACSC responded to more than 1,200 cyber security incidents, an 11 per cent increase year on year. For businesses, the average self-reported cost of cybercrime rose by 50 per cent to $80,850 per incident, with medium-sized organisations reporting even higher average losses.
Sevag Tamoukian, Solutions Architect at Ingram Micro Australia, said the most common incidents affecting Australian organisations often exploit legitimate access rather than breaking through hardened systems. Business email compromise, identity fraud, and account takeovers remain prominent because attackers target everyday workflows and trusted credentials.
AI-related data exposure follows that same pattern, arising through authorised users and legitimate tools in ways that evade traditional detection. Experience across Ingram Micro's MSP ecosystem suggests this misuse of trusted access is increasingly central to AI-related risk, even in environments with strong endpoint and identity controls.
Generative AI can create similar conditions, with activity that looks legitimate under conventional monitoring. When an employee pastes sensitive information into an AI tool for summarising or drafting, the device may remain compliant and the user identity authorised. Traditional controls may not raise alerts, yet the organisation can lose control over how its information is processed and stored.
This is why many modern data exposures no longer resemble traditional breaches. They do not involve malware, ransomware or system compromise, but instead arise through ordinary business activity that falls outside the assumptions of endpoint-centric security models.
Information layer
Ingram Micro's assessment of the Australian MSP channel suggests risk is concentrating at the information layer. Data now moves rapidly across applications, collaboration tools, and AI services, often outside controls built for information that stays within a corporate network and is accessed through managed devices.
MSPs have spent years refining security stacks around perimeter controls and endpoint protection. As hybrid work expanded, many extended the perimeter with VPNs and cloud-based security tools, recreating parts of the office security model for distributed workforces. AI use has exposed the limits of that approach, as information can leave expected paths without any obvious infrastructure-level failure.
The shift also changes accountability. If data exposure occurs through authorised activity, clients may not distinguish between technical controls and governance practices. Instead, they may view the incident as a failure of security oversight and ask why the risk was not identified earlier.
Endpoint-first limits
Endpoint and network controls remain essential, but they were designed for a world in which data stayed within known systems. Generative AI breaks that model by enabling data to move directly from internal systems to external platforms through approved browser sessions. An employee can be working on a fully managed device, protected by multi-factor authentication, and still expose sensitive information without triggering any alerts.
Hybrid work and bring-your-own-device practices further weaken the effectiveness of perimeter-based controls, particularly for knowledge workers who operate across locations and devices. In these environments, network boundaries offer limited protection against data exposure that occurs at the application and information layer.
While endpoint security can protect against unauthorised access and malicious code, it cannot prevent authorised users from making poor decisions with legitimate tools. Experience drawn from Ingram Micro's engagement with Australian partners suggests that this gap is increasingly forcing organisations to reassess where security responsibility sits as AI becomes embedded in everyday workflows.
Skills shortage
Ingram Micro also highlighted staffing constraints across the market. Under-resourced teams often focus on triage, incident response, and maintenance, leaving less time for training and process improvement. The result can be a cycle in which technology adoption increases workload rather than reducing it.
It also pointed to operational factors behind major incidents. High-impact breaches can occur when alerts and signals are misunderstood or not acted on quickly. Mandiant's M-Trends 2025 report found that many high-impact intrusions are reconstructed after the fact because defenders failed to interpret or respond to early indicators in time.
Client conversations
Ingram Micro believes MSP security conversations need to move beyond tools and device controls. It said future managed security services will depend more on client engagement about what data matters most, how it is used, and where it can safely be shared or processed.
That shift implies a stronger focus on information governance, data classification, and policy. It also raises questions about how clients approve and monitor AI use across business functions, particularly when individuals adopt tools without formal sign-off.
For providers, this may change service delivery models. It places more emphasis on advisory work and on aligning security controls with business and regulatory risk. It also increases the need for clear rules for handling sensitive information in AI workflows, along with the operational capacity to monitor compliance.
"In the age of AI, securing endpoints remains necessary, but it is no longer the whole job," Tamoukian said.