Number of Australian workers falling for phishing scams nearly doubles over nine months
In a rapidly evolving digital landscape, Australia is seeing a surge in the adoption of artificial intelligence (AI) technologies, particularly generative AI (genAI) platforms and large language model (LLM) interfaces. While major global tech companies such as Oracle, in partnership with Google, offer generative AI cloud services to enterprises worldwide, including in Australia, no publicly available information names the specific Australian organisations using these platforms for corporate IT purposes.
Growing AI adoption is driving more experimentation by employees, but it also raises data security concerns. Australian workers continue to use personal cloud applications at work, and regulated data, intellectual property, and passwords and keys are the types of data most often involved in leaks.
The most popular genAI applications in Australia include ChatGPT, Google Gemini, and Microsoft Copilot. Their widespread use presents a challenge for security teams: over half of local workers (55%) use personal genAI accounts for work purposes, hindering teams' ability to monitor whether sensitive data is leaking via genAI apps.
To mitigate these risks, Australian organisations are deploying company-approved genAI apps to centralise and monitor usage. Ray Canzanese, Director of Netskope Threat Labs, expects more individuals within organisations to experiment with generative or agentic AI deployments, presenting significant shadow AI and data security risks.
GenAI models, platforms, and AI agents can access enterprise data sources, so permission levels must be restricted to prevent sensitive data exposure. Moreover, some LLM interfaces ship with weak security settings that can compromise data if not properly configured. In light of these concerns, security teams in Australia are prioritising the detection of genAI and LLM interface usage to head off data security incidents.
Phishing remains a parallel threat. Over the last 12 months, an average of 1.2% of Australian workers clicked on a phishing link each month, a 140% increase on the previous count. Nearly one in five of those clicks (19%) were driven by phishing messages impersonating Microsoft or Google, and threat actors are also targeting personal accounts that hold valuable data.
Interestingly, ChatGPT usage in Australia declined between May and June, the first drop since its launch in 2022. Even so, 29% of organisations use genAI platforms and 23% use LLM interfaces. DeepSeek is the application local organisations block most, while almost a third (30%) also ban Grok.
Today, 87% of organisations in Australia have employees using genAI applications on a monthly basis, up from 75% nine months ago. This rapid growth underscores the need for continued vigilance and investment in data security measures as AI technologies become more integrated into our workplaces.
Canzanese also notes that AI tools enable threat actors to refine their social engineering techniques, and emphasises that security teams must stay abreast of these developments to protect their organisations from emerging threats.
In conclusion, while AI adoption in Australia offers numerous opportunities for productivity and innovation, it also introduces new challenges in the realm of data security. As more organisations embrace these technologies, it is crucial to prioritise security measures to safeguard sensitive data and prevent potential data breaches.