An Australian reports a cybercrime every six minutes. In 2024 alone, reported scam losses totalled a staggering AUD 2.74 billion. Of this, Business Email Compromise (BEC), a form of social engineering scam, accounted for AUD 91.6 million. These figures highlight the growing scale and sophistication of cybercrime, and the increasing challenges it poses for insurers, businesses and individuals alike.

From hoodies to corporate structures

When we picture a cybercriminal, the image often comes from movies: a lone figure in a hoodie, working in the dark with a glowing screen. While these rogue actors still exist, the modern cybercriminal looks very different.

Today, many operate within structured organisations that mirror legitimate businesses. Criminals are recruited, paid salaries and even given bonuses and holidays. Recruitment takes place on the dark web, where candidates must present references and pass skills tests, much as in the corporate world.

State-sponsored groups, most notoriously in North Korea and Russia, take this to another level, investing heavily in technology and research to refine their capabilities. This evolution has enabled them to launch increasingly complex and convincing scams at scale.

The rise of sophisticated social engineering

A decade ago, social engineering claims typically involved clumsy emails riddled with spelling errors, vague instructions and mismatched email addresses. These scams were relatively easy to spot.

Today, cybercriminals use data mining, open-source research and stolen business information to craft far more convincing attacks. By scouring platforms like LinkedIn, criminals identify finance professionals with authority to move money and tailor communications accordingly. These emails now mirror legitimate correspondence, adopting the right tone, language and even nicknames used within the business.

In some cases, criminals spend weeks inside a compromised network, studying workflows, supplier relationships and upcoming invoices to craft highly targeted scams. The result: fewer red flags and a greater chance of success.

The role of AI and deepfakes

Artificial intelligence has dramatically accelerated the evolution of social engineering. Deepfake technology now allows criminals to convincingly replicate voices and even faces of executives.

In one high-profile case, an employee of a multinational company in Hong Kong authorised a payment of HKD 200 million (about AUD 39.5 million) after attending a Zoom call with what appeared to be senior management. The only problem? The executives were AI-generated deepfakes.

Deepfakes are no longer the clunky, robotic fakes of a few years ago. Today, they are highly realistic, making it difficult—even for trained professionals—to tell the difference. Combined with stolen voice recordings and video clips that may be gathered from the internet, criminals can now create highly convincing impersonations.

From a claims perspective, this poses significant challenges. Voice and video scams leave little paper trail, making verification difficult. Further, victims may not realise they have been deceived until much later, reducing the likelihood of successfully recovering funds through banks.

Emerging threats: AI agents

The next frontier of cybercrime is anticipated to be AI agents—autonomous systems capable of reasoning, learning and making decisions without human intervention. Unlike today’s AI assistants, which require prompts, AI agents can act independently.

In a cybercriminal context, this means an AI agent could:

  • Search the internet for targets
  • Build CRM databases of finance professionals
  • Draft convincing correspondence
  • Send out thousands of scam emails at scale

And all without human oversight. The cost is minimal, the reach is enormous, and even if only one attempt succeeds, the returns can be significant.

Legislative and banking responses

Governments and financial institutions are responding. In February 2025, Australia passed the Scams Prevention Framework Bill, requiring banks to implement stronger controls to detect, prevent and disrupt scams.

Developments include the Australian Banking Association's Scam-Safe Accord, which commits $100 million to name-checking technology that verifies whether the account name matches the BSB and account number entered. Other initiatives include increased warnings, holds on payments to new bank accounts and additional security questions.

For SMEs, we anticipate these controls may introduce operational challenges that should be considered and, where possible, planned for in advance. Payment delays, for example, can disrupt cashflow for suppliers.

Implications for insurance claims

From an insurance perspective, policy wording has also evolved. Many policies now include exclusions or higher deductibles if certain verification controls are not in place or not followed. For example, insurers may require:

  • Phone verification of any changes to supplier bank account details
  • Segregation of duties in processing invoices
  • Documented procedures for payment approvals

Courts have echoed this stance. In 2025, a court in Western Australia held a business liable for paying a fraudulent invoice after its verification methods were deemed insufficient. The expectation is clear: businesses must take reasonable steps to protect themselves.

What organisations can do

Despite the sophistication of attacks, businesses can significantly reduce their risk with the right processes in place:

  • Verification protocols: Always confirm bank detail changes via phone, using independently sourced telephone numbers.
  • Staff empowerment: Encourage employees to question unusual requests, even from “senior executives.”
  • Training: Regularly update staff on emerging threats and red flags.
  • IT controls: Implement multi-factor authentication, restrict admin privileges, and enforce regular password changes.

Ultimately, vigilance at the individual and organisational level remains the strongest defence.

How Sedgwick can help

The rise of sophisticated cybercrime highlights the growing complexity of managing claims. Each case requires careful analysis—not only of financial loss but also of compliance with policy wording, verification processes and recovery prospects.

At Sedgwick, our forensic accounting team has the expertise to navigate these challenges. As one of Australia’s largest groups of forensic accountants, we support insurers, corporates and legal clients with accurate, comprehensive quantification and resolution of cyber and social engineering claims. Beyond cybercrime, our specialists work across a wide range of specialty claims, ensuring clarity, accuracy and trusted outcomes in complex situations.

If you’d like to learn more about how Sedgwick can support you with cyber, social engineering or specialty claims, contact Beth Fieldhouse ([email protected]), Lucy Tang ([email protected]) or Emma Levett ([email protected]).