

Rise of AI/ML-driven cyber attacks: New era of cybercrime

Patrick Gardner, Managing Partner at C8 Secure, a Continent 8 company

The rise of AI/ML-driven cyber attacks is changing the face of cybersecurity, posing new challenges for governments, companies and users.

Cyber attacks have evolved and become more sophisticated over time. At first, they focused on exploiting software and network vulnerabilities for unauthorized access or causing disruptions.

One notable example is the Morris worm, created in 1988 by Robert Morris, which caused what is widely regarded as the first denial-of-service (DoS) attack. While its stated purpose was to gauge the size of the internet, it significantly slowed every computer it infected and caused some to crash.

This incident led to the creation of Computer Emergency Response Teams (CERTs) to respond to future cyber emergencies. The Morris worm also resulted in the first conviction under the Computer Fraud and Abuse Act of 1986.

The 1990s saw a significant rise in communication technologies, especially the internet. However, these technologies lacked trust and safety controls, leaving them vulnerable to cyber attacks, and cybercrime expanded rapidly. Attackers developed more complex forms of viruses, and the internet became saturated with them, as well as with unwanted ads and pop-ups. This, in turn, drove the development of more sophisticated antivirus software.

The new millennium witnessed more sophisticated cyber attacks, including campaigns by advanced persistent threat (APT) actors sponsored by nation-states. These attacks caused significant damage to critical sectors of the global digital economy.

Cybersecurity became a pressing concern for government agencies and large corporations alike. Notable cyber crimes included the DDoS attacks by “Mafiaboy” on major commercial websites in 2000 and the 2005 data leak affecting 1.4 million HSBC Bank MasterCard users.

In the present, the rise of AI has influenced the evolution of cyber attacks. While AI and machine learning (ML) have revolutionized cybersecurity by providing advanced tools and techniques for threat detection and prevention, cybercriminals also leverage these technologies to launch sophisticated attacks. According to NATO, this makes AI a “huge challenge” and a “double-edged sword” for the cybersecurity industry.
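On the defensive side of that double-edged sword, even a very simple ML model illustrates how automated threat detection works in principle. The sketch below is a minimal, self-contained naive Bayes text classifier that separates likely phishing messages from benign ones; the class name, labels and example phrases are invented for illustration, and a real deployment would rely on a vetted library, far more data and far richer features.

```python
from collections import Counter
import math


class NaiveBayesFilter:
    """Minimal naive Bayes classifier for 'phish' vs. 'ham' messages.

    Illustration only: production phishing filters use far richer
    features (sender reputation, URLs, headers) and much more data.
    """

    def __init__(self):
        self.word_counts = {"phish": Counter(), "ham": Counter()}
        self.doc_counts = {"phish": 0, "ham": 0}

    @staticmethod
    def _tokenize(text):
        return text.lower().split()

    def train(self, text, label):
        self.doc_counts[label] += 1
        self.word_counts[label].update(self._tokenize(text))

    def classify(self, text):
        total_docs = sum(self.doc_counts.values())
        vocab = set(self.word_counts["phish"]) | set(self.word_counts["ham"])
        scores = {}
        for label in ("phish", "ham"):
            # Log prior from class frequency.
            score = math.log(self.doc_counts[label] / total_docs)
            # Laplace-smoothed log likelihood of each word.
            denom = sum(self.word_counts[label].values()) + len(vocab)
            for word in self._tokenize(text):
                score += math.log((self.word_counts[label][word] + 1) / denom)
            scores[label] = score
        return max(scores, key=scores.get)
```

After training on a handful of labeled examples, the filter assigns whichever label has the higher smoothed log-probability. Commercial detection systems apply the same statistical idea at vastly larger scale, which is also why attackers now use generative AI to craft messages that evade these word-frequency signals.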

Cybercriminals can exploit AI to identify weaknesses in software and security systems, generate convincing phishing emails, design malware that mutates to evade detection and observe user behavior undetected.

AI-powered cyber attacks

AI-powered cyber attacks involve cybercriminals using AI algorithms, models or tools to carry out complex, hard-to-detect attacks. These attacks can be categorized into phases, including access and penetration, exploitation, command and control, surveillance and delivery, any of which may involve AI-driven techniques.

Since the beginning of the Covid-19 pandemic, cybersecurity firms have noticed a substantial surge in cybercrime specifically in the gaming and gambling industries. With the prevalence of AI technologies, it is possible that cybercriminals are using or will use AI-powered phishing attacks to trick players into sharing their login credentials, personal information or financial details.

Malicious actors also can develop AI-powered cheat programs or hacking tools that give players unfair game advantages, bypass security measures, manipulate in-game mechanics or exploit vulnerabilities.

This industry is not the only target of cyber attacks. In April 2018, hackers used an AI-controlled botnet to attack TaskRabbit, an online marketplace for freelance labor. The attack targeted the website’s servers with a distributed denial-of-service (DDoS) technique.

The personal information of approximately 3.75 million users, including their Social Security numbers and bank account details, was compromised. The severity of the attack led to the temporary shutdown of the website until security measures could be reinstated. During this period, the breach affected an additional 141 million users.

In 2019, the popular social media platform Instagram experienced two cyber attacks. In August, numerous users discovered that their account details had been altered by hackers, denying them access to their profiles. Then, in November, a flaw in Instagram’s code resulted in a data breach. It exposed users’ passwords in the URL of their web browsers.

While Instagram has not provided extensive information regarding the hacks, there have been speculations that hackers might be utilizing AI systems to analyze Instagram user data for potential weaknesses.

Cybercriminals also have been utilizing AI voice technology to create fake audio clips that mimic a person’s voice, leading to identity theft, fraudulent phone calls and phishing emails. In March 2019, an unnamed CEO became the first reported victim of this fraud when he was scammed out of €220,000 by an AI-powered deepfake of his boss’s voice.

The Economic Times recently reported that a work-from-home scam targeted people with false job opportunities. Using AI, the scammers contact victims through missed calls on platforms like WhatsApp and pose as HR personnel from reputable Indian companies. They offer easy tasks and attractive earnings, requiring victims to click on YouTube video links, like the videos and send screenshots.

Initially, victims receive a small reward to build trust. The scammers then convince them to deposit larger sums with promises of higher returns, ultimately stealing their money.

Role of regulations in mitigating AI and ML cyber threats

Regulations play a crucial role in mitigating AI and ML cyber threats, especially in light of the increasing use of AI in cyber attacks. They set rules and standards for users, organizations and AI systems, creating boundaries that define what is legally and ethically acceptable when using AI and ML technologies. They also promote responsible and secure practices while holding those involved accountable for their actions.

To ensure the safety of AI systems and protect fundamental rights, the European Union is working on a new law, the EU Artificial Intelligence (AI) Act, which is expected to be finalized in the second half of 2023. It will have a transitional period of up to 36 months before it becomes fully effective.

The Act will apply primarily to providers and users of AI systems. It introduces regulations for different categories of AI systems, including prohibited, high-risk, general-purpose, limited-risk, and non-high-risk systems.

Companies that create high-risk AI systems will have specific responsibilities, such as conducting impact assessments, implementing risk management plans, and reporting serious issues. The users of these systems will also be required to assign human oversight and report any significant incidents.

The UK has no comprehensive law comparable to the EU AI Act. In March 2023, the UK released a White Paper outlining its proposed strategy for AI regulation. The White Paper was open for consultation until June 21, 2023.

Unlike the EU Act, the UK’s approach is described as “pro-innovation.” Rather than introducing new AI legislation, the White Paper suggests implementing a principles-based framework that regulators in all sectors can adopt. This framework aims to offer flexibility in regulating AI while promoting innovation.

The future of AI

While AI and ML offer significant benefits to cybersecurity from a detection and prevention standpoint, their development also brings drawbacks and challenges, including the risk that these technologies will be used irresponsibly and unethically. This ultimately puts companies at risk.

C8 Secure is dedicated to assisting the industry in addressing the challenges posed by AI cyberattacks. It offers essential tools and expertise to create a secure and reliable environment.

Through a comprehensive understanding of the ever-changing realm of AI cyberattacks and the tactics employed by cybercriminals, we can anticipate future threats and develop resilient safeguards.

With C8 Secure, you can confidently move forward, assured that your operations are protected against the risks posed by AI-driven cyber threats.

