Cybersixgill, a global cyber threat intelligence data provider, has released its latest State of the Cybercrime Underground report, which highlights the impact of artificial intelligence on the cyber threat landscape. The report analyzed data collected from the clear, deep, and dark web in 2022, comparing it with trends and data from previous years.
The report delves into several key topics, including the rise of AI developments and their impact on the barriers to entry to cybercrime, trends in credit card fraud, the evolution of initial access broker (IAB) markets, the rise of cybercriminal “as-a-service” activities, and cryptocurrency observations.
“Cybercrime is rapidly evolving, with new opportunities and obstacles in the cyber threat landscape impacting threat actors’ tactics, tools, and procedures. In response, organizations can no longer rely on outdated technologies and manual processes to defend against increasingly sophisticated attacks,” said Delilah Schwartz, Security Strategist at Cybersixgill. “Proactive attack surface management[…] is now of paramount importance and will be a critical cyber defense weapon in the months and years to come.”
AI: Cyber threat, or cyber target?
According to the report, AI technology such as ChatGPT enables threat actors to quickly write malicious code and perform other preparatory activities, lowering the barrier to entry into cybercrime. The report found that AI is playing a significant role in the cyber threat landscape, allowing cybercriminals to operate at a scale and speed that were previously impossible.
David Warshavski, VP of Enterprise Security at Sygnia, noted that AI is as much a cybersecurity threat as it is a target for attacks.
“While it’s tempting to discuss the potential of artificial intelligence to carry out cyberattacks, the real concern lies in the AI attack surface itself. The foundations of tomorrow’s technology are now exposed to a wide range of threats, many of which we are still struggling to understand and defend against. The growing presence of AI and machine learning (ML) systems in the digital realm presents significant challenges for organizations and governments alike,” Warshavski said.
“Adversarial attacks, data poisoning, and model extraction are just a few of the numerous threats looming over AI systems. These vulnerabilities can jeopardize AI-driven innovation and inadvertently contribute to the perpetuation of societal biases and inequalities. Now more than ever, the focus must shift from AI’s offensive capabilities to the defensive strategies needed to secure its foundations,” he said.
“Collaboration between academia, the private sector, and the security community is crucial to identify, assess, and mitigate the vulnerabilities plaguing AI systems. Investing in AI-specific security research will ensure that the expanding attack surface is met with sophisticated defenses,” Warshavski concluded. “In this age of rapid digital transformation, we must prioritize the protection of AI technology, recognizing that the promise of AI-driven progress depends on our ability to stay ahead of the constantly evolving threats it faces.”