…and what implications this has for cybersecurity
The year 2024 looks set to be crucial in the evolution of artificial intelligence (AI), especially in the field of social engineering. As AI's capabilities advance rapidly, so do the opportunities for cybercriminals to exploit these advances for more sophisticated and potentially damaging social engineering attacks. Stu Sjouwerman, CEO of KnowBe4, gives his top 10 expected AI developments in 2024, including their potential implications for cybersecurity.
Exponential growth in AI reasoning and capabilities
The pace of innovation in AI is hard for many to grasp. AI's reasoning capabilities are expected to grow enormously, possibly surpassing human performance in certain areas. This could lead to more persuasive and adaptable AI-driven social engineering, as cybercriminals exploit this reasoning capacity to craft attacks that adjust to their targets.
Multimodal Large Language Models (MLLMs)
MLLMs, which can process and understand different types of data, will take centre stage in 2024. These models can be used to create highly convincing and contextually relevant phishing messages or fake profiles on social media, significantly increasing the effectiveness of social engineering attacks.
Text-to-video (T2V) technology
Advances in T2V technology mean that AI-generated videos are becoming the norm for disinformation and deepfakes. This could have significant implications for fake news and propaganda, especially in the context of elections, or for real-time Business Email Compromise attacks, also known as CEO fraud.
Revolution in AI-driven learning
AI's ability to identify gaps in knowledge and tailor learning to the individual promises to transform education and training. The same capability, however, can be turned to manipulation: tailored disinformation campaigns that target people based on their learning patterns or perceived weaknesses.
Challenges for AI regulation
Governments' attempts to regulate AI and prevent its most serious risks will be a major focus. However, the speed of AI innovation is likely to outpace regulatory efforts, leaving a window in which advanced AI technologies will inevitably be used for social engineering attacks.
The AI investment bubble and startup failures
The rise in AI venture capital indicates a booming market, but if AI startups fail as their business models become obsolete, they could leave behind a range of advanced but unmaintained and unsecured AI tools that could also be deployed for malicious use.
AI-generated disinformation in elections
With major elections looming all over the world (at least 40!), the threat of AI-generated disinformation campaigns is greater than ever. These sophisticated AI tools are already being used to influence public opinion or sow political unrest.
AI technology available to inexperienced cybercriminals
As AI becomes more accessible and cost-effective, the likelihood of advanced AI tools falling into the wrong hands increases. This could lead to an increase in AI-driven social engineering attacks by less technically savvy cybercriminals.
Enhanced AI hardware capabilities
Jensen Huang, co-founder and chief executive of Nvidia, told the DealBook Summit in late November that "there are a lot of things we can't do yet". However, advances in AI hardware, such as Neural Processing Units (NPUs), will lead to faster and more sophisticated AI models. This could enable real-time, adaptive social engineering tactics, making scams more convincing and harder to detect.
AI in cybersecurity and the arms race
While advances in AI provide new tools for cybercriminals, they also enable more effective AI-driven security systems. This is fuelling an escalating arms race between cybersecurity measures and attackers' tactics, in which real-time AI monitoring for AI-driven social engineering attacks could become a reality.
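To make the defensive side of this arms race concrete, the sketch below shows one simple form such an AI-driven control can take: a text classifier that scores incoming messages for phishing likelihood. This is a minimal illustration only, assuming Python with scikit-learn; the training messages, labels and the 0.5 alert threshold are hypothetical toy values, not a production design.

```python
# A minimal sketch of AI-assisted phishing detection, assuming scikit-learn
# is installed. All training data and thresholds below are hypothetical toy
# values; real systems train on large, curated corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hand-labelled examples: 1 = phishing, 0 = legitimate.
messages = [
    "Urgent: CEO needs you to buy gift cards immediately",
    "Your account is locked, verify your password here",
    "Invoice overdue, wire payment to the new account today",
    "Reminder: team meeting moved to 3pm tomorrow",
    "Here are the slides from last week's review",
    "Lunch order is in, see you at noon",
]
labels = [1, 1, 1, 0, 0, 0]

# Word n-grams turned into TF-IDF features, scored by logistic regression.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(messages, labels)

incoming = "URGENT - the CEO asked me to get gift cards, call me now"
phishing_probability = model.predict_proba([incoming])[0][1]

# Hypothetical policy: flag anything above 0.5 for human review.
if phishing_probability > 0.5:
    print(f"Flag for review (score={phishing_probability:.2f})")
```

Real deployments would combine such message-level scores with sender reputation, URL analysis and behavioural signals rather than relying on message text alone.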
As a result, 2024 will be a pivotal year in the development of AI, with far-reaching implications for social engineering. It is crucial for cybersecurity professionals to stay ahead of these trends, build a strong security culture, adapt their strategies and continue to educate their staff through frequent security awareness and compliance training.