'Cybercriminals don't care about artificial intelligence watermark'


Tech companies have expressed their intention to develop a watermark for artificial intelligence (AI). This watermark would allow people to see that a video was produced by AI. The intention is good, but until there is easily deployable, real-time technology to detect deepfakes and warn people, employers should educate their employees about the dangers of deepfakes. A healthy sense of distrust is currently the best protection.

Of course, the intention of tech companies to develop such a watermark is good. The problem lies mainly in enforcement: how do you compel a malicious group to use it? Cybercriminals who use generative AI to make attacks such as phishing more effective rarely abide by laws and regulations, and they will just as readily ignore any technology that undermines their own operations.

At the same time, the technology to produce deepfakes is becoming increasingly sophisticated and accessible. This is true for video, but certainly also for audio. That is why I expect significant growth this year in Business Email Compromise (BEC) attacks in which a cybercriminal uses AI-generated voice messages. In this type of attack, an employee receives what appears to be a voice message from their director, urging them to send confidential information as quickly as possible.

Technology to detect and stop deepfakes in real time is not yet mature. Therefore, employers should continue to educate their employees on the dangers of BEC and deepfakes. By doing so, they instill a healthy sense of mistrust, which reduces the chance that a BEC attack succeeds.

© Dutch Tech On Heels - 2024