An unknown individual apparently impersonated US Secretary of State Marco Rubio with convincingly realistic audio and text messages on the messaging service Signal. The aim of the campaign was to influence politicians in the USA and abroad. It is an alarming example of the growing threat posed by deepfakes.
Impersonation via Signal: Cyberattack on political circles
An as yet unidentified attacker is said to have impersonated US Secretary of State Marco Rubio using deepfake technology. The attacker apparently sent manipulated messages in Rubio’s name via the messaging service Signal – both text and audio. The recipients included not only US decision-makers but also foreign ministers of other countries.
Deepfakes: Deceptively real – and dangerous
What sounded like science fiction just a few years ago is now grim everyday reality: with the help of artificial intelligence, voices and even entire personas can be convincingly imitated. For cybercriminals, this is a powerful tool for gaining trust, spreading disinformation or manipulating political processes.
Security expert warns of growing risk
Adam Marrè, CISO of Arctic Wolf and former FBI agent, shares his thoughts on deepfakes and assesses their risks to society.
“The current reporting is worrying: a previously unknown threat actor has allegedly used artificial intelligence to impersonate US Secretary of State Marco Rubio via a Signal account and contacted high-ranking US government officials and foreign officials. This scenario illustrates that we live in a world in which we often no longer know what is true when scrolling through online content.
There are now numerous examples worldwide of AI being used to impersonate politicians, business leaders and celebrities. This incident underlines once again that we must remain vigilant at all times. Last year alone, we saw attempts to influence elections with AI-generated content and to spread disinformation.
And it is becoming increasingly difficult to determine what is authentic on our screens and in our inboxes. Security experts have long warned of the risks of AI-generated content and its potential to influence public opinion. These risks are particularly worrying in the context of landmark elections around the world. Politically motivated threat actors and hacktivists have a clear agenda and are increasingly using AI tools to create and distribute fake content such as videos, images and audio recordings. And society is alarmingly ill-prepared for this!
Structured processes are therefore needed to verify every communication; relying on instinct alone is no longer enough. We need to be able to reliably check whether a message actually comes from its stated source. What’s more, the public debate needs to take a new direction: platforms that host and distribute such content must be held more accountable. They must take on a central role in verifying, labeling and removing false content.
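The article does not prescribe a specific verification mechanism. As one minimal sketch of what "reliably checking the stated source" can mean in practice, the snippet below uses an HMAC tag computed over a pre-shared secret: only someone holding the secret (exchanged over a trusted, out-of-band channel) can produce a tag that verifies. The key, function names and messages here are hypothetical illustrations, not part of any real system mentioned in the article; real deployments would typically use public-key signatures instead.

```python
import hashlib
import hmac

# Hypothetical pre-shared secret, exchanged beforehand over a trusted channel.
SHARED_SECRET = b"example-secret-exchanged-out-of-band"

def sign(message: bytes) -> str:
    """Sender attaches this HMAC-SHA256 tag to the message."""
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Recipient recomputes the tag; compare_digest avoids timing leaks."""
    return hmac.compare_digest(sign(message), tag)

msg = b"Please call me back on this number."
tag = sign(msg)
print(verify(msg, tag))              # authentic message verifies
print(verify(b"altered text", tag))  # altered or forged message fails
```

The point of the sketch is the workflow, not the primitive: authenticity is established by a cryptographic check against material the impostor cannot possess, rather than by how convincing the voice or writing style sounds.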
It is high time to treat AI-based disinformation as a serious societal risk and establish better safeguards, legislation and real consequences to prosecute those responsible for spreading lies and thus destabilizing society.”
(vp/Arctic Wolf)