
Are DeepFakes something to Worry About?

Deepfakes are computer-generated images and footage of real people, i.e. images or video produced by a program (or algorithm). FireEye has a paper that discusses this phenomenon:

https://www.fireeye.com/blog/threat-research/2020/08/repurposing-neural-networks-to-generate-synthetic-media-for-information-operations.html?

Instead of talking theory about what happens once the cat is out of the bag, let's look at some good examples:

https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402

“Criminals used artificial intelligence-based software to impersonate a chief executive’s voice and demand a fraudulent transfer of €220,000 ($243,000) in March in what cybercrime experts described as an unusual case of artificial intelligence being used in hacking.”

So here is an example of using technology to create or mimic a specific voice (the CEO's) and demand a transfer of $243k.

Or this example:

https://www.infoq.com/articles/ai-cyber-attacks/

Beyond actual cyberattacks, the tests being run in this space have been eye-opening and will create more chaos. The actual attacks are not obvious at this point.

In this article https://securityboulevard.com/2020/01/deepfakes-pose-new-security-challenges/

“Deepfake video or text can be weaponized to enhance information warfare. Freely available video of public comments can be used to train a machine-learning model that can develop a deepfake video depicting one person’s words coming out of another’s mouth,” Steve Grobman, McAfee’s Chief Technology Officer wrote. “Attackers can now create automated, targeted content to increase the probability that an individual or groups fall for a campaign. In this way, AI and machine learning can be combined to create massive chaos.”

The programmer Timothy Lee, testing how far he could take this technology, replaced Mr. Zuckerberg (of Facebook) with a Star Trek character.

It is in this testing space where the ideas of mayhem are germinating.

What should we make of all these new attack possibilities?

It would be a crime not to tell you the opposite end of this story:

“Close but Not Quite” is a topic in a SecurityBoulevard.com post that claims deepfakes may one day be dangerous and should only be paid attention to at that point.

This sort of misses the point of the deepfakes phenomenon.

Remember my post on Spy vs Spy?

Cat and mouse, red vs. blue, hackers vs. defense – your IT department.

 

There will always be a struggle between the attacker and defender.

And remember: the defender has to do EVERYTHING right, whereas the attacker only has to find one flaw and catch the defender off guard. Once in, the attacker can look around until they are ready to steal data or execute another moneymaking adventure like installing ransomware.

Even if deepfakes are not quite ready yet, it is only a matter of time before someone figures out a way to make them work for the criminal hacker. We have to be ready for it even if we know it is not being used in attacks now.

An example of an enterprising individual creating AI with speech recognition and synthesis:

Pieces of the program by Michael Phi (from his video):

1. Wake word detection

2. Speech recognition

3. Speech synthesis

4. Natural language understanding
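The four pieces above can be sketched as a simple control flow. This is a hypothetical toy, not Michael Phi's actual program: each stage is stubbed (a real system would use an ASR model for recognition and an audio engine for synthesis), and the wake word "jarvis" and the intents are invented for illustration.

```python
WAKE_WORD = "jarvis"  # assumed wake word, for illustration only


def recognize_speech(audio: bytes) -> str:
    """Stage 2: speech recognition -- stubbed; pretend the audio decodes to text."""
    return audio.decode("utf-8")  # placeholder for a real ASR model


def detect_wake_word(transcript: str) -> bool:
    """Stage 1: wake word detection -- listen for the trigger phrase."""
    return WAKE_WORD in transcript.lower()


def understand(text: str) -> str:
    """Stage 4: natural language understanding -- toy keyword intent matcher."""
    if "time" in text:
        return "tell_time"
    if "weather" in text:
        return "tell_weather"
    return "unknown"


def synthesize_speech(reply: str) -> str:
    """Stage 3: speech synthesis -- stubbed; a real system would emit audio."""
    return f"[spoken] {reply}"


def assistant(audio: bytes) -> str:
    """Wire the four stages together: transcribe, gate on wake word, act, speak."""
    text = recognize_speech(audio)
    if not detect_wake_word(text):
        return ""  # stay silent until the wake word is heard
    intent = understand(text)
    return synthesize_speech(f"intent={intent}")


print(assistant(b"jarvis what time is it"))  # -> [spoken] intent=tell_time
print(assistant(b"what time is it"))         # no wake word -> empty string
```

The point of the sketch is how little glue code the pipeline needs once the stages exist, which is exactly why a single motivated programmer can get this far.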

 

So what deepfakes are doing is creating a more believable #3 (speech synthesis). And if a single programmer can do this much work with modest effort and resources, a criminal network will be able to do more.

Still, I get the critics' point: creating a believable voice and video is yet another level of sophistication. We just have to keep an eye on this technology.

I was watching the RSA Conference 2020 Asia presentation by Alyssa Miller about deepfakes. Most interesting (to me) was the “Outside trading” scenario, where some wily character creates a deepfake video of, for example, Elon Musk talking about Tesla. The video claims a problem for Tesla (like a product defect), causing the stock to drop. At this point the bad guy buys some stock. What happens next is that people figure out it was a fake video, Elon Musk debunks it, etc., and the stock goes back up. You guessed it: now the bad guy sells the stock.
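To make the mechanics of that scenario concrete, here is a toy calculation. All prices and share counts are invented for illustration; they are not from the presentation.

```python
# Toy numbers, purely illustrative.
normal_price = 800.00     # price before the fake video circulates
panic_price = 680.00      # price after the deepfake spooks the market
recovered_price = 795.00  # price after the video is debunked
shares = 1_000

cost = shares * panic_price          # attacker buys during the panic
proceeds = shares * recovered_price  # sells after the debunking rally
profit = proceeds - cost
print(f"profit: ${profit:,.2f}")     # -> profit: $115,000.00
```

The attacker never needs the stock to behave abnormally in their favor; the deepfake manufactures the dip, and the market's own correction supplies the rally.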

 

She goes on to discuss today's examples from her own testing of deepfakes. In fact there are a couple of apps that create deepfakes (more as a joke), and interestingly, most of those apps have malware built in. Only one app seems to be ‘good’: Faceswap.

There is a lot more to the presentation, such as ways to seed pictures and videos that would make deepfakes difficult to create. I also liked her explanation of misinformation, which requires a long post in itself.

 

Contact me to discuss your defense environment.

 
