Fake or doctored images appear all over the internet and in our phone and computer apps these days. Some are harmless and entertaining. See how you will look as you age! How would you look with bunny ears and whiskers, or maybe claws and fangs? Unfortunately, criminals are increasingly using generative artificial intelligence to mimic the voices, photographs, and videos of real people, creating "deepfakes" with intent to harm, defraud, and scam.
A deepfake is a realistic but false image, video, or voice recording generated by deep learning, the foundational technology that enables generative artificial intelligence. Deepfakes can be used to swap faces in videos, manipulate facial expressions, or create synthetic audio.
Deep learning involves neural networks with multiple layers, and it excels at learning complex patterns from large datasets. Deep learning models can generate convincing new content by learning from existing examples. This deep learning and deepfake phenomenon now requires us to consider, more seriously than ever before, the extent to which we can really believe what we see and hear.
The FBI's Internet Crime Complaint Center tallied 859,532 complaints of internet crime in 2024, resulting in $16.6 billion in financial losses. The number of AI-generated deepfakes increased fourfold from 2023 to 2024, accounting for 7% of all fraud in 2024. Although deepfakes are becoming more common, phishing, extortion, and personal data breaches remain the most common modes of attack on individuals.
Damage from deepfakes isn't always monetary. Deepfakes can be used politically to make people think a politician or a government official said or did something they didn't. Deepfakes have been used to harass and destroy the reputations of individuals.
Deepfake Audio
Not long ago, a Colorado woman received a frantic phone call from someone who sounded just like her daughter. An alleged abductor got on the line and demanded money in return for her daughter's release. The woman immediately wired $2,000 to the caller but soon discovered that her daughter had been safe at home the whole time.
Here are some key points to consider in identifying potential deepfake voices/phone calls.
- Listen for choppy sentences and unusual or varying tone and inflection. Is the delivery monotone, at an odd pitch, or lacking emotion?
- Consider the context and phrasing of the message. Would the speaker being impersonated word a message that way?
- Is the context of the message relevant to recent conversations? Can the caller answer related questions?
- Listen for contextual clues – are background sounds consistent with the speaker's presumed location?
- Be wary when presented with unusual or unexpected requests. Do the stories being presented stand up to scrutiny?
Ways to respond to and protect yourself from deepfake voices/phone calls:
Deepfake phone scams work because scammers create a sense of urgency and panic and don't give their targets time to think logically. They often demand money or make threats if you attempt to contact anyone during the call. Don't panic – fear triggers the fight-or-flight response, which makes it difficult to keep a clear head. Hang up on the caller or tell them you will call them right back. If the caller purports to be from a government agency or a business, don't call back using the phone number they give you – Google the organization to find its actual phone number. If a person you know is being mimicked, call or text that person directly.
Create a family code word. Family members should create a unique but easy-to-remember code word that can be used if anyone receives a suspicious call.
Protect yourself by blocking unknown callers. Don't believe caller ID. Scammers can spoof numbers to make them appear to be from a loved one or from local area codes. Legitimate callers will leave messages.
Lock down social media accounts, keeping your accounts and posts private. Be very cautious about posting photos of your children or grandchildren on social media accounts.
Deepfake Videos and Photos
One of the most common applications of deepfakes is creating fake videos or photographs of celebrities, politicians, or other public figures. These videos and photos can be used to spread misinformation, damage reputations, or create discord. Because they are highly realistic, deepfakes can be a challenge to detect. Here are a few techniques you can use to spot video and photo deepfakes.
- Pay attention to the face. Deepfake manipulations are almost always facial transformations.
- Look for blurring in the face but not elsewhere in the image or video, and changes in skin tone near the edge of the face.
- Look at facial expressions or movements that may be unnatural or inconsistent given the context of the image. Notice unnatural eye blinking or a lack of blinking altogether.
- Analyze the consistency of lip movements with speech. Deepfakes often struggle to accurately synchronize the two.
- Look for incongruities, such as a person with too many fingers, double chins or eyebrows, or a nonsensical layout for a building. Look for box-like shapes and cropped effects around the mouth, eyes, and neck.
- Notice changes in the background and/or lighting. Is the background scene consistent with the foreground and subject?
Artificial intelligence is becoming more sophisticated every day, making deepfakes harder and harder to spot. Unfortunately, bad actors are using increasingly effective tools to scam and defraud, or to create discord with misinformation. How can we consume information and navigate the minefield of social media while keeping ourselves safe and not succumbing to paralysis?
The SIFT Method
Mike Caulfield, a digital literacy expert at Washington State University, condensed key fact-checking strategies into a short list of moves you can use to quickly decide whether a source is accurate and worthy of your attention. He calls it the SIFT method, and it is a great strategy to adopt whenever you encounter information on which you will base future action.
S – Stop
When you initially encounter a source of information – stop. Before you act on a strong emotional response to a headline or other information, stop! Do you know and trust the author, publisher, publication, or website? If not, use fact-checking to verify the reliability of the message.
I – Investigate the source
Find out the expertise and agenda of the person who created the source. You can start with a quick and shallow investigation. Fact-checking laterally across many websites rather than digging deep (vertically) into the source you are evaluating can be an effective way to start.
F – Find better coverage
Look for other information or coverage of the same claim. Rather than relying on the source you initially found, look for another, perhaps higher-quality, source. Use fact-checking sites.
T – Trace
Trace claims, quotes, and media to the original context so you can get a sense of whether the version you saw was accurately presented. Reconstruct the necessary context to read, view, or listen to digital content effectively.
Summary
Currently, there is no federal law governing deepfakes, although creation of deepfakes is illegal in some states (Arizona is not one of them). Deepfakes can be harmless, but they can also be used to defraud people or businesses. They can be used to misinform, invade privacy, or harm reputations.
There are steps you can take to spot and respond to deepfakes, but sometimes, you need some professional help. R&A's IT audit and advisory team can provide insight into how to protect yourself or your organization from harmful deepfakes. At R&A, our goal is to help you get the information you need to make the best decisions to protect yourself and your business. Let us know if you have questions or how we can help.