
SITH: Systems • Identity • Trust

Human Factors

AI Deepfakes: When Trust Becomes the Attack Surface

  • Writer: Rich Greene
  • 6 days ago
  • 3 min read

Artificial intelligence now allows anyone to mimic a voice or face with just a few samples from voicemails, social media posts, or Zoom calls. This technology does not create flawless illusions but instead creates brief moments of urgency that attackers exploit. These moments often last only seconds—just enough time to trick someone into wiring money, sharing credentials, or granting access. The danger lies not in perfect deception but in breaking the usual pace of trust and decision-making.


How AI Deepfakes Exploit Trust


For most of human history, voice and face were reliable ways to identify someone. Hearing a familiar voice or seeing a known face meant you could trust the person. AI has changed that. Now, these signals can be copied and faked with surprising accuracy. Attackers do not need to fool you forever. They only need a short window where your instincts override your verification process.


Imagine receiving a call from your CEO asking for an urgent wire transfer. The voice sounds right, the tone is familiar, and the request feels pressing. You act quickly because the situation demands it. This is exactly what attackers count on. They create pressure to break your usual checks and balances.


Victims often feel shame after falling for these tricks. They blame themselves for trusting too easily. But the technology is designed to exploit the fact that trust used to be automatic. This shift means organizations and individuals must rethink how they verify identity.


Building Friction to Slow Down Attacks


The first defense against deepfake attacks is to slow down decision-making. Adding friction to processes involving money, access, or sensitive information can break the attacker’s momentum. Even a short pause of thirty seconds to breathe and ask one good question can stop the attack.


Here are practical steps to build friction:


  • Move verification out of the original communication channel. If you get a request by phone, hang up and call back using a known number. If it’s a video call, confirm the request by email or text.

  • Ask unexpected questions. Attackers prepare for common queries but may stumble on details only the real person would know.

  • Use multi-step approvals. Require two people to authorize money transfers or access changes.

  • Set clear policies. Make sure employees know they should never bypass controls, even if the request seems urgent or comes from a senior executive.


These steps create guardrails that protect everyone. When attackers impersonate colleagues, policies help employees point to rules instead of negotiating under pressure.
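As an illustration, the multi-step approval rule above can be sketched in a few lines of Python. Everything here is a hypothetical example, not any particular product's API: the function name, the threshold, and the "two approvers" policy are assumptions chosen to mirror the list above.

```python
# Minimal sketch of a two-person ("dual control") approval rule.
# All names and the threshold are illustrative assumptions, not a real API.

HIGH_RISK_THRESHOLD = 10_000  # transfers at or above this need two approvers


def transfer_allowed(amount: float, approvers: set[str], requester: str) -> bool:
    """Allow a transfer only when enough *distinct* people, none of them
    the requester, have signed off."""
    independent = approvers - {requester}  # the requester cannot self-approve
    required = 2 if amount >= HIGH_RISK_THRESHOLD else 1
    return len(independent) >= required


# An urgent "CEO" request approved only by the person who took the call
# stays blocked until a second, independent approver signs off.
print(transfer_allowed(220_000, {"alice"}, "alice"))         # False (blocked)
print(transfer_allowed(220_000, {"bob", "carol"}, "alice"))  # True (allowed)
```

The point of the sketch is that the rule is mechanical: no amount of urgency in the caller's voice can substitute for the second approver.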



Changing the Culture Around Trust


Trust still matters but cannot stand alone as proof of identity. Organizations must shift from passive belief to active verification. This change removes the stigma of asking for a double-check or second opinion. It is not about doubting colleagues but about protecting everyone from sophisticated attacks.


Training employees to recognize the signs of deepfake attacks is essential. These signs include:


  • Requests that create unusual urgency

  • Changes in communication style or tone

  • New or unexpected contact details

  • Pressure to bypass normal procedures


Encourage a culture where employees feel safe reporting suspicious requests without fear of blame. The deeper wound from these attacks is often shame, but open communication can reduce that.
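To make the warning signs above concrete, here is a toy triage sketch. The flag names and the "escalate at two or more signs" rule are invented for illustration; a real policy would be tuned to the organization.

```python
# Toy triage sketch for the warning signs listed above.
# The flags and the "escalate at 2+" rule are illustrative assumptions,
# not a vetted detection policy.

RED_FLAGS = {
    "unusual_urgency":     "Request creates unusual urgency",
    "style_change":        "Change in communication style or tone",
    "new_contact_details": "New or unexpected contact details",
    "bypass_pressure":     "Pressure to bypass normal procedures",
}


def should_escalate(observed: set[str]) -> bool:
    """Escalate to out-of-band verification when two or more signs appear."""
    score = len(observed & RED_FLAGS.keys())
    return score >= 2


print(should_escalate({"unusual_urgency", "bypass_pressure"}))  # True
print(should_escalate({"style_change"}))                        # False
```

Even as a checklist on paper rather than code, counting signs this way gives employees a blame-free script: "two flags, so I verified out of band," not "I doubted my boss."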


Real-World Examples of Deepfake Attacks


Several high-profile cases show how attackers use AI deepfakes to exploit trust:


  • In 2019, a UK energy firm transferred €220,000 after a CEO’s voice was mimicked using AI. The attacker used a few minutes of recorded speech to create a convincing call.

  • Criminals have impersonated relatives in distress, calling family members to ask for urgent money transfers.

  • Attackers have used fake video calls to extract login credentials by appearing trustworthy just long enough to gain access.


These examples highlight that attackers do not need perfect fakes. They need just enough to break the tempo and create urgency.


Preparing for a Future with AI Deepfakes


As AI technology improves, deepfake attacks will become more common and harder to detect. Organizations and individuals must prepare by:


  • Implementing strong verification processes that do not rely solely on voice or video.

  • Using technology tools that detect synthetic media or flag unusual requests.

  • Educating teams regularly about new attack methods and how to respond.

  • Encouraging a mindset that values verification over blind trust.


The goal is to make it harder for attackers to succeed by breaking their rhythm and denying them the urgency they depend on.



