Creating convincing deepfakes just became terrifyingly accessible.
Linus Tech Tips conducted comprehensive testing revealing that high-quality AI-generated video is now 100 times easier to create than just a few years ago, requiring only commodity hardware and minimal source footage.
The investigation concludes that this accessibility directly enables the projected $1 trillion in global online fraud losses for 2024, with the creator stating “many people cannot tell a deepfake anymore.” The primary detection protocol relies on analyzing shadow inconsistencies and vanishing-point errors, but that window is rapidly closing.
Key Takeaways
• Deepfake creation is 100x easier than a few years ago.
High-quality deepfake and fully AI-generated video is now achievable with commodity hardware and minimal source material. The barrier to entry has collapsed, making sophisticated fraud accessible to anyone with basic technical skills.
• Global online fraud losses will exceed $1 trillion in 2024.
AI-generated content is making fraud attempts rapidly more convincing. The combination of accessibility and quality has created what experts consider an extreme danger for a general public unable to distinguish real from fake.
• Detect fakes by analyzing shadows and vanishing points.
The primary non-technical detection protocol exploits current AI models’ failure to fully grasp physics governing light in simulated 3D space. Shadows from a light source often fail to converge properly, and perspective lines don’t align to vanishing points correctly.
• Always call back at a number you already have.
For any urgent request for money or information, the most critical safety protocol is telling the requester “I’m going to call you back at the number that I already have for you” and confirming their identity before acting.
• Creators bypass AI restrictions using AI itself.
To evade content moderation guardrails, the team used cloud AI like Claude to generate “AI-friendly prompts” that successfully bypassed safety restrictions, demonstrating how current content filters can be systematically defeated.
What They Said
How Easy Is Deepfake Creation Now?
Deepfake creation has reached a critical accessibility point, requiring only small, easily scraped datasets. Generating a convincing (though spliced) video now requires training an open-source model like DeepFaceLab on just 7,000 recent images of the target’s face.
For fully generated videos, which are easier to scale, the initial protocol requires only start and end keyframes, which can be scraped from existing video or Facebook photos. Because lip sync is difficult to get perfect, successful final products rely on punchy editing built from short 5-to-7-second shots.
What’s the Protocol for Detecting AI-Generated Video?
Detection relies on exploiting current AI’s fundamental misunderstanding of physics. Professor Hany Farid’s protocol for spotting fakes involves analyzing a video’s shadows and vanishing points.
Since AI creates a 2D image attempting to simulate 3D space, it struggles with the laws of physics that govern light. Shadows cast from a light source often fail to converge properly, and lines of perspective fail to converge toward the vanishing point.
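The geometric idea behind this check can be sketched in a few lines. In a real photograph, image lines that are parallel in 3D space (building edges, floor tiles) all intersect at a single vanishing point, and shadow lines from a point light source converge the same way. A minimal, hypothetical sketch (not from the video): mark a few such line segments by hand, intersect every pair, and measure how tightly the intersections cluster. A large spread suggests the perspective is internally inconsistent.

```python
import itertools
import math

def line_from_segment(p1, p2):
    """Homogeneous coefficients (a, b, c) of the line ax + by + c = 0
    through image points p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    return (a, b, c)

def intersect(l1, l2):
    """Intersection point of two lines; None if (near-)parallel."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    w = a1 * b2 - a2 * b1
    if abs(w) < 1e-9:
        return None
    return ((b1 * c2 - b2 * c1) / w, (a2 * c1 - a1 * c2) / w)

def vanishing_point_spread(segments):
    """Pairwise-intersect lines that should share one vanishing point.
    Returns the mean intersection and the RMS distance of all pairwise
    intersections from that mean; a large spread is a red flag."""
    lines = [line_from_segment(p1, p2) for p1, p2 in segments]
    pts = [p for l1, l2 in itertools.combinations(lines, 2)
           if (p := intersect(l1, l2)) is not None]
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    spread = math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                           for x, y in pts) / len(pts))
    return (cx, cy), spread
```

The same convergence test applies to shadow edges: extend each object-to-shadow line and check that they meet near one point (the light source). This is only an illustration of the principle; real forensic tools do this over many automatically detected lines with robust statistics.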
The warning: this is a rapidly closing gap. Linus notes these physics quirks could be ironed out in coming months rather than years, making this detection method obsolete soon.
How Do Scammers Bypass AI Content Restrictions?
The high demand for generating deceptive media drove the team to develop a protocol to bypass model guardrails. When video generation prompts were flagged as unsafe, the team used a cloud AI to help create more “AI-friendly prompts” to successfully bypass the restrictions.
The generation process is highly costly and inefficient, with the team reporting that roughly five video clips were thrown away for every one deemed usable. Even newer models like Google’s Veo 3 have stricter content guidelines, sometimes forcing creators to fall back on older, less capable models.
What’s the Safety Protocol for Suspicious Requests?
The most critical safety protocol for any urgent request for money or information is simple but effective: “I’m going to call you back at the number that I already have for you.”
This protocol confirms identity through a trusted communication channel rather than responding to the immediate request. The verification step prevents fraud by ensuring you’re communicating with the actual person, not an AI-generated impersonator.
Why Is This $1 Trillion Threat Getting Worse?
Global losses from online scams and fraud in 2024 are estimated to exceed $1 trillion, with AI-generated content making fraud attempts rapidly more convincing. The combination of dramatically lowered technical barriers and improved output quality creates what Linus calls an “extreme danger for the general public.”
Many people can no longer reliably distinguish deepfakes from authentic content, particularly when viewing short social media clips or receiving urgent messages from seemingly familiar sources.
Linus Tech Tips’ core warning: “The scariest part is how easy it was to generate that clip.”
About the Creator
Linus Tech Tips is hosted by Linus Sebastian, covering consumer technology, PC hardware, and emerging tech trends. The channel combines product reviews with investigative journalism examining the broader implications of technology. Visit linustechtips.com
Watch the full episode: AI Deepfakes and the $1 Trillion Fraud Threat | Linus Tech Tips
via Linus Tech Tips YouTube