A new craze has taken hold on TikTok, where creators share AI-generated videos inspired by events such as the Black Plague or the Titanic disaster. The clips often pair haunting imagery with dramatic captions.
One prominent account, @timetravellerpov, features clips labelled “POV: You Wake Up in 1351 During the Black Plague” or “POV: You’re a Coal Miner in 1900.” These short scenes have drawn millions of views.
Some posts use voice-overs and eerie backgrounds to mimic daily struggles in past centuries. Fingers sometimes appear warped or duplicated, and faces can look distorted. Many people find them to be novelty entertainment.
What Attracts Viewers To This Trend?
Many users say these short clips offer a playful twist on history. They step into old settings without needing any time machine, thanks to imaginative AI techniques.
Content creators often pick major events to boost drama and pull audiences in. Plague outbreaks, volcanic eruptions, and nuclear incidents appear frequently in these staged stories.
Some fans see them as a bit of dark humour, while others question the line between entertainment and respect for historical tragedies. A few have raised points about misrepresentation when it comes to serious events.
How Do People Separate Real From Fake?
Viewers watch for minor errors in the videos, such as extra limbs or wavering backgrounds. Mouth movements sometimes fail to match the voice track, and angles can appear unnatural.
Slowing down playback can expose flickers or odd transitions. Some individuals also notice that characters may have mismatched eyes or out-of-sync speech.
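For those who want to go beyond eyeballing, here is a minimal Python sketch using the OpenCV library that steps through a clip and saves periodic stills, making flickers and odd transitions easier to compare side by side. The filename and sampling interval are placeholder assumptions:

```python
# Minimal sketch: step through a clip frame by frame with OpenCV,
# saving stills so flickers and odd transitions are easier to spot.
# "clip.mp4" and the 5-frame interval are placeholders.
import cv2

cap = cv2.VideoCapture("clip.mp4")
frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video
    # Save one still every 5 frames for side-by-side inspection.
    if frame_index % 5 == 0:
        cv2.imwrite(f"frame_{frame_index:05d}.png", frame)
    frame_index += 1
cap.release()
```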
Creators often label their content as AI-driven entertainment, though disclaimers are sometimes missing. Clear announcements can help people avoid confusion about what is factual and what is fiction.
We’ve also asked experts to share their tips on how to actually tell the real from the AI-generated…
Our Experts:
- Jamie Krenn, Adjunct Associate Professor, Sarah Lawrence College and Teachers College, Columbia University
- Michael Sumner, CEO, ScoreDetect
- Yevhenii Tymoshenko, CMO, Skylum
- Dima Osmichenko, Head of Operations, IT Monks
- Iqbal Ahmad, Founder & CEO, Britannia School of Academics
- Peter Lewis, Founder & CEO, Strategic Pete
Jamie Krenn, Adjunct Associate Professor, Sarah Lawrence College and Teachers College, Columbia University
“AI-generated deepfakes are messing with the way we process information, and from a cognitive science and developmental psychology perspective, that’s a huge deal.
“Our brains are wired to trust what we see and hear—it’s an evolutionary shortcut that helps us process information around us, but deepfakes take advantage of this by making realistic fake media that’s hard to detect. We rely on what we see and hear to make sense of the world, drawing on past and prior experience to interpret the information in front of us, and we do this so quickly that things can be missed.
“Cognitive science shows that mental shortcuts help us make sense of information quickly, which means if something looks legit, we tend to believe it. This is even more challenging for younger people, whose brains are still developing critical thinking, executive functioning and impulse control. They’re more likely to take deepfakes at face value, which can alter their understanding of reality and make them easy targets for misinformation.
“The problem is also that education and digital literacy are not keeping up with this technology. Digital literacy programmes—whether in schools or for adults—aren’t always widespread or available.
“Most of us aren’t trained to question the possibility of hyper-realistic media, and that makes it easy for false information to spread unchecked. The more deepfakes evolve, the harder they’ll be to spot, and without better education in media literacy and critical thinking, we’re left defenseless.
“We need courses or wider reaching information that blend cognitive science, policy, digital literacy and practical skills, teaching people how their brains process information and how to apply that knowledge when evaluating content. It’s not about making people paranoid—it’s about giving them the tools to navigate an online world where seeing isn’t always believing.”
Michael Sumner, CEO, ScoreDetect
“There are multiple ways to identify deepfakes and misinformation. First, I want to call out that it’s disheartening that we live in an era where we need to do so. But as with two sides of the same coin, AI can be used for good or for bad.
“One of the crucial steps is to understand the context in which the media is being presented. Is it a credible source? Is the information being presented in a transparent and accountable manner? By being mindful of these factors, individuals can begin to build a foundation for identifying potential red flags.
“Whilst working with Fortune 100 companies, I’ve found that inconsistencies, anomalies, or simple “hunches” (content appearing too perfect or lacking the usual imperfections we see) can be indicative of AI generation. But in five years’ time, we might reach a point where it becomes more difficult.
“So, another way to figure this out is to prove “Content Authenticity” for the digital asset. When was it created? What photo/video/textual metadata, according to Schema.org (or other standards), does it carry? Can we be certain that the metadata, over several revisions, stays true? By allowing metadata to tell a story over time, we can definitively prove this. Furthermore, we need to establish this proof in a public, independent, decentralised manner, which is something we’re doing ourselves.”
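As a rough illustration of that hash-and-timestamp idea (a generic sketch, not ScoreDetect’s actual pipeline), the following Python snippet fingerprints a file and appends the record to a local log, so later revisions can be checked against the trail. The filenames are placeholders:

```python
# Illustrative sketch only: fingerprint a file with SHA-256 and log
# it with a timestamp, so later revisions can be compared against
# the recorded trail. "asset.jpg" is a placeholder filename.
import hashlib
import json
import time

def fingerprint(path: str) -> dict:
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    return {
        "file": path,
        "sha256": sha256.hexdigest(),
        "recorded_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }

record = fingerprint("asset.jpg")
with open("provenance_log.jsonl", "a") as log:
    log.write(json.dumps(record) + "\n")
print(record)
```

If the hash of a later copy no longer matches the logged one, the file has been altered since it was recorded; a public, decentralised ledger of such records is the stronger version of this idea that Sumner describes.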
Yevhenii Tymoshenko, CMO, Skylum
“The biggest challenge now is that AI has gotten insanely good at mimicking reality. We are not just talking about weirdly smooth skin or strange lighting anymore. Deepfakes and AI-generated images can pass as real even to a trained eye. But there are still ways to tell.
“First, context is everything. If an image or video seems too shocking, too perfect, or oddly convenient for a specific narrative, start questioning it. Second, look at the details AI still struggles with. Hands, reflections, text, and shadows often give it away. AI is getting better, but glitches still happen. Fingers can be off, shadows do not always behave naturally, and text in the background can read as gibberish or look warped.
“Metadata can help too. Many AI-generated images do not carry standard camera metadata, though some tools try to fake it. Reverse image searches and forensic tools can sometimes catch manipulations.
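To make that metadata check concrete, here is a minimal sketch using the Pillow library. Bear in mind that social platforms often strip metadata from legitimate uploads too, so an empty result is a hint, not proof; the filename is a placeholder:

```python
# Minimal sketch: dump an image's EXIF tags with Pillow. A real
# camera photo usually carries Make/Model/DateTime tags; many
# AI-generated images carry none. "image.jpg" is a placeholder.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("image.jpg")
exif = img.getexif()
if not exif:
    # Absence is only a weak signal: platforms strip EXIF on upload.
    print("No EXIF metadata found - one possible red flag.")
for tag_id, value in exif.items():
    print(f"{TAGS.get(tag_id, tag_id)}: {value}")
```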
“For videos, some services can help verify authenticity. Deepware Scanner analyzes video URLs for deepfake traces, while Intel’s FakeCatcher detects AI-generated content by analyzing blood flow patterns in human faces. Reality Defender provides real-time detection, which is particularly useful in video calls, helping to prevent scams and impersonations. Researchers at Columbia Engineering have also developed DIVID, a tool specifically designed to identify AI-generated videos with a high accuracy rate.
“The real issue is not just spotting AI. It is trust. As deepfakes get better, even real media will be questioned. That is where we are headed. A world where skepticism becomes the norm. The best defense is to stay sharp, ask questions, and never take things at face value.”
Dima Osmichenko, Head of Operations, IT Monks
“Spotting AI-generated content in 2025 is getting trickier, but it’s still possible if you know what to look for. Faces in deepfake videos often have weird inconsistencies. Blinking can be unnatural. Lighting doesn’t always match up. Sometimes there’s a strange blur around the edges, especially if the person moves fast.
“AI-generated images still struggle with fine details like extra fingers or warped backgrounds. Text is another story. AI-written content often lacks a real human “voice” or overuses certain phrases. The best giveaway is cross-checking sources. If something seems off, reverse-search the image, run a frame through a deepfake detector, or compare it with reliable media.
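One way to act on that “compare it with reliable media” advice is a perceptual hash, sketched below with the third-party imagehash library (pip install imagehash). Perceptual hashes survive resizing and re-encoding, so a small distance suggests the suspect image derives from the reference; the filenames and the threshold of 8 are illustrative assumptions:

```python
# Rough sketch: compare a suspect image against a trusted reference
# with a perceptual hash. Low Hamming distance means the images are
# visually near-identical despite resizes or re-encodes.
# "suspect.jpg" and "reference.jpg" are placeholders.
from PIL import Image
import imagehash

suspect = imagehash.phash(Image.open("suspect.jpg"))
reference = imagehash.phash(Image.open("reference.jpg"))

distance = suspect - reference  # Hamming distance between hashes
print(f"Hamming distance: {distance}")
if distance <= 8:  # illustrative threshold, tune for your use case
    print("Likely the same underlying image.")
else:
    print("Substantially different images.")
```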
“We’re at a point where skepticism isn’t paranoia. It’s just smart. Misinformation spreads fast and AI tools are only getting better. But so are we. The more people understand what to watch out for, the harder it’ll be to pull the wool over our eyes.”
Iqbal Ahmad, Founder & CEO, Britannia School of Academics
“When it comes to detecting AI, there is always a pattern that looks unnatural, whether it’s text or multimedia. According to Sensity AI, approximately 90-95% of deepfake videos since 2018 have been based primarily on non-consensual pornography. That shows how badly it is impacting people’s lives. If you encounter any such disturbing videos, look for the clues that will lead you to a conclusion about whether it’s real or AI-generated.”
For Videos or Images
“As discussed earlier, AI always follows a pattern, and we need to detect those patterns to identify the originality of a given piece of media. For instance, look for weird facial expressions, especially around the teeth, hair and eyes, that cannot be justified. Look for unnatural lighting that doesn’t fit the environment. Closely monitor the person’s movements, as you will notice an unnatural quality to them.”
For Audios
“For audio, look for unnatural speech patterns, weird pauses or an odd tone. Listen closely to the breathing patterns and see whether they sync with the speech. When it comes to AI detection in video and audio, one can usually identify the irregularities that don’t fit the pattern.”
For Content
“There are many tools that can detect whether content is AI-generated. You may come across AI content on an almost daily basis now, and to check whether something is AI-generated you can opt for popular tools such as GPTZero or Copyleaks. Or copy the content and paste it into ChatGPT, Gemini, etc., with a prompt asking whether the content is AI-generated. Cross-check the information, as most GPTs provide outdated information. Doing so will help you understand the originality of the content.”
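The detectors Ahmad names are the practical route; purely as an illustration of one signal such tools weigh, here is a rough Python sketch measuring “burstiness” (variation in sentence length), which tends to be higher in human writing. It is a weak heuristic, not a detector, and the filename is a placeholder:

```python
# Very rough heuristic only - not a substitute for tools such as
# GPTZero or Copyleaks. One signal detectors consider is
# "burstiness": humans vary sentence length more than AI tends to.
import re
import statistics

def sentence_length_stats(text: str) -> tuple[float, float]:
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0, 0.0  # too little text to say anything
    return statistics.mean(lengths), statistics.stdev(lengths)

mean_len, spread = sentence_length_stats(open("sample.txt").read())
print(f"Average sentence length: {mean_len:.1f} words")
print(f"Spread (std dev): {spread:.1f} - low spread is one weak hint of AI text")
```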
Peter Lewis, Founder & CEO, Strategic Pete
“After spending years producing documentaries and hosting TV shows, I’ve trained my eyes to catch the smallest details (bad lighting, awkward cuts, or a script that… just doesn’t feel right).
“Now, with deepfakes and artificial intelligence taking over the internet, that same instinct is helping me separate what’s real from what a non-human has cut and pasted.
1) Look at the eyes. They don’t lie (as much).
“AI still can’t get eye movement right. In genuine videos, people’s eyes flit around naturally, following the conversation and reacting to stimuli.
“Deepfakes have this creepy “dead-eye” effect – too stiff or unnaturally out-of-focus, like a badly executed CGI character from some old video game.
“If you get the sense that someone’s looking through the lens rather than into it, that’s a red flag.”
2) Lighting.
“Human faces react to light in ways that AI currently can’t. A real person on video will have coherent lighting – shadows shift naturally as they move or turn. Deepfakes often mess this up: reflections don’t match, shadows somehow vanish, or the light source is wrong.”
3) Use AI to catch AI.
“Intel has a real-time Deepfake Detector that searches for micro-expressions and sub-dermal blood flow (because AI isn’t yet capable of replicating actual blood flow).
“Adobe and Google are releasing content authentication tools that insert invisible watermarks into media, which makes AI-generated content easier to detect. If you’re serious about catching fakes, start using these tools rather than assuming your instinct is enough.”
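On the Adobe side, the underlying standard is C2PA “Content Credentials”. Here is a hedged sketch that shells out to the open-source c2patool CLI to look for a manifest; it assumes c2patool is installed and on the PATH, the filename is a placeholder, and the exact exit-code behaviour may vary by version:

```python
# Sketch: check a file for a C2PA "Content Credentials" manifest
# using the open-source c2patool CLI (assumed installed on PATH).
# "photo.jpg" is a placeholder filename.
import subprocess

result = subprocess.run(
    ["c2patool", "photo.jpg"],
    capture_output=True, text=True,
)
if result.returncode == 0:
    print("Content Credentials manifest found:")
    print(result.stdout)
else:
    # No manifest is not proof of a fake - just no provenance record.
    print("No manifest found.")
```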
4) Metadata snitching.
“Every digital file contains “metadata” – time stamps, device info, even location info. Genuine images and videos carry a natural digital signature.
“AI media often has its metadata deleted or tampered with, leaving gaps or inconsistencies. If a video claims to have been recorded at a specific time and place but lacks fundamental file information, or is “too clean” when loaded into editing software, be wary.”
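As a concrete way to do that metadata snitching, the sketch below calls the ExifTool CLI (assumed installed) and flags missing basics; the filename and the fields checked are illustrative assumptions:

```python
# Sketch: pull a video's metadata with the ExifTool CLI and flag
# missing basics. A clip claiming a time and place but lacking a
# creation date or device info deserves a second look.
# "clip.mp4" and the fields checked are placeholders.
import json
import subprocess

raw = subprocess.run(
    ["exiftool", "-json", "clip.mp4"],
    capture_output=True, text=True, check=True,
).stdout
meta = json.loads(raw)[0]  # exiftool emits a list, one dict per file

for field in ("CreateDate", "Make", "Model", "GPSPosition"):
    print(f"{field}: {meta.get(field, 'MISSING')}")
```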
5) Governments are clamping down (use that to your advantage).
“China has already passed laws requiring AI-created content to carry recognisable watermarks. The EU’s Digital Services Act is driving labelling norms. Even in the United States, large technology corporations are introducing deepfake detection to avoid legal issues.
“If you’re not sure if a picture or video is real, look for a label. In the near future, companies will be forced to label AI-created content. If there is no label, it doesn’t mean it’s real, but the lack of one where you would expect to find one could be a clue.
“AI deepfakes are just going to be more advanced. But humans have one thing on their side: intuition. Combine that with the right tools, and you start to see the digital ghosts floating in the daylight.”