SEOUL, July 22 (Korea Bizwire) — A viral video shows Seoul’s Gyeongbokgung Palace submerged in water after torrential rain, with people in yellow raincoats bailing out water — and, in a surreal twist, a seal swimming across the flooded courtyard. It looks uncannily real, but the scene is pure fiction, created by artificial intelligence.
As heavy rains hit South Korea this month, a wave of hyper-realistic AI-generated videos flooded platforms like YouTube, blending visual realism with absurd scenarios. Searching “monsoon” or “flood” alongside “Veo3” — Google’s latest text-to-video AI tool — now yields hundreds of similar clips, prompting widespread concern over the blurring line between fact and fabrication.
Since its release in May, Google’s Veo3 has democratized AI video creation, offering everyday users high-resolution video generation with synchronized audio via subscription. The platform has already been used to produce over 40 million videos globally, averaging 600,000 new clips daily.
The surge of AI-generated content is transforming industries from broadcasting to advertising. Major network MBC recently used AI to recreate historical moments, like the theft of the Mona Lisa and the first spacewalk, on its show Surprise.
While applauded for innovation, the move has sparked backlash over the potential loss of production jobs, as AI replaces actors, crews, and post-production staff.
More troubling, however, are the misinformation risks. Several broadcasters mistakenly aired AI-generated footage — such as a sparrow attacking invasive bugs — as real news. A fabricated image of a female environmental activist insulting “lovebugs” during a protest went viral before being debunked as AI-generated.
The darker side of the technology is also growing. Cases of deepfake-related crimes, including romance scams and voice phishing, have skyrocketed in Korea. Police reports tied to deepfakes increased more than sixfold from 156 cases in 2021 to 964 last year, with no signs of slowing down in 2025.
“The level of video manipulation in recent crimes is far beyond what we saw even a few years ago,” a senior police official noted. Experts warn that this unchecked spread of AI-generated video could erode public trust and destabilize social cohesion.
Legal safeguards are lagging. South Korea’s AI Basic Law, set to take effect in January 2026, mandates watermarking for AI-generated content, but critics argue that such marks are easily removed or ignored. Some researchers suggest embedding identification technology directly into AI models as a more effective and less burdensome solution.
Still, others caution against overregulation. “Excessive restrictions could stifle the development of domestic AI innovation,” one industry analyst warned.
As South Korea embraces the creative potential of AI, it also faces the urgent challenge of balancing technological freedom with responsibility — before fiction permanently overtakes fact.
Kevin Lee (kevinlee@koreabizwire.com)