
Free Speech vs. Safety: South Korean Platforms Step Up Moderation Amid Rising Threats (Image generated by ChatGPT)
SEOUL, June 23 (Korea Bizwire) — As online platforms in South Korea face growing scrutiny over the spread of extremist speech and criminal activity, major tech firms are stepping up content moderation policies—even amid concerns over free speech and algorithmic overreach.
Naver’s streaming platform, Chzzk, recently restricted access to a broadcaster’s channel after the streamer made provocative remarks about then-candidate and now-President Lee Jae-myung during the presidential election earlier this month.
The streamer, who was live at the time, appeared to encourage viewers to carry out an assassination as Lee’s victory became imminent. Naver initially issued a temporary suspension but escalated the penalty following public backlash.
Kakao, South Korea’s other major tech giant, rolled out a stricter policy on June 16 allowing for permanent account suspensions in cases involving terror threats, violent extremism, and the sexual exploitation of minors.
The move aligns with legislative shifts that have gained momentum since the infamous “Nth Room” case, which exposed how digital platforms were used to distribute illegal sexual content.
Under current laws—revised versions of the Telecommunications Business Act and the Information and Communications Network Act—platforms in South Korea are now required to take proactive technical and administrative measures to prevent the dissemination of digital sex crimes and illegal materials.

At Siji Middle School in Suseong District, Daegu, a school police officer (SPO) conducts a crime prevention session for students on deepfake sexual exploitation. (Yonhap)
International platforms are following suit. Google strengthened its search policies in 2023, banning content that depicts terrorist groups committing violence for political or religious purposes.
Meta, the parent company of Facebook and Instagram, outlined guidelines permitting the removal of content promoting acts such as contract killings, assassinations, or instructions for weapon use aimed at causing harm.
But implementing these standards poses challenges. Danggeun Market, a local community platform, had to respond to a spate of car thefts disguised as free car-washing offers. While the company blocks suspicious listings and monitors for similar schemes, it refrains from banning all posts containing the word “car wash” to avoid punishing legitimate users.
Meta’s automated enforcement has also drawn criticism. In recent months, a wave of Instagram account suspensions—some involving users who simply posted baby photos—highlighted the pitfalls of AI-powered moderation systems.
Though Meta Korea acknowledged that over-enforcement can occur and promised account restoration where appropriate, some users have reported unresolved suspensions weeks after filing complaints.

Companies are enforcing stricter content moderation policies, despite ongoing controversy over potential infringements on freedom of expression, because of the growing number of cases in which platforms have become breeding grounds for criminal activity. (Yonhap)
The increasing stringency of moderation policies has reignited debate over the balance between safety and speech online. Legal experts argue that platforms must be more transparent and consistent in their enforcement, particularly when users’ rights may be at stake.
“The tension between removing illegal or harmful content and protecting freedom of expression is real,” said Lee Sung-yeop, director of Korea University’s Tech Law and Policy Center. “Platform companies need to clearly define the scope of their enforcement and make terms of use accessible so users can properly challenge decisions when necessary.”
As tech companies face rising pressure to police harmful content without stifling legitimate voices, their ability to maintain public trust—and legal compliance—will remain a defining challenge in the evolving digital age.
M. H. Lee (mhlee@koreabizwire.com)