
South Korean AI Firms Bolster Safeguards Against Misuse by Minors


OpenAI’s ChatGPT (Image courtesy of Pexels/CCL)

SEOUL, Aug. 26 (Korea Bizwire) – As concerns grow over the potential misuse of conversational AI services for inappropriate sexual discussions with minors, South Korean tech companies are implementing new measures to protect young users.

WRTN Technologies, creator of the popular Character Chat service, recently introduced an automatic content filtering system. The company announced on August 22 that while it supports users’ freedom to engage with AI-generated content, it recognizes the need to shield adolescents from potentially sensitive character interactions.

“We sympathize with the idea that users should be able to freely use generative AI content, but we also believe that young people should be protected from characters that could be sensitive to them,” WRTN stated in their announcement.

The newly implemented Safety Filter detects and blocks characters deemed inappropriate for minors. WRTN’s criteria for unsuitable content include explicit sexual material, violent content, depictions of drug abuse, gambling promotion, and excessive use of profanity.

Specifically, sexual content encompasses explicit descriptions of sexual acts, detailed depictions of sexual conversations, obscene sexual expressions, and provocative scenes intended to elicit sexual arousal. Violent content includes cruel scenes and acts causing excessive pain, as well as detailed descriptions that may cause discomfort to users.

“While the Safety Filter isn’t perfect, and there may be inappropriate characters created without sensitive content warnings, we actively sanction such cases,” a WRTN spokesperson stated, encouraging users to report concerning content. 

WRTN’s service, launched in January 2023, boasts 3.7 million cumulative users across South Korea and Japan, with over 2 million monthly active users as of June.

Another domestic startup, Scatter Lab, is enhancing protective measures on its AI story platform Zeta. The company employs abuse detection models and keyword filtering systems to block inappropriate conversations between AI and young users.

These technical measures are complemented by AI ethics guidelines, operational policies, and constant monitoring. 

The abuse detection model uses AI to identify and block inappropriate utterances, situations, and contexts, while the keyword filtering system detects and prevents conversations containing unsuitable keywords.
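
Conceptually, the keyword layer works as a blocklist lookup applied to every turn of a conversation. The sketch below is a minimal, hypothetical illustration of that idea only; the category names, placeholder terms, and the violates_policy helper are invented for this example and do not reflect Scatter Lab’s actual lists or code.

```python
import re

# Hypothetical per-category blocklist; a real service would maintain far
# larger, curated keyword sets (these placeholder terms are illustrative).
BLOCKED_KEYWORDS = {
    "sexual": ["explicit_term_a", "explicit_term_b"],
    "violence": ["graphic_term_a"],
    "drugs": ["drug_term_a"],
}

def violates_policy(message: str) -> str | None:
    """Return the violated category name, or None if the message passes."""
    normalized = message.lower()
    for category, keywords in BLOCKED_KEYWORDS.items():
        for keyword in keywords:
            # Word-boundary match so benign words that merely contain a
            # blocked substring are not flagged.
            if re.search(rf"\b{re.escape(keyword)}\b", normalized):
                return category
    return None

# A service would run this check on both user and AI messages, blocking
# the turn (or escalating to further review) on a hit.
if (category := violates_policy("example user message")) is not None:
    print(f"Blocked: {category}")
```

In practice a keyword filter like this serves as a fast first pass; ambiguous phrasing that slips past it is what the learned abuse detection model described above is meant to catch.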

This filtering system is also applied during character creation, automatically rejecting prompts or image registrations containing sexually explicit, violent, or hateful content.

“We’re implementing a three-strikes policy and building an emergency response system,” a Scatter Lab representative said. “We’re committed to strengthening ethical measures and addressing technical limitations.”

Zeta, introduced in early April, had attracted 600,000 users and generated 650,000 characters as of early August.

The surge in safeguards comes amid recent controversies surrounding the exploitation of AI chatbots. Incidents of users engaging in sexually explicit conversations with AI have surfaced on popular online forums, echoing a 2020 scandal involving Scatter Lab’s earlier AI chatbot, Iruda.

Kevin Lee (kevinlee@koreabizwire.com) 
