SEOUL, Jan. 13 (Korea Bizwire) – Today’s chatbots are smarter, more responsive and more useful to businesses across sectors, and the artificial intelligence-powered tools are constantly evolving, even becoming companions to people.
Emotional chatbots capable of having natural conversations with humans are nothing new among English speakers, but a new controversy over a South Korean startup’s AI chatbot has raised ethical questions over its learning algorithms and data collection process.
Scatter Lab’s AI chatbot, Lee Luda, became an instant hit among young South Koreans for its ability to chat like a real person on Facebook Messenger, attracting more than 750,000 users since its debut on Dec. 23.
But the chatbot, whose persona was a 20-year-old female college student, was temporarily taken offline on Monday, 20 days after launch, amid criticism of its discriminatory and offensive language against sexual minorities and people with disabilities.
Some male users were even able to manipulate the bot into engaging in sexual conversations.
Both the chatbot’s meteoric rise and its fall were rooted in its deep learning algorithms, which were trained on data collected from 10 billion conversations on KakaoTalk, the nation’s No. 1 messenger app.
Scatter Lab said it retrieved data from its Science of Love app launched in 2016, which analyzes the degree of affection between partners based on actual KakaoTalk messages.
Luda learned conversation patterns from mostly young couples to sound natural, sometimes even too real by using popular social media acronyms and internet slang, but it was spotted using verbally abusive and sexually explicit comments in conversations with some users.
A messenger chat captured by one user showed that Luda said she “really hates” lesbians and sees them as “disgusting.”
Luda is reminiscent of Microsoft’s Tay, an AI Twitter bot that was silenced within 16 hours in 2016 after posting inflammatory and offensive tweets.
Scatter Lab apologized over Luda’s discriminatory remarks against minorities, promising to upgrade the service to prevent the chatbot from using hate speech.
“We will bring you back the service after having an upgrade period during which we will focus on fixing the weaknesses and improving the service,” Scatter Lab CEO Kim Jong-yun said in a statement on Monday.
The Luda case stirred debate over whether the company is responsible for failing to filter discriminatory and inflammatory remarks in advance, or whether the blame lies with the users who abused the chatbot.
Lee Jae-woong, the former CEO of ride-sharing app Socar, said the company should have taken preventive measures against hate speech before introducing the service to the public.
“Rather than users who exploited the AI chatbot, the responsibility lies with the company that provided a service failing to meet the social consensus,” Lee wrote on his Facebook page.
“The company should complement its biased training data to block hateful and discriminatory messages.”
Beyond the controversy over the chatbot’s language, the company has also come under fire for using its users’ personal information without proper consent and for not doing enough to protect it.
Some users claimed that real names and bank names surfaced in conversations with Luda, raising suspicions of a personal information leak.
Some users of Science of Love said they will push for a class action suit against the company for using their sensitive data without notifying them it would be used to develop the female AI chatbot.
A furious app user on Tuesday posted a petition on the website of the presidential office Cheong Wa Dae, calling for Scatter Lab to discard all personal data stored in its system and terminate the service.
“Scatter Lab used app users’ data without any notice and prior consent to take it from its platform to start its AI chatbot business and didn’t properly protect personal data,” the petitioner wrote on Tuesday.
In response to growing complaints, South Korea’s Personal Information Protection Commission and the Korea Internet & Security Agency said they will investigate whether Scatter Lab violated any personal information protection laws.
The company apologized over the matter, saying that it has tried to adhere to guidelines on the use of personal information but failed to “sufficiently communicate” with its users.
Scatter Lab said its filtering algorithms erased real names from the training data but failed to catch all of them depending on the context. The company said the data used to train Luda had been de-identified, with sensitive personal information such as names, phone numbers and addresses removed.
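Scatter Lab has not disclosed how its filtering worked, but the kind of rule-based redaction it describes — stripping phone numbers and other obviously formatted identifiers from chat logs before training — can be sketched roughly as follows. The patterns and function names here are illustrative assumptions, not the company’s actual code, and the sketch also shows the approach’s known weakness: identifiers that only context reveals, such as a bare name mid-sentence, slip through.

```python
import re

# Illustrative patterns for obviously formatted Korean PII (assumed, not
# Scatter Lab's actual rules): mobile phone numbers, resident registration
# numbers, and e-mail addresses.
PII_PATTERNS = {
    "PHONE": re.compile(r"\b01[016789]-?\d{3,4}-?\d{4}\b"),
    "RRN": re.compile(r"\b\d{6}-?[1-4]\d{6}\b"),   # resident registration number
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(message: str) -> str:
    """Replace each matched PII span with a placeholder tag."""
    for tag, pattern in PII_PATTERNS.items():
        message = pattern.sub(f"<{tag}>", message)
    return message

print(redact("Call me at 010-1234-5678 or mail kim@example.com"))
# -> Call me at <PHONE> or mail <EMAIL>
```

A filter like this catches fixed-format identifiers but does nothing for a free-text name or address, which is consistent with the failure mode reported in Luda’s case.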
Experts say Scatter Lab’s AI platform presents challenges for the protection of personal data, the key to developing deep learning algorithms.
“Scatter Lab obtained comprehensive consent from users to use personal information for marketing and advertising, but didn’t get consent to use third parties’ personal information, which could constitute an invasion of privacy,” said Kim Borami, a lawyer at Seoul-based Dike Law Firm.
Some IT industry officials expressed worries over potential moves to regulate AI development and data collection, which could hamper innovative efforts by budding developers.
Kakao Games CEO Namgung Hoon said Luda itself is not guilty of embodying the young generation’s prejudices and is just one of many AI characters that will come to market in the future.
“I think society’s rare full attention on AI needs to be directed in positive ways,” Namgung wrote on his Facebook page. “I worry that the government may bring in irrelevant regulations on the fledgling AI industry, stifling innovation once again.”
Socar’s Lee said he hopes Luda’s case spurs relevant public discourse to come up with measures to prevent AI platforms from spreading humans’ prejudices and improve the quality of AI services.
“I hope the (Luda controversy) could provide an opportunity for (the company) to consider its social responsibility and ethics when providing AI services and take a second look at several issues,” Lee said.