Consumers View AI-Generated Content Negatively in News, Study Finds


The study found that while AI-generated content is generally accepted in entertainment media such as films and dramas, its use in news is met with dissatisfaction, particularly when it involves photos and videos. (Yonhap)

SEOUL, Dec. 16 (Korea Bizwire) – As generative AI tools increasingly produce images, videos, and text, a recent study reveals that consumers perceive AI usage negatively in information-driven content such as news.

The findings, published in the latest issue of Management Information Systems Research by a team led by Professor Jung Yoon-hyuk of Korea University, highlight concerns over the authenticity and reliability of AI-generated content.

Key Findings

The study, based on an online survey of 71 participants conducted in April, compared consumer reactions to AI-generated content across two categories:

  1. Practical Content: News and informational material.
  2. Pleasure-Oriented Content: Entertainment such as films and dramas.

Participants were asked to evaluate AI’s role in creating text, photos, and videos. The results showed:

  • AI in News: Respondents were dissatisfied with AI-generated videos used in news. Reactions to AI-generated photos in news were split between dissatisfaction and neutrality. AI-written articles, however, elicited no strong negative responses.
  • AI in Entertainment: For films and dramas, AI-generated content was met with largely neutral reactions, neither favorable nor unfavorable.

Implications for Trust and Authenticity

The research suggests that consumers place higher importance on the authenticity and credibility of content creators in practical contexts like news, where objective facts are essential.

Concerns about “hallucination effects”—instances where AI produces inaccurate or misleading content—amplify skepticism in such contexts.

Furthermore, dissatisfaction was more pronounced when AI use was not disclosed in practical content, underscoring the need for transparency.

Recommendations

The study calls for clearer AI disclosure mechanisms, such as visible watermarks or labels, particularly for practical content. “Making AI usage easily identifiable could reduce consumer dissatisfaction and enhance trust,” the researchers noted.

As generative AI continues to reshape content creation, these findings highlight the critical need for transparency and safeguards to address consumer concerns about authenticity and reliability in AI-generated material.

Ashley Song (ashley@koreabizwire.com) 
