Kakao Unveils Multimodal Large Language Model Honeybee


An image of Kakao Corp.'s multimodal large language model, Honeybee, provided by Kakao Brain Corp. (Image courtesy of Yonhap)

SEOUL, Jan. 19 (Korea Bizwire) – South Korean tech giant Kakao Corp. said Friday it has developed a multimodal large language model (MLLM) named Honeybee in a bid to expand its presence in the artificial intelligence market.

During an AI strategy meeting hosted by the Ministry of Science and ICT, Kakao’s CEO nominee Chung Shin-a revealed that her company has completed the development of Honeybee.

This upgraded large language model goes beyond conventional text understanding by incorporating vision and image comprehension capabilities.

Built on an MLLM foundation, Honeybee can understand images and text simultaneously, allowing it to answer inquiries involving mixed image and text content, according to Kakao.

Kakao said it has shared Honeybee and its inference code on GitHub, an online software development platform and open-source community, to facilitate the widespread advancement of MLLMs globally.

(Yonhap)
