SEOUL, Nov. 9 (Korea Bizwire) — Twelve Labs, a South Korean generative artificial intelligence (AI) startup, unveiled its video language foundation model Thursday in a bid to take the lead in the hyperscale large language model market.
The model, named Pegasus-1, can accurately summarize long videos into text and chat with users about the video content, delivering 61 percent higher performance than the most advanced existing video language model, according to Twelve Labs.
The company said Pegasus-1 has been trained on over 1 billion image-text pairs and 35 million video-text pairs, the latter representing about 10 percent of the 300 million diverse video-text pairs the company has collected.
The Korean company said Pegasus-1 is ready for immediate commercial use, particularly in sports, media, entertainment, education and physical security, among other areas.
Twelve Labs, founded in 2021, attracted a combined US$10 million investment last month from U.S. chip design company Nvidia Corp., U.S. chipmaker Intel Corp. and two other investors.
(Yonhap)