
Comparison Experiment of Visual Responses Between Artificial Intelligence and the Human Brain. (Image courtesy of IBS)
DAEJEON, April 23 (Korea Bizwire) — South Korean researchers have developed a breakthrough artificial intelligence model that selectively processes visual information much like the human brain, significantly enhancing image recognition capabilities.
Announced on April 22, 2025, by the Institute for Basic Science (IBS), the innovation comes from a joint effort between the IBS Center for Cognition and Sociality, led by Director Chang-Joon Lee, and a research team from Yonsei University’s Department of Applied Statistics, headed by Professor Kyungwoo Song.
The research aims to address the limitations of conventional image-processing AI models. Traditional convolutional neural networks (CNNs), while computationally efficient, analyze images using small square filters that often fail to grasp broader visual context.
More advanced models like Vision Transformers compensate for this but require vast amounts of computing power and large datasets, making them impractical for many applications.
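To make that trade-off concrete, here is a minimal PyTorch sketch (illustrative only, not code from the study): a standard convolution slides a small square filter across the image, so each output value “sees” only a tiny neighborhood, and widening the filter to capture more context inflates the parameter count roughly quadratically.

```python
# Minimal, self-contained PyTorch sketch (not from the study).
import torch
import torch.nn as nn

image = torch.randn(1, 3, 32, 32)  # a batch of one 32x32 RGB image

# A conventional CNN layer: each output pixel sees only a 3x3 patch.
small_conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
# Widening the filter captures more context but costs many more weights.
wide_conv = nn.Conv2d(3, 16, kernel_size=7, padding=3)

print(small_conv(image).shape)                          # torch.Size([1, 16, 32, 32])
print(sum(p.numel() for p in small_conv.parameters()))  # 448  (3*3*3*16 + 16)
print(sum(p.numel() for p in wide_conv.parameters()))   # 2368 (7*7*3*16 + 16)
```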

To bridge this gap, the research team drew inspiration from the human visual cortex, which processes information selectively — prioritizing salient or meaningful features rather than treating all input equally.
The result is a novel technique dubbed “Lp-Convolution,” which mimics the brain’s ability to focus on essential elements while de-emphasizing peripheral or less relevant data. Using a weighting mechanism known as a “mask filter,” the model selectively emphasizes key visual regions, improving recognition accuracy without increasing computational load.
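The article does not spell out the mask’s formula, but a plausible way to picture it is as a smooth, center-weighted envelope laid over a large convolution kernel. The PyTorch sketch below is a hypothetical illustration of that idea; the lp_mask helper, its p and sigma parameters, and the fixed (rather than learned) mask are assumptions made for exposition, not the authors’ published implementation.

```python
# Hypothetical sketch of a "mask filter" in the Lp-Convolution spirit;
# the exact formulation in the IBS/Yonsei paper may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

def lp_mask(kernel_size: int, p: float, sigma: float) -> torch.Tensor:
    """Envelope exp(-(|x/sigma|^p + |y/sigma|^p)): p=2 gives a Gaussian-like
    focus on the center; larger p flattens toward a uniform square filter."""
    half = (kernel_size - 1) / 2
    coords = torch.arange(kernel_size, dtype=torch.float32) - half
    y, x = torch.meshgrid(coords, coords, indexing="ij")
    return torch.exp(-((x.abs() / sigma) ** p + (y.abs() / sigma) ** p))

class MaskedConv2d(nn.Module):
    """Large-kernel convolution whose weights are reweighted elementwise by an
    Lp-shaped mask, emphasizing central regions and softly suppressing the
    periphery (assumed fixed here; the study's mask may be learnable)."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 7,
                 p: float = 2.0, sigma: float = 2.0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        self.register_buffer("mask", lp_mask(kernel_size, p, sigma))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reweighting the kernel adds no extra convolutions, so the cost
        # matches a plain large-kernel layer.
        return F.conv2d(x, self.conv.weight * self.mask, self.conv.bias,
                        padding=self.conv.padding)

layer = MaskedConv2d(3, 16)
print(layer(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 16, 32, 32])
```

Because the mask only rescales weights the convolution already applies, this construction is consistent with the article’s claim that recognition can improve without additional computational load.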
Models incorporating Lp-Convolution outperformed conventional CNNs in image classification tasks and maintained high performance even when filter sizes were expanded to capture wider visual contexts, an expansion that typically degrades accuracy in standard CNNs.
The findings will be presented at ICLR 2025 (International Conference on Learning Representations), one of the world’s premier AI conferences, to be held April 24–28 in Singapore.
The study represents a major advance in neuroscience-inspired AI, offering promising applications in fields ranging from autonomous vehicles to medical imaging, where efficient and accurate visual interpretation is critical.
Kevin Lee (kevinlee@koreabizwire.com)