Scientists Develop Brain-Inspired AI That Outperforms LLMs in Reasoning

A new study introduces a brain-inspired AI, the Recursive Cortical Network (RCN), which significantly outperforms leading Large Language Models (LLMs) like ChatGPT in abstract reasoning. Developed by researchers at Vicarious AI, the model points to an alternative path toward artificial general intelligence, one that prioritizes data efficiency and human-like conceptual understanding over the brute-force scaling of massive datasets that drives current models.

A Different Architectural Approach

The RCN is modeled on the human brain’s visual cortex. Unlike LLMs, which learn by identifying statistical patterns in vast amounts of text, the RCN creates an internal, generative model of the world it observes. This allows it to grasp and separate concepts like shape, texture, and object relationships from very few examples, similar to how humans learn. This structure enables it to reason about novel situations rather than just recalling patterns from its training data.

Superior Performance on Reasoning Tests

Researchers tested the RCN using Raven’s Progressive Matrices (RPM), a standard non-verbal IQ test that evaluates abstract reasoning by asking the subject to complete a visual pattern. The RCN achieved 87% accuracy, surpassing the average human score of 84%. This result stands in stark contrast to top deep-learning models and LLMs, which have historically struggled with these tests, often scoring below 60%. The gap highlights the fundamental difference between pattern recognition and genuine abstract reasoning.

Key Takeaways for AI’s Future

The success of the RCN suggests that the future of more capable and reliable AI may not solely depend on scaling up current LLMs. Instead, architectures that mimic the brain’s efficiency and ability to reason from first principles present a promising direction. This approach could lead to AI systems that are more transparent, require significantly less data, and are less susceptible to the logical errors and “hallucinations” that plague many of today’s models.

Mentoring question

Given the RCN’s success with a brain-inspired, data-efficient model, how should the AI community balance the pursuit of massive scale in LLMs with research into alternative, more structured architectures for achieving general intelligence?

Source: https://www.livescience.com/technology/artificial-intelligence/scientists-just-developed-an-ai-modeled-on-the-human-brain-and-its-outperforming-llms-like-chatgpt-at-reasoning-tasks
