Advancing Reasoning Abilities in Language Models: The SELF-DISCOVER Framework

Researchers from Google DeepMind and the University of Southern California have introduced SELF-DISCOVER, a prompting framework that takes a new approach to enhancing the reasoning abilities of large language models (LLMs).

Published this week on arXiv and Hugging Face, the SELF-DISCOVER framework represents a significant step forward in the field, with the potential to substantially boost the performance of leading models such as OpenAI’s GPT-4 and Google’s PaLM 2.

The framework promises substantial improvements on challenging reasoning tasks, delivering up to a 32% performance gain over traditional methods such as Chain of Thought (CoT). The key to its success lies in having LLMs autonomously uncover task-intrinsic reasoning structures, enabling them to handle complex problems more reliably.

At its core, the SELF-DISCOVER framework enables LLMs to self-discover and compose atomic reasoning modules, such as critical thinking and step-by-step analysis, into explicit reasoning structures. This mimics human problem-solving strategies and operates in two stages: in the first, the model composes a coherent reasoning structure intrinsic to the task; in the second, it follows this self-discovered structure during decoding to arrive at the final solution.
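
To make the two-stage flow concrete, here is a minimal Python sketch of the loop described above. It is an illustration under stated assumptions, not the paper's reference implementation: `call_llm`, the `REASONING_MODULES` list, and the prompt wording are hypothetical stand-ins for the paper's actual meta-prompts and its larger seed set of reasoning modules.

```python
# A minimal sketch of SELF-DISCOVER's two stages, assuming a generic
# chat-completion client. `call_llm`, the module list, and the prompt
# wording are illustrative placeholders, not the paper's exact prompts.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: send `prompt` to an LLM (e.g., GPT-4 or
    PaLM 2) and return its text reply."""
    raise NotImplementedError("wire this to your model's API")

# A few example atomic reasoning modules of the kind the article mentions.
REASONING_MODULES = [
    "Use critical thinking: question assumptions and evaluate evidence.",
    "Break the problem down into step-by-step sub-problems.",
    "Simplify the problem so that it is easier to solve.",
    "Reflect on likely error modes and how to avoid them.",
]

def self_discover(task_examples: list[str], task_instance: str) -> str:
    # Stage 1: compose a reasoning structure intrinsic to the task.
    # This runs once per task, not once per instance.
    structure = call_llm(
        "Here are example instances of a task:\n"
        + "\n".join(task_examples)
        + "\n\nFrom the reasoning modules below, select the relevant ones, "
        "adapt them to this task, and implement them as an explicit, "
        "step-by-step reasoning structure (e.g., in JSON):\n"
        + "\n".join(f"- {m}" for m in REASONING_MODULES)
    )

    # Stage 2: follow the self-discovered structure during decoding
    # to solve a concrete instance.
    return call_llm(
        "Follow this reasoning structure step by step and fill in each "
        f"step for the task below.\n\nStructure:\n{structure}\n\n"
        f"Task:\n{task_instance}"
    )
```

Because Stage 1 depends only on the task rather than on any individual instance, the composed structure can be reused for every instance of that task, keeping the extra inference cost of the discovery step small.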

Extensive testing across diverse reasoning tasks, including BIG-Bench Hard (BBH), Thinking for Doing (T4D), and MATH, consistently demonstrated the superiority of the SELF-DISCOVER approach over traditional methods. With GPT-4, the framework achieved accuracies of 81%, 85%, and 73% on the three benchmarks respectively, surpassing chain-of-thought and plan-and-solve techniques.

Beyond performance gains, the implications of this research are far-reaching. By equipping LLMs with enhanced reasoning capabilities, the SELF-DISCOVER framework paves the way for tackling more challenging problems and brings AI closer to general intelligence. Transferability studies conducted by the researchers show that the composed reasoning structures generalize across models and align with human reasoning patterns.

As the field of AI continues to evolve, breakthroughs like the SELF-DISCOVER prompting framework represent crucial milestones in advancing the capabilities of language models, offering a promising glimpse into the future of AI-powered reasoning and problem-solving.