AUTHOREA
Token-Level Multi-Hop Reasoning in Large Language Models: Mitigating Misordered Conte...
Eleonora Sawhai
and 5 more

October 15, 2024
Models designed to process human language often struggle with complex reasoning tasks, particularly when the input context is presented in a misordered or fragmented format. Token-level multi-hop reasoning offers a novel approach to this challenge: by operating on individual tokens rather than entire sentences or paragraphs, it enables more precise realignment of disordered sequences. Experiments conducted with the Mistral model demonstrate substantial improvements in both accuracy and coherence when the token-level framework is applied, even in highly misordered contexts. Through iterative refinement and token dependency analysis, the model adapts more effectively to fragmented input, outperforming baseline configurations on tasks that require multi-step reasoning. Although the framework introduces additional computational overhead, the gains in accuracy and logical consistency make it a promising strategy for enhancing language models in scenarios with non-linear or disrupted context. The research highlights the importance of granular token-level processing in overcoming the limitations associated with disordered inputs, establishing a pathway toward more robust and adaptable reasoning systems.
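The abstract does not specify the paper's realignment mechanism, but the core idea — using token-level dependencies to reorder fragmented context before reasoning over it — can be illustrated with a toy sketch. The fragment scoring via token-set overlap and the function names (`token_overlap`, `realign`) are illustrative assumptions, not the authors' method:

```python
from itertools import permutations

def token_overlap(a: str, b: str) -> float:
    # Jaccard similarity between the token sets of two fragments;
    # a crude stand-in for learned token-dependency scores.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def coherence(order: tuple, frags: list) -> float:
    # Total dependency strength between adjacent fragments in an ordering.
    return sum(token_overlap(frags[i], frags[j]) for i, j in zip(order, order[1:]))

def realign(frags: list) -> list:
    # Brute-force search for the most coherent ordering
    # (feasible only for small fragment counts; illustrative only).
    best = max(permutations(range(len(frags))), key=lambda o: coherence(o, frags))
    return [frags[i] for i in best]

# Usage: a shuffled three-fragment context is reordered so that
# fragments sharing tokens ("mat", "red") become adjacent.
shuffled = [
    "red paint covered everything",
    "the cat sat on the mat",
    "the mat was red",
]
realigned = realign(shuffled)
```

A real system would replace the Jaccard heuristic with model-derived dependency scores and a scalable search, but the sketch captures why token-level signals can recover order that sentence-level matching misses.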
