
Optimizing Token-Level Ensembles for Enhanced Performance in Large Language Models
  • Giancarlo Lofap,
  • Ezekiel Richardson,
  • Konstantin Beckett,
  • Alejandro Fournier,
  • Nathaniel Montgomery
Corresponding author: Giancarlo Lofap (lofap14488@skrank.com)

Abstract

The increasing complexity of language tasks and the growing demand for precise, coherent text generation have driven the development of more sophisticated model architectures. However, individual model configurations often struggle to generalize across varied linguistic phenomena due to inherent biases and limitations in their training processes. To address these challenges, a token-level ensemble method is introduced that aggregates token predictions across multiple model instances, leveraging variations in checkpoints, random seeds, and fine-tuning strategies. This approach significantly improves accuracy, consistency, and perplexity across tasks such as text generation, machine translation, and language modeling. By combining token probabilities at a granular level, the ensemble technique produces more refined and robust output, outperforming sequence-level aggregation strategies. The experimental results demonstrate the effectiveness of the token-level ensemble, providing a more adaptable and accurate solution for handling diverse and complex linguistic tasks. Furthermore, the averaging technique consistently yielded the best results, indicating that token-level probability aggregation offers a superior balance between accuracy and computational efficiency, despite the associated resource costs.
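
To illustrate the idea of token-level probability averaging described above, the following sketch decodes greedily from the mean of the next-token distributions of several causal language models. It is a minimal illustration, not the authors' implementation: the checkpoint names, the greedy decoding loop, and the Hugging Face API usage are assumptions; the only requirement is that the ensembled models share a tokenizer and vocabulary so their distributions can be averaged.

```python
# Minimal sketch of token-level ensembling: at each decoding step, average the
# next-token probability distributions from several model instances and emit
# the highest-probability token. Checkpoints and greedy decoding are
# illustrative assumptions, not the paper's exact configuration.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder checkpoints; both share the GPT-2 tokenizer/vocabulary.
model_names = ["gpt2", "distilgpt2"]
tokenizer = AutoTokenizer.from_pretrained(model_names[0])
models = [AutoModelForCausalLM.from_pretrained(n).eval() for n in model_names]

@torch.no_grad()
def ensemble_generate(prompt: str, max_new_tokens: int = 20) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        # Average the next-token probability distributions across all models.
        probs = torch.stack(
            [torch.softmax(m(ids).logits[:, -1, :], dim=-1) for m in models]
        ).mean(dim=0)
        # Greedy pick on the averaged distribution.
        next_id = probs.argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True)

print(ensemble_generate("Token-level ensembling works by"))
```

Averaging in probability space (after the softmax) rather than in logit space is one reasonable design choice consistent with the averaging technique highlighted in the abstract; sequence-level alternatives would instead generate complete outputs from each model and select or rerank among them.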