
Understanding Natural Language Beyond Surface by LLMs
  • Yong Yang

Corresponding Author: yong.sh.yang@gmail.com

Abstract

In recent years, transformer-based models such as BERT and ChatGPT/GPT-3/4 have shown remarkable performance on a wide range of natural language understanding tasks. However, while these models exhibit impressive surface-level language understanding, they may not truly grasp the intent and meaning beneath the surface of a sentence. This paper surveys studies of popular Large Language Models (LLMs) drawn from research and industry publications, reviewing their ability to comprehend language the way humans do and highlighting key challenges and limitations of popular LLMs, including BERTology and GPT-style models.
06 Jan 2024  Submitted to Data Science and Machine Learning
02 Feb 2024  Published in Data Science and Machine Learning