
Leader’s Opinion: LLMs Ride the Overconfidence Wave with Mukundan Rengaswamy


In the world of machine learning, developers often grapple with the enigmatic quirks of large language models (LLMs). Jonathan Whitaker and Jeremy Howard from fast.ai embarked on an intriguing experiment, unearthing a subtle yet pervasive issue with these models: overconfidence, a phenomenon distinct from the notorious LLM hallucination.

Mukundan Rengaswamy, Head of Data Engineering, Innovation & Architecture at Webster Bank, weighed in on the matter, stating, “LLMs (large language models) and generative AI have been in the news ever since ChatGPT was introduced to the public. Generative AI is powered by very large machine learning models that are pre-trained on vast amounts of data. A lot of research is being done on these models to better understand their behavior and refine them for broader usage.”

Overconfidence, they found, occurs when a model tenaciously clings to information from its training data, even when it is blatantly incorrect for a given question. The culprits behind this phenomenon? The familiar duo of overfitting and underfitting. Rengaswamy added, “The ‘overconfidence’ highlighted in the paper could be due to overfitting of models. Ideally, one would want to select a model at the sweet spot between underfitting and overfitting. This is the goal, but is very difficult to do in practice. There are several techniques that could be used to mitigate the challenges that may also arise with such fast-learning models.”

Overfitting occurs when a model becomes too intricate, mirroring its training data too closely; underfitting arises when the model is too simple, or has seen too little data, to capture the underlying patterns. Striking the balance between these extremes is the elusive bias-variance tradeoff.
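The tradeoff can be illustrated with a toy example that is not from the fast.ai experiment: fitting polynomials of increasing degree to noisy samples of a sine curve. A degree-1 fit underfits, a moderate degree sits near the sweet spot, and a high degree starts chasing the noise. All data here is synthetic and invented for illustration.

```python
# Illustrative sketch of under- vs overfitting (synthetic data, not
# from the article's study): compare validation error of polynomial
# fits of different degrees.
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of sin(2*pi*x) for training and for held-out validation.
x_train = np.linspace(0.0, 1.0, 20)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.2, x_train.size)
x_val = np.linspace(0.025, 0.975, 20)
y_val = np.sin(2 * np.pi * x_val) + rng.normal(0.0, 0.2, x_val.size)

def val_error(degree):
    # Fit on the training points, then measure mean squared error
    # on the held-out points.
    coeffs = np.polyfit(x_train, y_train, degree)
    pred = np.polyval(coeffs, x_val)
    return float(np.mean((pred - y_val) ** 2))

for d in (1, 3, 9):
    print(f"degree {d}: validation MSE = {val_error(d):.3f}")
```

A straight line (degree 1) cannot track a full sine period at all, while degree 3 can; the held-out error makes the difference visible without ever looking at training error.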

To combat these challenges, developers employ various techniques, some successful and others introducing new conundrums. Whitaker and Howard ventured into the uncharted territory of training a model on a single example, yielding unexpected results.

Enter the world of overconfident LLMs. These models, when exposed to novel, unseen data, exhibit unwarranted self-assuredness in their predictions, even when they are unequivocally wrong. This contradicts the conventional wisdom that neural networks require copious examples due to the intricacies of loss surfaces during training.
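What "unwarranted self-assuredness" looks like numerically can be sketched with a softmax over answer scores. The logits below are invented for illustration; the point is only that when one score dominates, the model reports near-certain probability regardless of whether that answer is actually right.

```python
# Minimal sketch (hypothetical numbers, not from the study): a softmax
# over model logits can assign near-certain probability to one answer
# choice, even if that answer is wrong.
import math

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for four answer choices; the first (possibly
# memorized, possibly wrong) answer dominates.
logits = [9.0, 1.0, 0.5, 0.2]
probs = softmax(logits)
print(f"top-choice confidence: {probs[0]:.3f}")
```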

The implications are vast. Imagine a medical LLM, primed to diagnose diseases based on patient descriptions. With clear-cut symptoms, it confidently prescribes a diagnosis. However, when symptoms blur or multiple diagnoses are possible, it expresses uncertainty. Rengaswamy further stated, “Researchers recently noticed an unusual training pattern in fine-tuning LLMs, which led them to infer that the specific model was rapidly learning to recognize examples even by just seeing them once. Though this behavior is very good, it goes against the idea that models trained slowly over lengthy periods of time with varied data sets produce better results.”

Surprisingly, a single example during training had a profound impact on these models, making them overconfident, particularly in the early stages. The quest was to find a way for machines to learn efficiently, retaining reliability while regulating confidence scores.
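One common post-hoc way to regulate confidence scores, offered here as an illustrative technique rather than the method used in the fast.ai experiment, is temperature scaling: dividing the logits by a temperature T > 1 softens the softmax distribution without changing which answer ranks first. The logits below are invented.

```python
# Hedged sketch of temperature scaling (hypothetical logits): a higher
# temperature flattens the probability distribution, lowering the
# reported confidence while preserving the top-ranked answer.
import math

def softmax_with_temperature(logits, temperature=1.0):
    # Divide logits by T, then apply a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [9.0, 1.0, 0.5, 0.2]  # hypothetical answer scores
for t in (1.0, 2.0, 5.0):
    p = softmax_with_temperature(logits, t)
    print(f"T={t}: top confidence = {max(p):.3f}")
```

In practice the temperature is tuned on a held-out validation set so that reported confidences better match observed accuracy.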

He added, “The researchers’ ‘memorization hypothesis’ is based on observations of behavior while fine-tuning pre-trained models using specific data sets consisting of Kaggle science exam questions. This needs to be further studied with other data sets and tested to confirm their findings.”

Overconfidence and overfitting, though related, are not one and the same. 

Overconfidence can stem from limited data or unrepresentative datasets, challenging the fine balance between underfitting and overfitting. These findings, specific to fine-tuning pre-trained models, open new doors, shedding light on the nuanced world of machine learning. Yet, questions remain, including the elusive details of the base model that set this intriguing journey in motion.

AIM Research


AIM Research is the world's leading media and analyst firm dedicated to advancements and innovations in Artificial Intelligence. Reach out to us at info@aimresearch.co
