Large Language Models (LLMs)

[The University of Chicago - Vivian Wu]

- Overview

The field of artificial intelligence (AI) is undergoing rapid transformation globally, triggering a paradigm shift in the way we perceive and use technology. Within the vast field of AI, large language models (LLMs) have become a revolutionary phenomenon.

LLMs are machine learning (ML) algorithms that can perform natural language processing (NLP) tasks.

LLMs are trained on large amounts of textual data to learn patterns and entity relationships in language. They can recognize, translate, predict, or generate text and other content.

LLMs can be used to perform the following tasks:

  • Generate text and classify it
  • Answer questions conversationally
  • Translate text from one language to another

LLMs are the algorithmic basis for chatbots such as OpenAI’s ChatGPT and Google’s Bard. Examples of LLMs include OpenAI’s GPT-3 and GPT-4, Meta’s LLaMA, and Google’s PaLM 2.

Some LLMs have already started using video and audio input for training, which should speed up model development. This form of training could also open up new possibilities for using LLMs in autonomous vehicles.


- Large Language Models (LLMs): Great Promises

A large language model (LLM) is an ML model that can perform a variety of natural language processing (NLP) tasks, including generating and classifying text, answering questions conversationally, and translating text from one language to another.

The label "large" refers to the number of values (parameters) that the model can autonomously change while learning. Some of the most successful LLMs have hundreds of billions of parameters.
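To make "hundreds of billions of parameters" concrete, here is a minimal sketch of how parameters accumulate in the dense layers of a transformer block. The dimensions below are illustrative assumptions, not any particular model's published configuration:

```python
def dense_params(d_in, d_out):
    # A fully connected layer: one weight per input-output pair,
    # plus one bias per output unit.
    return d_in * d_out + d_out

def transformer_block_params(d_model, d_ff):
    # Query, key, value, and output projections: four d_model x d_model layers.
    attn = 4 * dense_params(d_model, d_model)
    # Two-layer feed-forward network: d_model -> d_ff -> d_model.
    ffn = dense_params(d_model, d_ff) + dense_params(d_ff, d_model)
    return attn + ffn

# Illustrative sizes only: stacking 96 such blocks at this width
# already puts the total well past 100 billion parameters.
per_block = transformer_block_params(d_model=12288, d_ff=4 * 12288)
print(f"{96 * per_block:,}")
```

Each value counted here is one of the numbers the model adjusts autonomously during training, which is why parameter counts grow so quickly with model width and depth.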

LLMs are trained using large amounts of data and use self-supervised learning to predict the next token in a sentence given the surrounding context. This process is repeated over and over until the model reaches an acceptable level of accuracy.
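The self-supervised objective above can be sketched with a toy model: the "labels" are simply the next tokens in the text itself. Real LLMs use deep neural networks rather than the frequency counts below; this is only a minimal illustration of the training signal:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "large amounts of data" an LLM trains on.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Self-supervised objective: for every token, the training target is
# simply the token that follows it in the text -- no human labels needed.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(token):
    """Return the continuation seen most often during 'training'."""
    return next_counts[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

An LLM does the same thing at vastly greater scale, conditioning on long surrounding context rather than a single previous token, and repeating the process until its predictions reach an acceptable level of accuracy.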

Once an LLM is trained, it can be fine-tuned for a wide range of NLP tasks, including:

  • Build conversational chatbots like ChatGPT.
  • Generate text for product descriptions, blog posts, and articles.
  • Answer frequently asked questions (FAQs) and route customer inquiries to the most appropriate personnel.
  • Analyze customer feedback from emails, social media posts, and product reviews.
  • Translate business content into different languages.
  • Classify and categorize large volumes of text data for more efficient processing and analysis.
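As a rough intuition for the routing and classification tasks above, here is a hedged, stdlib-only sketch that routes an inquiry to the department whose example phrases it overlaps most. The routing table and department names are hypothetical; a fine-tuned LLM would use learned representations instead of this bag-of-words similarity:

```python
import math
from collections import Counter

# Hypothetical routing table: department -> example phrases.
routes = {
    "billing": "invoice payment refund charge",
    "shipping": "delivery tracking package shipment",
    "support": "error crash bug not working",
}

def cosine(a, b):
    # Cosine similarity between two word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def route(inquiry):
    """Send the inquiry to the department with the most similar phrases."""
    q = Counter(inquiry.lower().split())
    return max(routes, key=lambda d: cosine(q, Counter(routes[d].split())))

print(route("where is my package delivery"))  # routes to "shipping"
```

The fine-tuned LLM version of this pipeline replaces the word-count vectors with embeddings learned during pre-training, which is what lets it match inquiries that share meaning but no literal words.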


- LLM vs. ML

Machine learning (ML) is a subset of artificial intelligence (AI). ML involves feeding a program large amounts of data to train it to identify features without human intervention. LLMs are a specific type of ML model that use deep learning to analyze and understand human language. 

AI aims to mimic human intelligence, while ML focuses on learning from data. LLMs are part of generative AI, which is the broader concept of AI systems that can generate various types of content. LLMs can perform a variety of natural language processing (NLP) tasks, such as:

  • Generating and classifying text
  • Answering questions in a conversational manner
  • Translating text from one language to another


[Val di Funes, Italy]

- NLP vs. LLM

NLP, or natural language processing, is a field of artificial intelligence (AI) concerned with developing algorithms and techniques for working with human language. NLP is a broader field than LLMs, and it draws on two main approaches: machine learning and the linguistic analysis of language data.

Applications of NLP are:

  • Automate daily tasks
  • Search engine optimization
  • Classification of large files or groups of files
  • Analysis and isolation of social media content

LLMs, or large language models, on the other hand, can be considered a subset of NLP that is more specific to human-like text, powering content generation and personalized recommendations.

Large language models are deep neural networks (DNNs), today usually transformer architectures, trained with self-supervised learning on very large data sets. This scale is what allows them to make more accurate predictions than smaller, single-purpose models.
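The core operation inside the deep networks that power modern LLMs is attention. Here is a minimal sketch of scaled dot-product attention for a single query over toy two-dimensional vectors; real models run this over thousands of dimensions with learned projections:

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # The output is a weighted mix of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention([1.0, 0.0], keys, values)
print(out)  # the output leans toward the first value vector
```

Because the query aligns with the first key, the first value vector receives the larger weight; stacking many such attention layers is what lets these networks relate tokens across long contexts.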


- Edge AI: An Accessible, Sustainable Future of AI

LLMs like GPT (Generative Pre-trained Transformer) make the current era of AI possible. These giant models are trained on vast amounts of data and have unprecedented capabilities to understand, generate, and interact with human language, blurring the lines between machines and human minds.

LLMs are still evolving and pushing the boundaries of what's possible, which is remarkable. But this power is not a blank check: the sheer volume of data required, and the computing power needed to process it, make these systems extremely expensive to operate and difficult to scale indefinitely.

LLMs’ demands for data and computing power have become voracious: their cost and energy consumption are high and may soon exceed the resources we have available to sustain them.

At our current pace, LLMs will soon encounter a number of inherent limitations:

  • Availability of high-quality data for training.
  • The environmental impact of powering such a massive model.
  • Financial feasibility of continued scaling.
  • Maintaining security for such large systems.

Given the astonishing rate at which AI is adopted and expanded, this tipping point is not far away. What took 75 years for mainframes may only take a few months for AI, as limitations trigger the need to move toward a more efficient, decentralized, accessible subset of AI: niche Edge AI models.


[More to come ...]
