# Large Language Models (LLMs)
## Overview
Large Language Models (LLMs) are AI models made popular by tools such as GPT-4 and ChatGPT (OpenAI), [[Claude]] (Anthropic), Gemini (Google), Mistral, and many others.
LLMs mainly generate text, but there are also generative models that produce images, such as [[DALL-E]], [[FLUX.1]], and [[Stable Diffusion]]. Others can deal with sound/voice, and even video.
There are also multimodal LLMs that can handle several of these modalities in a single model.
## How to think about them
LLMs use induction. Current LLMs are unable to use deduction; if they appear to deduce, it's an illusion. LLMs generate plenty of hallucinations because of the (statistical) associations they make between words, sentences, and ideas. This is systematic, not an occasional glitch.
LLMs are trained with induction. They learn by predicting the next word in a sequence of words. This makes them great at outputting words that *look* and *sound* correct in sequence. They have learned patterns for how to do this by seeing billions of word-sequence completions. Because they're so good at outputting correct-sounding words in a sequence, their outputs will often be logically correct. After all, correct-sounding words are more likely to be logically correct than incorrect-sounding ones.
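A minimal, purely illustrative sketch of this idea: the toy bigram model below "learns" only from the statistics of the sequences it has seen and then guesses the most likely next word. Real LLMs use neural networks over tokens, but the training objective is analogous: predict the next item given the context. The corpus and function names here are made up for the example.

```python
# Toy next-word prediction by induction: count which word follows which,
# then guess the statistically most common continuation.
from collections import Counter, defaultdict

corpus = "the court finds the defendant guilty . the court adjourns .".split()

# Count how often each word follows each other word (induction from past experience).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed next word, with no understanding involved."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))     # -> "court" (seen most often after "the")
print(predict_next("guilty"))  # -> "." (the only continuation observed)
```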
> A parrot that lives in a courthouse will regurgitate more correct statements than a parrot that lives in a madhouse
This explains why [[Chain-of-Thought (CoT) prompting]], test-time compute, and other strategies that make the LLM "think" are effective.
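As a rough, hedged sketch of what CoT prompting looks like in practice: the only change is that the prompt asks the model to spell out intermediate steps before answering. `call_llm` below is a hypothetical placeholder for whatever client or API you actually use, not a real library call.

```python
# Chain-of-Thought prompting sketch: same question, but the CoT prompt nudges
# the model to produce intermediate reasoning steps before the final answer.

question = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"

direct_prompt = f"Q: {question}\nA:"

cot_prompt = (
    f"Q: {question}\n"
    "A: Let's think step by step, writing out each intermediate calculation "
    "before giving the final answer."
)

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: swap in a real API call (OpenAI, Anthropic, a local model, ...)."""
    raise NotImplementedError

# answer = call_llm(cot_prompt)  # typically more reliable than call_llm(direct_prompt)
```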
LLMs make highly educated guesses, but they are still guesses based on statistics. In fact, induction is, by definition, guessing: you use past experience, not premises or understanding, to draw conclusions. There is no guarantee that past experience will continue to hold, or that you are extrapolating the correct conclusion from it. Induction is inherently probabilistic. Deduction is not.
The key is treating the LLM as an evolution engine: using it both to generate initial attempts and to improve upon successful approaches through guided iteration, as sketched below.
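Here is one possible shape of such a generate-and-refine loop, under stated assumptions: `call_llm` and `score` are stand-ins for your own LLM client and your own evaluation (unit tests, a rubric, human review), not any particular API, and the prompts are illustrative.

```python
# Sketch of an "evolution engine" loop: generate candidates, score them,
# and feed the best one back into the prompt for guided refinement.

def evolve(task: str, call_llm, score, rounds: int = 3, candidates: int = 4) -> str:
    best, best_score = "", float("-inf")
    prompt = f"Propose a solution to: {task}"
    for _ in range(rounds):
        # Generation step: sample several independent attempts.
        attempts = [call_llm(prompt) for _ in range(candidates)]
        for attempt in attempts:
            s = score(attempt)
            if s > best_score:
                best, best_score = attempt, s
        # Guided iteration: ask the LLM to improve on the current best attempt.
        prompt = (
            f"Task: {task}\n"
            f"Current best solution:\n{best}\n"
            "Improve on this solution and fix any weaknesses."
        )
    return best
```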