22 February, 2024
After this workshop, you will be able to:



An LLM produces text that most likely follows the input (the prompt).
Prompt: What is the capital of France?
Completion: What is the capital of Germany? What is the capital of Italy? …
Prompt: The first person to walk on the Moon was
Completion: Neil Armstrong
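To make this concrete, here is a minimal sketch of plain text completion. It assumes the Hugging Face `transformers` library and the small `gpt2` checkpoint, which are illustrative choices rather than part of the workshop materials; any base (non-chat) model behaves similarly.

```python
# Minimal sketch: a base model simply continues the prompt with likely text.
# Assumes the `transformers` library and the `gpt2` checkpoint (illustrative choices).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The first person to walk on the Moon was"
# Sampling makes the continuation non-deterministic; rerunning may give different text.
result = generator(prompt, max_new_tokens=10, do_sample=True)
print(result[0]["generated_text"])
```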
Does an LLM know facts?
What we are really asking: Given what it learned during training, what words are most likely to follow “The first person to walk on the Moon was”? A good reply to this question is “Neil Armstrong”.
LLMs are thought to show emergent properties: abilities that were not explicitly programmed into the model but that emerge as a by-product of learning to predict text.

- Trained to have conversations: turn-taking, answering questions, not being rude, sexist, or racist, etc.
Prompt: System message: You are a helpful assistant. User message: Tell me a joke.
Response: Why don’t scientists trust atoms? Because they make up everything!
Prompt: System message: You are a helpful assistant. User message: Tell me a joke. Assistant message: Why don’t scientists trust atoms? Because they make up everything! User message: Tell me another one.
Response: Why did the scarecrow win an award? Because he was outstanding in his field!
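The conversation above can be sketched in code: the chat format is just a running list of system, user, and assistant messages that is sent back to the model on every turn, which is how it appears to "remember" the conversation. The sketch below assumes the OpenAI Python SDK with an API key in the environment; the model name is an illustrative assumption.

```python
# Sketch of the chat format: the whole message history is resent on each turn.
# Assumes the OpenAI Python SDK and an API key; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me a joke."},
]
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
first_joke = reply.choices[0].message.content

# Append the assistant's reply and the follow-up question, then ask again.
messages.append({"role": "assistant", "content": first_joke})
messages.append({"role": "user", "content": "Tell me another one."})
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)
```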

Ask and Tell

What can we learn from this?


We can think of an LLM as a non-deterministic simulator capable of role-playing an infinity of characters, or, to put it another way, capable of stochastically generating an infinity of simulacra (Shanahan, McDonell, and Reynolds 2023).
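One way to see this role-playing view in action is to keep the user question fixed and vary only the system message. The sketch below assumes the OpenAI Python SDK as above; the personas and model name are illustrative assumptions.

```python
# Sketch of the "role-play" framing: the same model, steered into different
# characters purely by the system message. Personas and model are illustrative.
from openai import OpenAI

client = OpenAI()

personas = {
    "pirate": "You are a 17th-century pirate. Answer in character.",
    "scientist": "You are a meticulous research scientist. Answer precisely.",
}

for name, system_message in personas.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_message},
            {"role": "user", "content": "What is the Moon made of?"},
        ],
    )
    print(f"--- {name} ---")
    print(reply.choices[0].message.content)
```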