
🟢 A "Standard" Prompt

We have seen a few different prompt formats thus far. Following Kojima et al.1, we will refer to prompts that consist solely of a question as "standard" prompts. We also consider question-only prompts written in the QA format to be "standard" prompts.

Why should I care?

Many articles that we reference use this term. We are defining it so we can discuss new types of prompts in contrast to standard prompts.

Two examples of standard prompts:

Standard Prompt

What is the capital of France?

Standard Prompt in QA format

Q: What is the capital of France?

A:

Few Shot Standard Prompts

Few shot standard prompts2 are just standard prompts that have exemplars in them. Exemplars are examples of the task that the prompt is trying to solve, which are included in the prompt itself3. In research, few shot standard prompts are sometimes referred to simply as standard prompts (though we attempt not to do so in this guide).

Two examples of few shot standard prompts:

Few Shot Standard Prompt

What is the capital of Spain?
Madrid
What is the capital of Italy?
Rome
What is the capital of France?

Few Shot Standard Prompt in QA format

Q: What is the capital of Spain?
A: Madrid
Q: What is the capital of Italy?
A: Rome
Q: What is the capital of France?
A:

Few shot prompts facilitate "few shot" learning, also known as "in context" learning: the ability to learn a task from examples in the prompt, without any parameter updates4.
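In practice, few shot prompts are often assembled programmatically from a list of exemplars. A minimal sketch of building the QA-format prompt above, assuming a hypothetical helper (the function name and structure are illustrative, not from any particular library):

```python
def build_few_shot_prompt(exemplars, question):
    """Join (question, answer) exemplars and a final unanswered
    question into a single QA-format prompt string."""
    lines = []
    for q, a in exemplars:
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
    # End with the new question and an empty "A:" for the model to complete.
    lines.append(f"Q: {question}")
    lines.append("A:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    [("What is the capital of Spain?", "Madrid"),
     ("What is the capital of Italy?", "Rome")],
    "What is the capital of France?",
)
print(prompt)
```

The resulting string matches the "Few Shot Standard Prompt in QA format" example above; the trailing `A:` cues the model to produce only the answer.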


  1. Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., & Iwasawa, Y. (2022). Large Language Models are Zero-Shot Reasoners.
  2. Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., & Neubig, G. (2022). Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. ACM Computing Surveys. https://doi.org/10.1145/3560815
  3. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., … Amodei, D. (2020). Language Models are Few-Shot Learners.
  4. Zhao, T. Z., Wallace, E., Feng, S., Klein, D., & Singh, S. (2021). Calibrate Before Use: Improving Few-Shot Performance of Language Models.