Zero-Shot & Few-Shot Prompting

Learn the basic but powerful techniques of providing zero or multiple examples to guide an LLM.

Zero-shot and few-shot prompting are two of the most fundamental techniques for guiding a large language model (LLM). The key difference between them is simple: do you provide examples, or not?

Understanding when to use each is crucial for getting efficient and accurate results.


Zero-Shot Prompting

Zero-shot prompting is when you ask the model to perform a task without giving it any prior examples of how to do it. You are relying entirely on the model's pre-existing knowledge and its ability to understand your direct instruction.

This is the most common form of prompting.

When to Use It

  • Simple Tasks: Perfect for straightforward requests like summarization, translation, or answering general knowledge questions.

  • Creative Generation: Useful when you want the model to generate something novel without being constrained by specific examples.

  • Speed and Simplicity: It's the fastest and simplest way to get a response.

Example: Sentiment Analysis

In this example, we ask the model to classify the sentiment of a customer review without showing it what a classification looks like first.

Prompt:

Classify the sentiment of the following customer review. The sentiment can be positive, neutral, or negative.

Review: "The shipping was a bit slow, but the product itself is fantastic!"

Sentiment:

Expected Output:

Positive

The model correctly infers the task and provides the right answer from the instruction alone.
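In code, a zero-shot prompt is usually just a template with the instruction and the input slotted in. A minimal Python sketch of the sentiment prompt above (the helper name `build_zero_shot_prompt` is illustrative, not from any library):

```python
def build_zero_shot_prompt(review: str) -> str:
    """Assemble the zero-shot sentiment prompt: a direct instruction
    followed by the input, with no worked examples."""
    return (
        "Classify the sentiment of the following customer review. "
        "The sentiment can be positive, neutral, or negative.\n\n"
        f'Review: "{review}"\n\n'
        "Sentiment:"
    )

prompt = build_zero_shot_prompt(
    "The shipping was a bit slow, but the product itself is fantastic!"
)
print(prompt)
```

The resulting string is what you would send to the model; ending the prompt with "Sentiment:" nudges the model to complete that field directly.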


Few-Shot Prompting

Few-shot prompting is when you provide the model with several examples (the "shots") of the task within the prompt itself. By showing the model the desired input-output format, you are conditioning it to follow your pattern.

This is a powerful technique for improving reliability and getting structured output.

When to Use It

  • Complex or Novel Tasks: When the task is unusual or has a specific format that the model might not guess correctly.

  • Structured Data Output: Essential if you need the model to generate output in a very specific format (such as JSON or a custom string format).

  • Improving Accuracy: Providing examples significantly increases the chances that the model will perform the task correctly.
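With chat-style APIs, few-shot examples are often supplied as prior conversation turns rather than inline text: each example becomes a user/assistant pair that demonstrates the desired output format. A hedged sketch, assuming the common `{"role", "content"}` message schema (the actual client call is omitted and the review texts are invented for illustration):

```python
# Few-shot prompting via message history: two worked user/assistant
# pairs demonstrate the JSON output format before the real query.
messages = [
    {"role": "system",
     "content": 'Reply with only a JSON object of the form {"sentiment": "..."}'},
    {"role": "user", "content": "The battery life is amazing!"},
    {"role": "assistant", "content": '{"sentiment": "positive"}'},
    {"role": "user", "content": "It broke after two days."},
    {"role": "assistant", "content": '{"sentiment": "negative"}'},
    # The new input the model should classify, following the pattern above:
    {"role": "user", "content": "Arrived on time, works as described."},
]
```

Because the model sees completed turns in the exact target format, it is far more likely to answer the final message with the same JSON structure.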

Example: Extracting Data

In this example, we want to pull out specific product codes from a text. By providing examples, we teach the model the exact pattern to follow.

Prompt:

Extract the product code from the following text descriptions.

Text: "I'd like to order the new Hyper-Vortex 5000 GPU." Code: HV-5000

Text: "Can you tell me if the QuantumDrive Pro SSD is in stock?" Code: QDP-SSD

Text: "The client is interested in the Celestial-Link Satellite router."

Code:

Expected Output:

CL-SAT

Having seen the pattern, the model understands exactly how to extract and format the product code from the new text.
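A prompt like this is typically built from a list of example pairs, so new examples can be added without editing the template. A minimal sketch (the helper name `build_few_shot_prompt` is illustrative):

```python
def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: the instruction, each worked
    (text, code) example, then the new text left for the model to complete."""
    lines = ["Extract the product code from the following text descriptions.", ""]
    for text, code in examples:
        lines.append(f'Text: "{text}" Code: {code}')
        lines.append("")
    lines.append(f'Text: "{query}"')
    lines.append("Code:")
    return "\n".join(lines)

examples = [
    ("I'd like to order the new Hyper-Vortex 5000 GPU.", "HV-5000"),
    ("Can you tell me if the QuantumDrive Pro SSD is in stock?", "QDP-SSD"),
]
prompt = build_few_shot_prompt(
    examples,
    "The client is interested in the Celestial-Link Satellite router.",
)
print(prompt)
```

Ending the prompt with a bare "Code:" mirrors the examples above it, so the model's most natural continuation is the extracted code itself.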
