Zero-Shot / Few-Shot Prompting
Prompting techniques that describe how much guidance, in the form of examples, is provided to an AI model before asking it to perform a task. In zero-shot prompting, no examples are given and the model relies entirely on its training and the instructions provided. In few-shot prompting, one or more examples of the desired input-output format are included, which typically improves accuracy and consistency. Few-shot prompting is particularly useful in CRE applications where the desired output follows a specific structure, such as a standardized deal screening summary or a lease abstraction template.
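The difference is easiest to see in how the prompt itself is assembled. Below is a minimal sketch of both styles for a deal screening summary; the instructions, field names, and example text are hypothetical, and the strings would be sent to whatever model client the firm uses.

```python
# Sketch: assembling zero-shot vs. few-shot prompts for a hypothetical
# deal screening summary. Field names are illustrative, not a standard.

ZERO_SHOT_INSTRUCTIONS = (
    "Summarize the offering memorandum below as a deal screening summary "
    "with these fields: Property, Asking Price, NOI, Cap Rate, Key Risks."
)

def build_zero_shot_prompt(document: str) -> str:
    """Zero-shot: instructions only, no examples."""
    return f"{ZERO_SHOT_INSTRUCTIONS}\n\nDocument:\n{document}"

def build_few_shot_prompt(document: str,
                          examples: list[tuple[str, str]]) -> str:
    """Few-shot: each (input, output) pair demonstrates the format."""
    shots = "\n\n".join(
        f"Document:\n{doc}\n\nScreening Summary:\n{summary}"
        for doc, summary in examples
    )
    return (
        f"{ZERO_SHOT_INSTRUCTIONS}\n\n{shots}\n\n"
        f"Document:\n{document}\n\nScreening Summary:"
    )
```

The few-shot version ends with an open "Screening Summary:" label so the model's completion naturally continues in the demonstrated format.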
Putting Zero-Shot / Few-Shot Prompting in Context
An acquisitions analyst building a deal screening prompt finds that zero-shot instructions alone produce summaries with inconsistent structure across different offering memoranda. They add two completed examples of the firm’s standard screening output directly into the prompt. After that change, the model reliably mirrors the exact field order, terminology, and level of detail the investment committee expects, and the analyst no longer needs to reformat each output before distribution.
Frequently Asked Questions about Zero-Shot / Few-Shot Prompting
When should I use zero-shot prompting versus few-shot prompting in a CRE workflow?
Zero-shot prompting is appropriate when the task is straightforward, the desired output format is simple, and the model’s general training is likely sufficient to produce usable results, such as asking for a plain-language summary of a market report section. Few-shot prompting becomes more valuable when the output needs to match a specific structure, use firm-specific terminology, or replicate a format the model would not naturally produce on its own, such as a standardized lease abstract template or an investment committee memo in a particular house style. If zero-shot output requires consistent reformatting before use, that is a reliable signal to switch to few-shot.
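That "consistent reformatting" signal can be made concrete with a simple compliance check over a batch of zero-shot outputs. The field names and failure threshold below are assumptions for illustration, not firm standards.

```python
# Sketch: if zero-shot outputs routinely fail a field-presence check,
# that is the signal to switch to few-shot. Fields are hypothetical.

REQUIRED_FIELDS = ["Property", "Asking Price", "NOI", "Cap Rate", "Key Risks"]

def missing_fields(output: str) -> list[str]:
    """Required fields absent from a model output."""
    return [f for f in REQUIRED_FIELDS if f"{f}:" not in output]

def needs_few_shot(outputs: list[str], threshold: float = 0.2) -> bool:
    """True when more than `threshold` of outputs miss at least one field."""
    failures = sum(1 for o in outputs if missing_fields(o))
    return failures / len(outputs) > threshold
```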
How many examples should I include in a few-shot prompt for CRE tasks?
For most structured CRE output tasks, two to three well-chosen examples are sufficient to establish the pattern the model should follow. A single example can work when the format is simple and consistent, but two examples that show slight variation in the input help the model generalize the pattern rather than copying the first example too literally. Including more than four or five examples rarely improves performance meaningfully and consumes context window space that could otherwise be used for the actual document being processed.
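The trade-off between example count and context space can be enforced mechanically. The sketch below caps examples at three and applies a rough character budget; the budget figure is an assumption to be tuned per model, not a recommendation.

```python
# Sketch: select few-shot examples under a count cap and a rough
# character budget so examples do not crowd out the document itself.

def select_examples(candidates: list[str], max_examples: int = 3,
                    char_budget: int = 6000) -> list[str]:
    """Take examples in order until the cap or budget is hit."""
    chosen: list[str] = []
    used = 0
    for ex in candidates:
        if len(chosen) == max_examples or used + len(ex) > char_budget:
            break
        chosen.append(ex)
        used += len(ex)
    return chosen
```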
What makes a good example for a few-shot prompt in a CRE context?
The best examples are drawn from real outputs the firm has already produced and approved, such as a completed deal screening memo, a finished lease abstract, or a past investor update, because they reflect actual standards rather than an idealized version of what the output should look like. Examples should also represent the typical range of inputs the prompt will encounter rather than the easiest or cleanest case, so the model learns to handle variation. Scrubbing sensitive deal-specific details from examples before including them in a shared system prompt is a straightforward data hygiene practice worth building in from the start.
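A minimal redaction pass can be built into the step where an approved memo becomes a shared example. The patterns below are illustrative only; a real deployment would use a reviewed redaction list rather than two regex substitutions.

```python
# Sketch: scrub deal-specific details before a real memo is reused as a
# few-shot example in a shared prompt. Patterns are illustrative.
import re

def scrub_example(text: str, party_names: list[str]) -> str:
    # Mask dollar amounts, e.g. "$12,500,000" or "$12.5M"
    text = re.sub(r"\$[\d,.]+(?:\s?[MKB])?", "[AMOUNT]", text)
    # Mask named parties supplied by the reviewer
    for name in party_names:
        text = text.replace(name, "[PARTY]")
    return text
```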
Does few-shot prompting work differently depending on which AI model I am using?
More capable models tend to generalize effectively from fewer examples and are less likely to over-fit to the literal surface features of the examples provided, while smaller or less capable models may require more examples or more explicit formatting instructions to produce consistent results. This means a few-shot prompt developed for one model may need adjustment when switched to another, even if the task is identical. Testing the same prompt across candidate models on a representative sample of actual CRE documents is the most reliable way to assess whether the example count and quality are calibrated correctly for the model in use.
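A cross-model test can be as simple as scoring format compliance over a document sample. In the sketch below, each model is a callable standing in for whatever client the team uses, and the validity check is supplied by the caller; both are assumptions for illustration.

```python
# Sketch: run one few-shot prompt against several candidate models and
# score format compliance. Model callables are stand-ins for real clients.
from typing import Callable

def score_models(prompt_for: Callable[[str], str],
                 models: dict[str, Callable[[str], str]],
                 sample_docs: list[str],
                 is_valid: Callable[[str], bool]) -> dict[str, float]:
    """Fraction of sample documents each model formats correctly."""
    scores = {}
    for name, call_model in models.items():
        valid = sum(1 for doc in sample_docs
                    if is_valid(call_model(prompt_for(doc))))
        scores[name] = valid / len(sample_docs)
    return scores
```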
Are there risks to using few-shot prompting that CRE teams should be aware of?
The most common risk is that the model anchors too heavily on the specific details of the examples rather than the structural pattern they are meant to illustrate, producing outputs that echo the example’s content rather than accurately reflecting the new input document. This is particularly problematic in lease abstraction or deal screening, where a model that mirrors an example’s rent figure or tenant name rather than extracting the correct values from the actual document can introduce errors that are difficult to catch without careful review. Choosing examples that are clearly distinct from the likely inputs and reviewing early outputs closely after deploying a few-shot prompt are both effective mitigations.
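One mitigation for anchoring can be automated: keep a list of distinctive values from the few-shot examples (rent figures, tenant names) and flag any output that echoes them. The watch-list approach is a sketch, not a substitute for review.

```python
# Sketch: flag outputs that echo distinctive values from the few-shot
# examples, a sign the model anchored on an example rather than the
# actual document. Watched values are supplied by the reviewer.

def anchoring_flags(output: str, example_values: list[str]) -> list[str]:
    """Return example-specific values that leaked into the output."""
    return [v for v in example_values if v in output]
```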