Prompting Basics
Overview
Output quality is a direct function of prompt quality. This lesson establishes the characteristics that distinguish effective prompts from ineffective ones and introduces the specificity principle.
Why Prompt Quality Matters
LLMs generate output by predicting the next token conditioned on the input. A vague input broadens the prediction space, increasing the probability of irrelevant or incorrect output. A specific input constrains the prediction space to the intended result.
Analogy: "Can you help with this?" versus "Review line 42 of auth.js for a null check bug." The latter constrains the model's attention to a specific location, problem, and expected action.
Prompt Quality Spectrum
Low specificity:
Fix my code
Medium specificity:
This function returns undefined when the input array is empty.
Add a check at the start to return an empty array instead.
High specificity:
In src/utils/filter.js, the filterItems function returns undefined
when given an empty array. Add a guard clause at line 12 that
returns [] if items.length === 0.
The Specificity Principle
Effective prompts share three structural characteristics:
- Precise location: File path, function name, and line number where applicable
- Clear problem statement: Current behavior and expected behavior
- Bounded scope: One discrete task per prompt
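The three characteristics above can be captured mechanically. A minimal sketch in Python, assembling a prompt from its structural parts; the function name and field names here are illustrative, not part of any tool or API:

```python
def build_prompt(location: str, current: str, expected: str, task: str) -> str:
    """Combine precise location, problem statement, and one bounded task.

    Hypothetical helper for illustration only.
    """
    return (
        f"In {location}: currently {current}, "
        f"but it should {expected}. "
        f"Task: {task}"
    )

prompt = build_prompt(
    location="src/utils/filter.js, filterItems, line 12",
    current="the function returns undefined for an empty array",
    expected="return an empty array",
    task="add a guard clause returning [] when items.length === 0",
)
print(prompt)
```

Filling each field forces the prompt toward the high-specificity end of the spectrum: a location, a current-versus-expected statement, and exactly one task.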
Anti-Patterns
- Vagueness: "Make it better" provides no actionable direction
- Overloading: "Fix this, add tests, refactor, and document" exceeds a single task boundary
- Assumed context: The model operates on explicitly provided information only; implicit assumptions produce incorrect output
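The overloading anti-pattern can sometimes be caught with a crude check. A rough heuristic sketch, not a robust classifier; the verb list and threshold are arbitrary assumptions:

```python
# Flag prompts that bundle several distinct task verbs, a common sign
# of the overloading anti-pattern. Purely illustrative heuristic.
TASK_VERBS = {"fix", "add", "refactor", "document", "test", "rename", "optimize"}

def looks_overloaded(prompt: str, limit: int = 1) -> bool:
    words = {w.strip(",.").lower() for w in prompt.split()}
    return len(words & TASK_VERBS) > limit

looks_overloaded("Fix this, add tests, refactor, and document")  # True: four tasks
looks_overloaded("Add a guard clause at line 12")                # False: one task
```

A flagged prompt should be split into one prompt per task rather than reworded.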
Note: Each conversation session begins without prior conversation history. Context must be explicitly provided unless stored in CLAUDE.md or the memory system.
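Because each session starts without history, any context the model needs must travel inside the prompt itself. A minimal sketch of embedding a file's contents explicitly; the function name and prompt wording are assumptions for illustration:

```python
from pathlib import Path

def prompt_with_context(path: str, request: str) -> str:
    """Embed a file's contents in the prompt; the model sees only what is sent."""
    source = Path(path).read_text()
    return f"Here is {path}:\n\n{source}\n\n{request}"
```

For example, prompt_with_context("src/utils/filter.js", "Add a guard clause at line 12.") produces a single prompt carrying both the code and the request, so nothing is left to assumed context.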
Key Takeaways
- Prompt specificity directly determines output specificity
- Effective prompts include location, problem statement, and scope boundary
- One task per prompt produces higher-quality output than compound requests
- The model operates on explicit context only; implicit assumptions are a failure mode