Most searches fail before they start. A poorly framed prompt forces Generative Search to guess at your intent, and the result is noise you have to sift through, follow-up queries you shouldn't need, and time you don't have.
This guide covers the techniques that change that: how to write prompts that return precise results the first time, how to use @ mentions to anchor a response to an exact source, and how to structure multi-step instructions for agent workflows. Whether you're pulling a single data point or running a recurring monitoring task, the same principles apply.
1. Be Specific About What You Need
Vague prompts produce broad results. Precise prompts produce useful ones. The difference usually comes down to five elements: Role, Context, Objective, Examples, and Necessary Instructions (RCOEI).
| Element | What It Does | Example |
|---|---|---|
| Role | Defines the lens GenSearch should apply | As a buy-side analyst... |
| Context | Sets the situation | We're evaluating a position in consumer staples... |
| Objective | States what you actually need | Identify margin pressure signals |
| Examples & Specs | Shows what a good answer looks like | Quote the exact language management used |
| Necessary Instructions | Adds constraints | Focus on the last two earnings calls only |
You don't need all five in every prompt, but covering more of them reduces ambiguity.
Weak prompt: Tell me about margins.
Strong prompt: As a buy-side analyst evaluating consumer staples exposure, summarize gross margin trends for S&P 500 consumer staples companies over the last four quarters, noting any outliers. Return results as a bullet list organized by company.
Tip: Specificity applies to scope as well as subject. The more clearly you define what you are and are not asking for, the more precisely GenSearch can target its response.
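The RCOEI checklist above can be thought of as a simple assembly step. A minimal sketch, assuming prompts are built as plain strings (`build_prompt` is a hypothetical helper for illustration, not a GenSearch API):

```python
def build_prompt(role=None, context=None, objective=None,
                 examples=None, instructions=None):
    """Join whichever RCOEI elements are present into one prompt."""
    parts = [role, context, objective, examples, instructions]
    return " ".join(p.strip().rstrip(".") + "." for p in parts if p)

prompt = build_prompt(
    role="As a buy-side analyst evaluating consumer staples exposure",
    objective=("summarize gross margin trends for S&P 500 consumer staples "
               "companies over the last four quarters, noting any outliers"),
    instructions="Return results as a bullet list organized by company",
)
```

Because every element is optional, the same helper covers both quick factual lookups (objective only) and fully framed analytical requests.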
2. Frame the Type of Answer You Want
GenSearch handles both factual and analytical queries, but it responds differently depending on how you frame the question.
- Use factual framing when you want data, figures, or direct quotes: What did the CFO say about capex guidance on the Q3 earnings call?
- Use analytical framing when you want interpretation or synthesis: How has management's tone on capex shifted over the last three quarters?
Mixing the two in a single prompt can dilute the response. When you need both, ask in sequence.
Engineer Your Output Format
If you want results in a specific format, say so explicitly. Common options:
- Table — side-by-side comparisons across companies, periods, or metrics
- Categorized list — themes, risks, or topics grouped by type
- Hierarchy — layered analysis where top-level findings are supported by sub-points
- Narrative paragraph — synthesis or executive summaries
Tip: Default to a 12-month window when no timeframe is specified, and adjust explicitly when you need something different: over the last two quarters, since Q1 2023, in the most recent fiscal year.
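The format and timeframe rules above are easy to forget when writing prompts by hand; if you template your prompts, they can be attached automatically. A sketch under the same assumption of plain-string prompts (`frame` and `DEFAULT_WINDOW` are hypothetical names, not part of GenSearch):

```python
DEFAULT_WINDOW = "over the last 12 months"  # mirrors the 12-month default in the tip

def frame(question, output_format="table", timeframe=None):
    """Make the timeframe and output format explicit in a question."""
    window = timeframe or DEFAULT_WINDOW
    return (f"{question.rstrip('.')} {window}. "
            f"Return the answer as a {output_format}.")

q = frame("Compare capex guidance across major semiconductor companies",
          output_format="categorized list",
          timeframe="over the last two quarters")
```

Omitting `timeframe` falls back to the 12-month window, so every prompt leaves with an explicit range even when you didn't think to add one.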
3. Use Qualifiers to Sharpen Results
Qualifiers help GenSearch filter by relevance, recency, or specificity without requiring manual scope adjustments.
Useful qualifier types include:
- Time references: in the last 12 months, since Q1 2023
- Source types: in earnings call transcripts, in 10-K filings
- Analytical stance: from a bearish perspective, focusing on downside risk
Stack qualifiers when precision matters:
Summarize what sell-side analysts said about supply chain risk in semiconductor earnings calls in Q4 2024.
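Qualifier stacking is just concatenation in a fixed order, which makes it easy to automate. A minimal sketch (`with_qualifiers` is a hypothetical helper) that reproduces the stacked example above:

```python
def with_qualifiers(base, *qualifiers):
    """Stack time, source, or stance qualifiers onto a base question."""
    return " ".join([base.rstrip(".")] + list(qualifiers)) + "."

q = with_qualifiers(
    "Summarize what sell-side analysts said about supply chain risk",
    "in semiconductor earnings calls",  # source-type qualifier
    "in Q4 2024",                       # time qualifier
)
```

Keeping each qualifier as its own argument makes it trivial to add or drop one when a result comes back too broad or too narrow.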
4. Break Complex Questions into Steps
Multi-part questions often return unfocused answers. If your question has two or three distinct components, ask them sequentially or structure them explicitly.
Instead of: asking for a full competitive analysis in one prompt
Try: starting with revenue trends → follow with margin comparisons → close with a synthesis request
This approach also makes it easier to spot where a response goes off track, so you can correct course without starting over.
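The sequential pattern can be sketched as a loop that feeds each answer forward as context for the next step. This assumes some callable search client; `my_search_client` is a placeholder, not a real GenSearch API:

```python
def run_sequence(search, steps):
    """Ask prompts in order, carrying each answer forward as context."""
    context, answers = "", []
    for step in steps:
        prompt = (context + "\n\n" + step).strip()
        answers.append(search(prompt))
        context = "Previous finding: " + answers[-1]
    return answers

steps = [
    "Summarize Acme Corp's revenue trends over the last four quarters.",
    "Compare Acme Corp's gross margins against its two closest competitors.",
    "Synthesize the findings above into a competitive assessment.",
]
# answers = run_sequence(my_search_client, steps)  # my_search_client is hypothetical
```

Because each step is a separate call, an off-track answer can be corrected by rerunning that one step rather than the whole sequence.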
5. Use @ Mentions to Target Specific Sources
The @ mention syntax lets you target a specific company, watchlist, or industry from inside the prompt itself without adjusting filters or reconfiguring your search scope beforehand.
How to Use It
Type @ anywhere in the prompt input field. A picker will appear listing available entities. Continue typing to narrow the list, then select the entity you want. You can add multiple @ mentions in a single prompt.
Free-Form vs. @-Driven Prompts
| Approach | Example |
|---|---|
| Free-form | Summarize the key risks from Acme Corp's most recent 10-K. |
| @-driven | Summarize the key risks from @AcmeCorp_10K_2024. |
The free-form version works when GenSearch can reliably infer the correct document from context. The @-driven version eliminates that inference step entirely, anchoring the response to exactly the company, watchlist, or industry you intend to research.
6. Iterate on Prompts Deliberately
Treat your first prompt as a draft. If the response misses the mark, identify the specific gap before revising, and avoid rewriting the entire prompt when only one element is off.
| Problem | Adjustment |
|---|---|
| Too broad | Add a qualifier, a time range, or a named source |
| Too narrow | Remove a constraint or reframe as an analytical question |
| Wrong format or tone | Specify the output explicitly — table, bullet summary, narrative |
Targeted edits make it easier to learn what works and build a personal library of prompt patterns over time.
7. Writing Prompts for Agent Workflows
Agents follow multi-step instructions. Write agent prompts as sequences, not single questions.
Structure: State the objective first → list the steps → define the output format.
Example: Monitor earnings call transcripts for {watchlist} each quarter. Flag any mention of margin compression or pricing power. Return a summary table with the company name, quote, and date.
Ambiguity is more costly in agent prompts than in single queries because errors can propagate across steps. Be explicit about conditions, exceptions, and the format you expect at each stage.
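Agent prompts with a `{watchlist}`-style slot are effectively templates, and filling them programmatically avoids copy-paste errors across recurring runs. A sketch using Python's standard `string.Template` (the watchlist name below is a hypothetical example):

```python
from string import Template

# $watchlist mirrors the {watchlist} slot in the example above.
AGENT_PROMPT = Template(
    "Monitor earnings call transcripts for $watchlist each quarter. "
    "Flag any mention of margin compression or pricing power. "
    "Return a summary table with the company name, quote, and date."
)

prompt = AGENT_PROMPT.substitute(watchlist="Consumer Staples Watchlist")
```

`substitute` raises an error if a slot is left unfilled, which is a cheap guard against exactly the kind of ambiguity that propagates across agent steps.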
Quick Reference
Do
- Use the RCOEI framework — define Role, Context, Objective, Examples, and Necessary Instructions where relevant
- Ask one complete question at a time
- Name specific companies, metrics, timeframes, and document types
- Engineer your output format explicitly (table, categorized list, hierarchy, narrative)
- Default to a 12-month timeframe and adjust when needed
- Use @ mentions to target companies, industries, or watchlists directly
- Break complex questions into sequential steps
Don't
- Use vague terms like "recent" without a defined range
- Submit a keyword or company name alone without a complete question
- Ask multi-part questions in a single prompt when precision matters
- Assume GenSearch will infer source intent from a company name alone when a specific document is needed
- Mix factual and analytical framing in a single prompt when you need precision from both