4 Illustrative Examples of AI‑Assisted Work
This chapter presents illustrative examples of how generative AI can be used in research and investigative contexts. The examples are intentionally generic. Their purpose is not to prescribe workflows, endorse specific tools, or define best practices, but to demonstrate patterns of use that align with the conceptual and normative commitments outlined in earlier chapters.
Across all examples, the emphasis is on how AI outputs are interpreted and integrated, rather than on the outputs themselves. The value of AI assistance lies not in producing answers, but in supporting human sense‑making, reflection, and revision.
4.1 Early‑stage inquiry
In the early stages of a project, the problem space is often under‑specified. Questions may be loosely defined, relevant boundaries unclear, and alternative framings insufficiently explored. In this context, generative AI systems can help broaden the space of possible questions, angles, or interpretations.
For example, a researcher might ask for alternative ways of framing a problem, related issues that could warrant attention, or different analytical lenses that might be applied. The resulting outputs function as prompts for further thinking rather than as research findings. They are starting points for exploration, not endpoints for analysis.
4.1.1 Mini‑example: exploratory framing
Hypothetical prompt
“List several alternative ways this policy issue could be framed for analysis, highlighting different underlying concerns or objectives.”
Illustrative outcome
The system proposes multiple framings (e.g., equity‑focused, cost‑containment‑focused, implementation‑focused), each emphasizing different trade‑offs.
How it is used
The researcher reviews the list, discards irrelevant framings, and selects one to pursue based on the project’s purpose and constraints.
4.1.2 Misuse or overreach
Counter‑example
Treating the generated framings as a comprehensive or authoritative taxonomy of the issue, and proceeding as if unlisted framings do not exist.
This overreach risks premature closure and can obscure perspectives that require domain knowledge or stakeholder input to surface.
4.2 Drafting and revising analytical text
Generative AI can assist with drafting and revising analytical text, particularly with respect to clarity, organization, and tone. For instance, it may help identify ambiguous passages, suggest alternative ways of structuring an argument, or offer rephrasings that improve readability for a given audience.
In this role, the system functions as a linguistic and organizational aid rather than as a substantive contributor. Claims, interpretations, and evaluative judgments must remain under human control. AI‑assisted revisions should therefore be reviewed not only for surface quality, but for whether they preserve the intended meaning, qualifications, and analytical stance of the original text.
4.2.1 Mini‑example: revising for clarity
Hypothetical prompt
“Rewrite this paragraph to improve clarity and flow, while preserving all substantive claims and qualifications.”
Illustrative outcome
The system produces a smoother, more concise version with clearer sentence structure.
How it is used
The researcher compares the revision against the original to ensure no claims were strengthened, softened, or reframed unintentionally.
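This comparison step can be partly mechanized. The sketch below is illustrative only and not part of the chapter's prescriptions; the function name `wording_changes` and the sample sentences are hypothetical. It uses Python's standard `difflib` to surface word-level differences between an original paragraph and a revised one, so that dropped hedges or qualifiers stand out for human review.

```python
import difflib

def wording_changes(original: str, revised: str) -> list[str]:
    """Return word-level insertions and deletions between two drafts.

    The diff only flags surface changes; judging whether a change
    alters a claim's strength or qualifications remains a human task.
    """
    diff = difflib.ndiff(original.split(), revised.split())
    # Keep only words added ("+ ") or removed ("- ") by the revision.
    return [token for token in diff if token.startswith(("+ ", "- "))]

original = "The policy may reduce costs in some settings."
revised = "The policy reduces costs."
for change in wording_changes(original, revised):
    print(change)
# Dropped hedges such as "may" appear as "- " entries, prompting the
# researcher to check whether the claim was strengthened.
```

The tool does not decide anything; it merely makes the set of changed words explicit so the researcher's judgment can be applied to each one.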
4.2.2 Misuse or overreach
Counter‑example
Accepting a revised paragraph without checking whether nuances, uncertainties, or methodological caveats were altered or removed.
This can result in analytically misleading text that appears more confident or definitive than the underlying evidence supports.
4.3 Working across multiple sources
When dealing with large bodies of text—such as reports, interview transcripts, policy documents, or literature reviews—generative AI systems can help summarize recurring themes, contrasts, or points of divergence across sources. These summaries can serve as navigational aids, helping researchers decide where closer reading or deeper analysis is warranted.
However, such outputs should be treated as provisional and incomplete. They do not replace engagement with primary material, nor do they resolve questions about evidentiary weight, methodological quality, or contextual nuance.
4.3.1 Mini‑example: cross‑document orientation
Hypothetical prompt
“Summarize recurring themes and points of disagreement across these documents, without assessing their validity.”
Illustrative outcome
The system highlights several common themes and notes where documents diverge in emphasis or conclusions.
How it is used
The researcher uses the summary to prioritize which documents to read closely and where to look for supporting or contradictory evidence.
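A very simple mechanical aid can support this prioritization. The sketch below is a crude illustration under stated assumptions, not a recommended method: the function name `recurring_terms`, the stopword list, and the sample documents are all hypothetical. It ranks terms by how many documents they occur in, which can hint at where themes recur, but says nothing about whether the sources agree or what weight their claims deserve.

```python
from collections import Counter

def recurring_terms(docs: dict[str, str], min_docs: int = 2) -> list[tuple[str, int]]:
    """Rank terms by the number of documents they occur in.

    An orientation aid only: it suggests where themes may recur,
    not whether sources agree or how strong their evidence is.
    """
    stopwords = {"the", "a", "an", "of", "in", "and", "to", "is", "are"}
    doc_counts = Counter()
    for text in docs.values():
        # Each document contributes a term at most once (a set, not a list).
        terms = {w.strip(".,;:").lower() for w in text.split()} - stopwords
        doc_counts.update(terms)
    return [(t, n) for t, n in doc_counts.most_common() if n >= min_docs]

docs = {
    "report_a": "Implementation costs dominate the early phase.",
    "report_b": "Equity concerns outweigh implementation costs.",
    "report_c": "Stakeholders stress equity and implementation.",
}
print(recurring_terms(docs))
```

In this toy corpus, "implementation" surfaces in all three documents, flagging it as a candidate theme to examine through close reading, not as a finding in itself.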
4.3.2 Misuse or overreach
Counter‑example
Treating the AI‑generated summary as a substitute for reading the source material, or citing it as if it reflected evidentiary consensus.
This risks flattening important distinctions and overlooking context, methods, or limitations embedded in the original sources.
4.4 Interpretive control as a unifying principle
Across these examples, a consistent theme is interpretive control. Generative AI can assist with exploration, organization, and expression, but the usefulness of its outputs depends entirely on how the researcher evaluates and integrates them.
In each case, the system provides material to think with, not conclusions to accept. Maintaining this distinction is essential for ensuring that AI‑assisted work remains analytically rigorous, methodologically transparent, and accountable to the standards of the research context in which it is used.
4.5 Transition: from examples to practice
The examples in this chapter illustrate what AI‑assisted work can look like when used in support of inquiry rather than as a substitute for judgment. The next chapter builds on these illustrations by turning to practice: how to review, document, and govern AI‑assisted work in ways that make assumptions visible, decisions traceable, and responsibility clear.
Where this chapter focuses on patterns of use, the following chapter addresses patterns of oversight—closing the loop between exploration, interpretation, and accountability.