Context Engineering for Better AI Content: How Human Oversight Creates Structured Rules
Date: September 16th, 2025
Author: Daniel Shanklin, Director of AI and Technology, AIC Holdings
Tags: AI Development, Content Creation, Prompt Engineering, Human-AI Collaboration
Sources:
- AI Pulse Article Development Process: Original version of Bridgewater analysis showing typical AI content issues
- Revised Bridgewater Analysis: Human-guided revision demonstrating improved content quality
This article itself demonstrates a key principle: AI alone produces structured but robotic content, while AI paired with human oversight can build systematic rules that create valid, verifiable, and engaging writing. What you're reading now emerged from an iterative process where AI learned to recognize and avoid common content problems through specific feedback and structured guidelines.
The Problem with Default AI Content
When I first asked Claude to write about Bridgewater's AI fund, the result was technically accurate but painful to read. The AI produced exactly what most AI systems create: an over-structured document filled with bullet points, bolded subheadings, and business jargon that read like a technical specification rather than business analysis.
The original version included phrases like "systematic evolution," "institutional validation," and "sophisticated technology governance approaches." Every section had rigid formatting with bolded labels like "Technical architecture:" followed by bullet lists. The writing had no personality, no flow between ideas, and no clear analytical perspective.
More problematic, the AI initially attributed academic research results to Bridgewater's actual fund performance, a factual error that could have misled real business decisions. This highlighted a critical issue: AI systems make logical leaps without proper verification, especially when synthesizing information from multiple sources.
Building Systematic Rules Through Human Feedback
The transformation happened through iterative feedback that identified specific problems and created structured rules to prevent them. Rather than general guidance like "write better," I gave Claude precise examples of what to avoid and what to do instead.
For business jargon, I provided specific before-and-after examples:

- Instead of "leverages best-in-class AI solutions for optimal synergies"
- Use "uses OpenAI, Anthropic, and Perplexity APIs"

For rigid formatting, I explained the bridge sentence technique:

- Instead of ending paragraphs with conclusions followed by bolded subheadings
- Create hooks that pull readers into the next idea naturally

For factual accuracy, I established verification requirements:

- Every number, percentage, and quote needs a traceable source
- Never invent company examples or performance figures
- Distinguish clearly between academic research and actual business results
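Guidelines like these can be captured as data rather than prose, so they apply mechanically to every draft. Here is a minimal sketch of that idea: the jargon phrase and its replacement come from the examples above, while the `REWRITE_RULES` name and the matching logic are illustrative assumptions, not the actual tooling we used.

```python
# Before-and-after guidelines encoded as data: jargon phrase -> concrete replacement.
REWRITE_RULES = {
    "leverages best-in-class AI solutions": "uses OpenAI, Anthropic, and Perplexity APIs",
}

def suggest_rewrites(draft: str, rules: dict[str, str]) -> list[tuple[str, str]]:
    """Return (jargon phrase, concrete replacement) pairs found in the draft."""
    return [(jargon, concrete) for jargon, concrete in rules.items() if jargon in draft]

draft = "The firm leverages best-in-class AI solutions for research."
for jargon, concrete in suggest_rewrites(draft, REWRITE_RULES):
    print(f'Replace "{jargon}" with "{concrete}"')
```

The point of the data-driven shape is that each new correction from a human reviewer becomes one more dictionary entry, not a rewrite of the checking logic.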
Context Engineering Creates Transferable Guidelines
The breakthrough came when we documented these specific corrections into systematic guidelines. The AI didn't just learn to fix one article; it learned principles that apply to any business writing task.
The guidelines now include concrete examples of human-like writing techniques: the "data sandwich" method for presenting statistics with context, transitional phrases that replace rigid subheadings, and narrative frameworks that organize complex information without over-structuring it.
Most importantly, the guidelines include structured negatives: explicit lists of what not to do. AI systems excel at following rules when those rules are specific and actionable. Telling an AI "don't use business jargon" is less effective than providing a list of specific phrases to avoid: "synergies," "best-in-class," "scalable solutions."
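A structured negative is simple enough that a post-generation linter can enforce it. The banned phrases below are the ones named above; the `flag_jargon` function is a hypothetical sketch of how such a deny-list might be checked automatically, not a description of our actual pipeline.

```python
# "Structured negatives": an explicit deny-list checked against generated text.
BANNED_PHRASES = ["synergies", "best-in-class", "scalable solutions"]

def flag_jargon(text: str, banned: list[str] = BANNED_PHRASES) -> list[str]:
    """Return every banned phrase that appears in the text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in banned if phrase in lowered]

print(flag_jargon("We deliver best-in-class synergies at scale."))
```

Because the check is a plain list membership test, a domain expert can extend the deny-list without touching code.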
The Verification Challenge
One critical discovery was that AI systems can hallucinate facts even when trying to be accurate. Claude initially claimed Bridgewater's fund achieved a 4.57 Sharpe ratio, but verification showed this number came from unrelated academic research, not Bridgewater's actual performance.
This led to stronger verification requirements: every factual claim must have a traceable source, and academic research results cannot be attributed to specific companies without explicit evidence. The AI now flags potential attribution errors and distinguishes between theoretical research and real-world implementation.
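These verification rules can also be expressed as a data structure that fails a draft before publication. The `Claim` fields and `review` logic below are assumptions for illustration, sketched from the rules just described, not the actual review process.

```python
# Verification rules as data: every claim carries a source, and academic
# results attributed to a specific company are flagged for human review.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    statement: str
    source: Optional[str] = None          # every claim needs a traceable source
    is_academic_result: bool = False      # result came from a paper, not company data
    attributed_to_company: bool = False   # the draft credits it to a specific company

def review(claims: list[Claim]) -> list[str]:
    """Return human-readable flags for claims that fail the verification rules."""
    flags = []
    for claim in claims:
        if not claim.source:
            flags.append(f"unsourced: {claim.statement}")
        if claim.is_academic_result and claim.attributed_to_company:
            flags.append(f"attribution risk: {claim.statement}")
    return flags

claims = [Claim("Fund achieved a 4.57 Sharpe ratio",
                source="academic preprint",
                is_academic_result=True,
                attributed_to_company=True)]
print(review(claims))
```

Under this scheme the Sharpe-ratio error described above would have been caught mechanically: the claim has a source, but the source is academic while the attribution is to a company.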
Human oversight proved essential here because I could recognize when something didn't match my knowledge of the industry and investigate further. The AI couldn't self-correct this type of logical error without human guidance.
Measurable Improvement in Content Quality
The difference between the original and revised Bridgewater article is dramatic. The revised version flows naturally from one idea to the next, uses specific facts instead of vague business speak, and maintains analytical credibility without sounding robotic.
Readability improved because the content became more engaging. Information density increased because we eliminated redundant formatting and focused on insights rather than categorization. Most importantly, factual accuracy improved through systematic source verification.
The revised article sounds like a knowledgeable analyst explaining their research findings rather than an AI organizing information into neat categories. This transformation happened because human feedback created structured rules the AI could follow consistently.
Implications for AI Development at AIC
This process reveals how we should approach AI implementation across other business functions. AI systems work best when paired with domain experts who can provide iterative feedback and create systematic guidelines.
The key insight is that effective AI deployment requires context engineering: building specific rules, examples, and verification processes that guide AI behavior toward business objectives. General instructions like "be professional" or "write well" don't work. Specific examples and structured negatives do.
For AIC's portfolio companies, this suggests a framework for AI implementation:
| Step | Brief Description | Detailed Description | Example |
|---|---|---|---|
| 1. Domain Expert Oversight | Identify common AI failure modes | Have subject matter experts work directly with AI systems to spot recurring problems and quality issues that automated systems miss | Content expert recognizes AI uses business jargon like "synergies" and "best-in-class" instead of specific, actionable language |
| 2. Create Specific Guidelines | Build examples of what to do and avoid | Develop concrete before-and-after examples rather than general instructions, focusing on specific behaviors and outputs | "Instead of 'leverages best-in-class solutions' write 'uses OpenAI, Anthropic, and Perplexity APIs'" |
| 3. Build Verification Processes | Establish quality control checkpoints | Create systematic checks for accuracy, sourcing, and alignment with business objectives before final output | Every factual claim requires traceable source; distinguish between academic research and actual company performance |
| 4. Document Transferable Rules | Turn successful patterns into systematic guidelines | Capture what works into repeatable frameworks that improve AI performance across similar tasks | Content writing guidelines with specific examples, structured negatives, and verification checklists |
The Human-AI Partnership Model
The most effective approach isn't replacing human judgment with AI, but creating systems where AI handles information processing while humans provide strategic guidance and quality control.
In content creation, AI excels at research, synthesis, and first-draft generation. Humans excel at recognizing when something sounds wrong, understanding business context, and creating guidelines that improve future performance.
This partnership model scales better than either pure human effort or unsupervised AI. The human investment in creating good guidelines pays dividends across multiple AI tasks, while AI handles the time-intensive research and drafting work.
The goal isn't perfect AI that works without oversight, but reliable AI that consistently produces quality output when properly guided. The structured rules we developed for content creation demonstrate how human expertise can be systematized into guidelines that improve AI performance across similar tasks.
This approach to AI development, using iterative feedback to create systematic rules, offers a practical framework for implementing AI across other business functions where quality and accuracy matter more than speed alone.