The Compound Prompt Produces Better Content
Asking AI to check its work produces significantly better results for writers and content creators.
Artificial intelligence tools are ubiquitous. AI has infiltrated search results, been added to just about every software tool we use, and become the first stop for many, if not most, writers and content creators working on articles, blog posts, or email newsletters.
Even as the letters in this sentence are being typed, Grammarly's aggressive AI is underlining phrases, complaining about passive voice, and hating the Oxford comma. So the question is not whether we use AI, but how.
Article SOPs
Since May 2024, I have had an assistant (human) follow a prompting standard operating procedure (SOP) to produce articles for some of my blogs and content sites. For example, she has prompted her way to articles for Science Fiction Classics and Ecommerce Shelf Life.
There is an SOP for each kind of article she prompts. Each SOP walks her through a series of prompting steps. The details vary with the type of article being composed, but every SOP follows the same general prompting framework.
One of the best improvements we made to this prompting framework was having the AI check its work: asking it to fact-check itself and critique its writing.
The result has been a marked improvement in writing quality.
Prompting Framework
Here is how this "check yourself" step works in a typical article prompt SOP. The framework assumes you have a topic, often in the form of a working title for the article.
- Develop the research. Since my assistant is neither a subject matter expert nor a journalist, we have the AI do the initial research. Often, this prompt asks the AI to "teach me" about the topic. Recently, we have been using "Deep Research" modes in ChatGPT or Gemini for this step.
- Assign the task. Once the AI has researched the topic and provided background information, assign the task. This prompt is usually something like, "Imagine you are an entertainment journalist with 25 years of experience. You're composing...."
- Include references. When we assign the task, we also upload three documents. One is the publication's style guide. The second document defines the audience, and the third outlines the article structure, specifies the word count, and includes examples.
- Fact-check. When the AI generates its article draft, we ask it and another large language model to look for factual errors. So, if you're working with ChatGPT, ask GPT to fact-check the work, and also paste the draft into Gemini and ask it to check, too. Have GPT update the article draft with any changes.
- Critique. Finally, ask the AI to review and critique the article. The AI will evaluate its work and offer recommended changes. Have it update the work.
- Human review. Lastly, my assistant and I will each read the article and make minor changes.
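The framework above can be sketched as a simple pipeline. Everything here is illustrative: `ask` is a hypothetical stand-in for whatever chat interface or API you use (it just logs prompts so the flow can be traced), and the prompt wording is a loose paraphrase of the SOP, not the actual prompts.

```python
def ask(log, prompt):
    """Hypothetical helper: stands in for a call to a chat tool or API.
    It records the prompt and returns a placeholder response."""
    log.append(prompt)
    return f"response {len(log)}"

def draft_article(topic, style_guide, audience, structure):
    chatgpt, gemini = [], []  # separate conversation logs for two models

    # 1. Develop the research: have the AI "teach me" about the topic.
    research = ask(chatgpt, f"Teach me about: {topic}")

    # 2-3. Assign the task, attaching the three reference documents.
    draft = ask(chatgpt,
                "Imagine you are a journalist with 25 years of experience. "
                f"Compose an article on '{topic}'.\n"
                f"Style guide: {style_guide}\nAudience: {audience}\n"
                f"Structure and examples: {structure}\n"
                f"Background research: {research}")

    # 4. Fact-check: ask a second model for errors, then have the
    #    drafting model fold the corrections back into the draft.
    notes = ask(gemini, f"Fact-check this draft:\n{draft}")
    draft = ask(chatgpt, f"Revise the draft for these issues:\n{notes}")

    # 5. Critique: have the model review its own work, then revise.
    critique = ask(chatgpt, f"Critique this article:\n{draft}")
    draft = ask(chatgpt, f"Revise per this critique:\n{critique}")

    return draft  # 6. Human review happens outside the script
```

The point of the sketch is the shape of the workflow: research and drafting are separate prompts, a second model is brought in only for fact-checking, and the critique step feeds back into one more revision before any human reads it.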
We've found this process produces better results than simply asking for an article or blog post.
Why This Works
This approach works because of how large language models (LLMs) function.
An LLM is trained on massive amounts of text (books, articles, websites, and more) to learn how words and ideas usually follow one another. In a sense, it predicts what comes next when you write a blog post or article, but it does not understand the language like a person does.
Imagine writing a sentence and constantly guessing the next word. The model starts with a prompt or a few words, then picks the word it thinks fits best, adds it to the sentence, and repeats this process. Each new word is chosen solely based on the words that came before it.
In its simplest form, the LLM never goes back to change what it already generated. It does not edit previous words and, therefore, makes mistakes in fact and composition.
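That word-by-word loop can be sketched in a few lines. The lookup table below is a made-up toy standing in for the neural network; a real LLM scores thousands of candidate tokens at each step, but the one-way structure of the loop is the same.

```python
# Toy "model": for each word, the single most likely next word.
# This table is invented for illustration; a real LLM computes
# these predictions with a neural network, not a dictionary.
NEXT_WORD = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def generate(prompt, steps):
    """Greedy left-to-right generation: pick the best next word,
    append it, repeat. Earlier choices are never revisited."""
    words = prompt.split()
    for _ in range(steps):
        last = words[-1]
        words.append(NEXT_WORD.get(last, "."))  # "." if no prediction
    return " ".join(words)
```

Because each choice is final, an early misstep propagates through everything that follows. Asking the model to fact-check and critique the finished draft is effectively a second pass over the text, something plain left-to-right generation never gets on its own.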
When you ask AI to check its work, you allow it to make those corrections.