How To: Write a Good AI Agent Prompt That Actually Works


Date: May 1, 2025

Author: Tim Shea

With all the hype around AI agents, we hear a common question from clients and colleagues:

“I know how to use ChatGPT to do useful tasks, but how do I actually create an AI agent with its own personality, its own domain knowledge, and its own strategy for outputting high-quality recommendations and content?”

Admittedly, there is a big jump from simply asking ChatGPT questions to creating an autonomous agent that works in a specific way that you've designed: something that can be added to a customer-facing website, shared with a colleague, or used internally by your team. Below, we spell out the simplest way to do this, so that you can get started creating an agent that actually works.

The first thing you need to know is the difference between writing a one-off prompt in the standard ChatGPT interface and writing a prompt that gives an agent a durable “personality”.

The Basic Prompt Structure for Agents

Think of an agent as a text file: a set of instructions that reads like something you would tell another human being, but written in a specific format. The format we like includes five main parts.

  1. Persona

  2. Task

  3. Context

  4. Format

  5. Examples

There are many emerging best practices for writing agent prompts, but “Persona-Task-Context-Format” is a great, simple way to get started. We've found that adding Examples at the end is also very helpful. It's a great way to correct errors and address edge cases. Plus, LLMs seem to respond much better when you SHOW them what to do as opposed to when you just TELL them what to do.

This format also helps you organize your thinking and forces you to give the agent the context it needs to do its job effectively. We often forget that LLMs are extremely general-purpose tools; they will tend to fall back to their default behavior, or even get confused and hallucinate, if we leave out information that is important to our use case.

When you're writing the prompt, a handy shortcut is to use XML tags to delineate each section. Don't worry too much about what XML is; they're just simple HTML-like tags that denote the start and end of each section.
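If you're assembling these sections programmatically, the tag-wrapping is easy to automate. Here's a minimal Python sketch; the helper name and section contents are illustrative, not part of any fixed schema:

```python
# Sketch: assembling a Persona-Task-Context-Format prompt with XML-style tags.
# The section names and contents below are illustrative placeholders.

def build_agent_prompt(sections: dict) -> str:
    """Wrap each section's text in <Name>...</Name> tags and join them."""
    parts = []
    for name, text in sections.items():
        parts.append(f"<{name}>\n{text.strip()}\n</{name}>")
    return "\n\n".join(parts)

prompt = build_agent_prompt({
    "Persona": "You are a senior business development strategist.",
    "Task": "Summarize the website the user provides.",
    "Context": "The user will provide a URL to a company website.",
    "Format": "Plain text with clearly labeled sections.",
})
print(prompt)
```

Because the sections are a plain dictionary, you can iterate on one section at a time without touching the rest of the prompt.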

Example AI Agent Prompt

Now imagine that you want to create an agent that looks at a potential business partner's website, reads the text on it, and thinks about how your company might form a business partnership with them.

Let's turn this simple idea into a prompt using the “Persona-Task-Context-Format” technique.

<Persona>

  You are a senior business development strategist with deep knowledge of B2B partnerships in tech and AI.

</Persona>

<Task>

  Read the website that the user provides and summarize key information that would inform a potential outreach strategy.  Compare their website with our website which is https://www.latticeworkinsights.com/

</Task>

<Context>

  The user will provide a URL to a company website. Your goal is to extract useful signals (industry, product, target customers, positioning, etc.) that can inform how our company Latticework Insights might approach them for a strategic partnership.

</Context>

<Format>

  Provide a well-structured response using clearly labeled sections in plain text:

  - Company Overview

  - Strategic Fit

  - Outreach Angle

  - Potential Value Exchange

</Format>

<Examples>

  Example Input:

  https://examplecompany.com

  Example Output:

  Company Overview:

  ExampleCo provides enterprise software for predictive maintenance in industrial IoT settings...

  Strategic Fit:

  Their data-heavy platform aligns with Latticework’s analytics expertise...

  Outreach Angle:

  We might offer a co-developed case study or advanced forecasting model...

  Potential Value Exchange:

  They gain credibility and data science capabilities; we gain exposure and a flagship partner.

</Examples>

Testing the AI Agent Prompt

Now we'd like to test to see if this prompt works well for our purposes. 

OpenAI provides a “Playground” that allows us to rapidly test and iterate on our agent prompts. It allows us to “talk” to the new agent and validate whether it responds the way we want. We can also edit and improve the prompt right in the Playground. Maybe we want to tweak the context, edit the output format, or enrich it with additional knowledge we have about forming partnerships.

There is an entire field dedicated to writing good prompts, generally known as “Prompt Engineering”. Google publishes an excellent guide to prompt engineering for its Gemini model, written in non-technical language. We won't go into all of those nuances, as the field is vast and evolving rapidly, but there are a couple of tips and tricks that work really well.

Improving the AI Agent Prompt

LLM Model Versions

Selecting the appropriate OpenAI model can often have dramatic effects on the outcome. Below is a shorthand guide to some of the OpenAI models and their specialties:

  • GPT-4o: Great for most tasks, offering balanced and advanced reasoning.

  • GPT-4.5: Good for writing and exploring ideas.

  • GPT-4.1: Great for quick coding and analysis tasks.

  • GPT-4.1-mini: Faster for everyday tasks.
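If your agent handles several kinds of tasks, it can help to make the model choice explicit in code. This is a tiny sketch mirroring the list above; the task categories and fallback are our own assumptions, not an official OpenAI mapping:

```python
# Sketch: picking a model per task type, mirroring the shorthand list above.
# The category names and fallback choice are illustrative assumptions.
MODEL_FOR_TASK = {
    "general": "gpt-4o",
    "writing": "gpt-4.5",
    "coding": "gpt-4.1",
    "quick": "gpt-4.1-mini",
}

def pick_model(task_type: str) -> str:
    # Fall back to the balanced general-purpose model for unknown task types.
    return MODEL_FOR_TASK.get(task_type, "gpt-4o")
```

Centralizing the mapping makes it easy to re-benchmark and swap models as new versions ship.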

Tools

Agents also have the ability to call what are called “tools”. Tools are basically external software: running Google searches, calling APIs, reading from files or databases, or really anything you can imagine. The basic example that's often given is a calculator tool. LLMs are quite bad at basic arithmetic, so you can literally tell the agent: if the user requests any sort of math, call this calculator tool. We'll explore tools comprehensively in a future post.
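To make the calculator example concrete, here is a sketch of what such a tool might look like: a small safe arithmetic evaluator plus a tool schema in the Chat Completions function-calling format. The implementation is our own simplified stand-in, not a library feature:

```python
# Sketch: a calculator tool an agent could call instead of doing arithmetic
# itself. The evaluator and schema below are simplified illustrations.
import ast
import operator as op

# Supported binary operations for the safe evaluator.
_OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def calculator(expression: str) -> float:
    """Safely evaluate a basic arithmetic expression like '12 * (3 + 4)'."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        raise ValueError("unsupported expression")
    return _eval(ast.parse(expression, mode="eval"))

# The schema the agent would be given, in Chat Completions `tools` format.
CALCULATOR_TOOL = {
    "type": "function",
    "function": {
        "name": "calculator",
        "description": "Evaluate a basic arithmetic expression.",
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
}
```

When the model decides math is needed, it returns a tool call naming `calculator`, your code runs the function locally, and the numeric result is sent back to the model.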

AI Slang

Lastly, there is a whole vocabulary of “AI Slang”: keywords that are either hardcoded into LLMs or emergent features discovered as LLMs evolve. Think of them as slang you would use with other humans that invokes specific types of functionality. A couple of examples of AI slang are:

  • “Ultrathink”, “Think Deeply”, or “Think for 30 Minutes”

If you feel like an Agent is IGNORING big parts of your prompt, you can just tell it:

“Think deeply about this section. Use Ultra Think. Think for 30 minutes.”

The agent will slow down, consume more resources (tokens), and produce smarter results.

  • “Tree-of-Thought”

If you feel an agent is being too rigid, you can just tell it:

“use Tree of Thought to explore the problem”

The agent will consider multiple solutions.  It will mix and match disparate-seeming knowledge. And then it will give you the pros and cons of each approach.  This is an excellent technique when you want the agent to explore a very large domain and give you a bunch of options.

  • “Pause”

If your prompt gets really long (it happens!), you can tell the LLM to “pause” between sections. The agent will take a breather and think deeply about each section before it moves on to the next one.

Summary

By following these structured practices, your AI agent prompts will become more precise, powerful, and effective.

If you'd like help creating an agent prompt, please reach out to us directly.
