Why I Rebuilt My Own Prompting Framework


Phillip Twyford

Most AI output is disappointing. Not because the tools are bad, but because the instructions going in are vague.

Business owners open ChatGPT, Claude, etc., type a rough request, get a mediocre answer, and conclude that AI isn't for them. That conclusion is wrong, but the frustration is fair. When you put in a task without context, without a clear outcome, without any structure, you get back something generic. It's cause and effect.

The fix isn't a better tool. It's a better process.

How SCORE Started

A while back, I built a prompting framework I called the SCORE Method. It was a five-step system I created as a lead magnet for my website, designed to give business owners a structured way to write better prompts without having to think too hard.

SCORE stood for:

  • Situation

  • Context

  • Outcome

  • Role

  • Examples

It did what it was supposed to do. It was clear, simple, and repeatable.

The Comparison That Made Me Look Harder

Not long ago, I came across another prompting framework, one structured quite differently from SCORE. Rather than dismiss the comparison or defend my own version, I decided to do what I think any decent consultant should do: look at both honestly and see which held up.

It was a useful exercise. When I put them side by side, SCORE performed well on structure and context. But one thing was missing.

SCORE never asked you to define the business goal before you started the task.

You could go through every step correctly and still produce content that was technically well-structured but commercially pointless. You'd have given the AI a clear role, solid context, a specific format -- and no idea what the output was actually supposed to achieve.

That's a real gap. So I fixed it.

Introducing SIGNAL

SIGNAL is SCORE rebuilt with that gap closed. It's a six-step prompting framework designed for business owners who want AI output they can actually use, not output they have to rewrite from scratch.

The name change matters. SCORE was a useful checklist. SIGNAL is a system. The difference is that SIGNAL starts where prompting advice almost never does: with the business outcome you're trying to produce.

Here's the full framework:

S -- Set the Goal

Before you write a single instruction, define what success looks like. Not the task. The outcome.

"Write me a follow-up email" is a task. "Get a quote request from a warm prospect who went quiet three weeks ago" is a goal. One of those gives the AI something to aim at. The other doesn't.

This step is what most prompting advice skips. It's also the step that makes the biggest difference to output quality.

I -- Instruct

Assign a role and give a precise task.

"Act as an experienced sales consultant writing to a B2B professional services prospect. Write a two-paragraph follow-up email that reopens the conversation without pressure."

The role shapes how the AI thinks. The precise task shapes what it produces.

G -- Give Context

Describe the situation, the audience, and the relevant background. The more specific, the better.

"The prospect is an accountancy practice owner in Dublin. We spoke three weeks ago. They were interested but said the timing wasn't right. The tone needs to be warm but professional -- no hard sell."

Generic context produces generic output. Specific context produces something that sounds like you actually know the person.

N -- Note the Format

Tell the AI exactly how you want the output structured.

Word count, number of paragraphs, whether to include a subject line, whether to use bullet points or prose, and what to avoid. If you don't specify, the AI will make its own decisions. Sometimes those decisions are fine. Often they aren't.

Also state your deal-breakers. If there are phrases you never use, or a sign-off that's yours, include that here.

A -- Ask for Reasoning

Ask the AI to show its thinking and flag anything it's uncertain about.

"Before you write the email, tell me what approach you're taking and why. If there's anything you're unclear on, flag it."

This step catches problems before they appear in the output. It also produces better output, because the act of explaining its reasoning forces the AI to think through the brief more carefully.

L -- Loop and Refine

The first output is a starting point, not a finished product.

Read it. Edit it. Ask the AI to revise specific sections. Ask it to try a different tone for the opening. Push back on anything that doesn't sound right. Once you've got something that works, save it as a template for next time.

Most people stop at the first answer. That's where the process should start.


What a Full SIGNAL Prompt Looks Like

Here's an example using all six steps for the follow-up email scenario above:

Goal (S): I want to reopen a conversation with a prospect who went quiet three weeks ago and get them to agree to a 20-minute catch-up call.

Instruct (I): Act as an experienced B2B sales consultant. Write a short follow-up email from a digital marketing consultant to a prospect who expressed interest but said the timing wasn't right.

Context (G): The prospect runs an accountancy practice in Dublin. We had a good initial conversation three weeks ago. They're interested in getting better results from their marketing but felt they needed more time to think. The relationship is warm but not close. Tone should be professional and low-pressure.

Format (N): Two short paragraphs. No subject line needed. Plain prose, no bullet points. End with a soft CTA -- suggest a quick call rather than demand one. Don't use phrases like "just checking in" or "circling back."

Reasoning (A): Before writing, briefly describe the approach you're taking and flag anything you'd want to clarify.

Loop (L): After the first draft, suggest one alternative opening line that takes a slightly different angle.

That prompt takes about three minutes to write. The output it produces takes about three minutes to edit. That's a usable email in under ten minutes, written to a specific goal, in a tone that fits the relationship.
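If you find yourself reusing SIGNAL across tasks, the six components can be captured as a small, fill-in-the-blanks template. Here's a minimal Python sketch of that idea; the class name, field names, and example text are my own illustration, not part of the framework itself:

```python
from dataclasses import dataclass


@dataclass
class SignalPrompt:
    """Holds the six SIGNAL components (labels are my own shorthand)."""
    goal: str       # S -- Set the Goal
    instruct: str   # I -- Instruct (role + precise task)
    context: str    # G -- Give Context
    format: str     # N -- Note the Format
    reasoning: str  # A -- Ask for Reasoning
    loop: str       # L -- Loop and Refine

    def render(self) -> str:
        """Assemble the six labelled components into one prompt string."""
        parts = [
            ("Goal", self.goal),
            ("Instruct", self.instruct),
            ("Context", self.context),
            ("Format", self.format),
            ("Reasoning", self.reasoning),
            ("Loop", self.loop),
        ]
        return "\n\n".join(f"{label}: {text}" for label, text in parts)


# Example: the follow-up email scenario from above, condensed.
prompt = SignalPrompt(
    goal="Reopen a conversation with a prospect who went quiet three weeks "
         "ago and get them to agree to a 20-minute catch-up call.",
    instruct="Act as an experienced B2B sales consultant. Write a short "
             "follow-up email that reopens the conversation without pressure.",
    context="The prospect runs an accountancy practice in Dublin. Warm but "
            "not close relationship. Tone: professional, low-pressure.",
    format="Two short paragraphs, plain prose, soft CTA. Avoid 'just "
           "checking in' and 'circling back'.",
    reasoning="Before writing, briefly describe your approach and flag "
              "anything you'd want to clarify.",
    loop="After the first draft, suggest one alternative opening line.",
)
print(prompt.render())
```

The point of a template like this isn't automation for its own sake. It's that the structure forces you to fill in the goal field before anything else, which is exactly the discipline SIGNAL adds over SCORE.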


The SIGNAL Checklist

Before you send any AI prompt, run through this:

  • Have I defined the business outcome, not just the task?

  • Have I assigned a role and given a precise instruction?

  • Have I provided specific context -- audience, situation, background?

  • Have I specified the format, length, and any deal-breakers?

  • Have I asked the AI to show its reasoning?

  • Am I treating the first output as a draft, not a final?

If you can tick all six, you're prompting correctly. If you're skipping steps, that's where the disappointing output comes from.


Read all my Digital Sparks on my blog here.