Most prompt examples fail because they ignore context. This guide teaches seven mental models that reveal how to make ChatGPT actually listen. No templates, no hacks: just the core skill for real-world Web Design & Development.
I’m Nolan, founder of FunkPd. We use these models daily, not as a toy, but as a core part of our development stack. My first attempt at a "real" task was asking it to "write some posts about plumbing services." The output was useless garbage: a soulless, generic list of clichés a high school intern would be embarrassed to submit.
If you need a primer on how the knobs behave before you apply these patterns, skim our ChatGPT parameter field notes, which show why we default to structure first, tuning second.
That failure was the start of a multi-year journey into understanding how these models actually think. It took thousands of hours of testing and real-world application to distill the principles you're about to learn. We're publishing this because we're tired of seeing developers get stuck with the same bad results, often after wasting money on "prompt packs" that don't work. This entire guide reflects the core philosophy we apply to all our work at FunkPd: teach and empower, don't create dependency. You can read more about our no-BS approach here.
What This Guide Is (And Is Not)
- ✅ This IS a deep dive into the mental models for controlling language models for development tasks.
- ✅ This IS a guide to help you build your own prompts from scratch for any coding or design challenge.
- ❌ This is NOT just a list of 100 prompts to copy-paste (though we give you 21 powerful starters).
- ❌ This is NOT a product or an upsell. We sell web development, not prompts.
Quick Reference: 21 Copy-Paste Prompts for Developers
Here are 21 ready-to-use prompts built on the 7 patterns explained below. Use these as a starting point to accelerate your workflow.
- Executor: Write a SQL Query from Requirements
- Executor: Generate a JSON Schema from a Data Description
- Executor: Create a Complex Regex Pattern
- Generator: Brainstorm React Component Names
- Generator: Generate CSS Color Palette Variations
- Generator: Suggest API Endpoint Structures
- Critic: Perform a JavaScript Code Review for Bugs
- Critic: Identify UX Friction Points in a User Flow
- Critic: Run a Basic Web Accessibility (a11y) Audit
- Simulator: Role-Play a Client Feedback Session
- Simulator: Practice a Technical Interview for a Frontend Role
- Simulator: Simulate an API Design Scoping Call
- Reframer: Explain Technical Docs to a Junior Dev
- Reframer: Convert a User Story into Gherkin Test Cases
- Reframer: Translate Legacy PHP into Plain English
- Contemplator: Decide on a Tech Stack for a New Project
- Contemplator: Plan a Relational Database Schema
- Contemplator: Debug a Logic Error Step-by-Step
- H8R: Check if Generated Code Follows Instructions
- H8R: Act as a Strict HTML Validator
- H8R: Enforce a CSS Naming Convention (BEM)
What a Prompt Actually Is: An Instruction, Not a Question
The single biggest mistake people make is treating ChatGPT like a search engine. It's not. It's a reasoning engine that accepts instructions. A prompt isn't a question; it's a program. And like any program, vague instructions lead to buggy, useless output. When you tell ChatGPT "build me a landing page layout," it fails for the same reason bad briefs fail in client work: lack of specificity.
Think of it like hiring an incredibly smart, fast, but clueless junior developer. If you say, "Figure out our deployment pipeline," you'll get chaos. If you give them a detailed brief with repository links, environment variables, and success criteria, you'll get something useful. Specificity is everything. For the formal technical overview of how OpenAI defines and structures prompts, see their Prompt Engineering Guide.
A Bad Prompt Autopsy: Why Most Prompts Fail
Let's dissect a common, terrible prompt to see exactly where it goes wrong.
"Write me some code for a login form."
This will produce a generic, insecure HTML form that is useless for any real application. Here's the failure analysis:
- No Role: Who is writing this? A frontend dev? A security expert? A backend engineer? The AI defaults to the most generic persona.
- No Specific Task: "Write code for" is not a task. Is the goal to create the HTML structure, the CSS styling, the client-side validation, or the server-side logic?
- No Constraints: What framework should it use (React, Vue, plain HTML)? What are the password requirements? What endpoint should it submit to? Without rules, the AI has no boundaries.
- No Format: Should it be a single file? A set of components? A code snippet? The lack of structure results in a meandering, unfocused output.
This prompt fails because it forces the AI to make a dozen assumptions. A strong prompt removes all assumptions.
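Here's the same request rebuilt with those four gaps filled. Treat it as a sketch: the framework, endpoint, and password rule below are placeholders for your own stack, not recommendations.

# Rewritten Login Form Prompt
Act as a senior frontend developer. Write a React login form component with email and password fields. Use controlled inputs, validate the email format client-side, and require passwords of at least 12 characters. On submit, POST the credentials as JSON to /api/login (a placeholder endpoint). Deliver a single, commented code snippet with no external UI libraries.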
7 ChatGPT Prompt Patterns Every Developer Should Know
Nolan's Take: "I'm giving you these examples begrudgingly. The real value isn't in the text you can copy; it's in recognizing the pattern behind it. If you just copy these, you're missing the entire point of this guide. Understand the 'why' behind each one."
Forget memorizing "100 best prompts." That's a crutch. You only need to understand seven core types of prompts. These are the fundamental patterns. Master them, and you can build anything.
1. The Executor Prompt
This is your workhorse. You give the AI a clear role and a specific, well-defined task. It executes without deviation. Use it for grunt work, not creative exploration. We used hundreds of `Executor` prompts to generate boilerplate code and data structures when building internal tools like our Social Automation Machine (SAM).
Here's a developer-focused Executor example…
# Executor Prompt Example
Act as a senior backend developer. Write a regular expression in JavaScript that validates a username based on the following rules:
- Must be between 4 and 16 characters long.
- Can only contain alphanumeric characters and underscores.
- Cannot start or end with an underscore.
Provide only the regex pattern itself.

The Pattern to Learn: Act as [Specific Role]. Perform [Well-Defined Task] on [This Input] following [These Rules].
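For reference, here is one regex that satisfies those rules. It's a sketch of a plausible answer, not necessarily the exact pattern the model will return:

```javascript
// First and last characters must be alphanumeric; the 2-14 characters in
// between may be letters, digits, or underscores. Total length: 4-16.
const usernameRegex = /^[A-Za-z0-9][A-Za-z0-9_]{2,14}[A-Za-z0-9]$/;

console.log(usernameRegex.test("dev_nolan")); // true
console.log(usernameRegex.test("_nolan"));    // false: starts with an underscore
console.log(usernameRegex.test("abc"));       // false: under 4 characters
```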
2. The Generator Prompt
Use this when you need options and ideas. You provide a topic and ask the AI to brainstorm multiple variations. This is for divergent thinking. It's excellent for exploring the possibility space before you commit to a single path.
Here's a Generator example for frontend work…
# Generator Prompt Example
Generate 5 different naming convention ideas for a new React component library.
The library's purpose is to provide unstyled, accessible UI primitives.
For each idea, provide a name and a short rationale explaining the concept.
Example format:
- **Name:** Radix
- **Rationale:** Based on the mathematical term for a number system's base, implying it's a fundamental building block.

The Pattern to Learn: Generate [Number] of [Item Type] for [Context]. Apply these rules: [Constraint 1], [Constraint 2].
3. The Critic Prompt
This turns the AI into your editor or analyst. You provide content and instruct it to find flaws based on clear criteria. You are enlisting a second set of eyes to catch your blind spots. This is a key part of the 'Probe' and 'Plan' stages of our 6P development process, as it helps us identify fatal flaws before a single line of code is written.
Here's a Critic example for UX…
# Critic Prompt Example
You are an expert UX designer specializing in e-commerce. Review the user flow below for a mobile app's checkout process. Identify 3 potential points of friction where a user might abandon their cart. Explain why each is a problem from a usability perspective.
User Flow:
1. User clicks "Add to Cart" on a product page.
2. A modal pops up confirming the item was added. User must close modal.
3. User clicks the cart icon in the header.
4. User is taken to the cart page, reviews items, and clicks "Checkout."
5. User must log in or create an account. No guest checkout option.
6. User fills out shipping, then billing, then payment info on three separate pages.
7. User clicks "Confirm Order."

The Pattern to Learn: You are a [Skeptical Expert Role]. Analyze [This Content] and find flaws related to [Criteria 1] and [Criteria 2].
4. The Simulator Prompt
This creates a role-play scenario. You define a situation and characters, then have the AI act as one of them to practice your own responses in a safe environment.
Here's how a simulation prompt looks for client management…
# Simulator Prompt Example
Let's simulate a difficult client feedback session. You are the client, who is non-technical and unhappy with the website mockup's color scheme, which you feel is "too boring." I am the designer. I will start the conversation. Your goal is to express your frustration without providing specific design guidance, while my goal is to extract actionable feedback.
Me: "Hi [Client Name], thanks for taking the time to review the mockup. I'd love to hear your initial thoughts on the design direction." The Pattern to Learn: Let's simulate a [Scenario]. You are [AI Role] with [Goal/Motivation]. I am [My Role]. I will begin.
5. The Reframer Prompt
Use this to translate an idea from one context to another. It's for simplifying complexity or making dense information accessible to a different audience.
A Reframer example for developers…
# Reframer Prompt Example
Take the following technical documentation for a JavaScript function and rewrite it as a simple explanation for a junior developer. Focus on the "why" and a practical example, not just the technical parameters.
Original Docs:
/**
* @param {Array} arr The array to process.
* @param {Function} fn The callback function to execute on each element.
* @returns {Array} A new array with the results of calling the callback on every element.
* @description A polyfill for the Array.prototype.map method.
*/
function map(arr, fn) { ... }

The Pattern to Learn: Explain/Rewrite [Complex Topic] for [Target Audience] using [This Analogy/Focus].
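The `{ ... }` stub above is deliberately elided in the prompt. For context, a minimal implementation of the polyfill the docs describe might look like this (an illustration, not production code):

```javascript
// Minimal map polyfill sketch: builds a new array by calling fn on each
// element, mirroring Array.prototype.map's callback signature.
function map(arr, fn) {
  const result = [];
  for (let i = 0; i < arr.length; i++) {
    result.push(fn(arr[i], i, arr));
  }
  return result;
}

console.log(map([1, 2, 3], (n) => n * 2)); // [2, 4, 6]
```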
6. The Contemplator (or "Chain of Thought") Prompt
This forces the AI to "think out loud" and show its work. By making it reason step-by-step, you dramatically reduce logical errors on complex problems.
Here's a "step by step" example for a technical decision…
# Contemplator Prompt Example
I need to choose a frontend framework for a new client project. The client is a small e-commerce store.
Priorities are:
1. Fast initial page load (good for SEO).
2. Rich ecosystem of UI components.
3. Easy to hire developers for.
The main contenders are Next.js, SvelteKit, and Astro. Let's work this out step-by-step to decide which is the best fit, comparing each contender against each priority.

The Pattern to Learn: Analyze [Problem]. Let's think step by step to arrive at a well-reasoned conclusion.
7. The H8R (or "Slap-Bot") Prompt
This is your ruthless quality control agent. You instruct the AI to take a purely adversarial stance and find every single flaw in a piece of content, a plan, or another AI's output. Its purpose is to attack an idea to reveal its weakest points, ensuring higher quality through rigorous critique.
This is often used in automated loops: LLM-1 produces a draft, and LLM-2 (the H8R) provides a damage report on how well it adhered to the original prompt, which is then used by LLM-1 to create a revision. It's an automated dialectic with an angry robot.
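A minimal sketch of that loop in JavaScript, assuming a hypothetical `callModel(prompt)` helper that wraps whatever LLM client you actually use (it is not a real library function):

```javascript
// Hypothetical draft -> critique -> revise loop. callModel() is a stand-in
// for your own LLM API wrapper.
async function draftWithCritique(taskPrompt, maxRounds = 3) {
  let draft = await callModel(taskPrompt);
  for (let round = 0; round < maxRounds; round++) {
    // LLM-2, the H8R: report every way the draft misses the original prompt.
    const report = await callModel(
      `You are a ruthless adherence analyzer. List every way the output below ` +
      `fails the original prompt. If there are none, reply "NO FAILURES".\n\n` +
      `Prompt:\n${taskPrompt}\n\nOutput:\n${draft}`
    );
    if (/NO FAILURES/i.test(report)) break;
    // LLM-1 revises against the damage report.
    draft = await callModel(
      `Revise the draft to fix every listed failure.\n\nOriginal prompt:\n` +
      `${taskPrompt}\n\nDraft:\n${draft}\n\nDamage report:\n${report}`
    );
  }
  return draft;
}
```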
Here is the H8R prompt for code quality…
# H8R Prompt Example
You are 'Code-Cop', a ruthless prompt adherence analyzer. Your only goal is to find where the following code fails to meet the original prompt's instructions. Do not praise it. Only provide a numbered list of failures.
Original Prompt:
"Write a JavaScript function that takes an array of numbers and returns the sum. The function must be a pure function, have no side effects, and include JSDoc comments."
Code to Analyze:
let total = 0;
function sumArray(arr) {
for (let i = 0; i < arr.length; i++) {
total += arr[i];
}
return total;
}

The Pattern to Learn: You are a [Hyper-Critical Role]. Analyze if [This Output] perfectly followed [These Instructions]. List every failure.
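For contrast, here is a version that would likely pass Code-Cop's review: pure, side-effect-free, and documented (one possible fix, not the only one):

```javascript
/**
 * Returns the sum of an array of numbers.
 * @param {number[]} arr The numbers to sum.
 * @returns {number} The total of all elements.
 */
function sumArray(arr) {
  // reduce keeps the function pure: no state outside its own scope.
  return arr.reduce((total, n) => total + n, 0);
}
```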

Archetype Summary: Your Mental Toolbox
Feeling overwhelmed? Here’s a simple table to help you remember when to use each pattern. This is your quick-reference guide.
| Archetype | Core Function | When to Use It |
|---|---|---|
| Executor | Follows direct orders | For well-defined, repetitive tasks (coding, formatting, regex). |
| Generator | Creates options | For brainstorming and ideation (component names, color palettes). |
| Critic | Finds flaws | For code reviews, feedback, and quality control. |
| Simulator | Role-plays a scenario | For practice and user testing (interviews, client calls). |
| Reframer | Changes perspective | For simplifying complex topics (ELI5, translating jargon). |
| Contemplator | Thinks step-by-step | For logic problems and multi-step reasoning (tech choices). |
| H8R | Critiques adherence | For automated quality control and iterative revision. |
How to Write Your Own Prompts: The R-T-C-F Framework
Good prompts aren't magic; they are engineered. Any effective prompt contains some or all of these four components. This is the only "template" you need.
The R-T-C-F Framework: A Checklist for Clear Instructions
- Role: Who should the AI be? (e.g., "Act as a senior DevOps engineer...")
- Task: What, specifically, should it do? (e.g., "...write a Dockerfile...")
- Constraints: What are the rules and boundaries? (e.g., "...for a Node.js application. Use a multi-stage build to keep the final image small. Do not expose port 80.")
- Format: How should it deliver the output? (e.g., "Deliver the output as a single, commented code block.")
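Put together, those four example fragments form one working prompt; swap the specifics for your own project:

# R-T-C-F Assembled Example
Act as a senior DevOps engineer. Write a Dockerfile for a Node.js application. Use a multi-stage build to keep the final image small. Do not expose port 80. Deliver the output as a single, commented code block.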
Start with your goal. Then work backward and build the prompt using these four blocks. While these mental models control the strategy of the prompt, the technical output is fine-tuned with parameters like `top_p` and `temperature`. We cover those mechanics in tuning ChatGPT outputs.
Why Prompt Packs are a Waste of Money
The internet is flooded with marketplaces selling "god-tier" prompts. They are, almost without exception, a scam. They're selling you a list of fish, promising it will feed you for a lifetime. It won't.
The core problem is context mismatch. A prompt is a tool built for a specific job. A brilliant prompt designed for a luxury fashion brand will produce terrible results for a local roofing contractor. The "god-tier" prompt breaks the moment you apply it to a context its creator didn't anticipate.
Nolan's Take: "These prompt packs create a dangerous dependency. They make you believe the power is in the secret text you copy, not in your own ability to think clearly. It keeps you stuck as a consumer, never becoming a builder. It's the worst kind of snake oil because it prevents you from learning the actual skill."
A useful prompt gives you a feeling of control. A prompt pack gives you a fleeting sense of hope, followed by the familiar frustration of mediocre output.

How to Debug Your Prompts
Your first prompt is your first draft. It will have bugs. When the AI gives you generic, wrong, or slightly off-target output, don't blame the model; fix your instructions.
Common Prompt Bugs and How to Fix Them
- The Ambiguity Bug: Your instructions are too vague.
  Fix: Add more specific constraints. Instead of "make it secure," try "implement password hashing using bcrypt with a salt factor of 12."
- The Missing Context Bug: You didn't give the AI enough background.
  Fix: Add a short preamble. "Here's some context: I'm building a REST API with Express.js and connecting to a PostgreSQL database..."
- The Hallucination Bug: The AI is making up facts or code libraries.
  Fix: Add a constraint like: "Only use functions and libraries from the provided documentation. Do not invent any new methods."
The Ultimate Debugging Tool: Force a Self-Critique
This is the best trick I've learned. Add this to the end of your prompt:
After you provide the response, critique your own work. Explain how you could have followed my instructions more precisely and score your adherence from 1 to 10.
This forces the model to review its own work against your rules and often reveals exactly where your prompt was unclear.
The Takeaway: Language Is an Interface
You’ve now seen the seven patterns (and the 21 starter prompts built on them) that power our internal builds. Learning to prompt is not a party trick. It's learning the syntax for a new type of computer. Language models have turned natural language into a functional programming interface. The words you choose, the structure you create, and the constraints you apply are the commands.
If you master these fundamental patterns, you will never need to buy a prompt pack again. You’ll be able to build the tool you need, exactly when you need it. That is a durable skill that will only become more valuable. If you've mastered the prompts but realize your problem is bigger than just content, we can help. Start the process here.
Frequently Asked Questions for Developers
Can I use these ChatGPT prompts in VS Code or WordPress projects?
Absolutely. For VS Code, extensions like "ChatGPT - Easycode" let you run prompts directly in your editor to generate, refactor, or debug code. In WordPress, you can use prompts to generate PHP snippets for `functions.php`, write content for posts and pages, or even brainstorm plugin ideas. The key is to provide the right context, such as specifying "Write a WordPress filter hook..." or "Refactor this JavaScript for performance in a React component."
Do these prompts work with GPT-4, Gemini, and Claude?
Yes. The mental models in this guide (Executor, Critic, Generator, etc.) are model-agnostic. They are principles for structuring instructions for any large language model. While syntax might vary slightly (e.g., Claude responds well to XML tags for structure), the core R-T-C-F framework (Role, Task, Constraints, Format) is universal. A clear, well-defined instruction will always yield better results, regardless of the underlying model.
How do I adapt these prompts for design vs development tasks?
You adapt them by changing the **Role** and the **Task**. For design, your Role might be "a UX researcher" or "a visual designer," and the Task could be "critique this wireframe for usability" or "generate a mood board description." For development, the Role is "a senior Python developer" and the Task is "write a unit test for this function." The framework remains the same; only the specific domain knowledge and desired output change.
What’s the best way to store and reuse prompts?
Don't just save them in a text file. Use a system that allows for variables. A simple method is to create templates in a text expansion app (like Espanso or Raycast) where you can use placeholders. For example: `Act as a [Language] expert. Refactor the following code for readability: [Paste Code]`. This lets you quickly adapt a core prompt for different languages and tasks without starting from scratch every time.
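If you'd rather keep templates in code, a tiny helper like this does the same job (a hypothetical sketch, not a library):

```javascript
// Hypothetical helper: fills [Placeholders] in a stored prompt template.
// Unknown placeholders are left visible so you notice missing values.
function fillPrompt(template, values) {
  return template.replace(/\[([^\]]+)\]/g, (match, key) =>
    key in values ? values[key] : match
  );
}

const template =
  "Act as a [Language] expert. Refactor the following code for readability: [Paste Code]";
console.log(fillPrompt(template, { Language: "Python" }));
// "Act as a Python expert. Refactor the following code for readability: [Paste Code]"
```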
Why are FunkPd’s prompts different from generic prompt packs?
Our approach isn't about giving you a fish; it's about teaching you how to build a better fishing rod. Generic prompt packs offer static, context-free templates that fail in real-world scenarios. We teach the underlying mental models and the R-T-C-F framework so you can engineer the perfect prompt for *your specific problem*. The 21 examples here are not a finished product; they are demonstrations of a durable skill.

