How to Replace Stack Overflow with an AI Pair

A practical guide to using LLMs as your first line of support when debugging. Covers prompt strategies, follow-up techniques, and how to refine model answers to match your codebase and tools.

Large Language Models (LLMs) have fundamentally changed how developers write, test, and—most importantly—debug code. When you’re stuck on a cryptic error message or a piece of logic that just won’t behave, an AI pair programmer can be an invaluable first line of support. It’s like having a senior developer on call 24/7, ready to look over your shoulder and offer a fresh perspective.

But getting high-quality, relevant help from an AI isn’t as simple as pasting an error and hoping for the best. It’s a skill. This guide will teach you how to move beyond basic questions and use prompt engineering to turn your AI assistant into a genuinely effective debugging partner.

How it works: from error message to AI solution

Using an AI for debugging is an iterative conversation. You provide the context, the AI provides a solution, and you refine it together. This process helps you pinpoint issues faster and understand the underlying problems more deeply.

The core of this process involves a few key steps:

  1. Isolate the problem: Don’t just dump your entire codebase. Pinpoint the specific function, component, or code block that’s causing the issue.
  2. Frame the prompt: Provide the AI with a clear and concise description of the problem, including the code, the error message, and what you’ve already tried.
  3. Analyze the suggestion: The AI will provide a potential solution. Your job is to critically evaluate it. Does it make sense? Does it fit your codebase?
  4. Refine and iterate: If the first suggestion doesn’t work, tell the AI what happened. Provide the new error message or explain why the logic is still flawed. This back-and-forth is where the real magic happens.
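
Putting those steps together, a first prompt often follows a shape like the skeleton below. This is only a generic template; the bracketed placeholders are yours to fill in:

You are an expert in [your framework or language].

I'm working on [a one-line description of the project and tech stack].
I'm trying to [what the code is supposed to do].

Here is the relevant code:
[the isolated code block]

When I [run the code / call the endpoint], I get this error:
[the full error message and stack trace]

I've already tried [what you've ruled out].
Review the code, identify the bug, and explain the fix.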

Core prompt strategies for effective debugging

The quality of your AI’s assistance depends directly on the quality of your prompts. A well-crafted prompt provides all the necessary context for the model to understand the problem in its entirety. Here’s how to structure your requests for the best results.

Set the stage: context is everything

First, tell the AI what it needs to know about your environment and goals. This initial context-setting is the most critical part of a good prompt.

  • Assign a persona: Start your prompt by telling the AI to act as an expert in a specific domain. For example, “You are an expert in Next.js and Tailwind CSS.”
  • Define the tech stack: List the key technologies you’re using. This helps the AI provide solutions that are compatible with your project.
  • State the objective: Clearly explain what you’re trying to achieve. What is the code supposed to do?

Here’s an example of how to set the stage:

You are an expert back-end developer specializing
in Node.js and asynchronous JavaScript.

I'm working on an Express.js application that uses
Prisma as an ORM to connect to a PostgreSQL database.
I'm trying to write an API endpoint that fetches a
user and their related posts.

Be specific and clear: frame the problem

Next, provide the problematic code and the exact error message. Vague descriptions lead to vague answers.

  • Isolate the code: Paste only the relevant code block. Too much code can confuse the model, while too little can lack important context.
  • Include the full error: Don’t summarize. Copy and paste the complete error message and stack trace.
  • Explain the unexpected behavior: If there’s no error message, describe what’s happening versus what you expect to happen.

Here’s how you would frame the problem:

Here is my API endpoint code:

app.get("/users/:id", async (req, res) => {
    const {id} = req.params;
    const user = await prisma.user.findUnique({
        where: {id: parseInt(id)},
        include: {posts: true}
    });
    res.json(user);
});

When I call this endpoint with a valid ID, I get a 500 internal server error and the console shows this: TypeError: Do not know how to serialize a BigInt.

State your intent: what do you want the AI to do?

Explicitly tell the AI what kind of help you need. Do you want it to find a bug, refactor the code, or explain a concept?

Here are a few ways to state your intent:

  • “Review the code above and identify the bug that is causing the TypeError.”
  • “Refactor this function to be more performant.”
  • “Explain what this error message means in the context of my code.”
  • “Suggest a fix for the bug and explain why it works.”

Include examples: show, don’t just tell

If you can, provide an example of the expected output. This is especially helpful when dealing with data transformations or complex logic.

I expect the JSON response to look like this:

{
	"id": 1,
	"email": "[test@example.com](mailto:test@example.com)",
	"posts": [
		{ "id": 101, "title": "My First Post" }
	]
}

Follow-up techniques: refining the conversation

Your first prompt rarely yields the perfect answer. The real skill of AI pair programming lies in the follow-up. Here’s how to guide the conversation when the initial response isn’t quite right.

What to do when… the first answer is wrong

It’s common for an AI’s first suggestion to be incorrect or incomplete. When this happens, don’t start a new chat. Instead, continue the conversation by providing feedback.

  • State what happened: “I tried your suggestion, but now I’m getting a different error: [new error message].”
  • Correct the AI’s assumptions: “Your solution assumes I’m using version X of the library, but my project uses version Y. How would the solution change for version Y?”
  • Provide more context: “That didn’t work. I think the issue might be related to the [other part of the code].” Then, provide the additional code.

What to do when… the code works, but you don’t understand why

A working solution is great, but understanding why it works is even better. Use the AI as a learning tool to deepen your knowledge.

  • “Can you explain this line-by-line?”
  • “What is the BigInt data type, and why did it cause a serialization issue?”
  • “Are there other ways to solve this? What are the pros and cons of your suggested approach versus other methods?”
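
For example, an answer to the BigInt question above might boil down to a fix like this one. It's only a sketch, and it assumes the id columns come back from Prisma as BigInt values; other approaches, such as changing the column type or mapping the fields by hand, may suit your schema better.

// JSON.stringify cannot serialize BigInt values by default, which is
// what triggers "TypeError: Do not know how to serialize a BigInt".
// One option is to convert BigInts to strings with a replacer
// before sending the response.
app.get("/users/:id", async (req, res) => {
    const { id } = req.params;
    const user = await prisma.user.findUnique({
        where: { id: parseInt(id, 10) },
        include: { posts: true }
    });
    const safeUser = JSON.parse(
        JSON.stringify(user, (_key, value) =>
            typeof value === "bigint" ? value.toString() : value
        )
    );
    res.json(safeUser);
});

If your IDs comfortably fit within Number.MAX_SAFE_INTEGER, converting with Number(value) instead would preserve the numeric shape shown in the expected response above.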

What to do when… the suggestion causes new problems

Sometimes a fix in one area can cause a regression elsewhere. When this happens, bring the new problem into the existing conversation.

  • Describe the new issue: “Your fix resolved the TypeError, thank you. However, now my Jest tests for this endpoint are failing. Here is the test file and the error.”
  • Ask for a more robust solution: “Can you modify the solution to work with both the API and the existing test suite?”
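
As a concrete illustration, the test file you paste alongside that message might look something like this hypothetical users.test.js, assuming the Express app is exported from app.js and tested with Jest and Supertest:

// users.test.js
const request = require("supertest");
const app = require("./app"); // hypothetical module that exports the Express app

describe("GET /users/:id", () => {
    it("returns the user with their posts", async () => {
        const res = await request(app).get("/users/1");
        expect(res.status).toBe(200);
        expect(res.body).toHaveProperty("posts");
    });
});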

Common challenges and how to overcome them

While incredibly powerful, using an AI for debugging comes with a few potential pitfalls. Being aware of them can help you avoid common mistakes.

  • Sharing sensitive information: Never paste confidential or proprietary code into a public LLM. Most companies have strict policies against this. For sensitive code, use a private, self-hosted AI model or focus on debugging general logic rather than specific implementations.
  • Blindly trusting the output: Always treat AI suggestions as advice, not gospel. Review the code, understand what it does, and test it thoroughly before committing it to your codebase. An AI can and will provide code that is inefficient, insecure, or just plain wrong.
  • Providing too much or too little code: The sweet spot for context is key. Pasting an entire 500-line file is often less effective than providing a well-isolated 20-line function. Practice identifying the minimum amount of code needed to reproduce the problem.

Best practices for integrating AI into your workflow

To get the most out of an AI pair programmer, integrate it naturally into your existing habits.

  1. Start small: Begin by using it for isolated, well-defined problems like a single function bug or a confusing error message.
  2. Use it for learning: When you encounter a new concept or a library you’re unfamiliar with, ask the AI to explain it with code examples.
  3. Don’t replace, augment: An AI is a tool to make you a better developer, not to replace your critical thinking. Use it to explore ideas, get unstuck, and learn faster.

How Kinde helps

When you’re integrating a new service like Kinde for authentication and user management, you’re often working with unfamiliar SDKs and APIs. This is a perfect scenario for using an AI pair programmer.

For example, if you encounter an issue while setting up a Kinde SDK in your application, you can use the prompting techniques in this guide to quickly resolve it. You could provide the AI with your setup code, the relevant Kinde documentation, and any error messages to get targeted, actionable advice. This can dramatically speed up your integration process and help you understand how Kinde works under the hood.
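
For instance, a first prompt for an integration issue might follow the same structure as the earlier examples. The scenario below is hypothetical, not a known issue with the SDK:

You are an expert in Next.js and authentication flows.

I'm adding Kinde authentication to an existing Next.js project
using the Kinde Next.js SDK. After following the quick start,
clicking "Sign in" sends me back to the home page without a
session being created.

Here is my configuration and the relevant route code, with
secrets redacted:
[isolated setup code]

Review the code above, identify what might be breaking the
sign-in flow, and explain the fix.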

Kinde doc references

While this guide focuses on general debugging skills, you can apply these techniques when working with Kinde’s developer tools. For more information on Kinde’s SDKs and APIs, check out the official Kinde documentation.
