Getting Started

Setting Up Your First AI Assistant: What Actually Matters

March 5, 2026 · 6 min read

When people set up their first AI assistant, they tend to spend 80% of their time on things that barely matter and 20% on things that matter a lot. Here is how to flip that ratio.

The thing that matters most: your system prompt

The system prompt is the set of instructions you give the AI before any conversation starts. It defines the assistant's personality, what it knows, what it should and should not do, and how it should respond.

A weak system prompt sounds like this: "You are a helpful assistant."

A strong system prompt sounds like this: "You are a support assistant for a small software company. You help users troubleshoot issues with the product, answer billing questions, and escalate to a human when you are unsure. Keep answers short and direct. If someone asks a question outside the scope of the product, politely redirect them."

The second prompt gives the AI context, constraints, and a clear job. The first one leaves it guessing.

Spend real time on your system prompt. Write it, test it, refine it. This is where the quality of your assistant actually comes from.
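To make this concrete, here is a minimal sketch of how a system prompt is wired into an OpenAI-style chat API, where the system message leads every conversation. The prompt text is the example from above; the build_messages helper is illustrative, not part of any library.

```python
# Illustrative sketch: the system prompt is just the first message
# in every conversation sent to the model. Client setup and the
# actual API call are omitted.

SYSTEM_PROMPT = (
    "You are a support assistant for a small software company. "
    "You help users troubleshoot issues with the product, answer "
    "billing questions, and escalate to a human when you are unsure. "
    "Keep answers short and direct. If someone asks a question outside "
    "the scope of the product, politely redirect them."
)

def build_messages(user_question: str) -> list[dict]:
    """Every conversation starts with the same system prompt."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("How do I reset my password?")
```

Keeping the prompt in one named constant like this also makes it easy to test and refine it without touching the rest of your code.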

Model selection: it matters less than you think

Claude and GPT-4 are both excellent models. For most assistant use cases, the practical difference in response quality is smaller than the difference between a good system prompt and a bad one.

That said, there are reasons to prefer one over the other:

Claude tends to be more careful, better at following specific instructions, and produces more cleanly formatted text. Good for assistants where tone and safety matter.

GPT-4 has broader general knowledge and is often faster. Good for general-purpose assistants or research tasks.

Pick one, test it for a week, and only switch if you have a specific reason to.

Choosing your platform

This depends entirely on where your users are. If they use Telegram, start with Telegram. If they use Discord, start with Discord.

Do not overthink this. You can always add more platforms later. Starting on the wrong platform is a recoverable mistake. Not starting at all is not.

The first thing to test

Once your assistant is live, send it 10 real questions that users might actually ask. Not "what is 2 plus 2" but the actual things your intended audience would want help with. See how it handles them. You will immediately see where the system prompt needs work.

Most people never do this test before going live. It is 15 minutes that saves you a lot of embarrassment later.
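The test above is simple enough to script. This is a sketch only: the ask() function is a placeholder for however you actually call your assistant, and the questions are examples you would replace with the ones your users really ask.

```python
# Sketch of the ten-question smoke test. ask() is a placeholder --
# swap in your real assistant call -- and the questions are examples.

REAL_QUESTIONS = [
    "How do I reset my password?",
    "Why was I charged twice this month?",
    "Can I export my data?",
    # ...fill in with the questions YOUR users actually ask
]

def ask(question: str) -> str:
    """Placeholder: replace with a real call to your assistant."""
    return f"(assistant reply to: {question})"

def smoke_test(questions: list[str]) -> list[tuple[str, str]]:
    """Collect question/answer pairs for manual review."""
    return [(q, ask(q)) for q in questions]

for question, answer in smoke_test(REAL_QUESTIONS):
    print(f"Q: {question}\nA: {answer}\n")
```

Read the answers yourself; the point is human review of real questions, not automated scoring.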

What you can stop worrying about

Temperature settings, top-p values, token limits. The defaults are fine for conversational assistants. You do not need to tune these.

Context window size. Unless you are building something that needs to remember very long conversations, the default context is more than enough.

Model version numbers. The difference between GPT-4o and GPT-4o-mini is real but smaller than most people assume for typical chat use cases.

Get the system prompt right. Pick a platform your users are on. Ship it. Iterate from there.

Ready to set up your AI assistant without the hassle?

Get started free