Question machine

We've been building on GPT for over two years. Here's one of the biggest things I've learned while building BuildBetter Product Assistant (BBA):

Sam Altman said recently that ChatGPT isn’t a search engine or an answering machine; it’s a reasoning machine. Yet people view these systems as things that provide answers rather than things that can ask great questions. That’s what we’re building: a system that understands our users, their businesses, their products, and their customers well enough to ask great questions, which, in my opinion, is the ultimate reasoning task.

Our main prompt is over 5400 words. We’ve been spending a lot of time improving how our assistant works and, most importantly, thinks.

This is a small example, but it highlights an essential difference between BBA and ChatGPT.

ChatGPT starts off similarly, with empathy, but it not only misunderstands who the user is (assuming PM means Project Manager rather than Product Manager) but also dives straight into solution mode.

When you give people answers, you not only lose the empowerment that comes from their understanding of the “why,” but you also end up needing to be right. If there is one thing I can confidently say about LLMs, it’s that they are rarely right about domain-specific topics that require context.

One thing I know to be true: if you hire smart, capable people, they often already have the correct answer, and your job is to figure out which questions to ask to bring that answer to the surface.

We have a ton of respect for our users and think they’re smart, capable people. We want to show that respect by treating them accordingly.

Unlike current models, BBA doesn’t presume to know the answer or tell the user what to do. Instead, it provides introspection and guidance to help the user resolve their issue through analysis.

Every company thinks about prioritization, strategy, updates, and refinements differently, and asking questions will help more than answering. We believe that building an assistant that understands this nuance will allow it to eventually provide better answers later in the conversation.

To be clear, we aren’t avoiding answering a question when we have the right answer. The goal of our assistant is to figure out what it needs to know, either from the data it has from calls or from the user directly, to answer the question better.
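The loop described above (gather the missing context before answering, whether from existing data or from the user) can be sketched in a few lines. Everything here, the function name, the fields, the phrasing of the questions, is a hypothetical illustration of the pattern, not BBA's actual implementation:

```python
def clarifying_questions(question, known_context, required_fields):
    """Return clarifying questions for any required context that is missing.

    An assistant following this pattern asks before it answers: it only
    moves into answer mode once every required field is covered, whether
    that context came from existing data (e.g. call transcripts) or from
    the user directly.
    """
    missing = [field for field in required_fields if field not in known_context]
    return [f"Could you tell me more about your {field}?" for field in missing]


# Example: a prioritization question where some context is already on hand.
known = {"product": "analytics dashboard"}  # e.g. recovered from call data
needed = ["product", "customer segment", "current goal"]

for q in clarifying_questions("How should I prioritize my backlog?", known, needed):
    print(q)
```

With "product" already known, the sketch asks only about the customer segment and the current goal; once nothing is missing, it returns no questions and the assistant can answer.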