How's it going? I'm Dan. I'm a co-founder of PromptHub, a prompt management tool designed to make it easy for teams to test, collaborate on, and deploy prompts. Today I want to talk a little bit about prompt engineering and go over three easy-to-implement tactics to get better and more accurate responses from LLMs.
But first, why prompt engineering? Can't I just say what I want to the model and get something pretty good back? While for the most part that's true, additional techniques go a long way toward making sure responses are consistently good. The non-deterministic nature of these models makes outputs hard to predict, and I've seen small changes in a prompt have an outsized effect on the outputs.
And this is especially important for anyone who's integrating AI into their product, because one bad user experience, or one time the model decides to go off the rails, can spell disaster for your brand or your product and result in a loss of trust. Additionally, now that we all have access to ChatGPT and can easily reach these models, users have very high expectations for AI features inside of products.
We expect outputs to be crisp and exactly what we wanted, we expect to never see hallucinations, and in general we expect everything to be fast and accurate. So I want to go over three easy-to-implement tactics to get better and safer responses. Like I said, you can use these in your everyday work in ChatGPT, and if you're integrating AI into your product, they'll go a long way toward making sure your outputs are better and your users are happier.
The first is called multi-persona prompting. This comes out of a research study from the University of Illinois. Essentially, when you prompt the model, this method has it call on various agents to work on the task, and those agents are tailored to that specific task. So for example, if I were to prompt a model to help me write a book, multi-persona prompting would lead the model to bring in a publicist, an author, and maybe the intended target audience of my book.
They would then work hand in hand in a kind of brainstorm, with the AI leading it. They'd go back and forth, bouncing ideas off each other and collaborating until they reached a final answer. This prompting method is really cool because you get to see the whole collaboration process.
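To make that concrete, here's a minimal sketch of a multi-persona prompt using the book example. The instruction wording is my own paraphrase of the idea rather than the exact prompt from the paper, and the snippet assumes the OpenAI Python SDK and a model name like gpt-4o, neither of which is specified here.

```python
# Minimal multi-persona prompting sketch (assumes the OpenAI Python SDK,
# an OPENAI_API_KEY in the environment, and a hypothetical model choice).
from openai import OpenAI

client = OpenAI()

task = "Help me plan and outline a non-fiction book about habit formation."

# Paraphrased multi-persona instruction: ask the model to pick the relevant
# personas (author, publicist, target reader, etc.) and have them brainstorm
# together before settling on one final answer.
prompt = f"""When given a task, first identify the participants who should
contribute to solving it, such as relevant experts and the intended audience.
Then simulate a multi-round collaboration in which the participants exchange
ideas and critique each other, and finish with a single final answer.

Task: {task}"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```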
And so it's very helpful for complex tasks or tasks that require additional reasoning. I personally like using it for generative tasks. Next up is the 'according to' method. What this does is ground prompts to a specific source. So instead of just asking, "In what part of the digestive tube do you expect starch to be digested?", you ask the same question and simply add "according to Wikipedia" at the end.
Adding "according to" plus a specified source increases the chance that the model retrieves the information from that source, and it can help reduce hallucinations by up to 20%. This is really useful if you have a fine-tuned model, or a general model where you know the answers should come from a very consistent data source.
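Here's a minimal sketch of the "according to" pattern, just appending a grounding phrase to the question from the example above; the exact wording of the phrase is flexible, and you'd send the prompt to the model the same way as in the earlier sketch.

```python
# Minimal "according to" sketch: append a grounding phrase that points the
# model at a specific source to pull the answer from.
question = "In what part of the digestive tube do you expect starch to be digested?"
source = "Wikipedia"

prompt = f"{question} Answer according to {source}."
print(prompt)
```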
The "according to" technique comes from a recent paper out of Johns Hopkins University. Last up, and arguably my favorite, is called EmotionPrompt. This work was done by Microsoft and a few other universities, and it looked at how LLMs react to emotional stimuli added to the end of prompts.
So for example, if your boss tells you that this project is really important for your career or for a big client, you're probably going to take it much more seriously. This prompting method taps into that human cognitive behavior. And it's really simple: all you have to do is add one of these emotional stimuli to the end of your normal prompt.
And you'll very likely get better outputs. I've seen it work time and time again, on everything from cover letters to generating changelogs; the outputs just seem to get better and more accurate. The experiments show this can lead to anywhere from an 8% to a 115% improvement, depending on the task at hand.
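As a quick sketch, applying EmotionPrompt is just concatenating one of the emotional stimuli onto an ordinary prompt. The changelog prompt below is a made-up example, and the stimulus wording is one of the phrases reported in the EmotionPrompt experiments.

```python
# Minimal EmotionPrompt sketch: append an emotional stimulus to the end of a
# normal prompt, then send it to the model as in the earlier example.
base_prompt = "Write a concise changelog entry for our new CSV export feature."
emotional_stimulus = "This is very important to my career."

prompt = f"{base_prompt} {emotional_stimulus}"
print(prompt)
```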
So those are three quick, easy-to-implement methods you can use in ChatGPT or in the AI features in your product. We have all of them available as templates in PromptHub, at PromptHub.us. You can copy them there, run them through our playground, share them with your team, or access them via the links.
Thanks for taking the time to watch this. I hope you've walked away with a couple of new methods to try out in your everyday work. If you have any questions, feel free to reach out; I'm happy to chat about this stuff. Thanks.