
Prompt Engineering Tactics: Dan Cleary



00:00:00.000 | How's it going? I'm Dan. I'm co-founder of PromptHub, a prompt management tool designed for teams
00:00:19.640 | to make it easy to test, collaborate, and deploy prompts. Today, I want to talk to you a little
00:00:25.360 | bit about prompt engineering, and go over three easy-to-implement tactics to get better
00:00:29.800 | and more accurate responses from LLMs. But first, why prompt engineering? Can't I just
00:00:37.240 | say what I want to the model and get something pretty good back? While for the most part
00:00:41.880 | that's true, additional techniques can go a long way toward making sure that responses are
00:00:47.560 | consistently better. The non-deterministic nature of these models makes their outputs really hard to predict,
00:00:52.040 | and I've seen that small changes in a prompt can have an outsized effect on the outputs.
00:00:59.600 | And this is especially important for anyone who's integrating AI into their product,
00:01:02.800 | because one bad user experience, or one time the model decides to go off the rails, can spell
00:01:09.360 | disaster for your brand or your product and result in a loss of trust.
00:01:13.360 | Additionally, now that we all have access to ChatGPT and can really easily use these models,
00:01:21.120 | users have very high expectations when using AI features inside of products.
00:01:25.520 | We expect outputs to be crisp and exactly what we wanted.
00:01:28.320 | We expect to never see hallucinations. And in general, responses should be fast and accurate.
00:01:33.760 | And so I want to go over three easy-to-implement tactics to get better and safer responses.
00:01:41.760 | And like I said, these can be used in your everyday work when you're just using ChatGPT,
00:01:45.360 | or if you're integrating AI into your product, where they'll go a long way toward making sure that
00:01:49.840 | your outputs are better and your users are happier.
00:01:52.080 | The first is called multi-persona prompting. This comes out of a research study from the University of Illinois.
00:02:00.240 | Essentially, what this method does is it calls on various agents to work on a specific task when you prompt it.
00:02:07.600 | And those agents are designed for that specific task. So for example, if I was to prompt a model to help me write a book,
00:02:15.200 | multi-persona prompting would lead the model to bring in a publicist, an author, and maybe the intended target audience of my book.
00:02:25.520 | They would work hand-in-hand in a kind of brainstorm, with the AI leading the discussion.
00:02:31.520 | They'd go back and forth, bouncing ideas off one another, collaborating until they came to a final answer.
00:02:36.640 | And this prompting method is really cool because you get to see the whole collaboration process.
00:02:41.920 | And so it's very helpful in cases where the task at hand is complex or requires additional reasoning.
00:02:48.160 | I personally like using it for generative tasks.
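To make this concrete, here's a minimal sketch of multi-persona prompting in Python. It assumes the OpenAI Python SDK with an OPENAI_API_KEY set in the environment, and the template wording is an illustrative paraphrase of the idea, not the exact prompt from the paper.

    # Minimal sketch of multi-persona prompting. Assumes the OpenAI Python SDK
    # (pip install openai); the template wording is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    MULTI_PERSONA_TEMPLATE = (
        "When given the task below, first identify the personas best suited to "
        "work on it (for a book, perhaps a publicist, an author, and a target "
        "reader). Then simulate a brainstorm in which you, the AI, lead the "
        "discussion while the personas critique and refine each other's ideas. "
        "Show the full collaboration, then give a final answer.\n\n"
        "Task: {task}"
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; use whichever model you prefer
        messages=[{
            "role": "user",
            "content": MULTI_PERSONA_TEMPLATE.format(
                task="Help me outline a book about prompt engineering."
            ),
        }],
    )
    print(response.choices[0].message.content)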
00:02:53.600 | Next up is the "according to" method. What this does is ground prompts in a specific source.
00:02:59.680 | So instead of just asking, "In what part of the digestive tube do you expect
00:03:04.400 | starch to be digested?", you can ask the same question and just add "according to Wikipedia" at the end.
00:03:10.320 | Adding "according to" plus a specified source increases the chance that the model goes
00:03:15.680 | to that specific source to retrieve the information, and it can help reduce hallucinations by up to 20%.
00:03:21.600 | So this is really good if you have a fine-tuned model, or a general model where you know you're
00:03:26.160 | reaching into a very consistent data source for your answers. This comes out of Johns Hopkins University and was published very recently.
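Here's a minimal sketch of the same idea in Python, again assuming the OpenAI Python SDK; the question is the example from the talk.

    # Minimal sketch of the "according to" tactic: append a trusted source
    # to an otherwise unchanged question. Assumes the OpenAI Python SDK.
    from openai import OpenAI

    client = OpenAI()

    question = "In what part of the digestive tube do you expect starch to be digested?"
    grounded_question = f"{question} Answer according to Wikipedia."

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": grounded_question}],
    )
    print(response.choices[0].message.content)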
00:03:34.080 | And last up, and arguably my favorite, is called EmotionPrompt.
00:03:40.640 | This was done by Microsoft and a few other universities. And what it basically looked at was how LLMs would react to
00:03:49.120 | emotional stimuli at the end of prompts. So for example, if your boss tells you that this
00:03:53.760 | project is really important for your career or for a big client, you're probably going to take it much
00:03:59.200 | more seriously. And this prompting method tries to tap into that human cognitive behavior.
00:04:05.120 | And it's really simple. All you have to do is add one of these emotional stimuli to the end of your
00:04:09.600 | normal prompt, and I'm confident you'll actually get better outputs. I've seen it work time and time
00:04:14.480 | again on everything from cover letters to generating changelogs. The outputs just seem to get better and
00:04:21.360 | more accurate. And the experiments show that this can lead to anywhere from an 8% to a 115% increase,
00:04:28.480 | depending on the task at hand.
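Here's a minimal sketch of EmotionPrompt in Python. The stimuli paraphrase examples from the paper, and the helper function name is my own.

    # Minimal sketch of EmotionPrompt: append an emotional stimulus to the end
    # of an otherwise normal prompt. Stimuli paraphrase examples from the paper.
    EMOTIONAL_STIMULI = [
        "This is very important to my career.",
        "You'd better be sure.",
    ]

    def with_emotion(prompt: str, stimulus: str = EMOTIONAL_STIMULI[0]) -> str:
        # The stimulus goes at the end of the normal prompt, as described above.
        return f"{prompt} {stimulus}"

    print(with_emotion("Write a changelog entry for our v2.3 release."))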
00:04:36.640 | And so those are three really quick, easy-to-implement methods that you can use in ChatGPT or in the AI features in your product. We have all these available as templates in PromptHub.
00:04:43.920 | You can just go there and copy them. It's PromptHub.us. You can use them there, run them through our
00:04:49.680 | playground, share them with your team, or grab them via the links. And so thanks for taking the time to
00:04:56.560 | watch this. I hope you've walked away with a couple of new methods that you can try out in your everyday work.
00:05:01.040 | If you have any questions, feel free to reach out; I'd be happy to chat about this stuff. Thanks.