Hi everyone! I'm so excited to be part of this conference and share with you five practical steps to go from software developer to AI engineer. And if anyone is wondering about this avatar on the slide: this is what happens when you ask AI to make you look a little bit more agentic.
Alright, let's get started. I'm pretty sure everyone is familiar with this image here and the post from swyx that defines the new role of the AI engineer. And as you probably experience daily in your jobs, you don't need to be a full ML researcher or data scientist anymore.
Getting AI projects into production used to take months or years; now it can come down to just a couple of API calls. Super exciting. But still, if you're working with AI, it makes sense to understand the basics of the technology. And this involves a couple of things, right?
You have to understand at a basic level how foundation models work and why they sometimes produce output you don't expect in your application code. You have to understand how you can customize the models, for example by fine-tuning them to adapt them to your specific use cases and data sets, and how to include function calling in your application code to give models access to additional systems.
The good news is, if you're just starting on this journey to become an AI engineer, there are plenty of resources available these days to help you learn. And I want to call out one specific course here, called Generative AI with Large Language Models. A few colleagues and I actually collaborated with Andrew Ng and the team at deeplearning.ai to put this course together and help you understand the fundamentals of generative AI so you can build real-world applications.
If you're curious, it's available on deeplearning.ai and on Coursera. Now, the second step in this journey is to get hands-on with the AI developer tools that help you increase your productivity. I think we've all seen this quote, and we experience it in our daily jobs: how we do work, how we develop applications, has changed a lot.
These days, we can literally use natural language to interact with applications, and English has really become one of the most popular and hottest programming languages. We can see this happening already. For example, you can go from English to code, by asking AI to, say, turn a README description into working code.
We can also go from code to English, for example by asking AI to document the functions in our code. But this is not all. If we look at the software development life cycle, I think many of us can agree that we usually spend the majority of our time not on writing valuable code, but on all the other things around it.
Sometimes up to 70% of our time goes to less valuable tasks: writing boilerplate code, writing documentation, maintaining old code bases. And sometimes we only have a fraction of our time, maybe 30%, for what actually creates joy, the creative tasks in software development.
And this is what inspired us at AWS to create Amazon Q. Amazon Q is a generative-AI-powered assistant built specifically for software development, and it is much more than just a coding assistant. Q Developer actually uses agents to perform much more complex tasks and help you automate them.
For example, feature development and code transformation. Think about working with old Java code bases that you need to migrate to a newer Java version. And to show you how this works, I asked my colleague, Mike Chambers, to put together a quick demo. Let's have a look.
Mike Chambers: With Amazon Q installed inside of my IDE, I can go to new tab and I can start a conversation with Amazon Q developer and I can do the kinds of things that maybe you'd expect, such as how can I create a serverless application? And how do I get started?
And the chat session brings back a list of instructions for what I should do, starting off with installing the AWS SAM CLI, how to do that, where to get it from, and how to step through the creation of a project. Now, once I've done that, SAM, for example, might actually come back with some generated code.
And here is that code. Maybe I don't quite know what this code does. So I can right-click on the code and send it to Amazon Q, asking it to explain. The code then goes into a prompt along with "explain", and Q generates an answer. And this is great for code that's been generated for us.
But also imagine code for legacy systems, something that was worked on years ago by somebody else, where you can get Amazon Q to help explain it. We can also get Amazon Q to generate code. Now, this is, again, probably the kind of thing you'd expect. I can put in a comment line inside of my code.
In this case, I want to create an input-checking function. I'm going to give it some more definition here: I actually want it to trim any string that's sent into this function. And yes, Amazon Q can generate this small function.
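To give you a flavor of the kind of small helper that comes out of a step like this, here's a hedged sketch in Python (illustrative only, not Q's actual output):

```python
# A hedged sketch of the kind of small input-checking helper generated here
# (illustrative only, not Amazon Q's actual output).
def check_input(value: str) -> str:
    """Validate that the input is a string and trim surrounding whitespace."""
    if not isinstance(value, str):
        raise TypeError("expected a string input")
    trimmed = value.strip()
    if not trimmed:
        raise ValueError("input must not be empty or whitespace-only")
    return trimmed
```

Well, that's great. But what about if I've got more code that I need to have generated?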
Well, I can go to the chat and put in /dev. And I can put in a much more comprehensive description of something that I would like. In this particular case, I'm going to ask for it to write a function to search by category in DynamoDB with a bunch of details about the way that I want the output to be formatted.
So this is much more than just a single line or a few lines of code. And in this particular case, what's going to happen is it will come back with a step-by-step list of what's required. So I need to add to template.yaml, it's recommending that I create search-by-category.mjs, and many more things.
But this isn't just a big shopping list of things that I need to do. This is actually a plan, and it's a plan that Amazon Q can follow for us. So it generates some code as a change set, something that lets us look at the difference between our current code and what it suggests.
And if we like it, we can click on the insert code button, and it will add all of that code into our project, way more than just a couple of lines. So Amazon Q Developer is much more than just code completion.
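To give you an idea of the shape of code such a function might take, here's a hedged sketch of a search-by-category query in Python with boto3 (the demo generated JavaScript; the table and index names here are purely illustrative):

```python
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table and index names, for illustration only.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Products")

def search_by_category(category: str) -> list:
    """Query all items with the given category via a GSI on the category attribute."""
    response = table.query(
        IndexName="category-index",
        KeyConditionExpression=Key("category").eq(category),
    )
    return response.get("Items", [])
```

All right. If you're curious to learn more about Amazon Q Developer, we have a couple more sessions throughout the day.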
So make sure to check out those expo sessions. We also have a session at our AWS booth here, and you can visit our Amazon Q Developer Center for many more examples of what you can do with it. All right. Let's come to step three. And this is where the fun starts.
Start prototyping and building with AI. And the fun includes a couple of steps, right? Everyone developing with AI knows this. It all starts with defining your use case. And then you're on this road of choosing from different models and customizing them to your use case, deciding whether prompt engineering is enough, whether you do RAG, or whether you need a bit of fine-tuning with your own data.
And of course, across the whole development workflow, you have to incorporate responsible AI policies, making sure data is private and secure, and implementing guardrails in your application. And then, once you've integrated, there's another fun part: working with agents, which we're hearing a lot about throughout this conference, and the fun topic of how to keep everything up to date.
GenAI Ops, I think there are a lot of terms for that: FMOps, LLMOps. So there's really a lot to consider here. I want to dive briefly into the topic of choosing models, because this is a really important one. When you're evaluating models, you have to evaluate them thoroughly, because most likely there's no one-size-fits-all for you.
In fact, if you look at all the use cases you want to implement, there's likely no one model to rule them all. And this is why we developed Amazon Bedrock. Bedrock is a fully managed service that gives you access to a wide range of leading foundation models that you can start experimenting with and building into your applications.
It also integrates the tooling you need to customize your models, whether that's fine-tuning, building RAG workflows, or building agents, and everything runs in a secure environment where you are in full control of your data. And speaking of choice, just to give you a quick overview, as of today this is the selection of models you can choose from.
We're working with leading companies such as AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, and Stability AI, and we also offer our own Amazon Titan models. And I'm super excited to call this out: last week, together with Anthropic's launch, we made Claude 3.5 Sonnet available on Amazon Bedrock as well, so since last week you can use that model too.
Super exciting. Now, with choice also comes responsibility, right? We continuously innovate to make it easier for you to build applications across the different model types, and just a few weeks ago we introduced the new unified Converse API on Amazon Bedrock. What does it do? The Converse API gives you a single, structured way to invoke models, meaning you can use the same parameters and request body regardless of which model you choose.
On the platform side, we handle the translation when parameters are named differently across models, we handle the system, user, and assistant prompts for you, and we give you a consistent output format, with native function-calling support as well. But let me show you how this looks in code.
Here's a Python example that shows how to use the new API. Since this is Python, we start by instantiating the SDK client. Then you define a list of messages; this is where you put in your user prompts, and you can put in system prompts as well.
You then pass this message list in a single API call using the Converse API. In the model ID, you choose which model you want to test; here I'm using an Anthropic model. Then you pass the messages and the inference parameters. Again, in this API all those parameters are standardized, and we do the work behind the scenes to convert them to the specific format each model expects.
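To make this concrete, here's a minimal sketch of that slide code (the model ID, prompts, and parameter values are illustrative):

```python
import boto3

# Create the Bedrock runtime client from the AWS SDK for Python.
client = boto3.client("bedrock-runtime")

# The conversation is a list of messages; swap the model ID to try another model.
messages = [
    {"role": "user", "content": [{"text": "Explain what an AI engineer does, in one paragraph."}]}
]

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=messages,
    system=[{"text": "You are a concise technical assistant."}],
    inferenceConfig={"maxTokens": 512, "temperature": 0.5},
)

print(response["output"]["message"]["content"][0]["text"])
```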
So you have an easy way to work across different models. Similarly, for function calling, the support is built in with the models that support it. You implement this by defining a tool list, where a tool is equivalent to a function you want to give the model access to.
Then, when you make the Converse API call, you pass this list of tools along with the request.
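As a hedged sketch of what that looks like (the tool name and schema here are made up for illustration):

```python
import boto3

client = boto3.client("bedrock-runtime")

# A tool list describing one illustrative function the model may call.
tool_config = {
    "tools": [
        {
            "toolSpec": {
                "name": "get_weather",
                "description": "Get the current weather for a given city.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {
                            "city": {"type": "string", "description": "The city name"}
                        },
                        "required": ["city"],
                    }
                },
            }
        }
    ]
}

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": [{"text": "What's the weather in San Francisco?"}]}],
    toolConfig=tool_config,
)

# If the model decides to use the tool, the response contains a toolUse block
# with the input arguments; your code runs the function and returns the result.
```

All right. If you want to find out more about the Converse API, here's a link to our generative AI space on community.aws, which has a lot more tutorials and code examples, not just for Python but across different languages as well.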
So check it out. The author, Dennis Traub, is also somewhere in the audience this week. So if you want to connect with him and talk about the different code examples and how to use the API, feel free to reach out. All right. Now let's integrate AI into our applications.
This could be a whole session on its own, but I want to focus on one of the hottest topics we're discussing during this conference, which is, of course, agents. And I have one more demo here: I asked my colleague Mike one more time to put together an exciting demo to show you what you can do with agents.
Mike Chambers: Amazon Bedrock lets you create agentic workflows right inside the AWS console and inside the service. It works fully serverless, and I've used it to create an agent that plays Minecraft. Let me show you how I did it. If we jump into the AWS console and go down the menu on the left-hand side to Agents, you can see the agents screen here, and I can open up my agent, my Minecraft agent.
Now, if I go into the agent builder and expand the screen a little, you can see some of the parameters I used to create this agent. You can see the large language model I used, in this case Claude 3 Haiku, and you can also see the instructions for the agent.
Now, these aren't just notes for myself. This is actually prompt engineering we're doing, to explain how we want the agent, in this case the Minecraft bot, to play the game. And then we also have to add some tools in, right, some Minecraft tools. We do that through actions, inside of action groups.
So I've got a couple of different action groups. We've got Minecraft actions and Minecraft experimental. Let's have a look at actions. And inside of here, we can see some really simple things, some actions that the bot will be able to do. And these are all linked up to code.
So we've got the action to jump, and we've got the action to dig. And you can see the description here for the action to dig; it's got some instructions. Again, this is prompt engineering. And then we've got some parameters that we want it to collect. In fact, we require these parameters, so the bot needs to collect them for us.
If I scroll down a little further, there are a couple of really simple actions in here: an action to get a player's location, and an action to move to a location. I want to show you those in action, because the bot can actually problem-solve and reason its way through using these tools to solve simple problems.
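Under the hood, registering an action like this with the API might look roughly like the following hedged sketch (using the bedrock-agent API; the IDs, names, and Lambda ARN are placeholders, not this demo's actual configuration):

```python
import boto3

client = boto3.client("bedrock-agent")

# Attach an action group whose functions are implemented by a Lambda function.
client.create_agent_action_group(
    agentId="AGENT_ID",  # placeholder
    agentVersion="DRAFT",
    actionGroupName="MinecraftActions",
    actionGroupExecutor={
        "lambda": "arn:aws:lambda:us-east-1:123456789012:function:minecraft-actions"  # placeholder
    },
    functionSchema={
        "functions": [
            {
                "name": "action_to_move_to_location",
                "description": "Move the bot to the given coordinates in the world.",
                "parameters": {
                    "x": {"type": "number", "description": "Target x coordinate", "required": True},
                    "y": {"type": "number", "description": "Target y coordinate", "required": True},
                    "z": {"type": "number", "description": "Target z coordinate", "required": True},
                },
            }
        ]
    },
)
```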
Let's jump into the game. It's nighttime, so let's set it to daytime so we can see what's going on: set time to day. Okay. And there in the middle of the screen, you can see Rocky. Rocky is the Bedrock agent running inside the game.
And we can talk to it. We can have a chat session. But what about if we want it to come to us? Now, there is no tool to come to us. So I'm just going to back up a little bit further, make it a little bit more of a challenge.
And I'm going to say, come to me in chat. And what's going to happen now is that the agent's going to reason through a whole set of actions. It's going to look to see who requested that. It's then going to take that name, and that's my name. And it's going to find the location of that player.
And then it's going to map a path from where it currently is to me. All of those things happened in the blink of an eye, and there are agentic workflows making all of it happen. This is super exciting. I'm discovering new things that this bot can do every day.
But with that, it's back to you. All right. Thank you, Mike. If you're curious how we did this, check out our booth session; we're running the demo there as well. And we have another session in the agents track later today, so make sure you pop in there if you want to know more.
And, of course, you can find the project code on GitHub. So if you want to play with it on your own and see how you can integrate agents into something fun, check out the project. All right. We're almost there. The last step I really want to call out is: stay up to date.
There's so much happening in this space, as you all know. And a really good way to do that is to engage with the community. Speaking of community, I have one last announcement to make. And I'm super excited to announce that we're transforming our AWS loft here in San Francisco into the AI engineering hub for the community.
So we're super excited to host workshops, events, and meetups there. If you want to suggest the topics you're most interested in, to make those events as valuable as possible for you, fill out this quick survey. And if you're interested in speaking or hosting a meetup yourself, you can let us know there too.
We also have another event tonight, a happy hour with Anthropic at the loft, which I think has just reached capacity. In case you didn't make it in, don't worry: we're working on putting together many more events like this in the upcoming weeks and months.
So keep an eye out for those. And with that, I'm coming to the end of my presentation. That wraps up the five practical steps to becoming an AI engineer. Let's innovate together; I'm looking forward to seeing what you build with AI. Thanks so much.
Make sure you're checking out the rest of the sessions here and also pop by our booth outside. Thanks so much. I'll see you next time.