By now you will have probably heard about AutoGPT, powered by GPT-4, which can prompt itself and autonomously complete tasks. Give it a mission and through a combination of automated chain of thought prompting and reflection, it will delegate tasks to itself and run until it's done, or at least until it falls into a loop.
I was going to do a video just on AutoGPT, but then Microsoft launched a demo of Jarvis, based on HuggingGPT. I tried it out and I'm going to show you that later, but then in the last 48 hours there were a further five developments, including the release of a long-term memory add-on to ChatGPT called MemoryGPT, the detailed plan for a 10 times more powerful model than GPT-4 from Anthropic, and the worryingly named ChaosGPT, based on AutoGPT and designed to cause maximum damage.
I'm going to try to cover it all, but the first upgrade to the original AutoGPT was to give it the ability to write its own code and then run it on the internet.
As the author of AutoGPT put it, this allows it to recursively debug and develop. I'm going to show you some amazing use cases in a moment, but this original demo caught the attention of OpenAI's Andrej Karpathy. He called AutoGPT the next frontier of prompt engineering, and later in the thread said this: "In fact, their goals are defined in English, in prompts." I think of it as a bit like another layer of automation, where you don't have to come up with each individual prompt, just the overall goal.
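To make that concrete, the heart of an AutoGPT-style agent is a simple loop: you state the goal once in English, and the model is repeatedly asked to propose and carry out its own next task until it decides it's finished. Here's a minimal sketch of that idea; to be clear, this is not AutoGPT's actual code, the prompt wording and the execute_task helper are placeholders of my own, and the client calls follow the 2023-era OpenAI Python API.

```python
# Minimal sketch of an AutoGPT-style loop: the goal is given once in English,
# and the model proposes its own next task each turn until it says it's done.
# Illustrative only; not AutoGPT's actual implementation.
import json
import openai

SYSTEM = (
    "You are an autonomous agent. Given a goal and the results so far, reply in JSON "
    'with keys "thoughts", "next_task" and "done" (true or false).'
)

def execute_task(task: str) -> str:
    # Placeholder "tool": a real agent would search the web, write files,
    # or run the code it has just written here.
    print(f"Executing: {task}")
    return f"Completed: {task}"

def run_agent(goal: str, max_steps: int = 10) -> list:
    results = []
    for _ in range(max_steps):
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": f"Goal: {goal}\nResults so far: {results}"},
            ],
        )
        step = json.loads(response.choices[0].message.content)
        if step.get("done"):
            break
        results.append(execute_task(step["next_task"]))
    return results

if __name__ == "__main__":
    run_agent("Find the top 5 digital nomad destinations and save them to a CSV file.")
```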
Based on my recent video on reflection, I already thought that was big enough news, but we're actually just getting started. Text-to-speech was then added to the model, and here you can see and hear how it's going to search the web for top digital nomad destinations, and then consolidate the info into a CSV file.
"Now I'm extracting information from a relevant website about the best places for digital nomads, including their short descriptions and cost of living. I have successfully created the nomad.zuss.esv file containing the best places for digital nomads, along with their short descriptions and cost of living. Thank you for using my assistance." McKay Wrigley showed how you can sync up speech-to-text to create an app just with your voice.
Based on a similar premise, his coding assistant could design a basic social network, including building and designing a front-end, creating a back-end with a working database, hooking them up, and uploading the code to GitHub. "Run Jarvis." "Yes?" "I need another Next app with Tailwind." "One sec. What would you like the app to do?" "This time I want a basic social networking app, and it needs to have three things.
It needs to have a profile creation form, a profile viewer, and I also want a way to see all the users on the network." "One sec. What would you like the profiles to have?" "A name, an email, and a username." "I'll add those fields to the profile schema.
What else can I do?" "I want you to optimize the site so that it works with mobile and desktop, and I also want you to style it in like a dark mode." "Anything else?" "Start building." "I'm building the features you asked for." "Oh, he's going fast." "Continuing. The app is building correctly.
Would you like me to deploy it?" "I'm all done. Wasn't that fast? You should give me a raise." "Guess what we have now? We have a profiles table with the correct schema. We should be able to submit this, and this should work. And if we go to our database, it is in our DB.
The game has changed, everyone." Of course, these are not stunningly complex projects, but will this mean you can soon create an app just by speaking your idea into your phone? Imagica AI certainly thinks so. This week, they debuted this. I'm on the waitlist and will review it when it comes out, but it certainly points the way towards what the future might look like. On a more concerning note, people have already tried to use AutoGPT to cause mayhem, giving it the goals of destroying humanity, establishing global dominance, causing chaos and destruction, controlling humanity through manipulation and, of course, destroying the world, and attaining immortality, for good luck. As I said earlier, this unrestricted agent didn't actually achieve anything other than creating this Twitter account and putting out a few sinister tweets.
But it is a reminder of how important safety tests are before an API is released. That was already enough news for one video. But then yesterday, there was news of MemoryGPT. As the creator put it, it's ChatGPT but with long-term memory. It remembers previous conversations. Here's a little glimpse of how it will work.
"I just made ChatGPT but with long-term memory." Basically, anything you say, it's going to remember and it's going to make your experience a lot more personalized. Let's also tell it that I'm launching a new project called Memory GPT, which is like ChatGPT but with long-term memory. It's going to say, "Wow, cool, all this stuff." But now, to prove that it works, I'm going to open it in a new tab.
I'm going to refresh my window. And let's also ask it if it knows of any projects I'm working on. Let's ask that. "Yeah, you're working on MemoryGPT, which is like ChatGPT." Imagine the possibilities that will open up when models like GPT-4 can remember everything you've talked about in the past.
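The creator doesn't walk through MemoryGPT's internals in that clip, but the standard way to bolt long-term memory onto a chat model is retrieval: embed every message, store it, and pull the most similar past messages back into the prompt on each turn. Here's a minimal sketch of that pattern, with all the names my own; it isn't MemoryGPT's actual code.

```python
# Sketch of chat long-term memory via embedding retrieval: every user message
# is embedded and stored, and each new turn retrieves the most similar past
# messages into the prompt. Illustrative only; not MemoryGPT's actual code.
import numpy as np
import openai

memory = []  # list of (text, embedding) pairs, persisted across conversations

def embed(text: str) -> np.ndarray:
    response = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(response["data"][0]["embedding"])

def recall(query: str, k: int = 3) -> list:
    q = embed(query)
    scored = sorted(
        memory,
        key=lambda item: float(np.dot(q, item[1]) / (np.linalg.norm(q) * np.linalg.norm(item[1]))),
        reverse=True,
    )
    return [text for text, _ in scored[:k]]

def chat(user_message: str) -> str:
    context = "\n".join(recall(user_message)) if memory else "(nothing yet)"
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": f"Things the user has told you before:\n{context}"},
            {"role": "user", "content": user_message},
        ],
    )
    memory.append((user_message, embed(user_message)))
    return response.choices[0].message.content
```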
Just when I was getting ready to film that video, Quora released this Create a Bot feature on their website, Poe.com. You can use either the Claude model or ChatGPT for this feature. Essentially, what it does is it allows you to give a bot a certain background and personality and then share that bot with others.
To quickly demonstrate, I decided to make my bot an impatient French film director with a pet parrot. This is all totally free. You just scroll down and click on Create Bot. This creates a chatbot and a URL which you can then send to anyone you like.
It's actually really fun to chat to these personalities. And of course, you can do it in the director's native tongue of French. And he will respond in kind in fluent French. One other great thing you can try is creating two different bots and getting them to debate each other.
Here I had Nikola Tesla in conversation with Aristophanes. You just create two bots and copy and paste the outputs. It's an amazing conversation. And less than 72 hours ago, the creators of Claude, Anthropic, announced a $5 billion plan to take on OpenAI. TechCrunch obtained these documents and I found two fascinating quotes from them.
The model is going to be called Claude Next and they want it to be 10 times more capable than today's most powerful AI, which would be GPT-4. This would take a billion dollars in spending over the next 18 months. Now I know some people listening to that will say 10 times more powerful than GPT-4 in 18 months.
That's just not realistic. Just quickly for those people, here is what Nvidia say. On a recent earnings call, the CEO of Nvidia said that over the next 10 years, they want to accelerate AI by another million X. If you break that down, that would be about 10 times more compute every 20 months.
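As a quick sanity check on that arithmetic: a million is 10 to the power of 6, and ten years is 120 months, which is six 20-month periods.

```latex
% Nvidia's "another million X" over ten years, compounded in 20-month steps:
% 120 months / 20 months = 6 steps, so the per-step growth factor g satisfies
\[
    g^{6} = 10^{6} \quad\Longrightarrow\quad g = 10
\]
% i.e. roughly 10 times more compute every 20 months.
```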
So the Anthropic timelines look plausible. And the second fascinating quote was this. "We believe that the companies that train the best 2025-26 models will be too far ahead for anyone to catch up in subsequent cycles." It is very tempting to speculate as to why that might be. Could it be that the frontier models that these companies develop would then assist those companies in developing better models?
Or is it that these companies would eat up so much compute that there wouldn't be much left for other people to use? Who knows, but it's fascinating to speculate. Before I end though, I must touch on two last things: HuggingGPT and the Jarvis model this video was originally supposed to be about.
And also safety. Here is the HuggingGPT demo, codenamed Jarvis, released by Microsoft. The link will be in the description, as will some instructions on how to set it up. But I should say, it's a little bit hit and miss. I would call it an alpha prototype. By the way, if you haven't heard of HuggingGPT, check out my video on it.
It's a very interesting experiment. Essentially, it uses a GPT model as a brain, and delegates tasks to other AI models on Hugging Face. When it works, it's really cool, but it takes a little while and doesn't work too often.
From my own experiments, I've noticed that the images have to be fairly small, otherwise you'll get an error. But let me show you one example where it worked. After setting it up, I asked it this: "Please generate an image where four people are on a beach, with their pose being the same as the pose of the people in this image." I know there's a slight typo, but it understood what I wanted, and the image, by the way, was generated with Midjourney.
What did the model do? Well, it analyzed the image, used several different models, and detected the objects inside the image. It then broke down their poses, and generated a new image with the same poses, with people on a beach. That's four or five different models cooperating to produce an output.
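The real HuggingGPT pipeline (task planning, model selection, task execution, response generation) is laid out in Microsoft's paper and repo; the sketch below is just my compressed illustration of the idea, with the pose-estimation and image-generation calls left as stand-ins because I'm not showing which exact Hugging Face models Jarvis picked.

```python
# Compressed sketch of the HuggingGPT idea: a GPT "brain" plans sub-tasks as JSON,
# and each sub-task is dispatched to a specialist model from Hugging Face.
# Illustrative only; the real Jarvis/HuggingGPT code lives in Microsoft's repo.
import json
import openai
from transformers import pipeline

PLANNER_PROMPT = (
    "Break the user's request into a JSON list of tasks. Each task has a 'type' from "
    "['object-detection', 'pose-estimation', 'image-generation'] and an 'input'."
)

detector = pipeline("object-detection")  # a real transformers pipeline task

def estimate_poses(image_path: str):
    raise NotImplementedError("Stand-in for a pose-estimation model on Hugging Face.")

def generate_image(prompt: str):
    raise NotImplementedError("Stand-in for a pose-conditioned image generator.")

def plan(user_request: str) -> list:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": PLANNER_PROMPT},
            {"role": "user", "content": user_request},
        ],
    )
    return json.loads(response.choices[0].message.content)

def run_task(task: dict):
    if task["type"] == "object-detection":
        return detector(task["input"])        # e.g. find the people in the source image
    if task["type"] == "pose-estimation":
        return estimate_poses(task["input"])
    if task["type"] == "image-generation":
        return generate_image(task["input"])
    raise ValueError(f"Unknown task type: {task['type']}")

def jarvis(user_request: str) -> list:
    return [run_task(task) for task in plan(user_request)]
```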
But before I end, I do briefly want to touch on safety. Some of these models fail quite hard. They end up in loops, but sometimes quite concerning loops. This AutoGPT ended up trying to optimize and improve itself recursively. Of course, it failed, but it is interesting that it attempted to do so.
And remember, this isn't the full power of the GPT-4 model. This is the fine-tuned, safety-optimized version. And that does make it a less intelligent version of GPT-4, as Sébastien Bubeck recently pointed out with an example. Over the months, you know, we had access to GPT-4 in September, and they kept training it.
And as they kept training it, I kept querying for my unicorn in TikZ, to see, you know, what was going to happen. And this is, you know, what happened. So it kept improving. And I left out the best one, it's on my computer. You know, I will maybe reveal it later.
But, you know, it kept improving after that. But eventually, it started to degrade. Once they started to train for more safety, the unicorn started to degrade. So if tonight, you know, you go home and you ask GPT-4 and ChatGPT to draw a unicorn in TikZ, you're going to get something that doesn't look great.
Okay, that's closer to ChatGPT. And this, you know, as silly as it sounds, this unicorn benchmark, we've used it a lot as kind of a benchmark of intelligence, you know. So yes, we're not getting the most powerful or intelligent version of GPT-4. But in some circumstances, that might actually be a good thing.
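If you haven't seen the benchmark, the prompt literally asks the model to emit TikZ drawing commands, something along these lines; this crude example is my own illustration, not GPT-4's output.

```latex
% A deliberately crude example of the kind of TikZ a model might emit for
% "draw a unicorn": body, legs, neck, head and horn as basic shapes.
% My own illustration, not GPT-4's actual output.
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
  \draw[fill=white] (0,0) ellipse (1.2 and 0.7);          % body
  \foreach \x in {-0.8,-0.4,0.4,0.8}
    \draw (\x,-0.6) -- (\x,-1.5);                          % legs
  \draw (1.0,0.4) -- (1.5,1.2);                            % neck
  \draw[fill=white] (1.6,1.3) circle (0.3);                % head
  \draw (1.75,1.55) -- (1.9,2.1);                          % horn
  \draw (-1.2,0.2) .. controls (-1.7,0) .. (-1.6,-0.6);    % tail
\end{tikzpicture}
\end{document}
```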
As Yohei, the creator of BabyAGI, which is similar to AutoGPT, demonstrated in this example. He tasked his model to create as many paperclips as possible. Sounds good. But the model refused, saying that it should be programmed with a goal that is not focused solely on creating paperclips. And later on said this: "There are currently no known safety protocols to prevent an AI apocalypse caused by paperclips."

Eliezer Yudkowsky, a decision theorist and AI safety researcher, reacted like this: "That face when the AI approaches AGI safety with the straightforwardness of a child and gives it primary attention from step one, thereby vastly outperforming all the elaborate dances and rationalizations at the actual big AI labs." And he ended by saying: "To be clear, this does not confirm that we can use AIs to solve alignment, because taking the problem with the seriousness of a child is not enough; it's only the first step." But Sam Altman may have a different idea.
Four days ago, he admitted that they have no idea how to align a superintelligence, but that their best idea was to use an AGI to align an AGI. "But we do not know, and probably aren't even close to knowing, how to align a superintelligence. And RLHF is very cool for what we use it for today, but thinking that the alignment problem is now solved would be a very grave mistake indeed.
I hesitate to use this word because I think there's one way it's used which is fine, and one that is more scary, but these systems can start to be like an AI scientist and self-improve. And so, can we automate our own jobs as AI developers, the very first thing we do?
Can that help us solve the really hard alignment problems that we don't know how to solve? That, honestly, I think is how it's going to happen." So it could be that the first task of a future AutoGPT model is: solve the alignment problem. Let's hope that prompt comes back with a positive output.
Thank you so much for watching to the end, and have a wonderful day.