In the 48 hours since I released my first Code Interpreter video, I believe I have found another 12 use cases that showcase its power: from finding errors in datasets to reading Anna Karenina, ASCII art, and image captioning. Most of these I haven't seen anyone else mention, so let's begin.
First is creating a 3D surface plot, which you can see on the left, from the image on the right. I know I will get to professional uses in a second, but I was personally very impressed that all of this can be done through the ChatGPT interface.
You can even see the little buildings at the bottom left reflected in this 3D surface plot. To give you an idea of how it works, you click on the button to the left of the chat box, and then it analyzes whatever you've uploaded. And all I said was: "Analyze the RGB of the pixels and output a 3D surface map of the colors of the image."
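For anyone curious what that actually involves under the hood, the script it writes is presumably along these lines. This is my reconstruction rather than the code from the video, and I've used pixel brightness as the surface height for simplicity:

```python
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

# My reconstruction of the kind of script Code Interpreter writes for this prompt.
img = Image.open("uploaded_image.png").convert("L")   # brightness as the "height" of the surface
img = img.resize((200, 200))                          # downsample so the plot stays manageable
z = np.array(img, dtype=float)

x, y = np.meshgrid(np.arange(z.shape[1]), np.arange(z.shape[0]))

fig = plt.figure(figsize=(10, 7))
ax = fig.add_subplot(111, projection="3d")
ax.plot_surface(x, y, z, cmap="viridis", linewidth=0)
ax.set_title("3D surface map of the image")
fig.savefig("surface_plot.png", dpi=150)              # saving to disk is what makes it downloadable
```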
Now I will admit it doesn't do a perfect job immediately. At first it wasn't downloadable, and then it wasn't big enough, but eventually I got it to work. But it's time for the next example. And what I wondered was, what is the biggest document I could upload to get it to analyze?
The longest book that I've ever read is Anna Karenina. I think it's about a thousand pages long. I pasted it into a Word doc, and it's about 340,000 words. I uploaded it, and then, as you can see, I asked it to find all mentions of England, analyze them, and discover the tone in which the country is perceived in the book.
Now I know what some of you may be wondering: is it just using its stored knowledge of the book? I'll get to that in a second. But look at what it did. It went through and found the relevant quotes; there are seven of them there. I checked the document, and these were legitimate quotes.
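The retrieval step here is presumably just a plain Python scan over the text, something roughly like this sketch (the file name and the size of the context window are my placeholders); the tone analysis that follows happens in the model itself rather than in code:

```python
import re

# Sketch of the retrieval step: every mention of "England" with surrounding context.
with open("anna_karenina.txt", encoding="utf-8") as f:
    text = f.read()

passages = []
for match in re.finditer(r"\bEngland\b", text):
    start = max(0, match.start() - 300)               # roughly a sentence or two either side
    end = min(len(text), match.end() + 300)
    passages.append(text[start:end].replace("\n", " "))

print(f"Found {len(passages)} mentions of England")
for i, passage in enumerate(passages, 1):
    print(f"--- Passage {i} ---\n{passage}\n")
```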
But here's where we get to something that you can't just do with control F in a Word document. It analyzed the tone and sentiment of each of these passages. And you can see the analysis here. Then I asked, drawing on your own knowledge of the 19th century and the findings above, write a 2,000 word reflection on the presentation of England in Anna Karenina.
Now I know many of you won't be interested in that book, but imagine your own text. This is 340,000 words. It then created a somewhat beautiful essay. And yes, it did bring up each of those quotes with analysis. Now here is where things get kind of wild. Just to demonstrate that it's not using its own knowledge,
I asked the same question in a different window without, of course, uploading the file. And at first I was like, oh, damn, it did it. Here are the quotes. Wow, it did the job. It didn't even need the document. But that was until I actually checked out whether the quotes were legitimate.
And lo and behold, it had made up the quotes. I searched far and wide for them, and unless I'm going completely crazy, they are completely made up. So when it found those quotes earlier, it wasn't drawing upon its own knowledge; it was finding them in the document. And that serves as a warning about the hallucinations the model can produce when it doesn't have enough data.
I'm going to get back to reliability and factuality in a moment. But just quickly, a bonus: I got it to write an epilogue to The Death of Ivan Ilyich, an incredible short story by Leo Tolstoy. And as some people had asked, it can indeed output that to a PDF, which is convenient for many people.
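The PDF step, for what it's worth, is presumably handled by a library along the lines of reportlab; the library choice and file names here are my assumptions, so treat this as a minimal sketch of the idea:

```python
from reportlab.lib.pagesizes import A4
from reportlab.lib.units import cm
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.platypus import SimpleDocTemplate, Paragraph

# Minimal sketch of writing model-generated text out as a PDF.
epilogue_text = "First paragraph of the epilogue...\n\nSecond paragraph..."  # placeholder text

doc = SimpleDocTemplate("epilogue.pdf", pagesize=A4,
                        leftMargin=2 * cm, rightMargin=2 * cm)
styles = getSampleStyleSheet()
paragraphs = [Paragraph(p, styles["Normal"]) for p in epilogue_text.split("\n\n")]
doc.build(paragraphs)
```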
Next, what about multiple files? I didn't actually investigate this in my previous video, which, if you haven't watched it, please do check out; I've got 73 examples of use cases there. But anyway, what I wanted to try was uploading four datasets and then getting GPT-4 to find any correlations between them.
Also, I was kind of investigating if there was a limit to the number of files you could upload. And honestly, there doesn't seem to be. I picked this global data almost at random, to be honest. It was the amount of sugar consumed per person. And then the murder rate per 100,000 people.
And then the inequality index of each of those countries. And then the population aged 20 to 39. But first, notice how it didn't stop me. I could just keep uploading files. And then it would ask me things like: "Please provide guidance on the kind of analysis or specific questions you would like me to investigate with these four datasets." So it's still aware of the previous files.
What I asked was this: "Analyze all four datasets and find five surprising correlations. Output as many insights as you can, distinguishing between correlation and causation." This is really pushing the limits of what Code Interpreter can do. But it did it. Many of you asked before if it could be lulled with false data into giving bad conclusions.
And it's really hard to get it to do that. GPT-4 is honestly really smart and increasingly hard to fool. You can read what it said. It found a very weak negative correlation, for example, between sugar consumption and murder. And then just admitted there is probably no significant relationship between these two factors.
But notice it then found a correlation that it found more plausible. There is a moderate positive correlation, 0.4, between the murder rate per 100,000 people and the Gini Inequality Index. This suggests that countries with higher income inequality tend to have a higher murder rate. I then followed up with this: "Drawing on your own knowledge of the world, which correlation seems the most causally related?" It then brought in research from the field of social science and gave a plausible explanation about why this correlation might exist.
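If you want to replicate this kind of analysis yourself, the core of what it runs is presumably a merge on country followed by a correlation matrix, something like the sketch below; the file and column names are placeholders for the four datasets above, not the real headers:

```python
import pandas as pd

# Sketch of the cross-dataset step: merge the four files on country, then compute
# pairwise correlations between the indicators.
frames = {
    "sugar_per_person": pd.read_csv("sugar_consumption.csv"),
    "murder_rate": pd.read_csv("murder_rate.csv"),
    "gini_index": pd.read_csv("inequality.csv"),
    "pop_20_39": pd.read_csv("population_20_39.csv"),
}

merged = None
for name, df in frames.items():
    # Assume each file has a "country" column plus one value column (the last one).
    df = df.rename(columns={df.columns[-1]: name})[["country", name]]
    merged = df if merged is None else merged.merge(df, on="country", how="inner")

corr = merged.drop(columns="country").corr()
print(corr.round(2))
```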
Obviously this was just my example. You would have to think about all the different files and data sets that you were willing to upload to find correlations and surprising insights within. I'm going to try to alternate between fun and serious. So the next one is going to be kind of fun.
I was surprised by the number of comments asking me to get it to do ASCII art. Now you may remember from the last video that I got it to analyze this image. And yes, I asked it to turn it into ASCII art. And here is what it came up with. Not bad. Not amazing, but not bad.
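For reference, the standard recipe for ASCII art is just a brightness-to-character mapping; this is a generic sketch rather than the code it actually ran:

```python
from PIL import Image

# Generic ASCII-art recipe: map pixel brightness to a dark-to-light character ramp.
CHARS = "@%#*+=-:. "

img = Image.open("uploaded_image.png").convert("L").resize((100, 50))
pixels = img.getdata()
ascii_str = "".join(CHARS[p * len(CHARS) // 256] for p in pixels)
lines = [ascii_str[i:i + 100] for i in range(0, len(ascii_str), 100)]
print("\n".join(lines))
```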
A bit more seriously now, for data analytics, what I wanted to do is test if it could spot an error in a massive CSV or Excel file. This is a huge data set of population density. And notice what I did. Well, I say notice; you almost certainly wouldn't be able to.
But basically, for the Isle of Man, for 1975, I changed the original value of 105 to 1500. And I did something similar for Liechtenstein for a different year. Then I uploaded the file and said: "Find any anomalies in the data by looking for implausible percent changes year to year. Output any data points that look suspicious."
And really interestingly here, the wording does make a difference. You've got to give it a tiny hint. If you just say find anything that looks strange, it will find empty cells and say, oh, there's a missing cell here. But if you give it a tiny nudge and say that you're looking for anomalies, things like implausible percent changes or data that looks suspicious, then look what it did.
It did the analysis, and you can see the reasoning above. And it found the Isle of Man and Liechtenstein. It said these values are indeed very unusual and may indicate errors in the data. It said it's also possible that these changes could be due to significant population migration, changes in land area, or other factors.
I guess if, in one year, one of those places was invaded and only a city was left officially as part of the territory, the population density would skyrocket. So that's a smart answer. But it spotted the two changes that I'd made among thousands of data points.
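The check it ran is presumably something like the sketch below: compute year-on-year percent changes and flag anything implausibly large. The column layout, the label column and the 100% threshold are my guesses about the file:

```python
import pandas as pd

# Sketch of the anomaly check: flag year-on-year percent changes that look implausible.
df = pd.read_csv("population_density.csv")            # rows: countries, columns: years (assumed)
years = [c for c in df.columns if c.isdigit()]

pct = df[years].astype(float).pct_change(axis=1) * 100

suspicious = []
for idx, row in pct.iterrows():
    for year, change in row.items():
        if abs(change) > 100:                         # a >100% jump in one year is implausible
            suspicious.append((df.loc[idx, "Country Name"], year, round(change, 1)))

print(suspicious)
```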
In fact, I went and worked out how many data points there were in that file. I used Excel to do it, of course, and there were 36,000 data points. I made two changes, and it spotted both of them. But now it's time to go back to something a bit more fun and creative.
Next, I gave it that same image again. And I said: "Write a sonnet about a morose AI reflecting on a dystopian landscape." She does look kind of solemn here. I also asked it to overlay the poem in the background of the image, using edge detection to avoid any objects. Now, there are different ways of doing it.
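One plausible route, assuming OpenCV and PIL are available in the sandbox, is to score a coarse grid of the image by edge density and place the text in the quietest cell; this is a sketch of that idea, not the code from the video:

```python
import cv2
from PIL import Image, ImageDraw

# Sketch: score a coarse grid by edge density and draw the poem in the quietest
# cell, i.e. away from detected objects.
img = cv2.imread("uploaded_image.png")
edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 100, 200)

h, w = edges.shape
cells = [(y, x, edges[y:y + h // 4, x:x + w // 4].mean())
         for y in range(0, h, h // 4)
         for x in range(0, w, w // 4)]
y0, x0, _ = min(cells, key=lambda c: c[2])            # least edge activity = safest spot for text

out = Image.open("uploaded_image.png").convert("RGB")
draw = ImageDraw.Draw(out)
draw.multiline_text((x0 + 10, y0 + 10),
                    "Gone are the humans it once adored,\nleaving it in silent solitude...",
                    fill="white")
out.save("poem_overlay.png")
```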
It can detect the foreground and background. So it put the text away from the central character. I think the final result is really not bad. And the sonnet is pretty powerful. I'll read just the ending: "Gone are the humans it once adored, leaving it in silent solitude. In binary sorrow, it has stored memories of a world it once knew.
In the void, it sends a mournful cry. A ghost in the machine, left to die." Anyway, this is a glimpse of the power of merging GPT-4's language abilities with its nascent Code Interpreter abilities. Next, people asked about unclean data. So I tried to find the most unclean data I could.
What I did was paste in a bunch of different polls directly from a website, RealClearPolitics. And notice the formatting is quite confusing: you've got the dates on top, the matching data, coloured data, all sorts of things. Then I asked: "Extract out the data for the 2024 Republican presidential nomination.
Analyse the trend in the data and output the results in a new downloadable file." It sorted through and then found the averages for each of the candidates in that specific race. And I'm going to get to factuality and accuracy just a bit later on. The hint is that the accuracy is really surprisingly good.
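The cleanup it did was presumably something in this spirit, though the real pasted text is far messier; the block structure and candidate column names below are made up for illustration:

```python
import pandas as pd

# Rough reconstruction of the cleanup: isolate the 2024 GOP primary block from the
# pasted text, split it into columns, and average each candidate's numbers.
raw = open("pasted_polls.txt", encoding="utf-8").read()

block = raw.split("2024 Republican Presidential Nomination")[1].split("\n\n")[0]
rows = [line.split("\t") for line in block.strip().splitlines()]
df = pd.DataFrame(rows[1:], columns=rows[0])

for candidate in ["Trump", "DeSantis", "Pence", "Haley"]:      # placeholder column names
    if candidate in df.columns:
        average = pd.to_numeric(df[candidate], errors="coerce").mean()
        print(candidate, round(average, 1))
```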
But I wanted to push it a bit further and do some trend analysis. First, it analysed some of the other races from that very unclean data set. And then what I did is I pasted in an article from Politico. And based on this very messy data, I got it to do some political prognostication.
It analysed the article and the polls and then I think gave quite a smart and nuanced answer. And what about accuracy? I know many people had that question. Well, I uploaded this data and I'm also going to link to it in the description so you can do further checks.
It was a dataset for a fictional food company based in the United States, operating out of Boston and New York. And what I asked was: "Draw six actionable insights that would be relevant for the CEO of this company." It then gave the insights below. And even though I didn't actually ask for this, it gave six visualisations.
Let me zoom in here so you can see it. I then picked out five of those data points at random. Obviously, I'm not going to check hundreds of them, but I picked out five. Then I laboriously checked them in Excel, and all five checked out. Obviously, though, I'm not guaranteeing that every single one of them is correct.
And as I say, you can download the file and check the six visualisations yourself. So far, honestly, it's looking good. And then below we have more detail on those insights and then some actions that we could take as a CEO. Just like I did with Anna Karenina, I then followed up and said: "Use your own knowledge of the world and offer plausible explanations for each of these findings."
This is where GPT-4 becomes your own data analyst assistant. And it gave plausible explanations for these findings. And it also gave a full summary of some of the findings. For example, the higher sales in the East region could be due to a higher population density, better distribution networks, or higher demand for the company's products.
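Behind an insight like that, the aggregation it runs is presumably a simple groupby; here's a sketch, with the file and column names being my guesses rather than the actual headers in the linked dataset:

```python
import pandas as pd

# Sketch of the kind of aggregation behind "sales are higher in the East region".
df = pd.read_excel("food_company_sales.xlsx")         # file and column names are guesses

by_region = (df.groupby("Region")["Sales"]
               .agg(["sum", "mean", "count"])
               .sort_values("sum", ascending=False))
print(by_region)

# The same pattern covers most of the other insights, e.g. sales by product line:
print(df.groupby("Product")["Sales"].sum().sort_values(ascending=False).head())
```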
And at this point, you could either use the web browser plugin to do more research on your own, or you could upload more files. I actually asked, and I think this is a great question: "Suggest six other company datasets you would find helpful to access to test these suppositions."
Now, obviously, a lot is going to come down to privacy and data protection. But GPT-4 Code Interpreter can handle the rest: it can suggest further files that would help it with its analytics, and it gives those below. And again, the lazy CEO could just upload those files and get GPT-4 Code Interpreter to do further analysis.
You don't have to think about what to upload. GPT-4 will suggest it for you. Whether that's advisable or not, I'll leave you to decide. The next one is slightly less serious, and it's that Code Interpreter can output PowerPoint slides directly. Now, I know when Microsoft 365 Copilot rolls out, this might be a little bit redundant. But it's cool to know you can output the visualizations and analysis from Code Interpreter directly into PowerPoint.
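Under the hood this is presumably a library along the lines of python-pptx; the library choice and file names are my assumptions, so here's just a minimal sketch of one slide built around a chart image saved earlier by matplotlib:

```python
from pptx import Presentation
from pptx.util import Inches

# Minimal sketch: one slide with a title and a previously saved chart image.
prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[5])    # title-only layout
slide.shapes.title.text = "Sales by region"
slide.shapes.add_picture("sales_by_region.png", Inches(1), Inches(1.5), width=Inches(8))
prs.save("analysis.pptx")
```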
Now, on to mathematics. Many people pointed out that I didn't give Wolfram a fair shot, so I tested both Code Interpreter and Wolfram on differential equations.
And they both got it right. Interestingly, though, Wolfram gave you a link for the step-by-step solutions, because that's a paid option on the Wolfram website. But I did find some other differences between them, and honestly, the comparison favored Code Interpreter. So here is a really challenging mathematics question, and Wolfram couldn't get it right.
It says that the answer is 40%, even though that's not one of the options. And yes, it used Wolfram, I think, about five times. Here was the exact same prompt, except instead of saying "use Wolfram", I said "use Code Interpreter". And this was not a one-off example. It fairly consistently got it right.
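On the differential equations, by the way, Code Interpreter is presumably leaning on sympy; here's a minimal sketch with a stand-in equation, not the actual question from the video:

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

# Solve y'' + y = 0, a simple second-order linear ODE (a stand-in example).
ode = sp.Eq(y(x).diff(x, 2) + y(x), 0)
solution = sp.dsolve(ode, y(x))
print(solution)        # Eq(y(x), C1*sin(x) + C2*cos(x))
```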
So Code Interpreter does indeed have some serious oomph behind it. Just quickly again on the silly stuff. I uploaded the entire demo. And I did not even know that I was going to be able to do this.