Hey, good morning, everyone. Let's start by taking a step back. Remember GANs, Generative Adversarial Networks? They represented a very compelling architecture, in my opinion: two neural networks working hand-in-hand, one generating and one acting as the critic, in order to produce high-quality outcomes. Then came Transformers, and that changed everything.
We dropped the adversarial part, and the focus became solely the generative. And Transformers became the state of the art for a variety of use cases. But code is very, very nuanced. We believe that in order to generate code that actually works as intended, the right architecture is actually a GAN-like architecture. And what I mean by that is not the actual neural network.
It's the system. It's the concept of having two different components: one focused on the code generation piece, and one that serves as the critic. We call it the code integrity component. It analyzes the output of the code-gen component and reviews it.
It tries to figure out all the different edge cases, in order to generate high-quality code that works as intended, based on the developer's actual intent. This is our focus at Codium AI, on the critic piece. We help developers understand the behaviors of their code. We believe that behavior coverage is a more useful metric than actual code coverage.
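The generator-plus-critic loop described above can be sketched in a few lines. This is a toy illustration of the concept only; the function names (`generate_code`, `critique`, `generate_with_critic`) are hypothetical stand-ins, not Codium AI's actual API.

```python
# Minimal sketch of a generator–critic loop for code generation.
# All names here are hypothetical placeholders, not a real product API.

def generate_code(spec, feedback):
    """Placeholder generator: returns a candidate implementation.
    A real system would call a code-generation model with the spec
    and any critic feedback from previous rounds."""
    return "def add(a, b):\n    return a + b\n"

def critique(code):
    """Placeholder critic: returns a list of issues (empty = accepted).
    A real critic would analyze behaviors and edge cases."""
    issues = []
    if "return" not in code:
        issues.append("function does not return a value")
    return issues

def generate_with_critic(spec, max_rounds=3):
    """Alternate generation and critique until the critic accepts."""
    feedback = []
    for _ in range(max_rounds):
        candidate = generate_code(spec, feedback)
        feedback = critique(candidate)
        if not feedback:          # critic accepts the candidate
            return candidate
    return candidate              # best effort after max_rounds
```

The point of the loop is that the critic, not the generator, decides when the output is good enough.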
We help them generate tests for these behaviors, enhance their code, and review their code. And we do that throughout the developer lifecycle, leveraging our IDE extensions for both JetBrains and VS Code, and our Git plugin. And in the near future, we will also offer APIs for this, so it can be embedded in various agents.
So, we're going to spend the majority of the time on a live demo, which is a risky thing to do in a situation like this. But let's go for it. Okay, I'm here in my VS Code. I have the Codium AI extension installed. We now have around 200,000 installs across both JetBrains and VS Code.
I have here an open source project called AutoScraper. It's basically a scraping class that automates the process of generating the rules for scraping information from websites. It's a very cool project; it has more than 5,000 GitHub stars. But the problem is that it doesn't have any tests. And it's very hard to make changes to a project that doesn't have any tests, because there's nothing that protects you when you make changes.
So, I'm going to go ahead here and trigger Codium AI on this class. This is a 600-line class, complex code. And you can see that I can trigger Codium AI either at the class level or at the method level. So, I'm starting at the class level. I'm actually going to re-trigger it.
The first thing that happens is that Codium analyzes the class. It basically maps out different behaviors. And it starts generating tests. You can see it starts streaming the tests. I already have one, two. I'm getting more tests. You can see some of them are quite complex. It also generates a code explanation, detailed code explanation, that shows me how this class actually works.
The example usage, the different components, the methods, very detailed. And then I have all my tests. As you can see, we look at different examples, both happy path, edge cases, variety of cases. Okay, so here I have the different behaviors that were generated. Now, this is crucial. We're basically mapping the different behaviors of this class, doing both happy path, edge cases.
And for each one of them, we can drill down deeper and see the sub-behaviors below them. And we can generate tests for any that are important to us. So, let's pick a few and add additional tests. Let's pick some edge cases as well. Let's generate a test here.
Maybe here we'll generate another one for an edge case. And you can see it's very simple. A few clicks, and I have a test suite that is built out. I already have nine tests here. The next step will be to run these tests. So, let's go ahead and do that.
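The tests in the demo are behavior-driven: one test per behavior, covering both happy paths and edge cases. Here is an illustration of that shape. The function under test, `normalize_url`, is a hypothetical stand-in, not actual AutoScraper code.

```python
# Illustrative of behavior-driven tests like those generated in the demo.
# normalize_url is a hypothetical stand-in for one behavior of the class.

def normalize_url(url):
    """Tiny stand-in behavior: ensure a URL carries a scheme."""
    if not url:
        raise ValueError("empty URL")
    return url if url.startswith(("http://", "https://")) else "https://" + url

# Happy path: scheme-less URLs gain https://
def test_normalize_adds_scheme():
    assert normalize_url("example.com") == "https://example.com"

# Happy path: already-qualified URLs pass through unchanged
def test_normalize_keeps_scheme():
    assert normalize_url("http://example.com") == "http://example.com"

# Edge case: empty input raises
def test_normalize_rejects_empty():
    try:
        normalize_url("")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

Each behavior maps to one small, named test, which is what makes the generated suite readable.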
So, I'm hitting run and auto fix. You can see some of these very complex tests are actually passing. And here I have a test that actually failed. What happens in a failure is that the model actually analyzes, reflects on the failure, and then it tries to generate a fix in an automated manner.
So, we have a fix generated. And now it's going to be run. And it passed on the second try. So, this is the chain-of-thought, reflection process used to get to a high-quality test suite. Okay. So, I'm going to start with these eight tests. Let's open them as a file.
I'm going to save them in my project. And done. I have a test suite that now protects me. So, now I'm going to go ahead and take the next step. Let's use Codium AI to actually enhance this code. Now that I have a test suite that protects me. So, I'm going to choose a method here.
The build method that has a lot of the main functionality of the class. I'm going to trigger Codium AI on that. And now let's focus on the code suggestions component of Codium AI. So, Codium analyzes this code. And it basically recommends different improvements, enhancements. And these are deep enhancements.
We're not talking about linting or things like that. We're talking about things related to performance, security, best practices, readability. So, I'm going to look at this. Let's choose one that makes sense; maybe the first one, which looks quite important for performance. Basically, it recommends replacing the hashlib hash with BLAKE3.
I'm going to prepare the code changes and apply them to my code. And now I can save this. But remember, now I have a test suite. So, now I can actually go to my test suite and run it. And, of course, it broke on me, as things happen in a demo.
But, let's see this again. Okay, I have one test that failed. I'm going to ignore that for now. Okay, so let's continue. I created my test suite. I enhanced my code. The next step would be to prepare for my PR. So, I'm going to go ahead here and commit these changes.
And I'm going to go to the Codium AI PR assistant. And I'm going to do a /commit to get a commit message. And now I have a commit message, so I can commit. And now that I committed my changes, I can go ahead to the last step and prepare for the PR.
So, I'm going to do a /review. And that's basically a review process that Codium AI does. It will try to see if there are any issues, anything I may have missed. It will summarize the PR and give it a score. And then we can see if there is anything that maybe I have missed here.
Let's take a look. So, this is the main theme of the PR. You can see that it's tested. You can see that it's basically telling me that it's pretty well structured. Let's let it continue. But it says that it does introduce a potential security vulnerability. So, I'm going to do a /improve to try to fix that.
And it looks like I forgot an API key in my code. So, Codium AI will then suggest a fix for this. And I can actually see the API key in my code. Let's give it a second; it looks like I need to run it again. And this is where I actually have the API key in my code.
Yeah, now, here we go. So, basically, it's saying: here's the API key. I'm going to click on this and it will take me to where I actually forgot the API key. And this is the actual fix. So, with that, I'm going to conclude the demo so we can go back to the slides.
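The leaked-key detection in that last step resembles a simple pattern scan over the source. This is an illustrative sketch only, not Codium AI's actual check; the regex is a deliberately naive example.

```python
# Naive sketch of a hardcoded-API-key scan, for illustration only.
import re

# Matches assignments like: API_KEY = "some-long-token"
SECRET_PATTERN = re.compile(
    r'(api[_-]?key)\s*=\s*["\'][A-Za-z0-9_\-]{16,}["\']', re.IGNORECASE
)

def find_secrets(source):
    """Return every suspicious API-key assignment found in the source."""
    return [m.group(0) for m in SECRET_PATTERN.finditer(source)]

code = 'API_KEY = "sk_test_4f8a9b2c1d3e5f6a7b8c"'
print(find_secrets(code))   # one match flagged
```

Real scanners use many patterns plus entropy checks, but the workflow is the same: flag the line, then point the developer at the exact location to fix.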
So, we were able to see how we can use Codium AI to map our behaviors, to generate tests, to review our code, and to do it throughout the entire lifecycle. We also have, as I mentioned, a Git plugin that enables us to do that inside of GitHub as well.
So, I'm going to end with a personal note. So, we're a company that is based in Israel. While we were on the plane on the way here, the Hamas terrorist organization launched a vicious attack on Israel. The Hamas terrorists are not humans. They are animals. Maybe not even animals.
They entered towns; they slaughtered men, women, and children, innocent people in their homes, and abducted many into the Gaza Strip. This is the picture that my co-founder and CEO, Itamar, sent me. He left his eight-months-pregnant wife at home and is now on military reserve duty.
On the screen, you can see a chart that shows Codium AI usage constantly increasing. Behind it is his rifle. We will prevail. Thank you.