
Move Fast Break Nothing: Dedy Kredo



00:00:07.000 | Hey, good morning, everyone.
00:00:16.120 | Let's start by taking a step back.
00:00:21.160 | Remember GANs, Generative Adversarial Networks?
00:00:27.000 | They represented a very compelling architecture, in my opinion.
00:00:31.560 | Two neural networks working hand-in-hand,
00:00:34.440 | one generating and one acting as the critic,
00:00:36.520 | in order to generate high-quality outcomes.
00:00:39.080 | Then came Transformers, which changed everything.
00:00:44.920 | We dropped the adversarial,
00:00:46.840 | and the focus became solely on the generative.
00:00:50.120 | And they became the state-of-the-art for a variety of use cases.
00:00:54.960 | But code is very, very nuanced.
00:00:59.520 | We believe that in order to generate code that actually works as intended,
00:01:05.520 | the right architecture is actually a GAN-like architecture.
00:01:10.480 | And what I mean by that is not the actual neural network.
00:01:15.040 | It's the system.
00:01:16.400 | It's the concept of having two different components.
00:01:19.920 | One focused on the code generation piece,
00:01:23.040 | and one that serves as the critic.
00:01:26.000 | We call it the code integrity component.
00:01:29.040 | It takes the output of the code generation component,
00:01:35.120 | and it reviews it and analyzes it.
00:01:38.160 | It tries to figure out all the different edge cases,
00:01:41.280 | in order to generate high-quality code that works as intended,
00:01:46.720 | based on the developer's actual intent.
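(For illustration, here is a minimal sketch of that generator/critic loop. The function names below are hypothetical placeholders, not Codium AI's actual components or API.)

def generate_code(intent, feedback=None):
    # Hypothetical placeholder for the code generation component (e.g. an LLM call).
    if feedback:
        return "# revised code for: " + intent
    return "# code for: " + intent

def critique(code, intent):
    # Hypothetical placeholder for the code integrity component.
    # Returns a list of issues: missed edge cases, behaviors not matching the intent, failing tests.
    return []

def generate_with_critic(intent, max_rounds=3):
    # Alternate generation and critique until the critic finds no remaining issues.
    code = generate_code(intent)
    for _ in range(max_rounds):
        issues = critique(code, intent)
        if not issues:
            break
        code = generate_code(intent, feedback=issues)
    return code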
00:01:49.600 | This is our focus at Codium AI, on the critic piece.
00:01:54.160 | We help developers understand the behaviors of their code.
00:01:58.160 | We believe that behavior coverage
00:02:02.640 | is a more useful metric than actual code coverage.
00:02:06.400 | We help them generate tests for these behaviors,
00:02:09.280 | enhance their code, and review their code.
00:02:12.160 | And we do that throughout the developer lifecycle,
00:02:15.840 | leveraging our IDE extensions for both JetBrains and VS Code,
00:02:20.400 | and our Git plugin.
00:02:22.960 | And in the near future,
00:02:27.040 | we will also offer APIs for this,
00:02:29.680 | so it can be embedded in various agents.
00:02:33.040 | So, we're going to spend the majority of the time on a live demo,
00:02:40.960 | which is a risky thing to do in this situation here.
00:02:44.000 | But let's go for it.
00:02:58.000 | Okay, I'm here in my VS Code.
00:03:01.440 | I have the Codium AI extension installed.
00:03:03.600 | We now have around 200,000 installs across both JetBrains and VS Code.
00:03:10.480 | I have here an open source project that's called AutoScraper.
00:03:13.840 | It's basically a scraping class that automates the process of
00:03:18.720 | generating the rules for scraping information from websites.
00:03:23.040 | It's a very cool project.
00:03:25.520 | It has more than 5,000 GitHub stars.
00:03:28.320 | But the problem is that it doesn't have any tests.
00:03:31.920 | So, it's very hard to make changes to a project
00:03:35.920 | that doesn't have any tests, because there's nothing that protects you when you make changes.
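(For context, this is roughly how AutoScraper is used, adapted from the project's README; the URL and wanted text are just illustrative.)

from autoscraper import AutoScraper

# Give AutoScraper one example page and one example of the data we want;
# it learns scraping rules that generalize to similar pages.
url = "https://stackoverflow.com/questions/2081586/web-scraping-with-python"
wanted_list = ["What are metaclasses in Python?"]

scraper = AutoScraper()
result = scraper.build(url, wanted_list)
print(result)

# The learned rules can then be reused on similar pages.
similar = scraper.get_result_similar("https://stackoverflow.com/questions/606191/convert-bytes-to-a-string")
print(similar)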
00:03:40.560 | So, I'm going to go ahead here and trigger Codium AI on this class.
00:03:45.120 | This is a 600-line class, complex code.
00:03:49.440 | And you can see that I can trigger Codium AI either on the class level or at the method level.
00:03:55.760 | So, I'm starting on the class. I'm actually going to re-trigger it.
00:03:58.480 | The first thing that happens is that Codium analyzes the class.
00:04:04.800 | It basically maps out different behaviors.
00:04:08.480 | And it starts generating tests.
00:04:10.880 | You can see it starts streaming the tests.
00:04:14.240 | I already have one, two.
00:04:15.520 | I'm getting more tests.
00:04:16.800 | You can see some of them are quite complex.
00:04:18.560 | It also generates a detailed code explanation
00:04:23.360 | that shows me how this class actually works.
00:04:25.280 | The example usage, the different components, the methods, very detailed.
00:04:31.440 | And then I have all my tests.
00:04:40.000 | As you can see, we cover different examples: happy path, edge cases, a variety of cases.
00:04:46.400 | Okay, so here I have the different behaviors that were generated.
00:04:59.920 | Now, this is crucial.
00:05:01.440 | We're basically mapping the different behaviors of this class,
00:05:04.640 | doing both happy path, edge cases.
00:05:07.040 | And for each one of them, we can drill down and see the sub-behaviors below them.
00:05:12.000 | And we can generate tests for any one that is important to us.
00:05:15.920 | So, let's pick a few and add additional tests.
00:05:18.160 | Let's pick some edge cases as well.
00:05:22.960 | Let's generate a test here.
00:05:24.480 | Maybe here we'll generate another one for an edge case.
00:05:26.880 | And you can see it's very simple.
00:05:32.000 | A few clicks, and I have a test suite that is built out.
00:05:35.920 | I already have nine tests here.
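(For illustration, a hand-written approximation of what such behavior tests might look like; this is not Codium AI's literal output, and it assumes AutoScraper's build() accepts an html= argument so the tests run without network access.)

from autoscraper import AutoScraper

def test_build_learns_rule_for_wanted_text():
    # Happy path: build() should learn a rule that captures the wanted text.
    html = "<html><body><h1>Hello World</h1><p>ignore me</p></body></html>"
    scraper = AutoScraper()
    result = scraper.build(html=html, wanted_list=["Hello World"])
    assert "Hello World" in result

def test_build_finds_nothing_for_missing_text():
    # Edge case: no element contains the wanted text, so no results come back.
    html = "<html><body><p>unrelated content</p></body></html>"
    scraper = AutoScraper()
    result = scraper.build(html=html, wanted_list=["Hello World"])
    assert not result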
00:05:37.120 | The next step will be to run these tests.
00:05:40.000 | So, let's go ahead and do that.
00:05:41.040 | So, I'm hitting run and auto fix.
00:05:49.200 | You can see some of these very complex tests are actually passing.
00:05:52.640 | And here I have a test that actually failed.
00:05:55.920 | What happens on a failure is that the model analyzes and reflects on the failure,
00:06:02.160 | and then it tries to generate a fix in an automated manner.
00:06:05.440 | So, we have a fix generated.
00:06:09.680 | And now it's going to be run.
00:06:12.240 | And it passed on the second try.
00:06:17.360 | So, this is the chain of thought,
00:06:19.840 | the reflection process, in order to get to a high-quality test suite.
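(A rough sketch of what such a run-and-auto-fix loop could look like; fix_test below is a hypothetical stand-in for the model's reflection step, not Codium AI's implementation.)

import subprocess

def fix_test(test_file, failure_log):
    # Hypothetical placeholder: reflect on the failure output and rewrite the failing test.
    pass

def run_and_auto_fix(test_file, max_attempts=2):
    # Run the tests; on failure, reflect and retry up to max_attempts times.
    for _ in range(max_attempts):
        proc = subprocess.run(["pytest", test_file], capture_output=True, text=True)
        if proc.returncode == 0:
            return True  # all tests pass
        fix_test(test_file, failure_log=proc.stdout + proc.stderr)
    return False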
00:06:23.360 | Okay.
00:06:26.320 | So, I'm going to start with these eight tests.
00:06:28.240 | Let's open them as a file.
00:06:34.240 | I'm going to save them in my project.
00:06:50.080 | And done.
00:06:50.480 | I have a test suite that now protects me.
00:06:52.560 | So, now I'm going to go ahead and take the next step.
00:06:58.240 | Let's use Codium AI to actually enhance this code.
00:07:00.720 | Now that I have a test suite that protects me.
00:07:04.800 | So, I'm going to choose a method here.
00:07:07.200 | The build method that has a lot of the main functionality of the class.
00:07:10.960 | I'm going to trigger Codium AI on that.
00:07:14.800 | And now let's focus on the code suggestions component of Codium AI.
00:07:20.320 | So, Codium analyzes this code.
00:07:26.960 | And it basically recommends different improvements, enhancements.
00:07:30.800 | And these are deep enhancements.
00:07:33.200 | We're not talking about linting or things like that.
00:07:37.680 | We're talking about things related to performance, security, best practices, readability.
00:07:44.400 | So, I'm going to look at this.
00:07:46.720 | Let's choose one that makes sense.
00:07:50.240 | Maybe the first one that looks quite important for performance.
00:07:54.800 | Basically, it recommends replacing hashlib with BLAKE3.
00:08:00.800 | I'm going to prepare the code changes.
00:08:02.800 | And apply it to my code.
00:08:05.120 | And now I can save this.
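(Illustrative before/after for that suggestion; the exact call site inside AutoScraper may differ. blake3 is a third-party package, installed with pip install blake3.)

import hashlib
import blake3  # third-party package: pip install blake3

text = "some scraped content"

# Before: hashing with SHA-256 via hashlib
digest_before = hashlib.sha256(text.encode("utf-8")).hexdigest()

# After: hashing with BLAKE3, which is typically much faster on large inputs
digest_after = blake3.blake3(text.encode("utf-8")).hexdigest()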
00:08:09.760 | But remember, now I have a test suite.
00:08:13.040 | So, now I can actually go to my test suite.
00:08:16.400 | And run it.
00:08:19.680 | And, of course, it broke on me.
00:08:24.800 | For some reason, as things happen in a demo.
00:08:28.400 | But, let's see this again.
00:08:35.360 | Okay, I have one test that failed.
00:08:39.600 | I'm going to ignore that for now.
00:08:49.920 | Okay, so let's continue.
00:08:50.960 | I created my test suite.
00:08:55.920 | I enhanced my code.
00:08:57.360 | The next step would be to prepare for my PR.
00:09:00.160 | So, I'm going to go ahead here and commit these changes.
00:09:08.960 | And I'm going to go to the Codium AI PR assistant.
00:09:11.600 | And I'm going to do a slash commit to get a commit message.
00:09:14.640 | And now I have a commit message.
00:09:21.120 | So, I can commit.
00:09:27.200 | And now that I committed my changes, I can then go ahead to the last step and prepare for the PR.
00:09:33.600 | So, I'm going to do a slash review.
00:09:34.880 | And that's basically a review process that Codium AI would do.
00:09:41.520 | And it will try to see if there are any issues, anything I may have missed.
00:09:45.360 | It will summarize the PR.
00:09:47.680 | It will give it a score.
00:09:48.720 | And then we can see if there is anything that maybe I have missed here.
00:09:53.200 | Let's take a look.
00:09:54.480 | So, this is the main theme of the PR.
00:09:57.200 | You can see that it's tested.
00:09:58.960 | You can see that it's basically telling me that it's pretty well structured.
00:10:02.240 | Let's let it continue.
00:10:10.160 | But it says that it does introduce a potential security vulnerability.
00:10:14.240 | So, I'm going to do a slash improve to try to fix that.
00:10:18.320 | And it looks like I forgot an API key in my code.
00:10:23.040 | So, Codium AI will then suggest a fix for this.
00:10:32.880 | And I can actually see the API key in my code.
00:10:39.680 | Let's give it a second.
00:10:40.560 | It looks like I'm going to do it again.
00:10:46.320 | And this is where I actually have the API key in my code.
00:10:56.000 | Yeah.
00:11:01.920 | Now, here we go.
00:11:02.560 | So, basically, it's saying here's the API key.
00:11:06.560 | I'm going to click on this, and it will take me to where I actually forgot
00:11:09.200 | the API key.
00:11:12.160 | And this is the actual fix.
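(For illustration, the typical shape of such a fix; the variable name and the demo's actual code are assumptions here: move the hardcoded key out of the source and read it from the environment.)

import os

# Before (flagged by the review): a secret committed to source control
# API_KEY = "sk-live-..."  # hardcoded API key

# After: read the key from the environment at runtime
API_KEY = os.environ.get("SCRAPER_API_KEY")  # hypothetical variable name
if API_KEY is None:
    raise RuntimeError("Set the SCRAPER_API_KEY environment variable")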
00:11:14.960 | So, with that, I'm going to conclude the demo so we can go back to the slides.
00:11:20.960 | So, we saw how we were able to use Codium AI to map our behaviors, to generate tests,
00:11:29.520 | to review our code, and to do it throughout the entire life cycle.
00:11:32.320 | We also have, as I mentioned, a Git plugin that enables us to do that inside of GitHub as well.
00:11:38.480 | So, I'm going to end with a personal note.
00:11:42.880 | So, we're a company that is based in Israel.
00:11:47.840 | While we were on the plane on the way here, the Hamas terrorist organization launched a vicious attack on Israel.
00:11:55.200 | The Hamas terrorists are not humans.
00:12:00.880 | They are animals.
00:12:02.000 | Maybe not even animals.
00:12:07.920 | They entered towns, they slaughtered men, women, and children, innocent people in their homes, and abducted many
00:12:20.320 | into the Gaza Strip.
00:12:21.280 | This is the picture that my co-founder and CEO, Itamar, sent me.
00:12:27.520 | He left his eight-months-pregnant wife at home, and is now on military reserve duty.
00:12:34.400 | On the screen, you can see a chart that shows Codium AI usage constantly increasing.
00:12:41.520 | Behind it is his rifle.
00:12:44.960 | We will prevail.
00:12:49.520 | Thank you.