Amid the escalating buzz surrounding AI tools, many development teams grapple with deciding which ones suit their needs best, when to adopt them, and the potential risks of not doing so. As AI continues to pose more questions than answers, the fear of falling behind the competition lurks for many.

This week's episode of Dev Interrupted aims to dispel these uncertainties by welcoming CodiumAI’s founder & CEO, Itamar Friedman. In one of our most illuminating discussions this year, Itamar pierces through the AI hype, explaining what AI tools bring to the table, how to discern the ones that would truly augment your dev teams, and the strategies to efficiently identify and experiment with new tools.

Beyond the allure of AI, Itamar doesn't shy away from addressing its pitfalls and adversarial risks. He also probes into the future of the developer's role in an increasingly AI-driven landscape, answering the question: “Will there be developers in 10 years?”

Episode Highlights:

  • (2:40) Founding CodiumAI
  • (8:25) Will there be developers in 10 years?
  • (11:20) What kinds of AI tools are popping up?
  • (15:00) Core capabilities of AI
  • (19:30) Finding AI tools to solve pains you don't know you have
  • (23:00) Enabling your team to use AI
  • (26:45) Falling behind the competition
  • (33:00) Pitfalls of AI
  • (38:30) Adversarial risks of AI
  • (43:45) Experimenting with new tools
  • (47:40) Measuring the success of AI tools
  • (50:15) Will AI replace or empower us?

Episode Transcript:

(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

Yishai Beeri: Hey everyone, and welcome to Dev Interrupted. My name is Yishai Beeri. I'm the CTO at LinearB, and I'm filling in for Dan Lines today. Some of you may have seen me in previous episodes, mostly our Labs episodes. That's because I work a lot with our data team, and I'm exposed to a lot of the exciting stuff that we're doing with understanding the behaviors of developers and dev teams at large, using the huge amount of data that we have.

I'm happy to dive into AI today, the buzzword that everyone is talking about, and I have a special guest. So welcome, Itamar Friedman, the co-founder and CEO at CodiumAI.

Itamar Friedman: Hey Yishai, happy to be here. Thank you for inviting me.

Yishai Beeri: Great, great to have you. Thanks for joining. I always like to start by having my guests give a brief overview of their background, their journey so far. So you're now co-founder and CEO of Codium, but how did you start and how did you get here?

Itamar Friedman: I'll keep it short. I was a two-time CTO of VC-backed startups, so about 20 years of R&D management, CTO roles, and hacking. My last company was acquired by Alibaba Group, and then I joined Alibaba Cloud.

The Alibaba journey was fascinating. I was there for four years, and we did quite a few projects, but there are two products I'm really proud of. One is an AutoML solution we built as part of Alibaba Cloud, meaning we automated machine learning model creation. The second was actually a B2C app.

Within a year and a half of development from Israel, we reached 10 million monthly active users, and it's growing nicely and already monetizing, et cetera. Before that I did bachelor's and master's degrees at the Technion; in my master's, machine learning optimization was one of my favorite topics. I worked here and there at a few startups and companies. Maybe worth mentioning that in the early days I worked on verification at Mellanox, and that was actually hardware verification.

Then I worked on system verification at a company called Silk. So that's also relevant to what I'm doing today. Roughly speaking, that's my background.

Yishai Beeri: Great. So you moved from the more formal stuff of ML and verification all the way to the new, fluffy area of, I don't know, generative AI.

Can you tell us a little bit about CodiumAI? What does CodiumAI do?

Itamar Friedman: Yeah, of course. The really early days of my career, if you can call it that, were actually when I was a teenager and I had my first company, with something like 40 clients and quite a few employees.

I'm mentioning it because my very first employee is today my co-founder, Dedy Kredo. So we've known each other for 25 years or more. It's just to say that I've done web, mobile, hardware, systems, networking, storage, and actually even robotics, but my major is machine learning. In almost all of these, there were algorithms and machine learning involved.

So I've seen algorithms from the previous century, those of the 2000s and 2010s, and in my master's degree I actually dealt with a bit of deep learning. I've been taking part in this amazing progress and evolution, and I actually see it as step-by-step progress.

Yes, progress is going faster, and the gradient is even growing, but I do see it as progress. For example, in 2016 there was already the GAN, the generative adversarial network, with one network in charge of generating content and another in charge of verifying the content, et cetera.

And then in 2017 there was "Attention Is All You Need," et cetera. So I see it as a direct continuation, and I wasn't surprised by what we're seeing today with LLMs. One of the reasons I left Alibaba at the end of 2021, beginning of '22, is that we developed with large language models there; we even created a few foundation models at a certain scale.

And we saw how powerful these models became. If you can frame a problem as a language, not necessarily natural language, it could be a DNA sequence, it could be user behavior on your website. If you can frame it as a language, a large language model can get you really far with analyzing the content and also generating relevant content.
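[Editor's note: to make the "frame it as a language" idea concrete, here is a minimal sketch, ours rather than Itamar's, of treating website usage as a language. The event names and sessions are invented for illustration; any sequence model could then consume the resulting token IDs as if they were words.]

    # Treat user sessions as "sentences" whose "words" are UI events.
    # A language model can then train on this corpus exactly as if it
    # were natural language.
    sessions = [
        ["home", "search", "product:123", "add_to_cart", "checkout"],
        ["home", "product:456", "product:123", "add_to_cart"],
    ]

    # Build a vocabulary over every distinct event token.
    vocab = {tok: i for i, tok in enumerate(sorted({t for s in sessions for t in s}))}

    def encode(session):
        """Map one session to the integer token IDs a model consumes."""
        return [vocab[t] for t in session]

    corpus = [encode(s) for s in sessions]
    print(corpus)  # [[2, 5, 3, 0, 1], [2, 4, 3, 0]]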

And then I thought to myself: hey, as an R&D manager for 20 years, maybe it's time to tackle one of my biggest problems, which is code logic testing. Similar to what I did in the hardware area, where we did formal verification and had tools for it, because it was really expensive to tape out.

Tape out means sending hardware to production, and having bugs there is very costly. But in software we have fewer tools of that kind, and there are reasons; it's not just how much it costs to ship software. I realized that some of the reasons we don't have good products and tools to check code logic can be mitigated or even solved with AI. That's why, when I left Alibaba, after some deliberation about the exact direction, in July '22, which is almost one year ago, we're celebrating one year soon, I started a company together with my partner, Dedy Kredo.

What we're doing at CodiumAI is building an AI coding assistant focused on helping developers reach zero bugs. The second part is the most important. There are a lot of code generation tools, but we're actually more like the adversarial side. We're helping to reach zero bugs, helping to verify that the code is generated as expected, that what you write, whether with AI or not, fits your intent, et cetera.

So that's what we do, and we got there through our background in machine learning and large language models, and through our own pains as developers and managers.

Yishai Beeri: Amazing. So you're deep in that space where AI and developers meet. As we prepped for this show, you said that maybe one of the greatest questions now hovering over our industry is: are we even gonna have developers in 10 years?

So what's your take on that? Is AI gonna eat our lunch?

Itamar Friedman: AI is gonna eat our lunch, but we probably have much more lunch to eat in general. First of all, it's very hard to predict, especially the future, but I'm still going to try. I think it's worthwhile separating between 5, 10, and 15 years.

I have a general thinking process: when I have a riddle, I like to think about the extremes. So I have a question for us: imagine a future with no developers at all.

Can you think about it for a second? You ask for any piece of software, anything that you want, and get it from AI, even software that was never written before or that requires really high coordination between different systems, between people, et cetera.

When I put it in a timeframe of, I don't know, 10 years, I don't see that. For that to happen, AI would need to rule, to some extent, the entire ecosystem, to rule the agenda. Then it may happen, and that's why I said maybe 15 years is another issue.

Now the other way around: do you see a case where, in five years, AI is not taking a major part? That also doesn't make sense. You see how much it's doing right now, and you see all the diverse effort being put into it. In five years, I think we're gonna have an intelligent software development stack that enables every developer, actually even creators, to code and deploy and think about software as if they are enhanced, as if they have 10 more skilled developers working for them, to some extent.

So that's how I see the five-year mark. At the 10-year mark, AI is doing a lot, and as software developers we can reach much harder tasks, like reaching Mars and whatever. Fifteen years is really hard to predict; it really depends on where AI takes us, generally speaking.

So that's how I see it.

Yishai Beeri: So let's unpack this large question. I see where you're going, looking at the edges of this problem and understanding that it's probably never gonna be all or nothing. But now I'm a development manager, I'm a VP of engineering somewhere, or I'm a developer starting out or somewhere in my journey.

What are people asking? What is top of mind now for people who are starting to see this AI-in-development shift happening?

Itamar Friedman: Great question. I do wanna say, I think you should have two angels on your shoulders right now.

Sorry, two advocates, I meant: a devil's advocate and an angel's advocate. First, I'm gonna relate to the angel's advocate and elaborate a bit on what I just said. Listening to your angel's advocate, you should be really open and active about evaluating and adopting AI.

At the same time, with the devil's advocate, you need to ask yourself a few questions. What is the transition? What is the best tool I want to try right now? Which of them is actually mature? Or, if you want to be a bit more positive, which one really fits my use case? You consider them both, and maybe later we can put more focus on the devil's advocate.

But first I'll focus on the angel's advocate. Speaking at a really high level, as developers we usually have the setup phase: setting up our environment, communicating with the product team about what we're building, doing the research about how we want to accomplish things.

Then there's the implementation phase, right? You write your code, you debug, you commit, you have the process with the development team. And then there's the deployment stage, where I'm including not only the CI/CD and the cloud, but actually even the feature flagging.

My point here is that for each one of these elements, and there are sub-elements, each task or sub-process, you will find a tool, or even a few, somewhere between half-baked and mature. What you can do is start with your pains: break down your difficulties as a manager, or as a developer, by the way.

Let's focus on the manager for a second: where are my difficulties? Look at this matrix and try to think of the pains where you can imagine that, if you had an intelligent creature, don't think about the product yet, it could help.

Then start looking for products, and there's a really good chance you'll find something. Let's look at CodiumAI, if I may talk about what we're developing. Let's say you're a manager who is really struggling to reduce the amount of bugs in production.

The way you're doing it, I don't know, is asking your developers to add more tests and increase code coverage. But code coverage can be a proxy metric for actually good testing, or even sometimes a vanity metric. If you don't treat it correctly, and it's hard to treat it correctly, then okay, you have a pain.
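[Editor's note: a minimal sketch, ours rather than from the episode, of why code coverage can be a vanity metric. The function and tests are invented for illustration: the first test executes every line, so line coverage reads 100%, yet it asserts nothing and lets an obvious bug through.]

    def apply_discount(price: float, percent: float) -> float:
        """Intended: reduce price by percent. Buggy: it adds instead."""
        return price * (1 + percent / 100)  # bug: should be (1 - percent / 100)

    def test_vanity_coverage():
        # Runs 100% of the lines above, so coverage looks perfect,
        # but with no assertion the bug sails through.
        apply_discount(100.0, 20.0)

    def test_actual_behavior():
        # A behavior-level check catches the bug immediately.
        assert apply_discount(100.0, 20.0) == 80.0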

Then you search for those words and you'll find CodiumAI, because those are exactly the pains we want to help with. If you look at the comments on our product, you see people saying, "I hated testing, now it's fun," and, "Oh my God, I found two bugs in production."

Things like that, in code that is already in production. So that's my main recommendation.

Yishai Beeri: Think about AI as another intelligent being, and ask how that can apply to my problem.

Is it like asking: what if I had another developer on my team that I would put just on these tasks? Because developers tend to be intelligent, if I'm looking for a general answer through intelligence, if I hired another developer and asked them to do this, is that a good initial proxy for thinking about what I can get from AI?

Just more firepower with brains, probably cheaper than a developer. Is that, ideally, on par with what an AI could give me?

Itamar Friedman: In one word? Yes. Two words? Yes, no. And four: yes, no, maybe, so. I'll explain. I think AI's core capabilities are relatively different, in my opinion.

And then it depends on how the product is packaged. A product that is AI-powered could behave like a developer, but it could do other things. Just for the analogy, you could say that Google does the work of a million people, a million developers, going over the entire internet and indexing it, something that would be impractical for humans. So it's "yes" because some products try to be chatty and imitate talking to a developer, but some of them bring a UI that is really different.

So I agree that it's one way to think about it. But another way is that there are capabilities like indexing and looking at things as a whole. And one more perspective: you also care a lot about developer happiness, and maybe you don't care so much about AI happiness.

By the way, right now I do care; I'm just considering that for the future. For example, one of the reasons, and it's only one reason, that at CodiumAI we decided to tackle code integrity, code logic testing, is that right now developers either don't do much testing, mostly because they hate it, or because they like creation and not verification, or they do testing and practice verification, et cetera.

But they hate it. So either you don't do it and hate it, or you do it and hate it. And here you go: there's a tool that can help with developer happiness, spread love, and support this good practice. So to summarize this point: yes, it does make sense to think with this framework of "I'm actually buying more developers, and much cheaper," but I think it's more that I'm adding capabilities that maybe developers couldn't provide, especially things at large scale, just in time.

It never goes to sleep, and it does the harder stuff, or the stuff developers maybe wouldn't want to do. You still want the developers as the drivers, if you ask me, and not AI. AI right now is more about empowering.

Yishai Beeri: Using that metaphor to imagine where I could use AI, right?

If I'm stuck on resources, or my developers don't wanna do this, and we all hate verification, then I say: okay, what if I had a magic developer that loved it and was good at it? But you're saying it goes beyond that; some skills or capabilities are very different.

For example, having access to troves of data and being able to index, or to work very fast on a large-scale problem. Okay, so recommendation number one: you're saying look at your current pains as a practitioner or a manager, know that there's a new class of solutions bubbling up around probably most of these pains, and actively look for something.

Sometimes what I'm seeing with the copilots and other solutions is that developers are not necessarily finding them because they have a pain. I know how to eventually write my web app or my database service, but I'm still using Copilot. So sometimes it's not direct; I'm not looking for something to ease a pain, but I find something cool that actually saves me time.

So how do you go about discovering those things where I'm not even aware, or not acutely aware, of the pain?

Itamar Friedman: Cool. I know I'm generalizing a bit here, but as an R&D manager or director of engineering or whatever, what's your purpose?

What we're aiming for is maximum value for the organization in minimum time: getting maximum value out of my R&D team, developing the product and the business, in minimum time. So spending time is a pain, for example.

Pre-Copilot, you would spend a lot of time searching on Google and Stack Overflow and things like that. It would take you a lot of time to search, and also to combine searches. I think Copilot is basically disrupting that part. Instead of going to search on Google or Stack Overflow, et cetera, fortunately and unfortunately for different stakeholders, you just search inside your IDE.

But it also combines a few searches; that's what's true about it. So it actually saves you a lot of time, and time is a pain. The second thing is that for junior developers, if I may be subjective, it actually even increases your level. It not only helps you with the search and the combination, but actually chooses the best result for you.

To some extent, if you're a junior, it probably raises your level. So those are two pains: reducing time, and leveling up. But I agree that maybe you wouldn't think about them directly. This is why I also think, with all due respect, that Copilot is a really productive tool, but if you think it's a game changer, I think in five years you'll think differently.

You will see real game changers. I know what I'm saying is more of a vision; I'm not saying that right now CodiumAI guarantees you have zero bugs. It's like Tesla and zero emissions: they haven't necessarily reached it, but they're progressing really nicely.

So think about knowing you have zero bugs in production, and how much faster you can move knowing that you're covered this way. I think that's a game changer. Again, I have a lot of respect for Copilot and code generation; I think it really boosts productivity. Maybe you didn't think about it as a pain originally because it's not that acute, but if you think about the two pains I mentioned, you'll find they're actually quite big, maybe not acute. That's why there's fun and also magic in it, right?

As a developer: oh, I can't believe AI is doing this for me. And it was original, one of the first AI products. That's, I think, why we also love it so much.

Yishai Beeri: So if I'm a dev manager, there's looking for my specific pains and actively looking for solutions. But what about my developers? People on my team are probably looking for those solutions too, starting to use them.

How do I become more proactive, not just finding something to help me with a pain, but also helping my team discover and start to use these tools? We're gonna talk about pitfalls in a bit, but even before that...

Even in the rosy area, it's one level to say: oh, I have a pain, let's look for a solution.

How do I enable my organization to jump on this and harness this revolution beyond just looking for direct solutions? Does everyone try things out? Do I have to run a program? How do I approach this when I have a hundred developers in my org, what now?

Itamar Friedman: I have a few perhaps generic suggestions, but I actually wanna twist them towards AI tools, because I guess most managers already know the trivial ones. For example, define in your core: do you want to be a first adopter, an early adopter that wants to try things and be at the front?

Or do you want to wait for maturity? Do you care about ease of integration, or about how much of the pain it solves for you? Those are the trivial things. I just mentioned a few points, and I think they're worth sitting and thinking about, though I don't think more than 30 minutes or so, a discussion between a few people.

But twisting it with the specific AI angle: AI is a statistical creature. By the way, there's a lot of debate about whether machine learning is just a nice framework around statistics or two different things; I don't know if you know all these memes.

I have some opinions, but that's not the topic. So allow me to say AI is a statistical creature, and it doesn't necessarily give you consistent answers. It depends on the product, and different guardrails need to be developed by the product owner. Because of that, it could work differently between two different companies, two different code bases, or two different teams, for example. It doesn't mean there's gonna be a big variance, but there could be.

What I'm saying here is that one of the things you need to look at is how easy it is to check a tool out; you'd like to check a few and see how well they work for you. For example, I heard of one company trying 13 tools over one quarter and then choosing two. They were really happy about it. It was a lot of investment, but they said they wouldn't necessarily have picked the two they originally had in mind, and those two helped them in meaningful ways they couldn't have anticipated.

Maybe that's an extreme. What I'm experiencing is that enterprises are using CodiumAI because we offer an easy trial, and that's the atmosphere right now, I believe. I think LinearB, et cetera, is in that role right now too, I hope: easy to integrate, so you just try it out.

What I'm saying is that with AI, do consider how easy it is to try out, because it's not like a spec is written and it's going to be exactly like that. It's a lot about how it performs for you.

Yishai Beeri: So you're saying: be active about experimenting. Don't just say "I'm gonna find one tool"; experiment with tools.

There's gonna be variance. And know that everything is changing so fast; what you see now may not be what you have in six months. Can I even afford not to begin experimenting? Let's say I'm not an early adopter, or I have other priorities, and it's not a huge pain I'm solving. Is it a material risk not to begin experimenting, not to have people on my team start to evaluate and use these tools?

Am I going to be left behind, or at a severe disadvantage, if I wait for a year, see what bubbles up, see what makes it into the mainstream, and then adopt?

Itamar Friedman: We can again take the notion of two extreme cases. Let's say there will be, and I think some are already starting to be built, AI tools that help your team be, we're talking an extreme case, 10x more productive.

This means your competitors are going to develop 10x faster if they fully onboard and you did not. That's one extreme case. The other way around: if these AI tools are hard to try, hard to experiment with, and basically bullshit, then you waste your time, you defocus, and you're probably under a lot of pressure.

The truth is probably somewhere in the middle, and as time progresses it actually moves toward the first extreme; that's my opinion. That's why, so far, we talked about thinking about your pains, which are there anyway, and starting to experiment. There's a very good chance some products could help you, even very meaningfully.

Then, as you get used to these products, which need a slightly different mindset, you can keep progressing with more and more tools as time passes. That's how I see it. I wouldn't recommend skipping this, because it wouldn't take too much time to look at your pains and experiment with a few.

That's my point here, actually, what I've been building up to: there are tools that are really easy to try. So I don't think the effort is so big, and you shouldn't miss the opportunity. That's the angel's advocate part, right?

Yishai Beeri: So, moving to the downsides, the devil's advocate: why not start?

Itamar Friedman: Let's take one of the most dominant tools out there right now, GitHub Copilot, as an example. I've heard from quite a few people about it, and by the way, I personally use the tool and I love it.

I know I'm the CEO, but for two days I was actually programming; I don't do it too much, but I couldn't stop myself this time. We're gonna release something new soon, and I wanted to contribute. Right now our tool is an IDE extension; we're about to connect also to the CI/CD pipeline, et cetera.

So I love it. But I've heard from quite a few people that they pass through the traditional cycle: "oh my God, this is amazing," to "oh my God, this is actually ruining my development," to "okay, this is useful, but I need to use it carefully." And why do we have this cycle?

It's because at the beginning you see the magic of AI, and you even start adapting the way you develop towards being AI-empowered, which is good. But then you fall into traps: the AI suggested something that wasn't really fitting for your use case, and you're already so used to clicking Tab and continuing. You can say AI will get better and better, but it's more than that.

It's not just the core capabilities of the AI. It's even the UX/UI: the UX of code completion right now doesn't necessarily have the capability to give you code that 100% always works. We'll talk about it in a second. Then you learn to use it better, you go back to being the driver, and then you enjoy the tool.

By this example you can see that one issue I'm raising is the way you use AI: you need to work with your developers, and yourself, not to blindly trust it. This is obvious, but also don't blindly start using it; start after you've experimented.

That's what the team I told you about, the one that tried 13 tools and then moved to two, did: they talked about each tool, what's the best way to use it, and how to start generating best practices around it. So basically I'm repeating what I said before: it's a statistical product.

You need to be aware of that; it requires different best practices. For example, best practices for seniors and juniors might be different. It also might introduce new problems that maybe you didn't think about.

Maybe now you're gonna have much more code, because your developers are gonna create much more code, and then you also have much more maintenance. How are you gonna handle that? Either you bring in another AI, or you need to deal with it. So that's part of the pitfalls.

That's one of the reasons that at CodiumAI we first decided to focus on something where people know there's a big pain, where they're really missing resources and don't like doing it too much. We focused there because we saw it could be a standalone product that doesn't necessarily need to change too much of your best practices; it just allows you to do more of the best practices you already have.

Yishai Beeri: So let's dive into pitfalls. Let's say I'm proactive, I'm starting to experiment, maybe I've found some good tools to help with pains in my development org. I'm using AI to help me write code, or to verify code, or to add some tests, or to automate UI out of Figma, whatever.

There's a bunch of interesting applications. What should I be careful of? Where can I get bitten by the AI? Where's the danger? Beyond, I know, having to change the way we work and adjust, what are the things I should really be afraid of?

Itamar Friedman: The part where I'll repeat myself, and I'll do it briefly: if this AI tool is just nice to have, it's nice to have. But if it really boosts productivity, what does that mean? Again, like I mentioned, if I help automate the creation of code from Figma, or code in general, the developers are only focused or half focused, because they're really just accepting, taking a look if it's good enough, and continuing. They put less attention into it, almost certainly, and then maintenance is harder, debugging is harder, and things like that.

So that's one aspect of it. Now you need to debug code that you maybe didn't put enough time, enough human brain, into, and it's harder to find the place and understand it. Maybe you need another AI, but that exact issue is not our topic right now.

But I want to zoom out for a second, because almost all the examples we mentioned are around the implementation part. I wanna close a circle, going back to what we talked about in the beginning with pains: you have the setup of everything, you have the implementation, and then you have the deployment.

So I wanna give you an example of two risks, if I coin the terms correctly, in the setup and the deployment. Let's say there's an imaginary product that helps with writing specifications, product specifications, helps with writing a PRD.

Now the product people and the developers, let's say for a PRD that's almost technical, are working together and using that assistant. It has biases; that's how these products, these LLMs, are trained. So it basically biases you toward the trivial stuff, converges you toward other products that already exist, because it was trained on something from somewhere else.

So because of that statistical nature, you need a product that maybe doesn't drag you to the trivial solution. It depends on how creative you want to be, how much it matters for the mission.

Yishai Beeri: One risk is in the outputs of AI, at least in generative mode, and this could be around how I implement the code, but also how I spec it, or whatever domain I'm using it in.

They're naturally gonna go with common, well-worn solutions or specifications, some kind of lowest common denominator, because the model is trained on a lot of data and goes for what's common in that data. That's interesting.

Itamar Friedman: Yeah. It actually can help you to be creative, too.

We see it with ChatGPT and other tools, but I'll get to that in a second. It can be mitigated with a good product; I won't dive into the mitigation process now, it's just something to be aware of. Yeah, exactly.

Yishai Beeri: You know that this is a tendency of AI: these kinds of models go for the common data that dominates their training.

Itamar Friedman: The common denominator, be aware of that; that's how they're trained. I want to say that a good product around AI, a company providing a product, could try to mitigate it, but take a look and be aware of whether they do; if they don't, it could create problems. Now, going to the other side of the software development lifecycle, the deployment: same thing there.

Let's say you use a tool that actually helps you in deployment: you have an observability tool, but maybe you also have automated deployment. There are different risks if, statistically, one of these models is incorrect. If you have a problem in observability, an event is raised to a human to be reviewed, so a false positive or a false negative can be handled,

as long as overall precision and accuracy are good. But what happens if you deployed something wrongly? I would be careful: think about what the worst-case scenario could be. Sometimes the worst case might be worth how much it automates and helps you, but maybe you can also mitigate it with an additional wrapper of your own, one the tool or the AI could not think of.

Because it's a wrapper, it's some additional blocker or check that they could not imagine would be useful specifically for your use case. You can think about it yourself and implement it.
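[Editor's note: a minimal sketch, ours rather than Itamar's, of the wrapper idea: a guardrail of your own that sits between an AI-proposed action and the real rollout. The rules and thresholds are invented for illustration.]

    from dataclasses import dataclass

    @dataclass
    class DeployAction:
        """A deployment the AI tool proposes to execute."""
        service: str
        replicas: int
        is_rollback: bool

    def guardrail(action: DeployAction, error_rate: float) -> bool:
        """Apply house rules the AI was never trained on; True means allow."""
        if action.replicas > 50:  # cap the blast radius
            return False
        if error_rate > 0.05 and not action.is_rollback:
            return False  # during an incident, only rollbacks go out
        return True

    proposed = DeployAction(service="checkout", replicas=10, is_rollback=False)
    if guardrail(proposed, error_rate=0.01):
        print(f"deploying {proposed.service}")  # hand off to the real pipeline
    else:
        print("blocked: escalate to a human")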

Yishai Beeri: So what I'm hearing is: think about what happens if the AI is wrong, because the shape of being wrong is different between AI and humans.

Think about the outcome of being wrong, and then how you protect against that. Sometimes you're trusting AI in a place where the cost of being wrong is too high, and you need to put some protection in.

Itamar Friedman: Think about the worst case scenario. Yeah.

Yishai Beeri: Not that humans aren't wrong, but yeah.

We are well versed in how humans get things wrong; with AI, this is a new area. What about adversarial risks in AI, ways AI can be manipulated or leveraged to harm me in an intentional way?

Itamar Friedman: Okay, let's give an example for those listeners who aren't familiar with what adversarial means.

My point is gonna be that I think in software development it's relatively rare right now; you need to be aware of it, but think about whether it's possible in your case. Let's say you're building autonomous driving software, with cameras and everything.

Now the car is driving, let's say level four or whatever, autonomously. It was trained on all the data in the world that was available to the company, to the team training the models. What happens if somebody, some pedestrian, holds up a huge screen that is showing a traffic light or some sign or something like that?

You could actually mislead the car; maybe it even shows a right turn where there wasn't any right turn. What could happen? That's an adversarial case; most probably the model didn't see this case in training. By the way, I'm not saying the companies aren't aware of it; they're probably trying to train against it, being proactive and having an adversarial model or adversarial data within their data set to train the generative or analysis part of the AI.

Having said that, there is still the option to create adversarial events. And there are many more cases.

Yishai Beeri: Recently there's been talk about how I can put up malicious libraries on npm or PyPI,

and then put up data that will cause generative AI to recommend code using those libraries. It's basically like putting up a thousand Stack Overflow answers with bad code intended to harm the people who use it. The AI generates code, and it imports a library that looks legit but is actually malicious.

Itamar Friedman: Yeah, amazing example. I'm repeating just for clarity: let's say I want to create damage. I create some library, and somehow I bypass the filters of those training the AI models, and it gets injected into their training data.

Then that will be suggested automatically. So, two things. First, it's already happening with humans, right? To some extent, in the worst case even unintentionally, like Log4j or whatever; sorry, I forget all the exact examples.

So it is already happening, and I'm not sure it's so easy to make it worse. Having said that, you almost always want to have checks and balances. In almost everything you do, there is a reason that you code and you test.

You build a product, but you collect data to give you feedback on whether it works. So, my suggestion, and by the way, I want to give a more developer-oriented example: you probably use some cloud provider, but also a cloud observability tool that is not provided by the cloud provider.

If you think about it, it's unintuitive why CloudWatch or whatever from AWS is not better than a third party. The reason is that one vendor is focused on actually checking for problems, while for AWS maybe that's not the focus. Same thing here, I think.

If you want to use some tool that you think is really useful for you, but you're afraid of the consequences, consider also using the adversarial one. Sorry, it just fits CodiumAI: code generation and code integrity are two different tools.

So those are my suggestions. Lastly, again, I would ask: what's the worst-case scenario? For what you described, the worst case is about the same as with humans. So basically what I need to do is instill best practices so humans do not accept GenAI code without actually reviewing it, or something like that.

So for me, I think it's not such a big risk if you compare it to what exists today.
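[Editor's note: one concrete form of the checks and balances discussed here is vetting AI-suggested dependencies before they land. A minimal sketch, ours rather than Itamar's; the allowlist and package names, including the typosquatted one, are invented for illustration.]

    # Block AI-suggested dependencies your team has not explicitly vetted,
    # instead of trusting generated imports blindly.
    APPROVED_PACKAGES = {"requests", "numpy", "pydantic"}  # your vetted list

    def unvetted(suggested: list[str]) -> list[str]:
        """Return suggested packages that are NOT on the allowlist."""
        return [p for p in suggested if p.split("==")[0] not in APPROVED_PACKAGES]

    # Imagine these came out of AI-generated code; one is typosquatted.
    ai_suggested = ["requests==2.31.0", "reqeusts==1.0.0"]
    blocked = unvetted(ai_suggested)
    if blocked:
        print(f"needs human review: {blocked}")  # ['reqeusts==1.0.0']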

Yishai Beeri: Maybe to close out the pitfalls and dangers, not just of using generative AI: again, I'm thinking about an engineering manager who is starting to foray into this area.

There's so much hype today, hyper-hype, around even just starting to experiment, or looking for tools for my pains, or moving from zero to one in how we use AI in our development organization. With so much noise happening today, how do I reduce the risk of wasting time just because of all the noise?

How do I cut through the amazing hype that's happening right now and actually get to things that are real, that actually give value? Because everyone and their cousin are building AI tools right now.

Itamar Friedman: Yeah. If we were talking half a year ago, I would say that if you wanted to use these tools, you'd probably be a very early adopter.

But right now things are moving so fast, and adoption is so fast, that a new tool could have 100,000 developers joining, trying it, and actually using it within a month. And you can see the reactions of those developers: dev tools usually have stars, on GitHub or on the product itself, comments, open issues, closed issues. You can look at all these things. Even if a tool is only half a year or three months old, you'll probably see a lot of companies and developers already trying it. So, to add to what I said: you almost never need to be the first one; very probably someone in your domain, or even a friend, has already tried that product.

Having said that, I'm still holding my position: that will help you choose among many, but you still need to experiment to check that it works for you. Even in the code generation area, it's our sister market, we're in code integrity, you have 10 tools there.

It's not only Copilot; there's CodeWhisperer by Amazon, and others. Each could work differently.

Yishai Beeri: If I wanna save some time and not go through all of those, again, there's an amazing array of solutions out there, I should gravitate towards the ones that are already showing some traction, usage, reviews.

That's a way to look at fewer candidates, ones a little more established, and again, this could mean three months more established. Either the large companies, anything OpenAI or GitHub does, or, in every space, there are gonna be one or two leaders; look for the traction and follow that.

Itamar Friedman: Yeah, unless you define yourself as a really early adopter and it's fun or important for you. So if I summarize: hey, there are so many AI tools. Some of them are better here and there, some of them are different, some of them have better buzz, et cetera.

Second, for all your pains, there's a very good chance that if you have a serious pain, there is an AI tool for it. Focus on two or three pains at most, if not just one. Go do the research; you'll probably find a few tools. There are some fields without too many, like code integrity; I think maybe there aren't many there.

Checking code logic is a really hard task. But in most areas there are quite a few: in code generation you'll find 10, and in documentation, PR review, et cetera. Think about your biggest pain and research it. Check the comments, check issues, check usage, and see whether the commenters are companies in your domain using it.

Choose two or three and experiment, because results could be different for you. I don't think all of this should take more than one week, if you decide that ease of use is one of your decision criteria. And one of the great things about AI tools is that many of them are really easy to use, like ChatGPT.

Yishai Beeri: I know a lot of dev managers are asking this, they're asking us, and I'm wondering what your thoughts are: my developers are starting to use Copilot or some other code generation tool, or tools for reviews or any of the domains. How do I know it's helping, beyond developers saying, "oh, this is great"?

Let's take a simple example: we've deployed Copilot in some of the org. How do I measure the impact? Is it really helping? Is it imaginary, or is it a small addition to our productivity, or a large one?

Itamar Friedman: Amazing, it's a great point. For different AI tools and products it won't be the same measure; different ones will enable you to have more or less concrete metrics. For example, with Copilot you can actually see what they're publishing, their papers, et cetera.

They're actually mostly focusing on developer happiness. They claim, and I'm not saying it's incorrect, that the research shows this is the most important thing for eventually boosting productivity and everything. There's a reason: it's hard for them to measure something else.

But if, for example, you look at CodiumAI, first you can see how much you increase your code coverage, or, more importantly, a metric that we suggest called behavior coverage, because we think code coverage can be a proxy and, again, sometimes a vanity metric.

You can really measure the behavior coverage. You can even measure how many bugs were deployed every cycle, and whether that goes down. So it really depends on the tool, and I really do suggest checking what your metric is. In cases where it's actually developer happiness, you'll probably add the tool if your developers are asking for it, because you want to make them happier; but for others, maybe you're willing to pay even more because there are more concrete metrics.
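[Editor's note: a minimal sketch, ours rather than from the episode, of the "bugs deployed per cycle" measurement Itamar mentions. The numbers are invented; in practice you would pull them from your issue tracker, and "behavior coverage" itself is CodiumAI's suggested metric, not computed here.]

    # Toy before/after comparison: escaped bugs per deployment cycle.
    before = [9, 7, 8, 10, 9]  # bugs found in production per cycle, pre-tool
    after = [6, 5, 4, 5, 3]    # the same metric after adopting the AI tool

    def mean(xs):
        return sum(xs) / len(xs)

    change = (mean(after) - mean(before)) / mean(before)
    print(f"escaped bugs per cycle: {mean(before):.1f} -> {mean(after):.1f} ({change:+.0%})")
    # escaped bugs per cycle: 8.6 -> 4.6 (-47%)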

Yishai Beeri: Yeah. Developer happiness and experience are very important, but they're lagging indicators, and also a little removed from the actual cause: what's the reason they're happy? Is it because of Copilot, or something else? So you're saying there isn't a natural thing to measure that directly sees the impact.

At least not something that's obvious.

Itamar Friedman: I'm saying it depends on the tool; it could be really different. If it's a tool helping you with code integrity, you can have concrete measures. If it's a tool that helps you write your Jira tickets, it's harder.

But maybe you could even have some label people can mark for you on how complete it is. And if it's Copilot, maybe they've done the work and tell you the metric is happiness. So it really depends.

Yishai Beeri: For closing, maybe we go back to the 10-year question and dig in a bit. When we were talking about whether we'll be developing software, whether we'll have developers, in 10 years, you suggested we look at the edges and said: okay, there's probably no future with no devs, and no future with no AI in dev. Let's take that apart. If you look at the range of key activities and skills developers have today, what is obviously going to be replaced or heavily impacted by AI? Which areas do you think are the easiest for AI to solve and remove most of the human labor from?

Itamar Friedman: To be frank, I think it's going to raise everyone up, not necessarily replace. I'm probably wrong here; probably some things that are easier will be replaced. But the general notion: tech leads will be able to influence juniors more, because they'll have AI tools where they can put their guidelines and suggestions, and the juniors will be impacted by that.

That's part of what we're working on: impacting how the junior works. There's also a danger of bias there, but we have to finish very soon, so I won't dig into it. Again, I think it'll lift everyone up, to be frank. I know I'm probably missing something; some jobs are gonna be replaced or dismissed, or we'll need fewer of them.

But I think more people will be able to code. Juniors will be able to do more, seniors will be able to influence more and deal with much more complicated tasks, DevOps people will be able to orchestrate much more and solve problems much faster, et cetera. And by the way, I do think developers will be able to do much more in DevOps than they could before, with AI tools.

So you could say that supposedly DevOps people will disappear; rather, I think maybe you'll need fewer, but they'll also be able to deal with much more complicated things. Again, I'm talking about the five-year horizon or so; 15 years is a totally different game.

Yishai Beeri: What you're saying is that it's not that skills or professions are going away. It's more that everyone could level up, and the power or the outputs that even a junior could wield, if they're well versed in using those AIs and prompts and whatever, the entry point, is higher.

What are the things you think are impossible or very hard for AI to tackle in the dev world, again, five years from now?

Itamar Friedman: Human-to-human communication. Of course, it'll actually help us in human-to-human communication; there will be tools that help product managers talk with engineers, like the Figma-to-code you mentioned. A product manager can then, maybe, push code to production, et cetera.

But still, converting a product concept into technology, being at that junction: again, AI will help you build architecture better and give you more architecture ideas, but in general, for product management, there is a lot further to go on the reasoning, because it also involves a lot of intercommunication between people.

The second thing is wherever the programming task, or DevOps or whatever, is very unique, very unique to a specific company. You need the AI to reach really high levels to deal with that. So it's either the wrapping of everything, the imagining, the leading; and it doesn't mean you won't have additional agents doing a lot of tasks for you, but you need to manage them.

And the really complicated stuff, that's where I think it would be hard for AI to reach in five years.

Yishai Beeri: Amazing. Itamar, thanks so much for joining us. This was a great conversation; I hope you enjoyed it as much as I did.

Itamar Friedman: Thank you so much for hosting me. I really had fun. Thank you Yishai and the listeners.

Yishai Beeri: So yeah everyone, thanks for listening. Please take a minute to rate and review our podcast.

If you haven't done that yet, please do; it really means a lot to us. And we'll see you next week.

Want to cut code-review time by up to 40%? Add estimated review time to pull requests automatically!

gitStream is the free dev tool from LinearB that eliminates the No. 1 bottleneck in your team’s workflow: pull requests and code reviews. After reviewing the work of 2,000 dev teams, LinearB’s engineers and data scientists found that pickup times and code review were lasting 4 to 5 days longer than they should be. 

The good news is that they found these delays could be eliminated largely by adding estimated review time to pull requests!

Learn more about how gitStream is making coding better HERE.

Set up gitStream on your GitHub repo today.