On this week’s episode of Dev Interrupted, co-host Conor Bronsdon and Ben Lloyd Pearson, LinearB’s Director of Developer Relations, detail the evolution of Continuous Merge and the tool behind it, gitStream. Joining the conversation is Nik LeBlanc, VP of Engineering at DevCycle. 

Nik shares the ways his team is using gitStream to streamline code reviews and offers practical advice for anyone looking to implement the tool on their own team. He also explores the somewhat controversial practice of splitting up and reshuffling engineering teams, a strategy that DevCycle has used to great effect. Nik finds that this practice helps balance teams, manage diverse knowledge bases, and de-risk the organization.

Conor and Ben wrap up the conversation by casting an eye on the future, focusing on the potential and direction of gitStream.

Episode Highlights:

  • (2:50) Splitting and reorganizing dev teams
  • (9:20) How DevCycle is leveraging tooling to enable positive change
  • (14:50) Why teams are paying "a lot more attention" to PRs
  • (17:00) Nik's thoughts on DORA and shipping
  • (21:20) Threads & socials to follow
  • (23:35) Ben & Conor break down their convo with Nik
  • (27:30) Applying estimated time to review to PRs
  • (31:30) gitStream integrations
  • (32:30) The future of gitStream

Episode Transcript:

(Disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

Conor Bronsdon: Hey everyone. Welcome to Dev Interrupted. This is your co-host Conor Bronsdon. And joining me today I have two incredible folks. Helping me co-host is Ben Lloyd Pearson. Ben, welcome in. You may have heard Ben on previous episodes, including our recent Labs episode. He is LinearB’s Director of Developer Relations, and together we're talking with Nik LeBlanc, VP of Engineering at DevCycle. Nik, welcome to the show. Thank you very much. And I heard a rumor this might be your first podcast, Nik, is that correct?

Nik LeBlanc: Yeah, I'm terrified.

Conor Bronsdon: No, you're gonna be great. I hope everyone listening will root for Nik a bit; some positive energy will probably help him, because today's episode is gonna be another great Labs episode, and in Labs we explore the data, research, and insights that are most impactful to engineering organizations.

We won't be doing a data deep dive this week. If you want one of those, check out our recent episode on engineering benchmarks. But we will be exploring engineering team composition and knowledge distribution, specifically how they relate to DORA metrics, a concept that LinearB is calling Continuous Merge, and the tool that we built to help push this forward, called gitStream. Ben is our resident expert on Continuous Merge best practices and gitStream, and Nik has adopted some of that tooling with his team at DevCycle. So we're gonna ask him about his experience and how he thinks about some of those concepts as a VP of engineering. But first we're gonna start off by talking about team composition and that knowledge distribution piece we mentioned.

So Nik, let's start with you. You implemented a practice which you admit is a bit controversial, and I think it speaks to your approach: as I understand it, you're breaking up engineering teams and recombining them to help distribute knowledge and rebalance teams. Can you elaborate on that?

Nik LeBlanc: For the history of DevCycle, we're also a company called Taplytics, which focuses on A/B testing. And before DevCycle came around a couple years ago, we ran into issues with, like, how do we maintain our SDKs?

How do we make sure that if there are bugs coming in with our SDKs, and we had many, how do we stay on top of those? And the problem that we had is we had basically one Android developer, who is now a product person; he no longer did Android. We had one iOS developer, and then we had our CTO, and our CTO happened to cut his teeth on, like, Objective-C.

So that was our team for dealing with bugs on our mobile SDKs. So there's a big, like, bus factor problem there, because basically if we were to lose our iOS developer or Android developer, we'd be screwed. We wanted to get away from that problem with DevCycle. So we wanted everyone to be contributing to the SDKs.

So that speaks to, like, the cross-functional composition of the teams and how we wanted to make sure that any team could take on any type of work at any time. As for the idea of cutting them in half: traditionally, teams have been led by what's known as a team lead. And that's, like, my background.

So I started as a team lead at another company and then came here, was a senior, then a team lead, then director, then VP. Team leads are hard to find. And there's not a lot of people, in my experience, that want that responsibility, because ultimately you're responsible for the code, you're responsible for the happiness of your team, you're responsible for everything.

But you need some kind of presence that's pushing the team forward and making sure that everything's on track. So instead of trying to find new team leads, what we did is we just invested in the developers that we already had, gave them an opportunity to try the position, and then split the responsibilities of team lead into two positions.

So we have basically one project lead and one tech lead for each team. What that means is the project lead is responsible for making sure that the project stays on track, that Jira is appropriately updated and reflects the current state, and that people on the team are happy.

The technical lead is responsible for making sure that the code coming out of that team meets a certain standard. And that worked fairly well and continues to work fairly well. But the fact remains that these are all people that are new to the position. So they're learning how to operate these teams as they go.

They're learning their own skills, their own strengths and weaknesses. And each team is developing their own process for how they operate. We've been running like that for about a year now, and each team has learned a lot. So one team is particularly good at area A.

Another team is particularly good at area B, and relationships were forming within these teams that didn't have the opportunity to form across teams as easily. So we decided to just break it up. We cut each team in half and then spliced those halves together to form new teams. The point of that was basically to make sure that the leads had an opportunity to work with different leads, and that they had an opportunity to blend what they had learned about what works best for their teams into kind of a cohesive whole for how any team can work in this department at this company.

Now, the controversy of this is that it's pretty bad to break up a team when the team is operating well. And obviously you're never going to find an ideal time when a team's work is simply over. So it's hard to find that ideal cutoff point where you can say, okay, team A is done with their work, team B is done with their work.

We're gonna cut them in half. We're gonna make new teams now. So there's always gonna be an interruption, there's always gonna be chaos. And there was, but it wasn't that bad. We basically pulled the trigger on a Tuesday, and by Friday everyone was operating as though nothing had happened, which was fantastic to see.

And I give so much credit to the members of the team for their willingness to do this and their enthusiasm for it. It just showed how well the members of the department operate together and what they're willing to try to ensure that we're operating effectively and that we're learning as we go.

Ben Lloyd Pearson: The amount of experimentation that you're doing is really fascinating, and it makes me think about this meme out there that everyone's becoming a full stack developer these days, and in some senses you're almost taking that to an extreme.

But it seems to be working in many ways. As you've experimented with this over time, what are the results that you've seen? How have you adjusted based on them? I really love the example of deciding you need to actually break up your teams.

What have you experimented with on this and how have those results manifested?

Nik LeBlanc: Yeah. One thing that we've certainly noticed is that junior developers that might be leaning more front end or more back end are gaining a lot more confidence, a lot more quickly, outside of the stack that they're comfortable with.

There's a couple of examples within our company of junior members that feel comfortable in the front end but very uncomfortable outside of it, and that comfort level has increased dramatically. Through that, they have more confidence in themselves. They're being exposed to more technology, essentially. There is a counterpoint that it's hard to develop particular expertise in one part of the stack or the other, so that is certainly a compromise that this structure accepts. But especially for junior developers, I think that breadth is better than a particular focus at this point.

Ben Lloyd Pearson: That's a great example. So I'm curious then, have you found any ways to address the lack of expertise? Are you starting to build that internally, or maybe hire specifically for it?

Conor Bronsdon: Let me maybe expand on Ben's question here. How are you leveraging tooling to enable these changes you're making in team structure and composition?

Nik LeBlanc: Yeah. Okay. So that's a good question. We've done some crazy stuff with Jira, and I know everyone loves to hate Jira. I'm a fan of it.

I've learned to love it.

Basically each team has a board they call a focus board, and at any given moment a team is working on one particular epic, and that epic is a feature. So epics equal iterations of features around here. On their focus board they're able to see all of the tickets of the epic in question, or just the epic in question, and they're also able to see any particular bugs unrelated to epics that have been assigned to their teams. We also have guilds, which is another thing that's been historically difficult to make work well.

Guilds can either be an opportunity for discussion about a particular part of the platform or technology, or they can be an opportunity for like-minded people to focus on a particular part of the stack and build up backlogs that are less product focused and more, let's say, technical debt focused or investigation focused.

And they can try to slot that work into their team's roadmap. So basically we have teams established in Jira, and any product or project work goes to a team through that. Then we have labels that get associated with guild tickets, and those automatically appear on the teams' boards as well.

So we've gone through a lot of experimentation with the boards and how to best represent the work at hand for a team, and how to prioritize which guild tickets the team wants to pick up and which it doesn't.

Conor Bronsdon: And my understanding is you're leveraging continuous merge tooling to enable that more frequent deployment. Can you share a bit about how that is impacting this team structure you've developed?

Nik LeBlanc: Yeah. So one thing that we haven't done yet with Continuous Merge or with gitStream, but I'd love to, is figure out how to assign PRs to people based on their teams. And I know there's a lot of examples in your docs about how to assign them based on familiarity with the code that's changing, or lack of familiarity with the code that's changing. So that's something that we're going to lean into heavily in the very near future.

But what we use it for now is essentially to estimate the amount of time that it would take to review a PR and drop those labels on the PRs, and to merge any Dependabot PRs automatically, or any PRs that just deal with tests or docs. I think that's a prime example in your docs, and it's worked well for us.

A lot of our DORA lead time metrics were being affected by Dependabot PRs just sitting around and not being looked at. So it forced us to act on them: what do we wanna do with these? Do we actually wanna merge them, or do we not? Now that question has been taken away from us, and a PR just goes in if it passes tests, which is fantastic.

That's pretty much the extent of our gitStream usage at this point. The reason we adopted it is because of DORA metrics: we saw blockages in our pipeline and we wanted to smooth them out. And now I want to use it to give people the opportunity to explore areas of code that they otherwise haven't yet.
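
For anyone curious what the automations Nik describes roughly look like, here is a minimal sketch of a gitStream configuration file in the spirit of the publicly documented examples he references. This is not DevCycle's actual file: the automation names are made up for illustration, and the specific filters, actions, and context variables (estimatedReviewTime, add-label@v1, approve@v1, merge@v1, branch.author) reflect our reading of gitStream's docs and may differ from the current syntax.

```yaml
# .cm/gitstream.cm -- illustrative sketch, not DevCycle's actual configuration
manifest:
  version: 1.0

automations:
  # Label every PR with an estimated review time so reviewers can pick
  # work that fits the time they have between tickets or meetings.
  estimated_time_to_review:
    if:
      - true
    run:
      - action: add-label@v1
        args:
          label: "{{ calc.etr }} min review"

  # Approve and merge Dependabot PRs automatically so they stop sitting
  # in the queue and dragging out DORA lead time. Per our reading of the
  # docs, gitStream merges only once the repo's required checks pass.
  merge_dependabot:
    if:
      - {{ branch.author | includes(term="dependabot") }}
    run:
      - action: approve@v1
      - action: merge@v1

# Reusable expression: gitStream's built-in estimated review time for the branch.
calc:
  etr: {{ branch | estimatedReviewTime }}
```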

Ben Lloyd Pearson: Yeah, that's actually something I've heard from quite a few people that have checked out gitStream: they don't really wanna bias their processes too much toward relying on experts, but they also don't wanna bias them too much toward pushing random reviews onto people in an attempt to share knowledge. So the fact that we're able to give you these knobs and switches that let you fine-tune it for your own process gives you the best of both worlds, right? You leverage your experts when you need them, but then share knowledge in all other situations.

So yeah, I really love how you mentioned that DORA led you down that path. Beyond pickup time and review time, is there anything else that has helped you make that connection between the value of tracking DORA metrics and actually implementing these processes?

Nik LeBlanc: One thing that I've actually gotten a lot out of DORA is basically something to celebrate with the team: hey, check it out, we're doing great. It's often hard to pull yourself back and see the big picture and just appreciate your accomplishments. So we're able to review them daily and say, yeah, we deployed like 10 times a day this week.

That's really impressive compared to other companies. So I just like to make sure that the team is aware of that, appreciates what it means, and understands how our process differs from other companies and allows us to ship this quickly. And obviously it gives us insight into anything that's affecting our ability to do that.

Ben Lloyd Pearson: Yeah. And I think you also get, if something isn't going well, at least some insight into why. So you're not just wondering, why does it feel like we're not getting code shipped as often, or something like that. You actually have the real insight you need to understand it.

So as these improvements have come, do you think you've seen your developers become more interested in being proactive about trying to find those inefficiencies and improve them?

Nik LeBlanc: Yeah, definitely. There's a lot of attention being paid to, basically, just PRs and how long they sit. It's become an important part of the teams' working agreements to discuss how long they're willing to let a PR sit before it gets reviewed. And they push themselves, and they find new ways to be made aware of PRs that do need action.

So generally we would use the GitHub Slack extension. And there's always the process of, if you're between tickets, go check out some PRs. But as for what the teams are doing now: for one, some of us are using Graphite. Graphite has a really good GitHub integration for Slack that I personally find to be better than GitHub's.

So it's really good at informing you of PRs that require your attention and review cycles on those PRs. But what teams are doing now to make sure that PRs don't get missed is, within their team Slack channels, whenever a PR is ready, they will post it to the channel and say, this PR is ready.

And then if someone is gonna take a look at it, they drop the eyes emoji on it, and now everyone knows that this is being looked at. And if it's approved, they'll drop a green check mark on it. There are no automations set up that this hooks into at all; it's really just a visual indicator that the team can use within their channel.

And this is one of the processes that kind of bled from one team to another with the crazy team stuff that we did, when we split the teams and made new teams out of them. 'Cause one team was using this really well and another team wasn't, and now they both are. It's just really helping make sure that everyone's aware of the things that they're able to do to help the team.

Conor Bronsdon: I'm curious to ask a more conceptual question about where you see these kinds of tools, whether it's continuous merge tooling or stuff more focused on incident response, moving in the future. Do you see these new concepts reshaping how dev teams are working? Or where do you see opportunities to grow out automation tooling for workflows in the development process?

Nik LeBlanc: One thing that we're thinking about lately is, DORA basically measures how quickly you can ship and the level of quality: is it buggy or not? Are you gonna break your systems with your code? Obviously we're a feature flag company, so we lean on feature flags quite a bit within our own development process so that we can test things out safely and resolve any issues very quickly by disabling a flag, for example.

Also our deployment process is fast, so if anything does go wrong, we can fix it quickly and deploy it quickly. But what we're focused on lately is: it's great if you can ship a lot of code, and it's great if you're not shipping bugs, but there's no real signal there about the value to the user, right?

So you could be shipping high-quality code at a very high pace that users just don't care about, right? And so you look like an elite engineering team, but you're not picking up users, you're not giving them a great experience. So that's something that we're trying to heavily focus on now, which is basically considering developer touchpoints.

So for any particular feature that we develop, you can imagine that it'll expose an interface to, basically, the API, the dashboard, and the CLI, for example. We consider those to be the developer touchpoints of features. So the features provide value, and then the experience of using the touchpoints also provides value.

So we don't want a bad CLI experience for people, we don't want a bad API experience for people, and we don't want a bad dashboard experience for people. So how do we measure that? How do we make sure that we can still ship quickly and still experiment, but focus on ensuring a good developer experience?

The idea is, we want people to be going on Twitter or going on Threads and saying, holy cow, this CLI is great, or holy cow, this platform's amazing. So just unprompted celebrations, basically, of what we're doing. We see that as a metric that we're looking for.

We're also focusing on, kinda, more product-leading metrics, like how long does it take people to activate? And "activate" is just a metric that we're defining to mean: sign up for an account, get an SDK installed, serve a feature. How long does that take?

How many people actually do it? How short can we make the time between signup and activation? So these are all the metrics that we're trying to consider now to determine how good a job we're doing at actually serving our users and delivering value to them.

Ben Lloyd Pearson: Yeah. I'm wondering if maybe you can dive just a bit into the tactical side of it. Do you think this is something that feature flag type tooling should solve for you? Or is it a gap in the tooling ecosystem out there that is preventing you from tracking that today?

Or, have you just not found the right tool yet? What do you think it is?

Nik LeBlanc: So we know the tools and we're setting up the tools. We're using Mixpanel, for example, and we're tracking particular points in the user funnels. It's just our focus on that that's changing. If we truly believe that the CLI needs to be a wonderful experience and the API needs to be a wonderful experience, then we need to make sure that someone's accountable for that and that someone is actually overseeing what it means for that to be a great experience.

So who are the leaders in that area that we can model ourselves after? How do we compare to them? We've had a CLI for a while, and it wasn't until we started measuring the usage of it that we realized no one was using it, even internally. So we put some stats on that and started tracking them, and then we made some changes, and now the uptake has increased dramatically.

So basically we're just trying to pay attention to the usage of these things from not just a product perspective, but engineering too. They're determining the metrics that they wanna be pursuing on their teams, establishing the tracking, setting up their own dashboards, and going from there.

Conor Bronsdon: Nik, I really appreciate you diving into the unique state of your team, how you're leveraging Continuous Merge and gitStream to redefine and improve your DORA metrics, and the tooling approach you're taking. I think it's been a really interesting conversation, diving into some of these pieces and how it's all coming together.

I'd love to let our listeners know where they can learn more about you and about DevCycle. Where can they stay in touch?

Nik LeBlanc: Yeah, DevCycle.com. We're on all of the social media stuff, so you can find us there; just search for DevCycle. I don't have a lot of interesting stuff online, but you can find me at, like, Nicholas LeBla on Twitter.

Nothing too exciting and very clever about my handle.

Conor Bronsdon: Threads too, or just Twitter for now? Threads as well. Yeah, I signed up.

Nik LeBlanc: Oh, there we go. I think within 30 minutes or something. 

Conor Bronsdon: Wow, okay. You gotta have a low number. Nice. All right. 

Nik LeBlanc: Yeah, and I'm a big Chicago Bears fan, and I was doing like a Bears watch, just checking to see when they would set up their account. And by the next morning they still hadn't, so I wondered if there was like an NFL embargo. But so many other teams had already signed up, and the Bears took a long time. It's embarrassing.

Conor Bronsdon: Em-bear-assing, I think. Yeah. No, we'll cut that. Cut that. Anyways, you can learn more about what DevCycle is doing with Continuous Merge on their blog, which we'll link in the show notes. And thank you again for jumping on, Nik.

Nik LeBlanc: Hey, thank you for having me. This was a lot of fun.

Conor Bronsdon: Ben, I'd love to dive a bit deeper with you on some of the concepts that we heard Nik talk about in our conversation. It was really interesting to hear about how he's using gitStream. He clearly saw he had a problem, and in leveraging DORA metrics to measure it, he realized that gitStream's programmable workflows and continuous merge capabilities could help his team improve and solve his problem.

What did you think about how he's leveraging this tooling?

Ben Lloyd Pearson: Yeah, I really love how he described the connection between evaluating DORA metrics and actually taking action on them with tools like gitStream. 'Cause that was really what led up to us creating the tool in the first place: once we show you the numbers, we want you to have what you need to actually improve those numbers as well.

The fact that their team is able to just go into the dashboard and get all the metrics they need about how things went this week or last month, 'cause it sounds like he doesn't really work with set sprints, so they're doing more of a rolling-window look at their performance. And then they're translating what they see, or what they surface from their DORA tools, into things that they can actually change about how they write code and review code. So, talking about building more expertise across his company, or unblocking Dependabot reviews because they just sit too long, when there's really no reason, or very rarely a reason, for a Dependabot PR to just sit there.

The fact that they were able to go from understanding the metrics to actually taking action on them is something that really validates everything we're trying to do here at LinearB. And I also really appreciate how he's taken a more holistic approach to it.

It comes down to how they structure their teams, how they dish out work to each individual, all the way up to how they view the entire process, from the initial writing of code and submitting PRs to responding to incidents and making sure that they aren't introducing too many breaking changes.

Yeah, just overall it's really great to hear firsthand from somebody who's actually able to be more effective at their job and communicate with their leadership about how their team is performing because of tools like what we're building.

Conor Bronsdon: I completely agree. It was really fascinating to hear from him. It definitely resonated with me, and I hear this often from other engineering leaders: that our approach is the right one. Okay, benchmark, figure out where your team is, get your metrics; use a free metrics tool like LinearB's free offering or something else to get those metrics compared to industry-standard benchmarks like the DORA research or our Benchmarks report.

Build upon that and then say, okay, how can we add in automation? How can we improve workflows and then drive that improvement? And so this is the idea of a holistic software delivery management approach that says, all right, we're gonna understand the problem, and now we're gonna apply both tooling and process improvements to actually solve the problem.

That's clearly been successful at orgs like DevCycle and elsewhere, FloSports, other folks we've had on. And it makes me excited, because it does feel like we're beginning to have a real playbook approach where we can say, okay, go check out our Benchmarks report, free data for the industry, to help define how successful you are.

Look at the DORA report that we're partnering with Google on and really define how successful your team is and where you can be. Start to understand those metrics, and then start to apply some of the free automation tooling, these merge guides that we've started to develop.

And it's wonderful to hear that impact being made, because this is what gets me excited about coming to work every day, right? We're helping dev teams solve more problems by solving their internal problems.

Ben Lloyd Pearson: Yeah. And we built gitStream to solve problems like providing more context when you're going into the PR review process.

He explicitly mentioned that they're applying estimated time to review to all of their PRs now. So, just like he said, their developers are sometimes in between tickets, maybe they're in between meetings and they've got five minutes, maybe they have 10 or 15 minutes.

They can immediately know which PRs in the queue are available for them, which ones they can actually tackle in the time that they have. But then you also mentioned unblocking reviews, and that's been a key benefit of gitStream. A lot of companies implement these one-size-fits-all review policies where, you know, everything requires one or two reviews from somebody, but there's a certain number of PRs within your organization that almost certainly don't need that level of scrutiny.

And unblocking those alone can make big improvements on things like your pickup time and your review time. Beyond that, there's getting reviews to the people they need to be with, whether you're trying to distribute that burden, much like Nik is trying to accomplish, or you have an organization that is much more dependent on a smaller number of experts. Maybe it even depends on the project: sometimes you have certain projects where you really have to depend on expert teams versus other ones that have more generalist, full stack developers.

Regardless of where an organization is, we built gitStream in a way that it's gonna solve all of those problems for them.
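
As a concrete illustration of the "unblock safe changes" pattern Ben describes here, below is another hedged gitStream sketch. The rule name is hypothetical, and the filters (allDocs, allTests, isFormattingChange) are taken from gitStream's public documentation as we understand it, so check the docs before copying this into a real repo.

```yaml
# Illustrative sketch of approving low-risk "safe changes" automatically.
manifest:
  version: 1.0

automations:
  # PRs that only touch docs, tests, or formatting rarely need the same
  # scrutiny as logic changes, so label them and add an approval to
  # satisfy a one-size-fits-all required-review policy.
  approve_safe_changes:
    if:
      - {{ is.docs or is.tests or is.formatting }}
    run:
      - action: add-label@v1
        args:
          label: safe-changes
      - action: approve@v1

# Reusable expressions used in the condition above.
is:
  docs: {{ files | allDocs }}
  tests: {{ files | allTests }}
  formatting: {{ source.diff.files | isFormattingChange }}
```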

Conor Bronsdon: Definitely. And it's exciting to hear organizations like DevCycle that have taken on this extension of CI/CD and said, okay, we're gonna use a tool like gitStream, and we're now gonna implement CM into our process to speed up our actual delivery of code, extend what's happening with CI/CD, automate more workflows, and apply that kind of machine-learning-type automation that you mentioned around estimated time to review and these other things that can be hugely impactful. Because it really drives back to what's showing up in our research.

And if you listen to our Labs episode from earlier in season three about compound efficiencies, we see this massive efficiency jump, not just when teams benchmark and start understanding their metrics, but there's a second jump that happens once they start applying workflow automation, particularly in that PR process.

And so it's super exciting to hear people starting to use these free tools like gitStream and validating some of these key use cases around labeling PRs with context, sharing knowledge and building expertise, aligning reviewers, unblocking safe changes, and of course things like estimated time to review, et cetera.

And it seems like right now, one of the best ways we can help folks is with the kind of work that's coming out of your team and our internal product orgs.

I've really enjoyed reading some of the continuous merge guides and documentation coming out of there. Maybe let's close by saying: what's the best place for people to get started if they do want to start diving deeper?

Ben Lloyd Pearson: Yeah. So if you're not ready to install something today, we've got, like you mentioned, a guide on implementing continuous merge for your organization.

It's a really great resource to learn how to evaluate where you are today and the sort of things you need to implement to adopt that mindset. But if you are ready to install gitStream and start taking advantage of this, it's a very easy process.

It takes about two minutes to get it set up on your repo, and you can start building automations in just a couple of minutes and customize them however you need. If you head over to gitstream.cm, we have a really great onboarding guide that will help you get up to speed super quickly, and you can have your estimated time to review or your Dependabot PRs unblocked in just a few moments. Check it out.

Conor Bronsdon: And we've got a few other integrations set up already as well, correct?

Ben Lloyd Pearson: Yeah, so one of our big focuses right now is integrating with more and more tools within the developer ecosystem, primarily around your CI system.

We recently released a SonarCloud integration, and we're integrating with some docs platforms like Swimm, but we're also looking for plenty of other places where we can optimize your CI processes or anything that loops into your PR review process.

Stay tuned, 'cause there's gonna be quite a few more ways to extend and customize gitStream in the near future.

Conor Bronsdon: And if you're an engineering team or leader that is working on docs workflows or anything else and you're interested in integrating with gitStream, reach out; we'd love to hear from you.

I think Ben would be really excited to spend some time with you and see if there's an opportunity for us to collaborate. I know we're really stoked about the impact that we're already seeing with engineering teams who wanna keep doing more. I'm also curious if this conversation with Nik gave you ideas about other things you wanna see or build in the future for gitStream.

Ben Lloyd Pearson: Yeah. Feature flags definitely seem like something that's becoming very common practice in the software world, and it makes me wonder if there are ways that we can enable that a little better. It's still early days, so we don't really have any direct integrations with stuff like that yet.

But it's tools like that we're looking at, trying to figure out if there are workflows within those spaces that just aren't optimized in the way that they should be, and if there's a need for them to be. Because we really are designing gitStream to be as flexible as possible and as extensible as possible.

We don't wanna leave anyone out just because they've chosen a certain tool.

Conor Bronsdon: Yeah. And we're starting to hit that scale where we now have several thousand daily active users and we're starting to really see these major improvements at some orgs. Is there a key thing that you would want engineering leaders to understand about continuous merge if this is their first time hearing about it?

Ben Lloyd Pearson: Yeah, one of the features I love the most that we've built recently is in the front end for gitStream. For some automations, we're now able to estimate how much time your organization has saved by implementing that automation. And it can be just a very simple automation, but seeing, "I implemented this automation, and in the last week or month I've saved X number of hours of development time," it really doesn't get much better than that in terms of being able to see the direct value of what you can get from something like gitStream.

Conor Bronsdon: Yeah. That control plane with the direct ROI is really awesome to see.

Ben, any other closing thoughts from our conversation with Nik that you wanna share or ideas that are coming to mind?

Ben Lloyd Pearson: Nik had a unique perspective about building teams that I think was really fascinating.

And to me, it's just great to see that we're building tools that solve problems for teams like his.

Conor Bronsdon: No, this is the great thing about gitStream, right? Programmable workflows enable you to have the unique workflows that your team wants.

Yes, we have examples that any team can implement. We have a growing library of resources where people can just take a rule and apply it. But if your team has a unique construction, like Nik's has a very unique construction, it's also flexible enough to apply continuous merge rules and tooling opportunities to your team.

To your point about automation, yeah, he really wants to automate some of those low-risk code pieces, but more importantly, he wants to make sure he's saving dev time with estimated time to review and other pieces. I think it speaks to the flexibility and the opportunity there to do so much more, whether it's helping automate CI/CD workflows, applying other automations, more integrations, that kind of thing.

Exactly.

Great. Ben, thanks for staying on for a few minutes. Really interesting to talk to you about the future of gitStream. We'll definitely have to have you back for another Labs episode sometime soon.

Ben Lloyd Pearson: Yeah, thank you. It has been great.

Want to cut code-review time by up to 40%? Add estimated review time to pull requests automatically!

gitStream is the free dev tool from LinearB that eliminates the No. 1 bottleneck in your team’s workflow: pull requests and code reviews. After reviewing the work of 2,000 dev teams, LinearB’s engineers and data scientists found that pickup times and code review were lasting 4 to 5 days longer than they should be. 

The good news is that they found these delays could be eliminated largely by adding estimated review time to pull requests!

Learn more about how gitStream is making coding better HERE.

Set up gitStream on your GitHub repo today.