An Interview with Julian Colina on Engineering Analytics - Engineering Insights Podcast Ep. 2

Watch

Watch on YouTube.com.

Listen/Download

Listen on Anchor.fm.

Transcript

Junade Ali  0:17  
Welcome to the Engineering Insights podcast, presented by Haystack Analytics. I'm your host, Junade Ali.

Junade Ali  0:27  
In this podcast, we hope to present views and interviews for software engineering leaders. I'm joined today by Julian Colina, CEO of Haystack Analytics, who will be giving us a primer on aggregate metrics.

Junade Ali  0:49  
Hi, Julian, thank you for joining me on this podcast.

Junade Ali  0:54  
So I think a good area for us to kick things off is if you could tell us a little bit about the story which led you to becoming a co-founder of Haystack, and why you really wanted to enter this space?

Julian Colina  1:12  
Yeah, that's a great question. It might help to touch a little bit on my background, as well as my co-founder, Kan. We were both engineers, and eventually tech leads. I was a director at my last company and helped scale it from nothing to hundreds of millions of users, and Kan has a pretty similar background working over at Cloudflare. That gave us a pretty unique opportunity to actually advise other CTOs. At the time we were actually roommates in Singapore, funnily enough, and over dinner once we were talking about some of the problems our clients were running into. It dawned on us that these were the same problems we'd all run into, both our clients and us in past lives. What we really realised was that engineering leaders lack visibility, and that makes it really hard to answer even simple questions like: are we getting better over time? Are we more efficient? Where can we improve? These seemingly simple questions were incredibly difficult to answer for engineering leaders. And when we looked around for a solution, we really weren't too happy with what we saw. We saw a lot of companies trying to measure things like lines of code, hours worked, even number of commits, and quite literally ranking engineers based on these metrics. We thought that was fundamentally the wrong approach. So we started to build Haystack with a really simple goal: can we build engineering analytics that give engineering leaders visibility into how their teams are performing, in a way that truly helps teams? And that's how we got to Haystack today.

Junade Ali  2:45  
Awesome. So I guess it'd be useful to dive into a bit about what you mean by being an engineering analytics company. There are lots of different things people think about when they think about different types of analytics organisations, and lots of people who work with engineering teams don't really measure what they're doing. So can you talk about what it really means to be an engineering analytics company?

Julian Colina  3:13  
Yeah, that's exactly right. We work in engineering analytics, which can be seen as a vague term. But really, what we do is use data from GitHub to help teams understand, very specifically, their delivery funnel at a team level. We help them get really good at delivering software by understanding some of the fundamental metrics that make an engineering team successful, and we can talk about some of those specific metrics later. But from a high level, what we do is help you visualise trends across teams, like speed and quality, plus dive into every aspect of that delivery funnel, things like development time and even code review, to help uncover bottlenecks: how long does it take to respond to a pull request? How long do we spend reworking code when it's in the review phase? All the way to things like large pull requests that are holding up delivery. So you can imagine, as features go through our delivery funnel, they often get stuck at various friction points, let's say within QA or deployment. We help uncover those friction points, letting teams identify them, measure their impact, and then have a really nice, measurable way of understanding whether the changes they make, whether to their process or through automation, are really driving an impact.
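
To make the delivery-funnel idea concrete, here is a minimal Python sketch of how two of the signals Julian mentions, first response time and large pull requests, might be derived from pull request data. The record structure, field names, and the 400-line threshold are illustrative assumptions, not Haystack's actual implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Illustrative pull request record; in practice these timestamps
# would come from the GitHub API.
@dataclass
class PullRequest:
    opened_at: datetime
    first_review_at: Optional[datetime]  # None if no review yet
    merged_at: Optional[datetime]        # None if still open
    lines_changed: int

def first_response_time(pr: PullRequest) -> Optional[timedelta]:
    """How long the PR sat before its first review, a common friction point."""
    if pr.first_review_at is None:
        return None
    return pr.first_review_at - pr.opened_at

def is_large(pr: PullRequest, threshold: int = 400) -> bool:
    """Flag PRs big enough to risk holding up delivery (threshold is arbitrary)."""
    return pr.lines_changed > threshold

pr = PullRequest(datetime(2021, 3, 1, 9), datetime(2021, 3, 2, 15), None, 850)
print(first_response_time(pr), is_large(pr))  # 1 day, 6:00:00 True
```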

Junade Ali  4:32  
Awesome. So I guess part of this is that some people out there will be thinking their engineering team and organisation are delivering things really well from a product standpoint; they're in a good position, they feel they're building something phenomenal. But why is it actually important to measure engineering performance specifically? How does engineering performance really tie in with the broader organisational goals, and why is it important to measure it?

Julian Colina  5:09  
Yeah, that's a great question. If you find yourself in the position where you're building a great product, you're moving quickly, and everybody's happy, then that's fantastic; I think that's what we all strive for. But looking at some of the best engineering teams in the world, they're constantly improving, and there's really no ceiling on the improvement you can get from an engineering perspective. There's been some excellent research in this space, actually, across thousands of companies, into what the benefits are to the company when an engineering team is truly performant and efficient and able to deliver high-quality code and high-quality features quickly to customers. It basically correlates with higher levels of profitability, market share, and customer satisfaction, which really makes sense when you think about it: shipping software is a company's heartbeat, right? The ability to provide value to customers quickly, to innovate, to build something that people want, as Y Combinator likes to say. If you're able to do these things successfully as an engineering team, you really do help drive the success of the organisation. I would urge anybody listening to read Accelerate, which is a great book that touches on a lot of this research showing the correlation between engineering performance and business outcomes, which is historically something that's been a bit difficult to measure and understand.

Julian Colina  6:28  
One quote that I like to use is from Marc Andreessen, which states that Cycle Time compression is the most underestimated factor in determining winners and losers in tech. And that makes a tonne of sense, because when you think about it, decreasing Cycle Time helps us give value to customers faster, it helps us get feedback on whether we're building the right thing faster, and ultimately it helps us innovate on what we're building even faster. So all these things really help drive company performance overall. But that's not the only thing we're able to achieve when we're focusing on engineering performance; there are a tonne of benefits for the engineers themselves as well. There are a tonne of common friction points that become very clear when you're focused on these sorts of North Star performance metrics: things like context switching, technical debt, distracting meetings. The list goes on and on. But when you can visualise these North Star metrics, like speed and quality, it becomes very easy to advocate for changes and actually improve the developer experience, because you can see things like context switching or technical debt having a dramatic impact on your end delivery, which they often do. So, at the end of the day, measuring performance has a tonne of benefits, not only for the organisation but for the actual developer experience itself.

Junade Ali  7:46  
Awesome. So you touched on Cycle Time there, which I guess we can drill into a bit later. But Cycle Time is something we often consider a North Star metric, right? It's something which measures the end-to-end performance of an engineering organisation. But that's not the only thing we like to measure: we have these leading indicators, and we have short-term risk factors, which are important things to measure as well. So we've got this kind of split between North Star metrics, leading indicators, and risk factors. Could you talk a bit about why each of these is important and the different roles they play in measuring engineering performance?

Julian Colina  8:36  
Yeah, sure, that's a great question. And I think it's important to really dissect the difference between a North Star metric and something like a leading indicator or risk factor. A North Star metric really helps align your team around what matters to the engineering department, and the first question is always: what actually matters? Through reading nearly every piece of research in the space, covering thousands of engineering leaders, we've boiled it down into one simple phrase: the number of successful iterations. What that really means is how frequently we can deploy value to customers, and that's a combination of speed and quality. So it's how frequently, and how quickly, can we deliver quality code to customers. The way that we measure some of these things, just to touch on it really quickly, is Cycle Time, Deployment Frequency, and Change Failure Rate. Those are really what encompass a North Star metric, this balance of speed and quality. Often the next question that comes along is: doesn't it matter more what you're building, rather than how fast you're building it? And ultimately, the answer is, obviously, yes. But the analogy that I use to answer that question is that you can think about engineering as a race car. Our job is to move as fast as we can safely, and lucky for us, we also have the backing and support of a product team to help us make sure we're actually driving that car in the right direction. The way that the engineering team can support building the right thing is to make sure we go through fast feedback loops, so we can get value to customers quickly, get feedback quickly, and iterate on our product, making it easier to actually build the right thing. So these North Star metrics really help align your team around that fundamental concept of the number of successful iterations. That helps you quickly identify when there's an opportunity to improve and really get good at delivery: being able to identify when problems arise quickly, and being able to advocate for change using these North Star metrics. Now, to touch really quickly on leading indicators and risk factors: these are sub-metrics that help drive those North Star metrics. So again, with the North Star metrics being speed and quality, there are some things that directly impact those, things like Pull Request size or first response time. They're not great metrics on their own; we definitely shouldn't use something like PR size as a KPI. But when you use them to drive those North Star metrics, it's very effective. When you use these North Star metrics in combination with some of these leading indicators and risk factors, we've seen teams drive incredible improvements: delivery speeding up by as much as 90%, and decreases in the number of defects in production. That's really the power of using these North Star metrics, if you're using the right ones.
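
As a rough illustration of how those three North Star measurements might be computed from a week of delivery records, here is a minimal sketch; the tuple layout and the one-week window are assumptions for the example, not Haystack's schema:

```python
from datetime import timedelta
from statistics import median

# Hypothetical one-week window of deployments:
# (cycle time from first commit to production, did it cause a failure?)
deployments = [
    (timedelta(days=2, hours=4), False),
    (timedelta(days=1), False),
    (timedelta(days=5), True),
]

cycle_time = median(ct for ct, _ in deployments)  # speed: typical delivery time
deployment_frequency = len(deployments)           # speed: deploys in the window
change_failure_rate = (
    sum(1 for _, failed in deployments if failed) / len(deployments)
)                                                 # quality: share of bad deploys

print(cycle_time, deployment_frequency, change_failure_rate)
# 2 days, 4:00:00  3  0.3333...
```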

Junade Ali  11:24  
Awesome. And I guess it's really important to get those measures spot on, because one of the worst things you can do is to be measuring something which ends up making your performance worse, like lines of code changed, or some meaningless metric that just drags down performance. So, in terms of validating these North Star indicators: there are some third-party references we've mentioned earlier, like Accelerate, and then you and Kan did a lot of your own market research when you were founding Haystack. I'm wondering if you could summarise both that third-party research and the novel research you did when founding Haystack as well.

Julian Colina  12:17  
Yeah, that's a great question. So, being an analytics company, we're obviously very data-driven. We dive into every piece of research we can find, and we do a tonne of research on our own as well; we have a tonne of data to parse and analyse. Out of that comes a tonne of interesting insights into what actually helps engineering teams become more performant and efficient, and how that really impacts the business. Accelerate and the State of DevOps reports are two great pieces of research, done across thousands of organisations over many, many years. They really show that when teams can excel at the North Star metrics we track, so again, Cycle Time, Deployment Frequency, and Change Failure Rate, it's highly correlated with organisations that have higher levels of profitability, market share, and customer satisfaction. Which, again, goes back to Marc Andreessen's quote about Cycle Time: if you're able to get really good at delivering high-quality value to customers quickly, then that will inevitably start impacting your ability to innovate, which will impact profitability, market share, customer satisfaction, and these types of things. It's about a 200-page book, I think, so there's a tonne to actually dive into, but I think that's the main takeaway from it: these metrics, and engineering performance as a whole, truly do impact the organisation in a meaningful way.

Julian Colina  13:41  
And when it comes to Haystack doing its own research, every metric in Haystack has gone through a pretty rigorous process to make sure that it's actually effective. We're constantly analysing data across hundreds of thousands of pull requests, finding patterns in things like Pull Request size, context switching, and working hours. So we really do the homework for you, making sure that every metric we're looking at actually drives an impact. There are actually a few pretty exciting pieces of research coming out soon around the impact of context switching and the optimal Pull Request size, so I'm pretty excited to release those. Shout out to Haoran, who put a tonne of work into that research. But ultimately, the entire mission of Haystack is to help you get really good at delivering software, so we put a tonne of effort into making sure there's no fluff and that every metric and insight we provide actually helps move the needle and make the engineering team better.

Julian Colina  14:38  
One of the funny things when it comes to validation is that sometimes you get people trying out Haystack who want to do things that fall outside what has been validated, and some of those things are wanting to measure or compare individual engineers. Time and time again, the research shows that's an ineffective approach to measuring performance. So we don't have anything like that in Haystack, obviously, and we actively turn those customers away; we make a strong effort to make sure everything in Haystack aligns with the leading research and best practice.

Julian Colina  15:10  
So, to summarise: between Accelerate and Haystack's own research, everything in Haystack is research-backed. And if anybody has questions or wants to look through some of those reference points, feel free to email me at Julian at usehaystack.io. I love talking about this stuff and can share all the research that we've done so far.

Junade Ali  15:31  
Awesome. Yeah, that definitely makes sense. So, of the Four Key Metrics we saw in Accelerate, we have two which are traditionally based around reliability, Change Failure Rate and Mean Time To Recovery; we've got the number of Deployments; and then we've got Cycle Time. Cycle Time is often described as a particularly important North Star metric for engineering teams, yet it's often overlooked. As engineers, we can say we want to get things as reliable as possible, or we want to increase the number of deployments in line with DevOps best practice, but Cycle Time, which ties together a lot of these different factors and how good an organisation is, tends to get missed. So I'm wondering if you could walk us through why it is such a particularly important metric to look at.

Julian Colina  16:32  
Yes, Cycle Time is a pretty incredible metric, actually, because it has a lot of benefits that I'd say most people don't realise. But put simply, just to define Cycle Time for everybody listening, you can think of it as a measure of how quickly we're able to deliver any given feature. And faster Cycle Times have a tonne of benefits. One, like I keep mentioning, is the faster feedback loop, both for engineers to get feedback on their work and, when we deliver that value to customers, a fast feedback loop to make sure we're heading in the right direction. Then, when we decrease Cycle Time, it's also a forcing function to properly scope out work. When you're focused on Cycle Time, the work ends up being smaller, and we put more effort into design and proper scoping, which makes it easier to review and less prone to error. Focusing on faster Cycle Times also allows our organisation to be more agile: there's less cost to shifting priorities, so you can handle incoming business needs more effectively. And it actually produces less context switching, as there's no long-standing work in progress. So ultimately, faster Cycle Times lead to faster delivery, obviously, but more importantly, to higher-quality releases, an increased level of predictability, and more agility to actually handle the business needs. Again, I love this quote: "Cycle Time compression is the most underestimated factor in determining winners and losers in tech". And there are so many reasons for that.
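
To pin that definition down, here is a minimal worked example of Cycle Time as the elapsed time from first commit to deployment for a single feature, with an illustrative breakdown into stages; the timestamps and stage names are hypothetical:

```python
from datetime import datetime

# Hypothetical timestamps for one feature's journey through the funnel.
first_commit = datetime(2021, 3, 1, 9, 0)   # work starts
pr_opened    = datetime(2021, 3, 2, 14, 0)  # pull request opened
pr_merged    = datetime(2021, 3, 3, 11, 0)  # review finished, merged
deployed     = datetime(2021, 3, 3, 16, 0)  # live in production

development_time = pr_opened - first_commit  # 1 day, 5:00
review_time      = pr_merged - pr_opened     # 21:00
deploy_time      = deployed - pr_merged      # 5:00

cycle_time = deployed - first_commit  # the end-to-end measure
print(cycle_time)  # 2 days, 7:00:00
```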

Junade Ali  18:03  
Excellent. So, for a lot of people, when they hear about Cycle Time, the first thing they'll really ask is: how can my team actually go about meaningfully reducing Cycle Time? Okay, we've got the metric, we've been able to measure it. What do we do next? How do we actually go about making a meaningful dent in cutting that metric down?

Julian Colina  18:29  
That's a great question. And Haystack can help with that, obviously, but it's 100% possible for any team to decrease Cycle Time. A lot of times teams come to us and say, "Oh, you know, we have these large pieces of work, or these small pieces of work, or what we're working on is fairly unique." At this point, we've worked with so many different companies in so many different industries that we've realised it truly is possible for any engineering department to reduce and improve on their Cycle Time. What it comes down to is that there are many, many levers for improving Cycle Time, and that part is actually team-dependent: whether it's process changes, etiquette around your code review, reducing Pull Request size, introducing automation, restructuring teams; the list goes on and on. The more important thing is to establish a culture of continuous improvement, and to be able to actually measure Cycle Time, first and foremost. That way, you can identify the bottlenecks that are impacting that Cycle Time. Let's say you have an outlier: one week, or one particular piece of work, has a very high Cycle Time. What that gives you is a reference point where you can start to understand: okay, why was this piece of work different from the rest? Was it stuck in QA? Was it a huge Pull Request? Did it touch on some aspect of technical debt within the system? And when the whole team is rallied around this one metric, Cycle Time, you can start to track the impact of your changes. So let's say you measure Cycle Time, you identify an outlier, you find what that bottleneck might be, and then you make an active change to improve on it for the next sprint or the next week. When you come back, you can actually see the impact of that change. In the same way that we build, measure, learn to improve our product, we should be doing the same for our process itself: we should measure, identify bottlenecks, and then see the impact of the changes we're making. That's what we help you do at Haystack. But even without Haystack, if you can get to the point of establishing a culture of continuous improvement, then you'll be able to reduce Cycle Time by simply following this loop of measuring, identifying bottlenecks, and tracking the impact of those changes.
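
As one simple way of doing that outlier hunting, here is a sketch that flags unusually slow work items with a standard interquartile-range rule; the sample numbers are made up, and the rule is a common statistical convention rather than something the interview prescribes:

```python
from statistics import quantiles

# Hypothetical cycle times (hours) for last week's merged work items.
cycle_times_hours = [8, 12, 10, 14, 9, 96, 11]

q1, _, q3 = quantiles(cycle_times_hours, n=4)  # quartiles
iqr = q3 - q1
cutoff = q3 + 1.5 * iqr  # anything above this gets flagged

outliers = [t for t in cycle_times_hours if t > cutoff]
print(outliers)  # [96]: stuck in QA? a huge pull request? technical debt?
```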

Junade Ali  20:34  
Awesome. And I realise we're coming up to 20 minutes, and we've spoken about a lot of things without mentioning any buzzwords about data science or machine learning, so I think that's always a good sign. So I guess, to wrap things up: where should people go to learn more about Haystack and the upcoming work, you mentioned some of the research Haoran's working on and some of the content we're producing, and, more critically, for the people who really need a tool to measure these metrics, how do they go about working with Haystack?

Julian Colina  21:17  
Yeah, that's a great question. The first stop is probably our website, usehaystack.io. There you'll find a whole bunch of documentation, blogs, our podcast, as well as some research papers we've published. So that's a great starting point if you want to read more about Haystack, or any of the research, or even how to implement any of the practices I've described today; there's a tonne of great content on the blog itself. If you want a more active approach to learning, you can subscribe: on our blog there's a subscribe button, and we'll feed you insights week over week. And if you want to reach out to me, I read every single email in my inbox, so Julian at usehaystack.io. I'm happy to chat about any of this stuff if you have questions, or want to dive into some of the things your team might be facing today. Happy to help you do that, too.

Junade Ali  22:11  
Excellent. Well, thank you so much for taking the time. I think it's been really valuable, and it's great to be able to record some of your thoughts on these things. So I'm really grateful to you for taking the time today to speak to me.

Julian Colina  22:26  
Awesome. Thanks for having me. It was fun.

Junade Ali  22:46  
Thank you for joining me for this episode of the Engineering Insights podcast, where I've been joined remotely by Julian Colina.

Junade Ali  22:57  
We hope you'll join us next time for more resources for Engineering Managers and DevOps leaders.

Junade Ali  23:05  
The soundtrack used in this podcast is WERQ, spelled W-E-R-Q, by Kevin MacLeod. This podcast has been recorded and produced in Edinburgh, Scotland, for Haystack Analytics.
