Let's talk about continuous discovery. We've identified the bottlenecks and now we need to discover and test the best way to solve them. Billy will walk through how to use opportunity trees and rapid prototyping to continuously find ways to improve the metrics we believe will grow your product.
Now we're going to take a look at continuous discovery. Here we've got a customer journey, and we've mapped out all the different events and moments our customer is going to experience across that journey. We really want to dive into one that's not tracking, one that's not doing what we think, and ask how we can make some impact there.
How can you get people beyond this point of friction and on to that next magic moment? This is behavior driven. It's not necessarily driven by a bug in our software or a bug in our product. It's just that our users, our customers, aren't doing what we expected. How do we get here? How do we find these opportunities?
We're really not looking at the math side of this anymore; it's about human emotions. What are some of their fears and assumptions going into the product? For this example, we're showing that maybe a customer views their first data visualization. It's only hitting 20%, and we need to get it up to 80%. And that's the thing we're going to tackle as we move forward.
So diving into that, we're going to assume a bunch of opportunities as we go through. We might look at some initial data and say, hey, this isn't tracking the way we thought, so here's a bunch of opportunities we think we can go after. But how do we validate that those opportunities are actual opportunities?
So we have these opportunities, and we really want to focus on this metric of increasing it to 80%. Now we come into our first discovery loop: the exploration loop. In the exploration loop, the first thing we're going to do is get some feedback and some data. We might be talking to our support team.
We might be looking at customer surveys. We might actually take some time and go to the app store and look at app reviews, or look at social posts, what people are posting on Instagram or Twitter about our product. We might be doing observation sessions, looking at users' actual workflows through our product and really finding out what the issue is.
We're not going to find the why yet, but we're definitely going to identify the what, and that'll inform us when we go into our customer interviews. Customer interviews are really the most valuable thing we can do. We should be talking to customers at least once a week.
We're going to find out areas of friction, pain points they have, and things they're struggling with, and we're going to pull out different patterns and observations from their stories. We're really going to understand our product better than we ever could just looking at that raw data. These customer interviews are going to help us identify those different opportunities, right?
Like we said before, the stories and the patterns are really going to help us identify what we could go after and what we probably can't go after. These are the things that we're going to tackle when we get into this opportunity tree.
Now we can take those opportunities that we assumed existed and, with some actual information, choose and put together real opportunities. Here, you can see we've killed one off. It doesn't make sense, it didn't work, it's not something we heard in our interviews, or maybe it didn't show up in our research at all.
We've got two that maybe aren't big enough, at least not at this moment. We might come back to those later. But we've got one that we really want to go after and tackle. Within that opportunity, there might be multiple solutions; here we're showing maybe four potential solutions that we're going to go test.
And now we're really going to take these solutions and take them into our second loop on the discovery side, which is prototyping.
In the prototyping loop, the first thing we're going to do is really spend some time designing out our solution. Here, we might create a small team of stakeholders: designers, folks from product, maybe some developers.
We might be running design sprints and wireframes, really taking multiple solutions through this all at the same time. We might be using exercises like crazy eights, really understanding how this folds into our current experience and how that data and information architecture works with what we currently have.
We might be throwing out a ton of ideas here, but we're going to bring a couple forward into the prototyping phase. As we move to prototyping, we're really working from lo-fi to hi-fi. We take these crude wireframes and crude solutions, and we start making them look higher fidelity than before.
We're focused on what it feels like to click through this experience, what it feels like to go through end to end. We want to be leveraging our design system, because we don't want to be creating a bunch of new stuff that duplicates what already exists. We want to be spending time talking to some of our development partners to see what might work: what might take a long time, and what might be a lot easier to focus on right off the bat. When we get a couple of prototypes that we like, we want to move right into user testing. We're going to recruit users either from the existing user base that we've interviewed, or we might get some new customers, or customers from a competing product, depending on what we're going after.
We're going to be doing interviews and observations, listening for those moments of joy or moments of friction as users go through the different prototypes we've built. Here we're really observing a lot of different metrics, like time to task, if it's about completing a certain task, and the usability of the product.
We want to make sure that it's easy to get through whatever we're trying to measure here. We might be A/B testing two solutions against each other, or A/B testing against the current product and the current experience. These data and observations across all the competing solutions are going to decide, when we get back into our opportunity tree, which solutions move forward.
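If you do A/B test two prototypes on something like task completion, a quick sanity check on whether the observed difference is real can help. Here's a minimal Python sketch using a standard two-proportion z-test; the completion counts are made up for illustration, and this isn't a tool the video prescribes:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test on task-completion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: prototype A, 18 of 25 testers completed the
# task; prototype B, 9 of 25
z, p = two_proportion_z(18, 25, 9, 25)
```

With small usability samples like these, treat the result as directional rather than definitive; the qualitative observations from the sessions still matter most.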
We're going to evaluate those different solutions that started in the opportunity area as something very abstract and got a little more concrete as they moved into the solution phase. And we want to kill the bad ideas fast. So you see here, we killed two more solutions as we went through. We have one that we felt was really tracking well, and we can probably move it into our delivery.
And then we have another one that maybe we'll come back to later. Maybe we've got to run another exploration on it, or another prototyping pass. The next thing we're thinking about now is designing our in-product experiments. Who are we targeting with this? What segment do we want to go after? What are the success criteria, or the failure criteria, for this specific experiment?
We want to decide those now, before we get financially and emotionally invested in our experiment, so we can have a clear mind on what we decide to move forward. Now we're going to move into the delivery phase, but it's still all about learning.
It's not just about shipping features.
You have customers, you have traction, and now you want to scale your startup. You might think that the learning's over and it's just time to execute - but nothing could be further from the truth. You have to keep learning more about your customers and aligning everyone on your team to serve them better.
In this phase, we hear a lot of teams say, "We know what our customers want." It's time to get down into the details so that you can build a business that truly lasts and build a product that customers really love. It's time to start pulling apart the different customer segments you truly serve - and the different journeys they're on - so you can uncover the biggest value opportunities to improve. Your team's rapidly growing, and it's going to change and stretch you. There are new roles, different communication, decision-making, delegation; all of that will change.
Most people look outward for more help - to hire more and more people to the team and add more fuel to the flames. But what we've learned is that it's all about looking in. The companies that learn and adapt the fastest win. So it's all about empowering your team to make decisions and take ownership. If you don't do that, you're going to end up wondering how you're going to connect all the dots as a founder. It's really about helping others help you on your mission. This series is all about getting your team ready to scale. That means learning more about your customers and aligning everyone on your team to serve them better.
We hope you enjoy these videos. If you have any questions along the way, please reach out. We're here to help.
To push your startup to the next level, you will need to grow your team this year — by a lot. As you make hiring decisions and onboard new employees, it’s essential to consider how your teams will be structured.
How do you create an environment that fosters great design? Listen to this roundtable discussion with members of the Design Team at Headway. Learn how we approach the design process, along with ways to create a healthy and happy team culture.
Learn best practices tech leads and software development managers use to build healthy and happy teams. Jon, CTO & Partner, and Tim, Development Lead at Headway, share their real-world experience and advice.
When scaling your startup, one of the most important things you can do is focus on your go-to-market strategy.
Learn how to leverage customer segmentation to find the best opportunities in the market to unlock the potential of your product growth. Understand how it creates a better GTM strategy as you grow and aligns your product teams.
When scaling your startup, one of the most important things you can do is focus on your go-to-market strategy. There are three components of go-to-market.
One is who you're targeting. Early on, you're probably reaching lots of different people, lots of different customers, because you're just trying to get traction in the market. Now you have traction and you're trying to scale, so we need to look again at who we're targeting, combined with what we're offering, the product, and also how we're delivering.
How we're acquiring, our channels, how we're selling, how we're delivering: those three components make up go-to-market strategy. We're going to focus on the who and the what, the two primary components of go-to-market strategy. The who ends up being the market, and the what, what we're offering and delivering, is the product. As you try to go to market, you're trying to go from a small slice of the market to penetrating deeper into that market, becoming well-known and doing really well. The challenge is that the market isn't that simple. Innately, you know this: it's not just one uniform market the way we visualize it here.
There are actually a lot of different customers and different customer contexts. And you know this: your salespeople are talking to different types of people. You're hearing different requests from different types of people. Some people are really excited about your product and finding a lot of success with it. They're doing really well, they're referring it to other people; those are your champions, your ambassadors, in green here. You also have customers that try it and just fall off right away; they try it and leave. You have really poor engagement, a lot of fall-off, a lot of churn. And you have other customers that want it to work, but they're really frustrated and don't know what to do.
They're stuck. They're not getting to success, in red here. The reason they're having a different experience is that they're actually coming from different contexts. If they were coming from the same context, they would have the same trajectory; they would get to green, right?
Get to success. But they're going in a different direction, or they're coming from a different starting point. And so we have to realize that we really have different types of customers, different customer contexts. And we have to think about it this way: it's not just us pushing the product out into the market. It's actually the market pushing back on us, on the product.
The customers that are really happy are pushing us in one direction, asking for certain features; they want to pull us their way. Other customers are falling off, and we're trying to save them, figure out why, and add different features; they're asking for different things.
"You know, I would use it if you did this, if you had this feature." And there are customers that are frustrated and angry, pushing you in yet other directions. They're not all asking for the same thing. So what do we do? How do we handle this? Your team's getting pulled in all these different directions.
And you're trying to serve different customers who are asking for different things, and you can't do it all. It will stretch you and your team to a breaking point. You're going to stretch your product apart if you try to do too much, or if you try to cram all these features in. Some features just aren't going to make sense to some people, or the navigation leads them to ask:
What do I do here? What is this thing? The product loses its center, because everyone's asking for different things. That pressure on your team is normal at this phase, but it's a by-product, a symptom, of a poor go-to-market strategy. It's a symptom of trying to serve different customer segments at the same time.
It's really important, then, that we focus not just on the product, the features we build, and prioritization. It's actually much more important and much more critical to focus on customer segmentation and getting it right. Now, each of these customer segments actually has different value, different contract value. They have different sizes; there are more of some than others. Some are more approachable or more willing to purchase than others.
We're going to talk about total available market right now. Each of these customer segments actually has a different size, right? What if the customers you're targeting right now are the ones having a really high success rate with your product?
What if it's actually a small niche, and it's not going to get you to your growth goals? Maybe you shouldn't be focusing on the customers you're serving well now. Maybe instead you should be focusing on customers that are struggling in some way. It really is determined by the size of the segment and a whole bunch of other variables, which we're going to talk about in a minute. It might also mean that you stop selling to some customers that aren't happy with your product. Selecting which customer segment you go after is going to have a huge impact on you. It's going to make everything easier for your sales team, your marketing, your positioning, your messaging, your designers, and your development team, if you can figure out how to set up a good go-to-market strategy.
So, how do you prioritize these customer segments? One of the ways we suggest approaching this is through total available market.
Now, the way to do this is to actually look at all the current customers you have now, and everyone who's ever been in your sales pipeline, and start to group and theme them together. There are different ways to do this, but you want to group them by use case. You want to make sure they have the same actual needs, the same actual desires, and the same desired outcomes for your product. That gives you your market segments. In this B2B example, from there you could go do market analysis to see, hey, how many companies are in this segment, and what's our likelihood to close? How attainable is that market segment?
That'll give you a total count of available companies, but then you want to multiply it by your average contract value in that space. And that's going to give you something really great: not only dollars per market segment, but a way to compare and contrast market segments against each other based on how big each market actually is.
So in this case, we have basically half the number of people in the segment that's doing really, really well. That segment might be a great place to start. It might also mean it's not going to get us to our growth goals; maybe we'll prioritize and shift into this larger segment that we're not serving well right now but want to move into. The next thing to look at, beyond just market size, is to pull in these other variables, because it's not just about market size. It's also about how well your product fits the needs of that market segment. It's also their propensity to spend: what's their budget? And what's the market concentration? If there are just a few companies, then we have really high revenue concentration, and that could be a risk to the business.
What access do we have in the market? What channels do we have? Who do we know in there? And overall, how attractive is this segment? You start to put this matrix together, not just market size, but market size plus these other variables on our likelihood to win. And then you can come up with a really solid way to prioritize which customer segments to go after.
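As a rough sketch of that matrix, here's a tiny Python example. The segment names, company counts, contract values, and the single "likelihood to win" score (standing in for product fit, budget, access, and concentration risk combined) are all hypothetical:

```python
# Hypothetical segments: company count, average contract value (ACV),
# and a 0-1 likelihood-to-win score blending the other variables.
segments = {
    "mid-market agencies": {"companies": 4000, "acv": 12_000, "win": 0.6},
    "enterprise retail":   {"companies": 800, "acv": 60_000, "win": 0.3},
    "smb ecommerce":       {"companies": 25_000, "acv": 3_000, "win": 0.5},
}

def prioritize(segments):
    """Rank segments by TAM dollars weighted by likelihood to win."""
    scored = {
        name: s["companies"] * s["acv"] * s["win"]  # weighted TAM ($)
        for name, s in segments.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

ranking = prioritize(segments)  # highest-priority segment first
```

In a real analysis you'd likely keep each variable separate in the matrix rather than collapsing them into one score, but the idea is the same: size alone doesn't pick the segment.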
Then pull them into something we call the beachhead strategy. The beachhead strategy has you focus on one customer segment first. Again, that's so important, because if we don't do that, your product is going to be pulled in all these different directions. There's going to be a lot of tension, with your sales team selling to all these different people. The value prop is different; the pricing might even be different.
You know it's going really well when you have one focus, when your positioning is solid, and when your product team is aligned with one really strong customer segment that you can win.
Once you've accomplished that, and made that attack and that beachhead really strong, then you can always expand and penetrate in different ways, right? Hey, these other segments are great follow-ons. I'm not saying you stop and only ever serve this one market segment, but you want to start somewhere you can really win, and then create a domino effect.
Learn what most SaaS companies get wrong and tips to set up product onboarding for success with Ramli John, author of Product Led Onboarding.
Get a deeper understanding of how your customers interact with your product so you can make improvements.
Analytics tools can be handy, but they’ll really only provide you with averages when not used properly. Learn how customer segmentation can have an impact on how you see your SaaS analytics.
Which SaaS metrics matter the most? Ryan will unpack where you should be looking within your product's user analytics to find bottlenecks and what really happens when you fix them. This will help you understand which metrics you need to improve to help your startup grow.
Let's talk about the right metrics for your growth-stage startup. There are so many - annual recurring revenue, monthly recurring revenue, signups, churn, monthly active users, daily active users. And there are so many more, right? So the question is, how do you get the right metrics? How do you think about measuring the right things so that you can make the critical decisions you need to make, and your teams can make the critical decisions they need to make on a strategic basis, but also on a daily basis in their work?
An important place to start is actually to get back to the customer. Now you've got a really specific customer that you're going after. You've done your go-to-market and your customer segmentation really well.
And if you're in a two-sided or three-sided market, then you've got a couple targeted customer segments on each side. You have to really understand the customer and their context, right? What are they really struggling with in their day-to-day life? And what are you promising them? What's your value prop? How are you positioning and differentiating yourself? How are you saying that you're going to help make their lives better? And what outcome is that customer really trying to achieve?
This is an overview of jobs to be done, but the important thing is that they're trying to go from where they are today to a much better place.
And that's the promise that you're providing them. We think about metrics along that journey. One of the most important and most common things to do is to look at it as a funnel. A funnel is helpful here; you've probably seen this before, it's the pirate metrics model. As we move left to right, there are some important things I want to slow down on and actually dig into.
One of those things is that as we move left to right, from awareness to acquisition to activation, them coming into your product, to you achieving revenue, retaining them, doing revenue expansion and upselling them, and through to organic referral, you notice that there's the marketing side of the house on the left and really the core product experience on the right.
When we think about what metrics become important here, there are several. One pair you might be familiar with is customer acquisition cost and lifetime value. The challenge here is that lifetime value is actually a very lagging indicator; you don't actually know your lifetime value yet, right?
Your churn might be high now, but you're going to expand. You're going to do revenue expansion; you're going to upsell. That lifetime value takes a long time to develop, and it's a lagging indicator. The other side of the house is the acquisition cost, which is of course how much you spent to acquire that customer.
However, the relationship between these two is also important: lifetime value to acquisition cost. There's a standard ratio in a subscription-type company: at least three to one. You want at least three to one on lifetime value to acquisition cost; you should at least be three-x-ing the money you're spending to acquire that customer.
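That 3:1 benchmark is easy to sketch in code; the dollar figures below are hypothetical:

```python
def ltv_to_cac(lifetime_value, acquisition_cost):
    """Return the LTV:CAC ratio; 3.0 or higher is the common benchmark."""
    return lifetime_value / acquisition_cost

# Hypothetical numbers: $1,800 lifetime value, $500 to acquire
ratio = ltv_to_cac(1800, 500)
healthy = ratio >= 3.0  # clears the 3:1 benchmark here
```

Remember the caveat from above: the LTV input is a lagging, estimated number, so this ratio firms up slowly as cohorts mature.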
One level deeper is really, really important. If lifetime value is a lagging indicator, what should I really focus on from a unit-economics standpoint? If you're not familiar with unit economics, it's looking at the economics of each individual customer. The question is: are you profitable on each individual customer? Not as a whole company, but per account: are you losing or making money on each specific customer? When we think about this, one of the most important leading indicators is called payback period.
Payback period is the time in months to recoup your acquisition cost. So if you're spending $500 to acquire a customer, once they actually start paying you, how long does it take to recoup that $500? This is called a J curve. When you get back to zero, when you've made that $500 back, that's the payback period.
The question is how many months it takes to do that. If you're at 12 months or more, you're going to have a cash-flow problem, right? You're not recouping that investment in the customer soon enough. So one of the best leading indicators on the acquisition and marketing side is shortening that payback period as much as possible. The faster you can recoup your acquisition costs, regardless of lifetime value, the faster you can break even on a per-customer basis across all of your acquisition and sales efforts, and the more growth you can have. You can take the money you're recouping and double down, paying for more acquisition. It's going to really accelerate your growth if you can shorten your payback period, so that becomes really important to do.
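A minimal payback-period calculation might look like this; the CAC, monthly revenue, and gross margin figures are hypothetical, and the gross margin matters because only margin dollars actually pay back acquisition spend:

```python
from math import ceil

def payback_period_months(cac, monthly_revenue, gross_margin=0.8):
    """Months until gross profit from one customer recoups its CAC."""
    monthly_profit = monthly_revenue * gross_margin
    return ceil(cac / monthly_profit)

# Hypothetical: $500 to acquire a customer paying $80/month at 80% margin
months = payback_period_months(500, 80)  # under the 12-month danger line
```

Shortening this number, by raising price, cutting CAC, or improving margin, is the lever the video is pointing at.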
Now that you've established payback period as one of the most important leading indicators for your acquisition efforts, let's talk about the entire funnel. One thing you might have heard of before is the theory of constraints. The theory of constraints states that in any one system, there is one and only one bottleneck at any one time. And if you start to think about your company, your growth-stage startup, as a system, then that starts to make sense.
The question becomes: where's the bottleneck in your funnel? Where's the bottleneck in your pirate model? The bottleneck changes over time. For instance, it might be in activation currently. You might want to work on signups and downloads, or on early engagement in the product. Let's say you get activation figured out.
Now we have a lot more people actually in the product, and the bottleneck just moves. There's always a constraint, always a bottleneck in your system, in the funnel. So let's say you have a bunch more people in the product. Now we have a lot more retention issues, right? We have more people coming in, but they're not actually getting to success.
We have more churn, more monthly active users that don't come back the next month. I want you to think about where the bottleneck, the constraint, currently is in your funnel. You probably know intuitively, but do you have the metrics set up? Do you have the analytics set up to instrument all these different parts of the funnel so that you can know, every week, every month, where the bottleneck is?
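Once each stage is instrumented, finding the bottleneck can be as simple as computing stage-to-stage conversion and flagging the worst one. A minimal sketch, with made-up monthly counts for each pirate-metrics stage:

```python
# Hypothetical monthly counts per pirate-metrics stage, in funnel order
funnel = [
    ("awareness", 50_000),
    ("acquisition", 5_000),
    ("activation", 1_000),
    ("retention", 400),
    ("revenue", 200),
    ("referral", 30),
]

def find_bottleneck(funnel):
    """Return the stage transition with the worst conversion rate."""
    rates = []
    for (prev, p_count), (curr, c_count) in zip(funnel, funnel[1:]):
        rates.append((f"{prev} -> {curr}", c_count / p_count))
    return min(rates, key=lambda r: r[1])

stage, rate = find_bottleneck(funnel)
```

Run weekly or monthly, this turns the "where's the constraint?" question into a number your team can watch move as you fix each stage.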
How do we go from the lagging indicators that the pirate metrics model gives us to leading indicators? Leading indicators are what you and your teams really need to move the needle and make the impact you're hoping for. How do we find leading indicators? To get there, we need to look somewhere else.
The question here is: what perspective are we using right now? What view of the world? Are we using the customer's perspective or the company's perspective? 100%, we're using the company's perspective. It's a funnel; we're trying to pull people through. This isn't the customer's perspective.
To get to leading indicators, we really need to get back to how the customer views us in the market. What's going on in their lives? Let's take a look. Think about how the customer actually purchases your product, or other products, and let's walk through the steps. How does your customer look at you and at the greater market landscape?
Well, they go through a buying cycle. Here's what the buying cycle actually looks like; here are the stages. One is urgency. They have to have awareness that there's a problem, or that life could be better somewhere else. They have to have urgency to act; they have to actually want to change.
That's urgency. Then they start researching and looking at different options, evaluating them, narrowing them down, figuring out which one's a top contender, doing demos, doing evaluations, doing trials, trying something out. That's selecting. That right there is the marketing side of the house, but it's really about navigating the market and the ecosystem. And again, that's why differentiation and positioning are going to be so important; it's the very first part of that journey for the customer. Your experience often starts with onboarding: showing them around, showing how to use your product, your tool, how to navigate, where things can be found, and where they can find value.
Then it's repeated usage and them getting core value, using it on whatever frequency you'd like them to. And then, when they need help, they reach out for support. These are the major phases through which the customer evaluates and uses products. So the customer goes through these major phases of purchasing, evaluating, and starting to actually use your product. And while these phases are helpful for categorizing the experience, they're not enough to give us the leading indicators we need to be successful with our teams. We have to go deeper. Where do we go from here?
Well, we need to recognize that the customer doesn't just go through these major phases. The customer actually goes through individual behaviors. Individual behaviors, and behavior change, are among the most important things to understand when building products and designing the right metrics.
Let's step through an example. Let's say the first step in your onboarding is that a customer has got to register, whether it's with email or a phone number or whatever; the customer's got to get logged in somehow. Then maybe the customer will create their profile. You'll notice a pattern here as we start to use these "customer will" statements.
These "will" statements are behaviors that you expect your customer to perform in your product. You're trying to pull them from where they are today, through your product, to where they want to go: to that promised land you've promised them, to the value.
Will the customer actually create the profile? Will the customer actually create their first project? This could be uploading data spreadsheets, uploading personal health information, connecting an integration, whatever those first steps are. All of these individual steps are behaviors the customer has to do in order to achieve the value.
You'll notice there are actually a lot of steps here, and that tends to be the case: behavior change is really hard. So designing a wonderful, great experience for the customer becomes absolutely critical, because there are so many micro-steps the customer has to take in order to get to the value you're promising them.
So you really want to map out and understand: what journey are we pulling our customers through? What's the experience we're providing for them? What are those individual behaviors we're expecting them to perform? And then you'll find that not all behaviors are created equal. Some behaviors lead to magic moments.
Think of the magic moments in your life: getting the keys to your first car or your first home, having your first baby, your first kiss, whatever those magic moments are. Those make impressions on you. You want to create magic moments. Now, they're not going to be life-changing moments like the ones I just mentioned, but you want to create magic moments, exciting moments, inside your product, as early as possible in the journey.
A magic moment is a moment that creates joy, a moment that gives someone an aha: oh, I get it now; this is incredible. That's the kind of emotion we want to convey in these magic moments.
Now, to get to the right metrics, you have to be able not only to identify what this journey is and what behaviors you're expecting, but to measure each individual one, and set up your analytics in a way that's simple enough to understand where people are in the journey.
Are they getting to these first two steps, and then they get stuck? Or do they get a little further and they actually have problems? How far are your target customer segments, and the personas in those customer segments, getting in these customer journeys? And you'll find that people aren't getting as far as you'd like, and the experience is either more difficult or not as smooth or not as intuitive as you'd expect.
And this is why looking at things at an individual behavior level, and from the customer's perspective, not the business's perspective, is so important to build the right metrics.
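As a rough sketch of what measuring each behavior can look like, here's a tiny funnel calculation over raw analytics events. The event names and journey steps are hypothetical, stand-ins for whatever behaviors your own journey map defines:

```python
from collections import defaultdict

# Hypothetical journey steps, in order. Swap in the behaviors
# from your own customer journey map.
JOURNEY = [
    "signed_up",
    "created_profile",
    "created_first_project",
    "viewed_first_visualization",  # the "magic moment"
]

def funnel_report(events):
    """events: iterable of (user_id, event_name) pairs.
    Returns (step, users_reached, conversion_from_prior_step) per step,
    so you can see exactly where people get stuck."""
    reached = defaultdict(set)
    for user_id, name in events:
        reached[name].add(user_id)
    report, prev = [], None
    for step in JOURNEY:
        count = len(reached[step])
        rate = 1.0 if prev is None else count / prev
        report.append((step, count, round(rate, 2)))
        prev = count or 1  # guard against dividing by zero at the next step
    return report
```

Run against your own event log, a report like this immediately answers the question above: are they getting to the first two steps and then getting stuck, or do they get further before they have problems?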
The next thing to realize is that you don't just have one journey. You have multiple personas, right? If you have a two-sided market, you definitely have multiple personas.
If you're B2B, you have a buyer and you have a user. Often you have multiple personas that you're targeting, even if you're targeting just the one really narrow customer segment that we suggest. Each of these different people has a different context, and you'll find that they actually have different journeys. Let's zoom in on one of these customer journeys.
So we've come a long way. We started with annual recurring revenue and monthly recurring revenue. We moved to the pirate metrics funnel, which is all well and good, but it didn't give us the leading indicators that our teams need to be successful and make an impact for the customer. What we've done here is found a leading indicator along a specific customer journey that really will unlock a ton of value.
And as we zoom in on that customer behavior, that's where we really want to make an impact. The first thing to do is start measuring its effectiveness: where is it today? Let's say that the customer behavior is that the customer views their first data visualization. That's the magic moment that gets them super excited.
Wow, this is amazing. Well, we're only getting 20% of the people to that magic moment. If we can go from 20% and increase that to 80%, now we've defined a metric that's going to be really powerful and really helpful for our product teams to be successful and actually move the needle. This is what gets teams excited.
This is what unlocks potential for you and your company when you get to this. That's how you set up the right metrics to grow your startup.
Now that you know what part of the customer journey will have the biggest impact, the question is, how will you get there?
Do you focus on delivery or do you focus on outcomes? We need to understand that shipping is not value.
Now that you know what part of the customer journey will have the biggest impact, the question is, how will you get there? Do you focus on delivery or do you focus on outcomes? We need to understand that shipping is not value, right? Shipping alone isn't going to impact this metric. If you've been on a product team for any period of time, you know that you've built things that never get used, right?
Shipping feels good. It sounds good. As creators ourselves, I always joke around and say: strategists want to do more strategy, designers want to do more design, developers want to do more development, and none of that helps your cause as a business. So the whole goal of aligning around a metric and chasing it is to give your team a lens for the level of craft that they put in.
At Headway, we call it craft within constraints. It's understanding how we attack this metric together, and understanding that going faster in the wrong direction means we're going to have to pay off more tech debt later. It means we'll have to support more things. And so understand that features won't save you. A lot of features are built with the best intentions, right? They usually come from ideas from the team; they might come from customers. But many never get used, and they don't move the needle.
So we think about making an impact as success, not just shipping. Sure, you can celebrate the small wins of shipping big releases, but when we can celebrate impact for our customers, that's really where the most fulfillment happens.
So the thing we need to ask ourselves is, what's expected of this new work? Let's take a look at everything on the roadmap. What impact are we trying to make? If, in this example, we can take that metric, the customer views their first data visualization, and increase it to 80%, we know that we're going to help them on that journey.
We're going to be more successful as a product. As we move forward, though, we need to know: if this is our metric, who's accountable for it? Who is going to take ownership and make sure it gets reached? Because if everybody takes ownership of that metric, nobody does. What we normally see in a lot of teams is department structures, structured around different sets of hard skills. We have strategy, we have design, we have development, and those in their own right are super valuable. But for many of you watching, product teams are structured this way, and a product team structure doesn't mean everything's figured out. Just because you're working closely with strategy, design, and development doesn't mean all of your problems go away.
So the question is: how can we align that team around a metric for a specific customer? How can we empower the team to take ownership, to build more empathy, and really understand the journey that they're on? Where we'd like to start is with the customer at the center of everything. The customer is just a person that we have to serve.
Right. It's very good and useful to have someone specific in mind, a specific set of customers and the journey they're on. But ultimately all of your customers, across the different segments as Ryan mentioned earlier, are going to have these key jobs. Some jobs are going to align. Some jobs are going to be different.
Some jobs might not be met at all. And so when you put the jobs around the customer, they're usually for a specific segment, and the features are the things your team is building that are assumed to actually deliver on that job, to actually get that specific customer to that outcome.
And the last part of it is really this platform concept, where you might have different platform teams. You might have a team that's in charge of the platform as a whole, or you might have teams divided up by mobile and web, but there's ultimately nuance in that. And I think one of the things that's really interesting, as we dig deeper into this model, is how it aligns with other models out there. If you empower your team to be aligned around a customer and moving a metric, moving the needle for them, we have to figure out, as I mentioned, how we build that understanding so the team can actually make decisions with that context.
This reminds me of, or is very closely aligned with, Simon Sinek's "Start with Why", and that really outlines the customer and the job: making sure that we understand who we're serving and what journey they're on. Then we move out from there, understanding how we're going to solve it. What are the things, the features, the solutions that we're going to create that are ultimately going to deliver on that?
And ultimately, where does that happen? What sort of platform delivers it? Is that mobile? Is that web? Is that a wearable? Is that a service? All of these things build upon each other. And so our recommendation, as you try to empower your teams, is to orchestrate them around customers and jobs.
Going a little bit deeper into that, we get to see a specific customer with a job, and then you layer in: okay, how are they experiencing our platform? How are they experiencing the features? What are the metrics that we're trying to move for them? In this case, it was making sure that they're able to view that data visualization.
That first one is a key moment in this entire journey. So going back to our product team that's structured around a specific vision, an outcome, and a mission with common goals: we see that a lot of teams can be structured this way, and that's kind of best practice when you think about agile or cross-functional teams.
But the question we have to answer is how we can create a matrix organization, essentially, where we have leadership in those key pillars, we have career development, career paths, progression, all of those things. But then we have these key teams that are empowered with specific metrics, that are potentially empowered to own an experience for a customer, and then they can go solve, and they know what's going to move the needle for those specific users.
So as we think about the difference between feature teams and empowered teams and really start to pick those apart, what's the difference?
So feature teams get features to build, right? These are things that come down from leadership, from management, from, you know, the heads. The stakeholders ultimately prioritize and decide what gets put on the roadmap, and the feature teams are there to design and build.
So, hey, whatever you say. They're essentially following orders. And when you do that, you kind of turn off the half of your brain that's analytical, that's critical, that thinks about how can I use my creativity to ultimately move the needle for our customers. And they're not really involved in product discovery.
Also, with feature teams, product managers are project managers: they're focusing on schedules and deliveries, and they're not really accountable for the results. They're measured by feature output and roadmap velocity. Hey, what's our velocity? How many points can we do this sprint? They're not really asking themselves, are we making the right impact with the points we have?
Most teams would be much better off cutting their velocity by a quarter, slowing down, and making sure that they're able to actually produce impact for the people and customers they serve. What that means is your feature teams are waiting for direction, so they don't have any autonomy to go and make decisions on their own.
They're ultimately categorized as ticket takers. What we usually find in talking with folks from these teams is that they're less fulfilled. You have less control over the impact and value that you can provide. You can't really show up as your full self here; you're taking the next ticket in the queue and moving it forward.
Now for many feature teams, their roadmap looks like this, which is six months of features planned out. None of them really backed by behavior or customer intent. None of them really checked back through with customers. A lot of them are just ideas. Hey, we think this will move the needle.
We think this will make an impact. We think this will move things forward. The reality is, if you've been a part of one of these teams, that's rarely the case. It doesn't mean it can't happen; there are no absolutes here. But when we think about increasing the likelihood of success in what we do, it's about making sure that we can empower teams to own something.
And so, as we look at the differences between a feature team and an empowered team: empowered teams get problems to solve, right? Like the metric we're talking about. The stakeholders partner with them; the stakeholders give their key insights about the industry or about the customer, things the team can't quite get access to.
But the teams are there to discover the best way they see fit to solve the problem. And that's not them just saying, hey, we think this is good. That's them going through this journey, ultimately checking things with customers and getting it in front of them. They're also involved in product discovery, meaning they start to understand how customers think.
As we think about that last diagram, they're really in tune with the needs of the customer, the goals that those customers have, and they can start to really use their brain to think through how can we serve them better, not only the metric, but overall. Also on empowered teams, product managers act as CEO of the product.
They focus on value, they focus on viability, and they're accountable for the results. These sorts of teams are measured on their value and outcomes. The biggest thing is that they're on a mission. They know what the mission is. They know how they can impact it. They know what's valuable to the business, they know what's valuable to the customer, and they're able to act on that.
So with that understanding, these teams are data-driven and they're always in motion, right? They're never waiting, because they have a mission that they're on and they know how to use their skills in concert with each other to make an impact.
They can take ownership and we see more meaning and fulfillment there. And I think that's important because so many teams can be structured in a cross-functional way, but still have this feature team aspect to them essentially, where they're waiting for tickets, they're waiting for someone to really tell them the next step, when really they could show up and take ownership over it if they had the right environment and if they had the right capabilities.
So part of that is taking a look at this empowered roadmap, which is kind of an adaptation of that feature roadmap, where you see a lot of the things are blurred out. Things start to fade. Hey, we might know what we're doing in March and April. But some of those things are discovery, right? We're doing discovery, we're doing prototyping.
This is not really the full feature roadmap of what we're getting into delivering into production. But this is what things we're going to test with the customer. And so we know two to maybe four sprints out what things we'd like to test on the roadmap, the backlog of things that we're going to produce and really figure out how we can hone what the future looks like with our customers.
And really co-create. So what does this look like in practice? We start to look at this model, which blends in discovery and delivery. You might've seen it in different forms before, right? Double diamonds, where you go through it and ultimately get to a solution, and then you have to figure out how to iterate.
But a lot of teams, a lot of startups that we know, do that early validation and then cut off discovery, meaning they're focused on delivery, right? They did all this research upfront, all these customer interviews, all of this first-person, primary research, and they figured out where they could solve a problem.
And as they started getting traction, they went, okay, back to what we mentioned earlier in this series: we know what our customer wants, and really we need to get into the details. But if you just do delivery, you're going to miss out on the entire experience, because the reality is you're serving your customers through a product, not through a person, right?
If you had an objection or a question face to face, you could ask me and I could explain why, but your product needs to do all of that explaining. So as we look at this model that we're going to go through next, it's all about putting the metric at the center of this journey, right? Increasing that effectiveness to 80%.
So that means we're doing discovery and delivery around this. We're exploring potential options based on what the data is telling us. We're going into prototypes and saying, how might we solve this? What are the different solutions and pathways we have? Then, once we prove those out with customers, we move into a pilot where we do a small test.
Then, ultimately, we're scaling that up to all of our users. Using this model, you can start to look through the lens of metrics, with your customer really at the center of it, to make sure that you're making roadmap decisions that are going to have the impact you want.
Continuous delivery isn't just about shipping features. It's about learning and behavior change. Learn how to apply what you learned in the discovery phase by deploying pilots for testing with users before you implement it across your entire platform.
As we get into the delivery loop, let's remember what Billy said. Delivery isn't just about shipping features. It's about learning and customer behavior change. What we're trying to do is measure whether or not a solution we've found and deployed will actually increase the number of customers viewing that first data visualization.
Did they have that aha moment? To learn properly, we believe it requires in-product experimentation, which is so important but so often not done. So let's dig into the delivery loop and discuss how to ship and test features with a subset of your users in a sustainable way. At a high level, the delivery loop starts with a pilot, which is a limited release of new features to a certain number of customers.
How do you determine who gets in the pilot? Certain personas, like team accounts with more than 50 users who would benefit from seeing the new data visualization, might be a good example. Or users who have applied for and opted into receiving beta features. Otherwise, internal team members who also use your product in their day-to-day work can test out the edges of the system as well.
But a pilot shouldn't just be the next set of features being used by the internal team or some random testers. To make a pilot useful, it should be at least three things: a well-defined experiment, released to a limited pool of users, collecting data and analytics for analysis and action.
So let's talk about experiment design. It's really important that we define the success or failure criteria upfront. Is it 10%? Is it 20%? Use whatever threshold for success is appropriate for your scenario to ensure everyone's on the same page about what it'll take to keep the feature. You also need to be willing to pull the feature out if it doesn't succeed. We don't want unused features bloating the code.
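One way to keep everyone on the same page is to write the keep-or-remove rule down as code before the pilot starts. This is only a sketch with made-up numbers, and a real analysis should also check statistical significance, but it shows the idea of an upfront, mechanical success criterion:

```python
def evaluate_experiment(baseline_rate, variant_rate, min_lift):
    """Decide keep vs. remove against a threshold agreed on before
    the pilot ran. min_lift is the absolute improvement required,
    e.g. 0.10 for a ten-percentage-point lift."""
    lift = variant_rate - baseline_rate
    decision = "keep" if lift >= min_lift else "remove"
    return decision, round(lift, 3)

# Example: baseline 20% reach the magic moment; the pilot cohort hit 35%.
decision, lift = evaluate_experiment(0.20, 0.35, min_lift=0.10)
```

Because the rule is agreed on in advance, "pull the feature out" stops being a debate and becomes a predetermined outcome of the experiment.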
We'll do this through feature flags, allowing in-progress features to live side by side with the full production release, A/B testing these new features in that limited release alongside the current public features. After the experiment is designed, we'll move into a limited release. To whom? Well, the folks we already talked about getting into that pilot. Again, this is done at runtime alongside the stable code base.
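At runtime, a feature flag for a limited release can be as simple as a deterministic hash bucket, so each user gets a stable yes/no answer per flag. This is a hand-rolled sketch with hypothetical flag and function names; in practice you'd likely reach for a feature-flag service or library rather than writing your own:

```python
import hashlib

def in_pilot(user_id, flag_name, rollout_percent):
    """Stable assignment: hashing (flag, user) gives each user the same
    bucket every session, so their experience never flips back and forth."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) % 100 < rollout_percent

def render_dashboard(user_id):
    # Conditional logic protecting the new code path. Setting the
    # rollout to 0 acts as a kill switch for the whole experiment.
    if in_pilot(user_id, "new-data-viz", rollout_percent=2):
        return "new visualization"
    return "current visualization"
```

Salting the hash with the flag name means different experiments get independent 2% cohorts instead of always hitting the same unlucky users.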
So whenever possible, ship all the experiments to production using feature flags, like I mentioned before. There are some limitations, or instances where it's impossible, like software as a medical device or infrastructure code for planes or trains, et cetera. But even the electric car manufacturer Tesla is doing beta testing of their Full Self-Driving, or FSD, software through limited releases over the air to vehicles at customers' homes. Whether or not this is a good idea, or leads to better software or safer roads, remains to be seen.
But if Tesla can do it with some forethought, it's highly likely that your product can too. After our limited release, we need to know how the experiment performed. This is where the data and analytics come into play, and we can analyze our experiment results.
If the success criteria were met, great, it's time to scale up. If not, then remove your bad feature; pull it right out of production. This prevents you from taking on all the code maintenance and support burden, and it simplifies your code base. So that brings us back to our center metric. Of the two experiments we ran, only one was successful. We'll eliminate the failed experiment, but for the successful one, it's time to double down and scale up. Scaling up is exciting, because not only are we hitting our metric, we're actually helping people. As we scale up, we want to move into the full feature build.
This is where we'll add missing pieces of functionality. We want to flesh out these features based on actual feedback from our pilot customers. This leads to release and promotion. Now it's time for marketing to tell the world about this through an official announcement. Maybe you have folks who asked to be alerted when the latest features are released; it's time to hit their inboxes. And then we need to support at scale.
Some customers will struggle with new features. Along with communicating the changes through marketing promotions, create guides, FAQs, and training videos to help people self-serve as they explore the new features and functionality. Being there in the first 60 days to ensure customers are happy with the result is critical. Just because the feature worked with 2% or 10% of your user base doesn't mean you won't have issues at scale. To stay on top of that, use in-product help or chat platforms on the web and in-app tools to collect user feedback and engage with your customers to ensure they feel supported and heard.
But for all of this to be possible, there are some technical foundations for continuous delivery that need to be in place. We truly believe that if we don't do all of this well, none of the other things we've talked about are possible. We'll talk about four main areas: version control, build pipeline, deployment, and production. Version control encompasses all things committing code.
Think about your Git repository. We need to establish a branching strategy, and I recommend having main be the integration branch and having a stable branch for every other environment. So you might have a production branch and a staging branch, a release 0.47 branch, for example, or a QA branch.
Maybe you need a sandbox branch because your product has an API that developers need to hit. We also want to make sure that after we have our branching strategy established, we're making atomic commits and feature focused branches that tell a story about why something changed, not just how.
If your commits are moving lots of things around in the code base in addition to adding the new feature, consider a technique called pre-factoring that will put the code base in an ideal state to accept that new feature. We also need effective pull requests and reviews. At a high level, these show the differences to the code that a developer is proposing to be added, to support a given feature or a bug fix.
If it's an involved feature or the code is confusing, we tend to favor pairing through a PR review rather than wasting a ton of time in asynchronous comments. You can actually set up your pull requests to run a lightweight version of the build pipeline, and optionally a deployment, which we'll talk about in a minute.
So, the build pipeline: this is things like your automated test suites. In your automated test suites you need unit tests. These are designed to execute quickly and fail fast; if something about the business logic of the system is fundamentally flawed, we won't continue. Then we'll move on to integration tests, which check whether the system as a whole complies with the functional requirements. That includes things like hitting an external API, making a database call, and generally going through the system as an end user would, checking to see if all the parts of the system are working together properly.
These are designed for fast feedback. We need the build pipeline to be effective in our development phase, with our tests set up to execute quickly. And then during deployment, if there's an error, we want the ability to roll back that deployment. It's important to automate the build, because then we have a repeatable process, which helps eliminate human error and optimize build times.
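The fail-fast ordering described here can be sketched as a small pipeline runner. The commands are illustrative (a pytest test layout is assumed); the point is that the cheap unit stage gates the slower integration stage:

```python
import subprocess

# Illustrative stages: fast unit tests first, slower integration tests after.
STAGES = [
    ("unit", ["pytest", "tests/unit", "-x"]),          # -x: stop on first failure
    ("integration", ["pytest", "tests/integration"]),  # only runs if units pass
]

def run_pipeline(runner=subprocess.run):
    """Run stages in order, failing fast so a broken unit test
    never pays the cost of the integration suite."""
    for name, command in STAGES:
        if runner(command).returncode != 0:
            return f"failed at {name}"
        # fall through to the next, slower stage only on success
    return "passed"
```

Injecting `runner` also makes the pipeline logic itself testable without spawning real processes, which is the same repeatability argument applied to the pipeline's own code.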
After our build pipeline has succeeded we're onto the deployment phase. We want to make sure we're doing automated deployment. This is often referred to as continuous delivery. It doesn't have to be in every environment, but it can help optimize testing when the latest is always available in a staging or a QA environment for example.
You could also do it at the pull request level, per feature branch or bug fix branch, as I mentioned before. So where does this get deployed? To ephemeral environments, or containers. What are these? Well, they're environments that are short-lived; they're easy to set up and tear down, and they're scripted through code and pull requests.
So any changes to those environments can be reviewed by your team, and they allow for easier horizontal scaling. There are several technical considerations when configuring your deployment pipeline, but one of the larger ones is zero downtime. We want to make sure we have properly configured database migrations to support zero-downtime deployment, because there are multiple connections to the database reading and writing at once.
The code needs to be deployed in a way that allows the new features to start working with the new database columns while the old code is still running against the same database for a while. Then, to clean that up, we create a second migration to remove the old columns from the database once no more writes are happening to the applicable database tables.
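That two-phase approach is often called the expand/contract pattern. Here it is sketched in code; the table and column names are hypothetical, and the SQL is illustrative rather than tied to any particular database:

```python
# Phase 1, "expand": shipped before the new code. The column is nullable,
# so old code that never writes it keeps working unchanged.
EXPAND = [
    "ALTER TABLE visualizations ADD COLUMN layout_json TEXT NULL;",
]

# Phase 2, "contract": shipped only once no code path writes the old column.
CONTRACT = [
    "ALTER TABLE visualizations DROP COLUMN legacy_layout;",
]

def migration_plan(old_code_still_running):
    """Old and new code share one database, so the destructive
    contract step has to wait until the old code is fully retired."""
    if old_code_still_running:
        return EXPAND
    return EXPAND + CONTRACT
```

The key design choice is that every migration in the plan is backward compatible with whichever code versions are live at the moment it runs.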
In reality, these might live for quite some time. If you have a feature flag for a new feature turned on for 2% of your users, your database might be in a state of flux, where the production code, the public code, is hitting some columns of one of your tables, but the feature-flagged 2% is hitting others.
And so there could be an additional migration at the end of that process to clean up that data once everybody gets the new feature. So now that we've deployed and we're in production, how do we limit the new code path? Again, through feature flags. What are those? It's a fancy way of saying that we protect some new aspect of the code from executing, with conditional logic, unless you're a user who's part of that limited release group.
So again, those could be folks that either are assigned as beta testers or have opted into beta features. These feature flags can also act as a safety valve, allowing us to fully turn off a code path if it's not working well, or if no one's using it, or if it has bugs itself or is causing bugs in the current public release.
Well, how would we know if there are any issues in production? By monitoring, alerting on, and analyzing our production code and environment, so that it will alert us when errors are caught, surface slow queries for analysis and debugging, and provide server-load monitoring with alerts on memory usage or CPU overload as well.
There are lots of great tools that will show you bottlenecks in your system to help optimize your code and infrastructure. Speaking of infrastructure, we want auto-scaling infrastructure with a containerized, ephemeral set of environments. We can choose or build a deployment platform that will automatically increase the number of nodes in our environments' horizontal scaling pool on demand.
We want it to respond to things like requests per minute, or RPM, as well as memory and CPU utilization. A lot of teams only do one of the four loops. A lot of teams only do full-scale delivery. But having a solid foundation for continuous delivery is what makes in-product experimentation possible.
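As a toy illustration of scaling on RPM and CPU, here's the kind of rule an autoscaler applies. All of the thresholds and defaults are made up; a real platform tunes and re-evaluates these continuously:

```python
def desired_nodes(rpm, cpu_pct, rpm_per_node=1000, min_nodes=2, max_nodes=20):
    """Size the horizontal scaling pool for request volume, with extra
    headroom when CPU runs hot, clamped to a sane min/max."""
    needed = -(-rpm // rpm_per_node)  # ceiling division
    if cpu_pct > 80:
        needed += 1                   # headroom under CPU pressure
    return max(min_nodes, min(max_nodes, needed))
```

The clamp matters as much as the formula: a minimum keeps you resilient to a single node failing, and a maximum caps the bill if a traffic spike (or a bug) would otherwise scale you without bound.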
So now that we've shipped the feature, we're going to take a look at how it's impacting our metric. Sure enough, it's actually working. We proved in the pilot that it had impact, we scaled it up, and it's having an amazing impact for the majority of our users. This is great, because customers are seeing success.
A lot more customers are getting past this point, which moves them further into the journey where new areas for improvement can be identified. That's okay because our team's up for the challenge. We're continuing to deliver a lot of value for our customers. And once we identify the next challenge, we can start the process over and figure out how to serve them best next.