Recently we spoke with Pancrazio Auteri, a renowned Product Management expert, about how product teams can 10x - 20x their experimentation budget to build the right things quickly and affordably. This is an excerpt from our podcast, GTM: Got Ten Minutes, the podcast for product teams. You can listen here: https://samelogic.com/podcast/pancrazio-auteri-board-member-at-contentwise/
Teams need to:
Quickly test the viability of new ideas before committing resources to their development.
Gauge customer interest in a product, feature, or service before writing a single line of code, by running in-product tests.
Use tools like Samelogic.com to focus more on outcomes and less on output.
Use the knowledge from Fake Door Tests to more confidently say no to requests that fall outside their scope and yes to the ones that fall within it!
A passionate team builder, he likes achieving speed by empowering teams to take calculated risks through clarity, alignment, and trust. He's a product management executive and serial entrepreneur, helping startup founders become investable through a compact and challenging focalization program. He's hired and coached 100+ engineers, designers, and product managers, often serving as a transition manager as organizations grow from small teams to businesses at scale.
Pan moved from Italy to California, and he now lives in Berkeley with his family, sometimes hacking local ingredients to cook their favorite Sicilian food.
How did you end up in Product?
Sure, in a nutshell. I was born and raised in Italy, then attended university in Milan, in the north of Italy. In the nineties, the Internet was coming out of academia, so while studying telecommunications and engineering, I started from scratch by learning the protocols: the Internet protocol, TCP/IP, and so on.
So I became a tech guy, very geeky, and I founded my first startup in 1996 with some friends from the university. We were focusing on bringing Internet technologies into the enterprise in Italy. Then, fast forward, ten years ago my company was acquired by a company in San Jose, California. I moved to the Bay Area, and they told me: look, you sound like a polyglot, so we think you can be more successful as a product manager instead of a tech person. So they offered me the job, and I was fortunate because they taught me many things about product management.
They allowed me to reorganize design, engineering, and the business side of the product: the feature selection, the market research, and so on. So I learned product management in a medium-sized company, and from there I learned a lot from my teams.
And then that's how I became a product management professional, essentially.
In terms of lessons, looking at my most significant mistakes: first of all, hire people for attitude, so that you have people who can really participate in the culture, instead of hiring people just because they bring in a skill. The other one is really to trust people, but trust alone is not enough. You have to empower them to make all those micro-decisions that need to be made every day.
And then, when you give them the ability to make decisions, you are responsible for providing clarity and alignment so that their choices all go in the same direction. I made these kinds of mistakes many times, imposing my solution or micromanaging people every day by scrutinizing every little bit of progress, so the lesson is to trust people.
If you hire for attitude, it's easier to trust people. Be very clear, constantly repeat the alignment, the vision, the real goal; make goals actionable; and then let them make all those hundreds of decisions they have to make every day without keeping you in the loop.
And then you achieve speed and bring in a lot of exciting ideas, whereas if you are the only idea generator, things will not get very creative or effective. So trust people, give them the context, and back them to take risks.
Yeah, one thing we learned on the job is that there is something that motivates the creation of a product or a service, independent of the product itself or of the current technology used to create it. And that's what we call the job to be done.
So, for example, a theater general manager needs to manage the theater to serve the community and elevate its spirit. Each feature or tool, like a ticketing system, a donation management system, or a CRM, is a device in service of that job. At the same time, the team in the theater is trying to create the best season, manage the best ticketing experience, select the right prices, create the best newsletter, print the best invitations, or print the best campaign bills.
So it's essential to understand what the job to be done is, and then figure out, with your skills and your competence, which part of that job you can support with the best tools to make it cheaper, better, faster, and so on.
If you reach this awareness, you can start looking at what you can do now, with intention, and identify the overarching themes where you want to create value. So if you understand the job to be done, then you can understand the themes and the pillars of that job. Then you can decide deliberately that you want to start creating value in a specific area, and you create a tool, a product, or a service. And because you know that invariant job to be done, once you are done with that specific value creation, you have a clear sense of where to go next: what adjacent area to explore, what is essential to refine, and so on.
You understand the feedback you get from clients because you know the context. And so the whole team must understand the client's job to be done. On one of my teams, there was a joke that everyone from the customer success team and the product team could go work for our clients, because they understood the clients' work so well that our clients could have hired them to do it instead of being product managers or customer success managers.
And some of them did come from that world; in that case it was theaters and ticketing, the box office discipline. So the whole team must understand the user's context; otherwise, all the micro-decisions will not be aligned with what matters.
We'll assume that the product team knows what the overarching job to be done is. This job, which can be very macroscopic, is divided into job steps and then micro-steps. Each step is analyzed to understand the outcomes that the user or the client wants to achieve from that stage, and the metrics that determine whether that work has been completed successfully. And at that point, when this is clear, the questions are about how to implement things, or about the details.
For example, in the theater ticketing discipline, there is a strong connection with money management, account management, and accounting, because, as you can imagine, a lot of money comes in when tickets are purchased, and so on. We didn't realize at the time that the accountants, the financial people, were working outside the software that was managing all the ticketing. So there was a lot of back and forth with exported data: CSV files, Excel files, and a lot of processing in Excel. When we discovered that process by looking at their everyday life, we found that many things were already occurring in our software but were not available to customers. So one product manager made a list of things that could be useful to those people. But we had no idea how many of our clients, more than 3,000 theaters at the time, were using a specific tool for accounting.
Think QuickBooks, Xero, or others. We didn't have time to do surveys and other things, and when we did surveys, many clients, 90% of the people, didn't respond or responded very slowly. So what she did was take a developer and create some options in the menus and additional buttons that were essentially doing two things. The first was leveraging our product analytics infrastructure to record when those elements were viewed in the viewport on the screen.
That means we knew when the stimulus appeared in the UI, and when it was clicked. So that was the first thing: collect usage data. But because those buttons were not functional, the second thing was simple pop-ups asking what you were trying to do.
Because in jobs to be done, the important thing is not describing what the user does but understanding what the user is trying to get done; that is the crucial distinction. And so they included essentially two or three options. For example, exporting data to accounting software showed a menu offering QuickBooks, Xero, or a custom CSV file. And so we found segments of people who were choosing custom CSV files.
Fake doors allowed us not only to measure the interest in terms of clicks but also to gather the information that was collected in context. And as you can imagine, questions asked in context are more likely to be answered versus questions sent out of context, like an email survey. So the user was really in that mood, so it was easier to answer a quick question about that topic.
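The mechanics described above (a visible but non-functional control, impression and click tracking, and an in-context pop-up survey) can be sketched roughly as follows. This is a minimal illustration: the class, event names, and survey options are invented for the example, not Samelogic's or the team's actual implementation.

```typescript
// Minimal fake-door tracker: records impressions (stimulus shown in the
// viewport), clicks (adoption intent), and in-context survey answers.
type FakeDoorEvent =
  | { kind: "impression" }
  | { kind: "click" }
  | { kind: "survey_answer"; option: string };

class FakeDoor {
  private events: FakeDoorEvent[] = [];

  constructor(
    public readonly featureId: string,
    public readonly surveyOptions: string[], // choices shown in the pop-up
  ) {}

  // Call when the button or menu item enters the viewport
  // (in a browser, wire this to an IntersectionObserver).
  recordImpression(): void {
    this.events.push({ kind: "impression" });
  }

  // Call on click; returns the survey options to render in the pop-up.
  recordClick(): string[] {
    this.events.push({ kind: "click" });
    return this.surveyOptions;
  }

  recordSurveyAnswer(option: string): void {
    if (!this.surveyOptions.includes(option)) {
      throw new Error(`Unknown option: ${option}`);
    }
    this.events.push({ kind: "survey_answer", option });
  }

  // Clicks per impression = adoption intent for the not-yet-built feature.
  clickThroughRate(): number {
    const impressions = this.events.filter(e => e.kind === "impression").length;
    const clicks = this.events.filter(e => e.kind === "click").length;
    return impressions === 0 ? 0 : clicks / impressions;
  }

  // Tally of in-context survey answers, e.g. which export target was chosen.
  answerCounts(): Map<string, number> {
    const counts = new Map<string, number>();
    for (const e of this.events) {
      if (e.kind === "survey_answer") {
        counts.set(e.option, (counts.get(e.option) ?? 0) + 1);
      }
    }
    return counts;
  }
}

// Example: the accounting-export fake door from the interview.
const door = new FakeDoor("export-to-accounting", [
  "QuickBooks",
  "Xero",
  "Custom CSV",
]);
door.recordImpression();
door.recordImpression();
door.recordClick();
door.recordSurveyAnswer("Custom CSV");
console.log(door.clickThroughRate()); // 0.5
console.log(door.answerCounts().get("Custom CSV")); // 1
```

The two metrics mirror the two things the buttons did: `clickThroughRate` measures intent, and `answerCounts` captures the in-context survey data that an email survey would likely miss.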
Yeah, the short answer is that we multiplied the effectiveness of the team's time budget by ten. This means that if you have a couple of developers and designers designing and implementing new features, then by using fake doors you can test, let's say, ten times more things than you would by implementing the functionality. And that's not all. The real effect comes when you realize that most of the features you ship without proper validation are ineffective and not valuable, but you have already spent the team's time budget.
So you can set up an experiment in a few minutes with tools like Samelogic.com; you don't even need developers anymore, because the designer or the product manager can test those things without developer intervention.
This means you kill bad ideas earlier and focus the team's time on what will generate outcomes. And so that's the first effect.
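The multiplier can be illustrated with back-of-the-envelope arithmetic. All numbers below are hypothetical, chosen only to show how the 10x claim works out:

```typescript
// Hypothetical illustration of the "10x the experimentation budget" claim.
// None of these numbers come from the interview; they are invented examples.
const teamBudgetDays = 60;   // e.g. one quarter of dev/design time
const fullBuildDays = 10;    // cost to fully implement one feature
const fakeDoorDays = 1;      // cost to test the same idea as a fake door

const ideasTestedByBuilding = Math.floor(teamBudgetDays / fullBuildDays);
const ideasTestedByFakeDoors = Math.floor(teamBudgetDays / fakeDoorDays);

console.log(ideasTestedByBuilding);  // 6 ideas validated per quarter
console.log(ideasTestedByFakeDoors); // 60 ideas validated per quarter (10x)
```

The same budget validates ten times as many ideas, so the few that show real intent are the only ones that earn full implementation time.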
That's an excellent question. So another big lesson we learned is never to take user feedback at face value. OK?
That's very important because feedback may be biased by the context, the way we pose the question, and many other things. So yes, you have to read between the lines, but of course, you have to rely on standard frameworks like adoption metrics: how many people attempted to use the functionality. And then you have to look at usage levels.
Was the usage sustained or not?
Of course, with a fake door, you can only measure adoption intent, because once users realize the functionality is not actually available, they're not going to try again. So we also had to develop small techniques, like adding a small badge to new buttons, pop-ups, or whatever the designer created in the design system.
There are many options, but in general, you have to re-attract attention to the button or the menu item, and so on. Or some people explicitly put a message on the page that says: new feature, try this. So there are many techniques.
So if you use fake doors, you must draw attention to the feature when you release the real thing. Otherwise, it can be problematic, because people may think it's still fake and never go back to the button. So use visual cues to ensure that when you deliver the feature, the users are aware of it. But in general, I would say fake doors are incredibly valuable for adoption metrics and surveys. You have to experiment with the language, but they are more effective than email surveys.
Our data proved this, because a question asked in context, in that specific intent mood, is far more likely to be answered than an email survey.
OK, so being outcome-driven for the user and being outcome-driven for the business are very analogous. It means that when you have clarity on the metrics that are important for the company, you measure each feature, each product change you make, in terms of how much it contributes to adoption, retention, conversion, and so on. Each product manager has a hierarchy of metrics, but in general we can consider adoption, retention, profit, and conversion the critical families of metrics. Being outcome-driven in this case means that we made a product change and, two or four weeks later, saw it reflected in the conversion metric of a funnel, or saw an increased frequency of usage of some feature.
For example, when you change the UI by putting features in a more prominent place, or by using notifications or other things, you must see that reflected in metrics that matter to the business. This also helps product managers communicate with executives, because one tool we always use is a metrics chain. Think of a scorecard owned by the COO or the CEO containing the KPIs that are important for the company: very high-level things like revenues, profit, ARR, and so on. Those are not actionable; no product intervention can directly move the high-level metrics. OK? Those metrics are driven by behaviors.
Each behavior is measured by a specific metric, product changes can influence behaviors, and the behaviors are connected to the high-level metrics. So for a product manager, having visualized this chain of metrics, from the company-level metrics down to the behavior metrics where user interactions are measured, you can quantify the outcomes of product changes. In practical terms, this has a significant impact, for example, in companies organized around agile development, which is organized in sprints.
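The metrics chain can be sketched as a simple lookup from behavior metrics up to the company scorecard. The metric names below are invented examples; the point is only that a proposed change should be traceable, link by link, up to a top-level KPI.

```typescript
// Sketch of a metrics chain: company-level KPIs at the top (no parent),
// behavior metrics at the bottom. All metric names are invented examples.
interface Metric {
  name: string;
  drives?: string; // name of the parent metric this one influences
}

const chain: Metric[] = [
  { name: "ARR" }, // scorecard level: not directly actionable by product work
  { name: "conversion", drives: "ARR" },
  { name: "retention", drives: "ARR" },
  { name: "checkout_completion_rate", drives: "conversion" }, // behavior metric
  { name: "weekly_report_exports", drives: "retention" },     // behavior metric
];

// Can a proposed change, which targets one behavior metric, be connected
// up the chain to a company-level metric on the scorecard?
function connectsToScorecard(behaviorMetric: string, metrics: Metric[]): boolean {
  const byName = new Map(metrics.map(m => [m.name, m]));
  let current = byName.get(behaviorMetric);
  while (current) {
    if (!current.drives) return true; // reached a top-level KPI
    current = byName.get(current.drives);
  }
  return false; // metric unknown, or the chain is broken somewhere
}

console.log(connectsToScorecard("checkout_completion_rate", chain)); // true
console.log(connectsToScorecard("button_color_preference", chain));  // false
```

A request whose target behavior returns `false` here is exactly the second category of "no" discussed later: it cannot be connected to anything on the scorecard.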
And so, at the end of each sprint: when you are building a feature, you will likely build it across multiple sprints, but almost every sprint delivers software to the users. What the product manager has to decide is whether to invest in another iteration of this feature or switch the team to another part of the product. That's the dilemma. How do you decide? If your first iterations are already moving the metric, maybe it is a good idea to double down and invest another sprint in that feature, instead of experimenting with too many things.
When you adopt this approach, you maximize the iterations that bet on things that are working, because you want to increase the metrics, and you risk killing the experiments, because experiments don't produce any immediate advantage. But if experiments cost a small amount of effort, you can keep some capacity continuously working on them. When an experiment works, it moves to another track, where you start investing in that feature with multiple iterations.
This way, you have a dual track: product discovery, which includes market research and product experiments, and the main implementation track, which bets on the experiments that are moving the needle. And you keep out all the feature requests that come from different sources but are not validated and not proven critical.
And it also comes down to the team's ability to say no constantly, which is very important: knowing precisely what to say no to and what to approve. How do you decide whether to say yes or no?
Yeah, that's one of the golden questions.
Based on what we already said, there are a few kinds of no that can be given. First of all, every no must be polite and professional, OK? This means providing context and feedback, because the source of the request is a permanent entity. I mean, it's a colleague: it can be a colleague from sales, customer success, customer support, or engineering; from marketing, from anywhere; it can even be the CEO.
So, being polite and professional, the first kind of no can be given against the roadmap themes, the pillars. This year, we are trying to create value around three or four pillars, OK? So a request that falls outside of those is very likely not going in the right direction. It's a distraction.
A second category of no is anything that cannot be connected to the chain of metrics we discussed. This improvement you're suggesting: which behavior, which user behavior, will it act on, OK?
If you can connect that intervention with a behavior that is important for the chain of metrics up to the company scorecard, then that's a good candidate for a yes, OK? Otherwise, if you cannot connect it to that web of metrics, it's a good candidate for a no, with context: you show the behavior map, you show the metrics, you establish the connection of the operational metrics to the company metrics. And you say, even if we did it with a magic wand, there would be no effect on what matters.
That's another good candidate for a no. Another possible no is when something that is not trivial to implement cannot be formulated as an experiment. Something trivial, like a slight change in the UI, can be done without much fuss. But if something is not trivial and requires research, maybe a designer has to create three options, or something complex, and you cannot formulate a set of experiments (think of it as a micro-roadmap for that request), then it's likely to become a big distraction for the team. So it's a candidate for a no. And if you provide the proper context with your negative, maybe the source will do some homework and reformulate the request with more information so that a validation experiment can be created.