ASOS Tech Podcast

Episode 3.8 – Apps experimentation @ ASOS Tech

Si Jobling talks to Callum Trounce and Elizabeth Nuttall about experimentation at ASOS Tech.

Jan 26, 2024

You may have shopped on ASOS, now meet the people behind the tech.

In this episode of the ASOS Tech Podcast, Si Jobling talks to Callum Trounce and Elizabeth Nuttall about experimentation.

Featuring...

  • Si Jobling (he/him) - Engineering Manager
  • Callum Trounce (he/him) - Senior II iOS Engineer
  • Elizabeth Nuttall (she/her) - Associate Product Manager

Show Notes

Credits

  • Producer: Adrian Lansdown
  • Editor: Adrian Lansdown
  • Reviewers: Lucy Wilson and Catherine Conyard

Check out our open roles in ASOS Tech on https://link.asos.com/tech-pod-jobs and more content about the work we do on our Tech Blog http://asos.tech

Transcript
Speaker A:

Welcome to the ASOS Tech Podcast, where we continue to share what it's like.

Speaker B:

To work inside a global online fashion company.

Speaker A:

You may have bought some clothes from.

Speaker B:

Us, but have you ever wondered what happens behind the screens?

Speaker A:

Hi, I'm Si Jobling, Engineering Manager at ASOS, and with me today, I have two wonderful guests from the world of experimentation. We'll get into that later, but before we do all that, Callum, could you introduce yourself, please?

Speaker B:

Hi, Si. My name is Callum Trounce. My pronouns are he/him, and I'm a senior iOS engineer at ASOS. I've been at ASOS for just under five years.

Speaker A:

Amazing. And, Elizabeth, how about you?

Speaker C:

My pronouns are she/her, and I've been at ASOS for just over a year, and I am an associate product manager.

Speaker A:

Wow. Both wonderful journeys. Both wonderful roles. Very valuable, very important to what we do at ASOS. And we brought you together today to talk a bit about how we do experimentation at ASOS. I can't wait to hear your stories. We've had a bit of preamble, and it's fascinating stuff. But before we do all that, shall we do a little bit of an icebreaker? I know this gets us warmed up for the conversation. Could you tell me, what's the favorite meal you've had abroad in your time?

Speaker B:

I can go first. It must have been about four or five years ago, when I went to Rome. It was the first time I'd ever been to Italy, so I wanted the full experience, and I thought, okay, let me go to a local, very authentic restaurant. It was probably the best meal I've ever had abroad. I didn't skimp; it was the full four courses. It was the first time I'd had four courses, and I didn't really expect that, but I managed to get through it, a two hour sitting or whatever it was.

Speaker A:

The Italian courses blow my mind. You start with lasagna before you go into the main. I was like, wow, what's that all about?

Speaker B:

Yeah, first there was the pasta, which on its own was large enough for a dinner, and then there was a separate meat course, and I was like, I'm full already. And somehow I managed to struggle through dessert, but I think in the end it was worth it, and my tummy was happy. Very good experience.

Speaker C:

Definitely worth it.

Speaker A:

Yeah, sounds great. Liz, what about you? What's your favorite meal?

Speaker C:

I think I've had a lot of good meals over my life, but my most recent one: I did a West Coast trip of America earlier this year and we stopped in Vegas for a couple of days, and we went to a restaurant called TAO, which I see a lot on the American social media circuit. I got sticky orange chicken and duck fried rice, and it was honestly the best thing I've ever had. I think about it daily. It was just amazing.

Speaker A:

Sounds delicious.

Speaker C:

It was delicious. And I don't think we can get it the same in England. So I'll have to go back.

Speaker A:

Yeah. When we try to translate it to the UK palate, it never works for me.

Speaker C:

Not the same.

Speaker A:

Mine is a bit more like Callum's, actually. I love Greek food, and we normally go back to the same hotel every year, give or take, and they just spoil us with homemade Greek food. It's not like one dish. If I had to pick one dish, it'd probably be the moussaka, but it's the combination they bring out: try this, have this one without, what do you think of this?

Speaker C:

Yeah. You always have to get the recommendation.

Speaker A:

Yes. And always just say, sure, bring it, it sounds great. So, yeah, unlike Callum, it's always a food coma afterwards. I'm never ready for the rest of the night.

Speaker B:

That's how it goes.

Speaker A:

I know, I love it. And European food, you've got my vote there, man. Let's talk shop. Experimentation: what is it? What does it even mean? Because I think a lot of people hear this phrase and they kind of go, that sounds a bit radical or left field. But in your words, Liz, what would you say experimentation comes down to?

Speaker C:

Well, I feel like experimentation in the product world is about putting a feature out to customers and measuring the success of it: the learnings we get, how customers interact with it. It's almost like a science experiment, but in the technical world, seeing how different things perform against each other, and we can get some really great customer learnings from that.

Speaker A:

It's very well put, actually. I like the fact it's got that scientific angle. It's not just do it and see what happens. No, we want to try different things to see what works.

Speaker C:

Yeah, it's about getting the data. It helps you make data informed decisions based on putting something out there. You split the traffic, you run it as an experiment to x amount of people, and you get a lot of learnings from that: you can see what customers interact with, what works better, how that can go.

Speaker A:

And I think a lot of organizations suggest that they are data driven, that they always use the numbers, the metrics, to quantify what they do next. But without the results we get from experiments as one of those sources of data, it's so much harder to make those informed decisions sometimes.

Speaker C:

Yeah, exactly.

Speaker A:

And I think as we mature as an organization and with our tech, it's good to try new things and not quite know the answer sometimes. This is where I think the experimentation angle really comes into its own.

Speaker B:

Right?

Speaker C:

Yeah, I think there's definitely a difference there. It's helping you along the journey of constantly improving and bettering what we offer to customers. There's pure experimentation, but there's also using it as a way to measure success. So I think it's a very good way for us to put things out, see how they perform, see what customers like and what they get into, and just keep constantly improving.

Speaker A:

And as an associate product manager, how do you feel that experimentation fits into your role now?

Speaker C:

Yes. In the product team at the moment, our key goal is delivering, and you need the data behind that to know what customers are benefiting from, what they're enjoying, what works for them. You have hypotheses: oh, this might be a good feature, or maybe customers will like this, or maybe this will improve their experience. But without actually putting that out there and seeing the impact in an experiment, you don't know. We've had some wild results that were the opposite of what we thought. And there are a lot of people out there, and you want to gauge that. So in product, as product managers, we are constantly trying to improve that experience for people, and experimentation is a really key part of making the decisions on what we move forward with and what we roll out.

Speaker A:

Exactly. And being very app focused with the generation that we target, I think that's why we like to tap into that and experiment with different concepts. Right?

Speaker C:

Yeah, 100%. There's a whole range of customers out there, and they all have very different buying behavior. Purchasing behavior on web is very different to iOS and Android. We're trying to tackle all of that and make something that works for everyone, but there can also be differences between the platforms depending on the customers. You just want to bring every customer the best experience.

Speaker A:

Totally. And Callum, you've been on this journey for the best part of the last five years, I imagine. So have you seen it evolve over that time in the iOS and apps world?

Speaker B:

Yeah, it has been a wild four and a half, five years working under experimentation. Over that time we've seen a lot more maturity in the experimentation culture, especially within the apps team: understanding how it might impact how we architect certain solutions, what it means in terms of the scientific process, what experimentation actually means. We don't think about it day to day, but it helps us think more about how the company chooses to drive value to different customers. And we're more frequently thinking about, okay, what is the data saying? How can we do an A/B test to improve the customer experience? What is the impact of our work? So I've seen that culture grow a lot over that time. There's a lot more collaboration between engineers and product managers regarding the metrics and how to get the data that the product managers need. And I would say that when I joined, there were only a few individuals that really had expertise in the experimentation space. We call them experimentation champions; I'm an experimentation champion, and there was only a handful of us back when I joined ASOS. But now that role has been massively expanded and we've got contributions from different team members into our experimentation framework and things like that. So over the past four or five years it's improved massively. And I think it was one or two years ago we won the experimentation award for the improvement in our culture around experimentation. It's something that we definitely need to keep going with, and I'm really proud of how far it's come.

Speaker A:

Absolutely. And I'm glad you're shouting about those sorts of recognitions in the industry. This is not something that's new by any standards, but we're doing it properly, in a mature way. I remember when I was an engineer back in the day in the web world, and I was doing a little bit of experimentation there. It was very casual; let's say it wasn't the most structured organization, but we had that mentality. Our product managers would say, we want to experiment with this concept, how do we get it into the web experience, just to test the waters first? And it was, I want to say hacky, but it was very close to that. We tried to put some good structure around it from an engineering perspective, but the culture was planted back then, around 2015, 2016, and it feels like in the last five years it's really ramped up and matured.

Speaker B:

Yeah. It's now very much baked within our entire process. Like you said, we still try and see, okay, if we're going to experiment on something, maybe we can make an MVP, a minimum viable product. Maybe it won't be the fully fledged feature, but it can definitely be used as a way to indicate what the customer sentiment is. It might actually enable us to deliver something quicker than we might otherwise do. So we've definitely seen a massive change. You can now see that experimentation is fully integrated, from conception through to the engineering and delivery of the work.

Speaker A:

Yeah, it's great to see it as part of the full SDLC, the software development lifecycle, now. It's not just a kind of afterthought, as it could have been in the early days. It's definitely planted in that early stage of asking: would this work? Can we try it first rather than fully going out with it? But I gather that's not always been the case. Have you had experiences where it's gone full feature first, tweak it later?

Speaker B:

Yeah. So there have been a few features over the years. Ratings and reviews was a big one. We initially rolled it out as a fully fledged feature, but we wanted to do some, what you might call back testing or retroactive testing, to see how it actually impacts customer buying behavior, looking at other analytics information such as how it impacts returns, and combining those two data sources. And that's why you still see ratings and reviews today: because it has a positive impact.

Speaker A:

Yeah.

Speaker B:

So that's an example.

Speaker A:

Are there any other examples, where we've had to experiment first before we went to full feature mode, that you want to talk about?

Speaker B:

Yeah, so there was a really interesting one that we ran recently on the iOS app. The name we call it is image galleries on the product listing page. So if you open the iOS app today and you search for whatever you like, you'll land on the listing page, and you might be able to see that the thumbnail images on the different products on the product listing page can be swiped between. That is something that we initially ran an A/B test on. The control had none of that behavior, and the variation was that customers could actually swipe between those different thumbnail images. And we saw massive success with that. Customers really loved it. I imagine it's because customers can have a quick glimpse at the other images on the product, and they can quickly add to their saved items if they want to. I guess Liz has more details on what the feedback was from the customers on that?
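To make that concrete for readers, here is a minimal sketch of how client code might branch on a bucketed variant. All of the type names, experiment keys, and defaults are invented for illustration; this is not the ASOS framework.

```swift
import Foundation

// Hypothetical variants for the listing-page gallery experiment.
enum GalleryVariant: String {
    case control          // static thumbnail, no swiping
    case swipeableGallery // customer can swipe between thumbnail images
}

// Stand-in for a real experimentation client that reads the
// configuration fetched at app launch.
struct ExperimentClient {
    let assignments: [String: String]

    // Fall back to control if the customer isn't bucketed.
    func variant(for key: String) -> GalleryVariant {
        GalleryVariant(rawValue: assignments[key] ?? "") ?? .control
    }
}

let client = ExperimentClient(assignments: ["plp_image_galleries": "swipeableGallery"])

switch client.variant(for: "plp_image_galleries") {
case .control:
    print("Render a single static thumbnail")
case .swipeableGallery:
    print("Render the swipeable image gallery")
}
```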

Speaker C:

Yeah, I think image galleries is a really good example, because that was one where we were trying to determine if customers benefited from seeing more images earlier on in their journey. ASOS has loads of incredible products on there, but we do have a lot, so if you are constantly having to go into product detail pages and then come back out, you are disrupting your customer journey. So it was a really good one to see whether people did benefit from seeing more images as they were going, without having to dip in and out of pages. And customers really responded to it. The learnings we got from it are that they do benefit from seeing more photos earlier on in their journey, and then they can use them to go on and make their buying decisions. As a massive ASOS user myself, that was a feature that I personally wanted us to put in, because as you're going through and making those buying decisions and comparing different products, you do need to go into the detail sometimes, but a lot of your initial decisions are made off what the product looks like. So that was a really great one: will this work with customers or will it not? And it was very positive.

Speaker A:

What were the main conversion metrics that you were looking for on this? Was it like the actual add to bag or is it the actual sale? What were you looking for with this one?

Speaker C:

Yes. From the product listing page we don't have add to bag, so save for later and product views are the two we were mainly monitoring. But as you're pushing people through that journey, you're hoping that add to bag conversion would also be impacted, because they've found products that they're interested in. So, yeah, a mix of both, but all very valuable.

Speaker A:

And this is getting into the detail a little bit more, but were we measuring the interactions with the swipe gestures as well? Is that another factor taken into consideration?

Speaker C:

Yes, we had to implement specific tracking to see how people interacted with the swiping. The way the feature is live at the moment, there's an onboarding animation when you enter the product listing page, so it does a little bounce and then you can see that there are more photos available. People obviously need to see that to know that they can interact with it, and I think it depends how familiar you are with shopping apps; there are other apps out there doing similar features. But yes, we had to implement specific tracking that would look at whether people were swiping through, and then that raised questions like, how far are they swiping through? Then you start thinking, do they just want to see one photo? Do they want to see multiple photos? So I think it raises a lot of further questions on potential iterations that we could do in the future.
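As a rough illustration of the kind of tracking Liz describes, the sketch below records how deep into a gallery a customer swiped. The event shape and field names are assumptions made for this example only.

```swift
import Foundation

// Hypothetical analytics event for gallery swipes on the listing page.
struct GallerySwipeEvent {
    let productID: String
    let deepestImageIndex: Int // zero-based index of the furthest image reached
    let totalImages: Int
}

func track(_ event: GallerySwipeEvent) {
    // A real client would forward this to an analytics pipeline;
    // logging stands in for that here.
    print("gallery_swipe product=\(event.productID) " +
          "reached=\(event.deepestImageIndex + 1)/\(event.totalImages)")
}

// e.g. the customer swiped to the third of four images
track(GallerySwipeEvent(productID: "12345678", deepestImageIndex: 2, totalImages: 4))
```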

Speaker A:

Totally. I'd advise anyone that's listening to this right now: go and load up the app, look at the product listing page and see what happens. That's a good example of introducing the experiment and seeing the impact, because we could just turn it on.

Speaker C:

Yeah, exactly.

Speaker A:

That's nice. Are there any other experiments you'd like to talk about as well?

Speaker B:

So the home page is a very interesting part of the app, because you've got all sorts of different content and it's all competing for the same real estate. And our lovely content team always have lots of different ideas on how to make the content as engaging as possible. We have an entire framework dedicated to experimenting on the homepage. Internally we call it homepage injection. It allows us to inject different content in different positions on the homepage and measure the success of that, as you can imagine. We've run dozens or hundreds of homepage injection experiments over the past few years, and it could be for different sales: they might want to test a grid of images versus a full screen video, for example, and see what customers prefer. You're usually measuring things like click-through rate, so it might not necessarily be add to bag, but we want to encourage customers to embark on a shopping journey. That's probably been one of our biggest successes, because it's such a robust framework. We've had it in the code base for years, it's still serving us to this day, and we're almost always running experiments on it.
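A toy version of the idea, assuming a homepage modelled as a simple ordered list of components; the real framework is certainly richer than this, and every name here is made up.

```swift
import Foundation

// Hypothetical model of a homepage injection: a piece of content
// to insert at a given position, chosen per experiment variant.
struct HomepageInjection {
    let position: Int          // index in the homepage feed
    let componentType: String  // e.g. "image_grid" or "fullscreen_video"
}

func applyInjections(_ injections: [HomepageInjection],
                     to feed: [String]) -> [String] {
    var result = feed
    // Insert highest positions first so earlier indices stay valid.
    for injection in injections.sorted(by: { $0.position > $1.position })
    where injection.position <= result.count {
        result.insert(injection.componentType, at: injection.position)
    }
    return result
}

let baseFeed = ["hero_banner", "category_rail", "recommendations"]
let variant = [HomepageInjection(position: 1, componentType: "fullscreen_video")]
print(applyInjections(variant, to: baseFeed))
// ["hero_banner", "fullscreen_video", "category_rail", "recommendations"]
```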

Speaker A:

Obviously, when we come to the homepage injection concept, there are lots of demands, lots of requests coming through. How do we make sure that customers are only going into a specific experiment, or a bucket, as we call them sometimes?

Speaker B:

Sure, yeah. Okay. So taking a step back, there are a few ways we can ensure that customers aren't being accidentally biased by two different experiments. You've got setting up, or configuring, the experiment, and then you've got the analysis after the fact. For greater context: we pull down a list of all the experiments at the launch of the app, and it comes back as a configuration file containing all of the experiments, all the potential variations, things like that. And if you're running two different experiments on the same screen at the same time, we can make use of something called a mutual exclusion group. What a mutual exclusion group means is that if two or more experiments are within the same mutual exclusion group, a customer within the same audience will not be entered into, or bucketed within, more than one of those experiments. So that, in short, is how it works. To clarify what I mean by audiences: maybe a customer shopping in the UK store is one audience, maybe a logged in customer is another audience, stuff like that. It's just how we segment our customers. So that prevents customers being incorrectly bucketed. Then, looking at the analytics after the fact: when a customer is entered into a bucket, or bucketed into a particular variation, or served a particular experience, we also track that. That means our product managers, and whoever's looking at the analytics dashboards, can see, look, okay, this customer is being biased by this experiment, and then we'll, in essence, come up with a lovely Venn diagram looking thing where they can see which experiments are overlapping. If we do know that there's overlap, which I know product managers go to a lot of effort to plan for, and which I'll leave to Liz to explain, it might impact how long we run the experiment for to get the data that we need, stuff like that.
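A minimal sketch of the mutual exclusion rule Callum describes, under a deliberately simplified model that only tracks which experiments a customer is already in. All type and key names are invented for the example.

```swift
import Foundation

// Hypothetical experiment definition with an optional
// mutual exclusion group.
struct Experiment {
    let key: String
    let mutualExclusionGroup: String?
}

// A customer may be bucketed into at most one experiment per group.
func eligibleExperiments(_ experiments: [Experiment],
                         alreadyBucketed: Set<String>) -> [Experiment] {
    // Groups already "used" by an experiment the customer is in.
    let takenGroups = Set(
        experiments
            .filter { alreadyBucketed.contains($0.key) }
            .compactMap { $0.mutualExclusionGroup }
    )
    return experiments.filter { experiment in
        guard !alreadyBucketed.contains(experiment.key) else { return false }
        guard let group = experiment.mutualExclusionGroup else { return true }
        return !takenGroups.contains(group)
    }
}

let all = [
    Experiment(key: "plp_galleries", mutualExclusionGroup: "plp"),
    Experiment(key: "plp_recs_order", mutualExclusionGroup: "plp"),
    Experiment(key: "homepage_video", mutualExclusionGroup: nil),
]
// Customer is already in "plp_galleries", so "plp_recs_order" is excluded.
print(eligibleExperiments(all, alreadyBucketed: ["plp_galleries"]).map(\.key))
// ["homepage_video"]
```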

Speaker A:

And Liz, can you give us a bit more understanding of how you orchestrate and manage things from the product perspective as well?

Speaker C:

Yes. From the product side, we all have our own areas that we look after, and we have a couple of prioritization processes in place that basically say: these are the experiments we've got planned to run on these pages, or in these areas of the site, coming up. We put in when we think they're going to run and how long we think they'll last, and they get prioritized in. We have had cases before where a lot of people wanted to run experiments on the same part of the site, so then you do have to get into more of the detail in terms of what takes business priority. But yeah, we're constantly managing where we want the resources to be focused, and we can't have too many running on the PLP, for example, because you don't want those experiments to interfere with each other. And I think it's also a challenge that you want your experimentation results to be as accurate as possible, so you don't want another feature potentially biasing them, but we also all need to keep making progress. I think it's a balance of the two.

Speaker A:

I get it. And from my experience in the past as well, we've run experiments for sometimes up to two months just to get clear data that we can trust. And even just trying to customize and configure them at the early stages, there were little nuances that became a bit complicated: hang on a minute, we haven't considered that segment, so we have to rewind, do it again and start the experiment from the beginning.

Speaker C:

Yeah, exactly. Because all experiments have different runtimes and different audience reaches. There are a lot of different things to consider in there.

Speaker A:

Nice. Are there any other challenges that you two have faced with running experiments before?

Speaker C:

I think from our side in product management, one of the things we've had recently, if we go back to the image galleries example: that is a feature that is live and available, and when it was an experiment it was there, but it's not necessarily shouting in your face. If a customer doesn't notice that it's there, they might not necessarily interact with it. So there's a thin line: you can't bias a customer by telling them, here, we've got a new feature, try it, because it may not be successful and you don't want to impact those results. But you do want them to know the feature is there, because we want to see how they organically interact with it.

Speaker A:

Sure. Are there any experiments where it's been too subtle for the customer to notice it's there, so you can't tell if it's actually biasing the journey?

Speaker C:

Not off the top of my head. I think the interaction with image galleries was enough that we got significant results and we could see it was a positive result. But I think the interaction with that could have been higher.

Speaker B:

There are ones that aren't strictly related to whether or not a customer engages with a particular component. An example of that could be, again, on the search results page, the PLP, where the recommendations API team experiments on what content they recommend to customers. We indirectly see or measure the engagement with better performing products, but you don't really know whether or not that's down to chance unless you look at all the data.

Speaker A:

My opposite question was: were there any glaringly obvious experiments we've run in the past that you could tell apart straight away? I'm thinking like the button color stuff; I think you'd notice that difference straight away sometimes.

Speaker C:

I know we've been running some experiments on bag, historically and more recently, around cross-sell: do you want to increase your order to hit the delivery threshold, or you might also be interested in this, this would go with something else in your bag. I know they've been running some tests to see if that distracts customers from actually checking out, because you're offering additional opportunities to customers at that point in their journey. By that point, we do want customers to convert and go through to checkout, and continue that conversion, but there could also be potential opportunity for more, where we could optimize their results or their journey, et cetera. So that's one where we are putting something in front of a customer's face, and is that a good thing or is it potentially a bad distraction? We're always trying to improve their experience, because we obviously want to do well as a business, but we also want customers to enjoy shopping with us and continue to come back.

Speaker A:

Cool. We've talked a lot about the product and the consumer side of things. How about the technical approach, Callum? Are there any particular frameworks or tooling we've got in place to enable these experiments?

Speaker B:

Yeah, from a technical standpoint, over the years we've built our own framework which wraps around whatever third party library we're choosing to use, and what it does is provide a consistent way to interact with an experiment, or opt your customers into one. It's matured to the point where we've built our own tooling for our QA engineers; we've got in-depth documentation, and loads of onboarding documentation as well. We run an almost quarterly presentation on what experimentation looks like; I know there's one we recently ran to onboard a backend team into using our experimentation framework. And we've expanded this experimentation champions model, whereby each sub-team within the iOS team has its own experimentation champion, who is essentially the main person to consult if you're building out an experiment.

They might be answering questions like: what sort of impact does the hypothesis have on our solution design? Going back to customer bucketing, how do we only bucket the most relevant customers within this journey? Do we need to evaluate any data from the API response, or from whatever part of the user journey, to inform bucketing? What implications does that have for the analytics? It also involves discussing with the product manager what the hypothesis is, and what that might mean in terms of how complex the solution would be, checking things like: do you actually want to vary more than one thing at the same time? To use a very basic example of a button, if you want to experiment on the color of the button and the size of the button, that will of course involve a more complicated experiment setup. You can, of course, experiment on those things independently from each other, changing the color and changing the size of the button independently. Our role is also upskilling the PM in understanding what that means for how long it will take to deliver that piece of work.

So from a technical perspective, it's not just getting involved in the code; there's also a very big culture point. It's also helping the other engineers understand how to write the best experimentation code. We're at the point now where we can treat our A/B framework almost like any other API. Taking us back four and a half years, if we were integrating an experiment into our code base, you would have the variants, v1 and v2, hard coded into your app experience, and you'd have the variant behavior hard coded. On an iOS or Android app, if you want to change the behavior of the button, say v2, making the color of the button green, did well and you want to make that a permanent rollout, you can't do that without a code change. Or say we've done this experiment, we've tested an orange button and a green button, but we also want to test a yellow button: adding another variant means a code change. We've gone from that limitation to a position where we can drive the color of the button remotely, completely. So instead of saying v1, v2, v3, we can say: okay, what's the hex color of the button? What are the dimensions of the button? And then if we want to roll out a completely new variant or increase the scope of our experiment, we can easily do that without making any change to code.
It also means that whenever we're setting up the experiment in whatever dashboard we're using, it's a lot more behavior oriented. It's not just saying v1 or v2; there's actually more description now. I know that goes a little beyond the technical scope, but it's a lot more involved than you might think.
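To illustrate the shift Callum describes, from hard-coded variants to behaviour described by the payload, here is a sketch of decoding a button's appearance from a remote experiment config. The JSON shape and field names are invented; the actual config format isn't shown in the episode.

```swift
import Foundation

// Hypothetical payload: the variation describes the button directly,
// so rolling out a new variant needs no code change.
struct ButtonConfig: Decodable {
    let hexColor: String
    let width: Double
    let height: Double
}

let payload = Data("""
{ "hexColor": "#2D7A2D", "width": 160, "height": 44 }
""".utf8)

do {
    let config = try JSONDecoder().decode(ButtonConfig.self, from: payload)
    print("Render a \(Int(config.width))x\(Int(config.height)) button in \(config.hexColor)")
} catch {
    // A real client would fall back to the default experience here.
    print("Invalid experiment payload: \(error)")
}
```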

Speaker A:

It's interesting to hear you say that, though, because I think what engineers are very mindful of is bloat and disposable code that you put into production. If you're doing three or four variations of the same UI component, and you only use one in the future, what's happening to the other three? But with that shift to a config driven approach, you don't need to do that. It's all controlled by some variables; we're not putting it all into the actual app, we're putting it through config changes instead. It's just cleaner code for the engineers to look at as well.

Speaker B:

Absolutely. And what we're doing next: we have our own configs as well, which are separate from our experimentation framework configs. But now that we're starting to model our experimentation configs around our in-house config, we're using a lot of the same data models, which brings our own config and the experimentation config closer together. Eventually we'll be at the point where they are more or less the same thing, and we'll be in a position where we can experiment on whatever we want without any additional developer cost. We've still got a fair way to go on that journey, but I think we've made a lot of progress in the past few years to get there.

Speaker A:

Sounds like it's been a great journey for the last five years. What do you think is the future for experimentation at ASOS?

Speaker C:

Yeah, it's a good question. I think we definitely need to keep working to make decisions better, smarter, more data driven. From an experimentation perspective, there are different frameworks you could use; we've had some conversations with different companies about them. Like, do you test every little thing, or do you test a big change and see how that goes? There are different ways you can do it. I think it's called a painted door test, is that what it's called? Something like that, where you put a feature in front of people and see if it works, rather than building the full back end behind it. Then you can do things a bit faster. So yeah, I feel like we just need to continue to improve on what we've been doing.

Speaker A:

What about you, Callum? Any thoughts on the future of experimentation at ASOS?

Speaker B:

I think we need to keep going on the journey of making the introduction of experimentation into our code base as seamless as possible. There's still a little bit of a misconception that there's a large developer cost to integrating an experiment into our code base, and that's simply not the case anymore. Another thing is continuing the journey of upskilling people external to the apps team on how to use the experimentation platform and what experimentation means. Hopefully that means there will be stronger hypotheses, and more encouragement to iteratively improve and develop the customer experience. There's also the aim to upskill the entire apps team to understand what experimentation means to them. And we need to keep building out our QA tooling, stuff like that; that can always improve.

Speaker A:

We're in a really good place now when it comes to experimentation at ASOS. We've matured over the last five to ten years in how we do it, and we've got some great tooling and mindsets in place. Now it's just growing that culture out into other departments, other ways of thinking. Don't think mini, think big, or find that variance in between. But I'm pretty positive that you're in a good place to do more cool things with our apps experience now.

Speaker C:

Yeah, and I also think getting more from the learnings that we get. You put something out and you're obviously hoping a feature will be positive or well received, but the learnings we get are so valuable if something doesn't go the way you think. Doing more with the learnings we get from experiments, and following on from things rather than necessarily moving straight on to the next thing, is something I think we can definitely improve upon.

Speaker A:

And benefit from it. Totally, iterating on the failures as well as the successes.

Speaker C:

Yeah, exactly.

Speaker A:

Love it.

Speaker C:

We learn a lot.

Speaker A:

We do. What excites you the most about experimentation going forward, Callum?

Speaker B:

So what's really interesting is that a lot of the tools we're working on enable other parts of the business. With homepage injection as the example, the content team were never able to experiment on content like that before, and the work to develop that framework has been a massive enabler for them to serve what they want. Another, more recent example is something called URL injection, which allows us to experiment on any outgoing network request; we can A/B test on that now. And we've recently onboarded another team there, the trading optimization team, who look at something slightly related to recommendations on the product listing page and the browse experience. We did the same before with the recommendations API team. So I'm really looking forward to introducing more teams to this, and to building more tools to help enable other parts of the business, because the app is the main place where all of the APIs come together and where the user experience is. We're usually the first point of contact, or the entry point, which means we have to build tools to allow the teams we depend on to serve the right content. And I think if we build on what we've learned so far, we can go further to make the ASOS app a lot more data driven, a lot more backend driven, and even push more experimentation into the network layer. So there's a lot to be excited about.
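As a rough sketch of what experimenting on an outgoing network request can mean, the example below rewrites a request's query string according to the customer's variant. The endpoint and parameter names are made up, and the real URL injection framework is not described in this detail in the episode.

```swift
import Foundation

// Hypothetical URL injection: rewrite an outgoing request according
// to the customer's experiment variant, e.g. to A/B test two versions
// of a recommendations endpoint.
func applyURLInjection(to request: URLRequest,
                       variantQuery: [String: String]) -> URLRequest {
    guard let url = request.url,
          var components = URLComponents(url: url, resolvingAgainstBaseURL: false)
    else { return request }

    // Append the variant's parameters to any existing query items.
    var items = components.queryItems ?? []
    items.append(contentsOf: variantQuery.map { URLQueryItem(name: $0.key, value: $0.value) })
    components.queryItems = items

    var mutated = request
    mutated.url = components.url
    return mutated
}

let original = URLRequest(url: URL(string: "https://api.example.com/recommendations")!)
let variant = applyURLInjection(to: original, variantQuery: ["recsVersion": "v2"])
print(variant.url?.absoluteString ?? "invalid URL")
// https://api.example.com/recommendations?recsVersion=v2
```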

Speaker A:

Totally. What about you, Elizabeth?

Speaker C:

Yeah, I think leading on from what Callum said, I think experimentation gives us the opportunity to do some really exciting things. Like we can put fun features out there, we can see what customers enjoy and it just gives us a really great way of measuring the success with it. Yeah, I think there's a lot we can look into and we've got a lot to do.

Speaker A:

Love it. Thank you so much, both of you, for your insights into experimentation; you're clearly very passionate about this area. I appreciate your time, and hopefully it's inspired others to think about experimentation.

Speaker C:

Thank you very much.

Speaker A:

Check out the ASOS Tech Blog for more content from our ASOS Tech talent and a lot more insights into what goes on behind the screens at ASOS Tech. Search medium.com for the ASOS Tech Blog, or go to asos.tech for more.

Behind the screens at ASOS Tech