Episode 3.1 – Generative AI & GitHub Copilot @ ASOS Tech
Lewis talks to Dylan, Aimee & Lakshmi about the rise of generative AI and how this is being used in tools like GitHub Copilot to support engineers
You may have shopped on ASOS, now meet the people behind the tech.
In this episode of the ASOS Tech Podcast, Lewis talks to Dylan, Aimee and Lakshmi about the rise of generative AI and how this is being used in tools like GitHub Copilot to support engineers in ASOS Tech.
Featuring...
- Dylan Morley (he/him) - Lead Principal Software Engineer
- Aimee Simmons (she/her) - Software Engineer
- Lakshmi Pradip Mukkawar (she/her) - Senior Machine Learning Engineer
- Lewis Holmes (he/him) - Principal Software Engineer
Credits
- Producer: Si Jobling
- Editor: Lewis Holmes
- Reviewers: Jen Davis, Paul Turner, Adrian Lansdown
Check out our open roles in ASOS Tech at https://link.asos.com/tech-pod-jobs and more content about the work we do on our Tech Blog: http://asos.tech
Transcript
Speaker B:Welcome to the ASOS Tech Podcast, where we share what it's like to work at an online destination for fashion-loving twenty-somethings around the world. You may have bought some clothes from us, but have you ever wondered what happens behind the screens? Hi, I'm Lewis (he/him), and I'm a principal software engineer at ASOS. In this episode, we're talking all about the use of artificial intelligence (AI): how many different software products and technologies are really starting to use AI tools more frequently, how that's impacting how engineers work, all the great things that come with AI, and also some of the things we now have to think about as well. On this episode, I've got some fantastic guests from ASOS Tech.
Speaker A:They are... Hi, I'm Dylan Morley (he/him). I've been at ASOS for about nine years now. I'm a lead principal engineer, all about continuous improvement of the engineering function.
Speaker C:Hi, I'm Aimee (she/her). I'm a software engineer working on our backend API platforms. I've been at ASOS for a year and a half now.
Speaker D:Hi, my name is Lakshmi Pradip Mukkawar (she/her). I'm a senior machine learning engineer in the AI pricing team. My role involves developing machine learning projects which are scalable and robust.
Speaker B:Thank you, everyone. Thanks all for joining us. As you know, if you've heard the podcast before, we like to do an icebreaker just to get everyone feeling relaxed. So the icebreaker today, and I'll kick off, is going to be: what was the last TV show that you binge-watched? The one I recently binge-watched was Ted Lasso. The thing I really liked about it is that it's a really good, feel-good, uplifting show, and it has some great characters in it. There are some really great little lines in there, like "Be a goldfish", which I loved, but I do now sometimes think about my life choices like, what would Ted Lasso do? He's just always so positive, right? He's great. That was me. Who wants to go next?
Speaker C:So I'm actually rewatching Silicon Valley, which I think is probably quite apt for this podcast. I've just finished season one, just after they get the 5.8 Weissman score, which is such a great episode. I don't know if anyone's seen that episode, but there's a very funny scene where they discuss middle-out compression. If anyone's seen it, they'll know exactly what scene I'm talking about, but it's hilarious. I'm really enjoying it.
Speaker B:Awesome.
Speaker D:I can go ahead. This is a tough one because I keep watching things where I forget their names and what I watched last week. The thing that comes to mind is Made in Heaven, a web series on Amazon Prime. It's about a wedding planner agency, where each episode is about Indian weddings, basically. It also explores how the conservative and the modern mindsets of Indian people play out.
Speaker B:Cool, thanks. Dylan?
Speaker A:For me, the last one I really binge-watched was Better Call Saul. I was a big fan of Breaking Bad and I just loved the backstory for Saul. I thought it was such a well-paced show, with really interesting stories. The switching in pace kept me so interested. I love that show.
Speaker B:Yeah, I mean, everyone keeps talking about that one. Definitely. Thanks, everyone. I've got some things to add to my watch list. So today we're talking about the use of AI in tooling and how we can use that in software engineering. Over the last five or six years, AI technology has really started to take off: more and more research going into it, different use cases for it, and we're starting to see more of the benefits from it. I think when ChatGPT launched in 2022, things just seemed to go crazy, like, skyrocket. Every conference I look at, every talk is about AI. Every software tool is integrating features that are based on AI technologies. So it's just everywhere; you can't really get away from it. Maybe we could start there with ChatGPT and that type of technology, talking about what that is and how that works.
Speaker D:Before generative AI, AI was already out there. Most of the applications we had seen were around recommendation systems, right? Companies were building amazing recommendation algorithms to give personalized recommendations to their users, whether it's Netflix, ASOS, Amazon, Spotify, whichever company you think of. With last year's ChatGPT announcement, the whole area has gone in a different direction, towards generative AI. Generative AI leverages machine learning models to produce text, images or even audio, and it feels like it was made by humans, right? ChatGPT is one example application of generative AI; it's using the GPT-3.5 model that was developed by OpenAI. And you keep hearing this term LLMs, large language models. Those are models trained on the immense amount of data that you see on the internet, whether it's books, articles, even web pages, and they learn patterns and make connections from there. It also uses self-supervised learning: basically, it's trying to predict the next token in a sentence, or the next sentence in a paragraph. When you repeat this over and over again, you get to a point where you can say, okay, this model has learned a lot. That's why LLMs are doing such an impressive job of learning language and understanding it. When you type anything into ChatGPT, there's a massive range of use cases. I can get a fitness program from it, or you can ask it to write a poem. Have you used it for anything else interesting?
Speaker B:It can write a song in the style of someone, right? Which is very good. Actually, I've tried that.
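To make the next-token idea Lakshmi describes concrete, here is a minimal sketch of the generation loop, with a toy hand-written model standing in for a real trained network. Everything here, including next_token_probs and its tiny vocabulary, is hypothetical:

```python
import random

# Hypothetical stand-in for a trained LLM: given the tokens so far,
# return a probability distribution over the next token.
def next_token_probs(context: list[str]) -> dict[str, float]:
    if context and context[-1] == "fashion":
        return {"loving": 0.7, "retail": 0.2, "<end>": 0.1}
    return {"fashion": 0.6, "online": 0.3, "<end>": 0.1}

def generate(prompt: list[str], max_tokens: int = 10) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = next_token_probs(tokens)
        # Sample the next token in proportion to the model's probabilities,
        # then feed the longer sequence back in and repeat.
        options, weights = zip(*probs.items())
        token = random.choices(options, weights=weights)[0]
        if token == "<end>":
            break
        tokens.append(token)
    return tokens

print(" ".join(generate(["online"])))  # e.g. "online fashion loving"
```

Training is the part that makes next_token_probs good: the self-supervised objective pushes it to match the actual next tokens across an internet-scale corpus, and generation is just this loop repeated.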
Speaker A:Talking about the size of the data that's trained these models as well, I think that's an interesting topic. Microsoft partnered with OpenAI, I think back in 2019. OpenAI came to Microsoft and said, hey, we need some compute power, can you help us do this, please? So Microsoft built them a supercomputer with 10,000 GPUs. This immense compute power is needed to train the models; I think it's 45 terabytes or more of data input into these models. So you can see this is a huge amount of data going in to produce the results that we're seeing. Up until this point, that's been prohibitively expensive for companies where compute is not your primary business; you're not going to be able to have that much compute online, right? You need Microsoft or AWS, a cloud provider at that scale, to be able to do this kind of thing. And just the memory needed to train across 175 billion parameters is an incredible amount. That's why we're seeing these training runs: when you train the model, it can run for 14 to 15 days to produce the output, and cost $5 million or more, right? So until recently it was out of reach of most companies, of most people. But now that we do have these pretrained transformers available to us, we can start to consume the output of those, and I think that's why they've come to the forefront now. It's within our grasp without needing to do all of that compute work; all that training has been done for us, and we can focus now on how we apply it, what we can do with it, and how we can bring it into our business processes.
Speaker D:I think that's why there are so many new startups in this AI world, focusing on different use cases, right? Like, how can we build an AI tool that can write documentation for our developers, or capture meeting notes from our meetings, or gather all the information from stand-ups and the like in one place, so people don't have to go and ask others?
Speaker B:Exactly. And I think the thing about ChatGPT and that type of AI tool, where you can have a conversation with it like a person, the thing I love about it is it knows about the previous things you said in the conversation, right? So you can say, change this little bit to be a bit more like this, or can you take that out, or can you also do this? And you can tweak it to get exactly what you want very quickly, which I think is so efficient, and it's such a more natural way to interact.
Speaker A:I think this is going to be very interesting for businesses as a whole, though, where they start to build on these products. So you've got the baseline, you've got the GPT model, and then you can start fine-tuning it, feeding in your business context, your tone and style. So you can fine-tune these models, get it exactly how you want, and get something out at the end of it that is really useful for your business.
Speaker D:One thing to mention here is that all these models have been trained on a huge amount of data, but not everything you get out of these models is accurate, right? So there is a term called LLM hallucination. Basically, when you ask ChatGPT a question, it's never going to say "I don't know this"; it's more like "I have information up to 2021, and based on my information, this is the answer". And sometimes, even if ChatGPT doesn't know the answer, it makes it up, and then you use it, and sometimes it might not be accurate. That's where people are now focusing on how to reduce these hallucinations.
Speaker B:Why does that happen? Is that because, for example with some code, it's seen a lot of examples which are maybe actually not accurate, it's used those in the original data it was trained on, and maybe it hasn't had the feedback to say that this is not the right way to do something, or this doesn't work?
Speaker D:I think you can say that with supervised learning models you have input and output defined. If you have to classify an image as a cat or a dog, you have a defined output, and you can say the classification from the machine learning model is accurate because it identified it correctly. But for these generative AI models, there is a very different way of measuring accuracy. It's very difficult to say whether the answer ChatGPT gave is correct or not. It's more about how many times people accept the suggestions and how often these models are retrained. Of course, the model needs to see some positive and negative sides of the use case that you're looking at. That's why with GitHub Copilot, or any other application that has been trained on the large amount of code available on GitHub, not every piece of GitHub code is the accurate way of doing something. So it has learned some patterns, and some patterns might not be the correct ones, basically.
Speaker C:I think another problem is when ChatGPT asks for feedback. I think there was a spate of people on the internet telling ChatGPT that it was wrong. A very well-known one is that people were telling ChatGPT that two plus two equals five, and with enough people telling it that that is the truth, suddenly ChatGPT is sending out that information to other people who don't know that two plus two should be four. It's that kind of thing. I think a lot of the reason things are sometimes said inaccurately by ChatGPT is user feedback as well.
Speaker B:This is now like misinformation, which you can see could be a very big thing for people to do intentionally, to lower the effectiveness and the accuracy of the responses, right? Which is going to be quite a big challenge, I imagine, to stop or minimize. So what other types of AI-powered technology are really prominent right now?
Speaker A:I think two of the main ones that we're seeing a lot of are voice replication, so voice AI, and image AI. In particular, that's been at the forefront recently with some of the Hollywood actors going on strike; that was partly because of this, right? With the recent Indiana Jones film, they used face-swap technology to de-age Harrison Ford, so he looked young throughout the whole film. That just shows you the kind of things you can start to do with this technology now. There was some concern from the Screen Actors Guild that you would be able to sign over the rights to your image, and then the studio would just be able to put your image into films and not have to pay you residuals, not have to pay you your royalties off the back of that. So there's some real concern about the ethical use of AI when you're a big studio and you can just replicate these images. And it's the same with voices as well. If you can replicate someone's voice, who's the owner of that? What can you do with that? If I could replicate a dead singer's voice and bring out some new material for them, what's to stop me doing that, and what are the ethics behind that? So there are some interesting questions in that space around these two types of technology.
Speaker B:I think we'll see that legislation has to keep being updated to cope with the advances in AI technology. So, we mentioned before these OpenAI models and these large language models. Cloud computing providers are now offering this as a service: you can take a base model and train it further with your own data, or hook it into your own software products and tools. Microsoft Azure is doing this now; they provide the OpenAI models for you to use. So what kind of things does this give us?
Speaker A:They've made it easy for us to start interacting with the models via API endpoints and via prompt engineering, where we can start to fine-tune, interact, and get results back that fit our requirements. And the pricing models as well: I think for some of the APIs it's a token-based approach, where you pay per X tokens for the inference you're running against their APIs. So you could quickly build on top of it, build your own product, taking the power of the work they've done and then putting your slant on it, exactly how your product is going to function. That could be a way of getting to market quickly with an AI product and getting it in front of customers. You could then start charging them monthly fees if you wanted to, and that's what we're seeing as the pricing model on some of this: it's normally a per-user-per-month billing model from a lot of the companies coming into this space.
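As a rough sketch of what consuming one of these hosted models looked like at the time of recording, using the OpenAI Python SDK. The model name, prompt, and exact fields here are assumptions, and the SDK has since changed, so treat this as illustrative rather than a definitive recipe:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # issued by the provider; never commit it

# One inference call against the hosted model; billing is metered per token.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant for a retailer."},
        {"role": "user", "content": "Summarise our returns policy in two sentences."},
    ],
)

print(response.choices[0].message.content)  # the generated answer
print(response.usage.total_tokens)          # what token-based pricing charges for
```

The point is how little stands between you and a working proof of concept: an API key, one call, and a per-token bill.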
Speaker D:That's definitely a massive improvement in terms of you going from nothing to creating a POC within days.
Speaker B:Kind of, yeah, 100%. And I guess, like I said, it just gives you so many new capabilities now to try and hook that in. Before, you'd probably have to sign some sort of deal with an AI model provider and go through a whole process to do that. But now you can just integrate it as a service and pay for what you use. And I guess to get started it's probably not a huge amount of money just to get going, right? Just to try it out as well. It'll be interesting to see what kinds of tools come out of that, and what kinds of experiences in different software products.
Speaker A:Check out the ASOS Tech Blog for more content from our ASOS Tech talent and a lot more insights into what goes on behind the screens at ASOS Tech. Search Medium for the ASOS Tech Blog, or go to asos.tech for more.
Speaker B:So, we've talked a lot about the latest developments in AI and things like ChatGPT. Now, within ASOS Tech, what kind of things have we been looking at in this area, and what can we do to help our engineers be more effective when they're working on our services?
Speaker A:So in particular in this space, we're looking at GitHub Copilot. If you don't know about GitHub Copilot, it sits in your IDE and brings the power of those models right alongside the context where you're working. As an engineer, when you're working in a code file, it's sending information to Copilot, and as you're writing code it's predicting the code that you might write next and giving you back suggestions that might fit your use case. So that's where we started: could this be a benefit to us at ASOS? Could this help us be more productive and more efficient? And these are the terms you'll hear talked about when people are selling these tools, right? It's going to make you more productive; it's going to make you X percent more productive. Now, this is interesting when you start to think about what it means to you to be productive. If I asked you, what's a productive day for you? What's an efficient day for you?
Speaker D:For me, productive means getting into a flow where I can just keep coding for at least two or three hours, where my focus is on whatever functionality I have to deliver, whether it's writing code, testing code, or deploying code. Sometimes when you're writing code you have to switch back and forth to find some syntax or look at how something has been implemented in other projects. If I get into that flow, then I can say at the end of the day that my day was productive. Basically, that satisfaction when you've been coding for two or three hours and come to the conclusion that this piece is done.
Speaker C:Yeah, I think flow is definitely the most important thing for me too, and I think it's similar. But right now I'm getting pulled in a lot of different directions with my work, which is always a struggle for us tech people. So there's a lot of context switching, and anything that shortens the amount of time I need to spend figuring out how I'm going to tackle a problem makes me more productive.
Speaker A:Absolutely. And this is what we see in a lot of the feedback in this space: when you're in a really good flow state, you can lose hours, right? You're not even aware of the passage of time. You're working on a problem and trying to get to the solution, and staying in that zone, not breaking that concentration by having to leave your IDE, by having to go to a meeting, by any of the other distractions that we get, Teams messages, all this other stuff that can take you out of that flow state. Copilot is trying to help in that space by keeping you in the IDE, keeping the context, so you don't have to go to another browser tab and look something up, right? All the information that you might need is there. So that's one of the real benefits here, I think. It keeps you in that mode, in that zone.
Speaker B:Yeah. And how did we go about trying out Copilot and introducing it across ASOS Tech?
Speaker A:So we ran a trial of Copilot over two months. We asked for a group of people that wanted to take part, and we eventually had 90 people sign up. We wanted to see whether the tooling works particularly well in different areas of the business and in different languages. At ASOS we use all kinds of languages: we've got .NET, we've got Python, we've got Java, all sorts of languages depending on the business domain you're working in. And because of that, you've got different IDEs that people are using as well. So we were interested to see: does it work well in PyCharm for Python? Does it work well in IntelliJ for Java? Whatever you happen to be working on, we wanted to see how effective this tool was at keeping you in the zone, keeping you happy, and helping you be more efficient. That was some of the stuff we were asking people to feed back on.
Speaker B:And what did we find then over that trial?
Speaker A:I think the one that really stood out for us was that as you adapted your workflow and got used to the tooling, favorability increased over time. So the impact of Copilot increases over time, and that correlates with GitHub's findings as well. You've got to understand how to give it context to get the best results out of Copilot. And context is all about what's happening in your IDE: what code files you have open, what other tabs you have open, because it's getting context from those tabs as well, and the comments that you write in your code to explain to it in natural language what you're about to do. If you give it as much context as possible, then Copilot will give you a better result back. So understanding how to use the tool to your advantage is one of the learnings that we got out of this.
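To illustrate the comments-as-context point: you write your intent in natural language, and Copilot proposes the body. A made-up example of the shape this takes in Python, where the function shown is the sort of completion it might offer, not a guaranteed output (orders are assumed to be dicts with a "total" and a datetime "placed_at"):

```python
from datetime import datetime, timedelta

# Return orders over £50 placed in the last 30 days, sorted by total, highest first.
def recent_large_orders(orders: list[dict]) -> list[dict]:
    cutoff = datetime.now() - timedelta(days=30)
    recent = [o for o in orders if o["total"] > 50 and o["placed_at"] >= cutoff]
    return sorted(recent, key=lambda o: o["total"], reverse=True)
```

The richer the comment and the surrounding open files, the closer the first suggestion tends to land.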
Speaker B:So it's all about learning how to use it the best way. There are articles about all these different prompts for ChatGPT to get the most out of it. I've found some of them myself and tried them out, and some of them are really, really effective, things you didn't know it could do. If you prompt it a certain way, you can really get different things back from it, and it gives you different use cases that you can now use it for, which is really interesting. And, as with anything at ASOS, we always like to measure what we're doing so we can see if something's effective, if it's making things better or worse. How did we go about measuring the success of the tool?
Speaker A:In the first month we were asking people to feed back once a week, and in the second month it was once a fortnight. It was basically a survey that we would send out at that point: hey, can you fill in this one-to-five NPS-style scoring, where we could look at the percentage of people that responded favorably. If you responded with a four or five, you were favorable to the question we were asking, and then we could boil those responses down into the percentage of people that thought favorably about this. So that's how we got feedback on the tool. Measuring efficiency, measuring productivity: it's an interesting area that there's been quite a lot of talk about recently. In particular, people working in this area are Nicole Forsgren of GitHub research, Gergely Orosz, and Kent Beck, talking about how we can measure efficiency and productivity, and whether we should, because as soon as you introduce measurements here, people will want to make those measurements go in their favor, right? So you've got to be careful with introducing metrics. But I guess all we want to know is: are we introducing a good tool that people like, that helps them stay in the flow, stay in the zone, that gives them a better developer experience, but eventually helps us deliver more? Because that is what we're trying to do: get something in front of our customers that gives us value as quickly as possible. We are looking at certain metrics. Some of the DORA metrics look at flow and cycle time, things such as how quickly a work item gets deployed into production. Do we see any changes in those over time? Now, for the trial, we think it was a little bit too short and a little bit too small a group to really draw any conclusions from that. But now that we've expanded this out to a wider set of people, we can look at this over a longer time period and really determine if the tool is having a strong impact and good outcomes for our engineering function.
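For concreteness, the favorability measure described here boils down to something like this sketch (the survey question and response data are made up):

```python
def favorability(scores: list[int]) -> float:
    """Percentage of 1-to-5 survey responses that were favorable (a 4 or 5)."""
    favorable = sum(1 for s in scores if s >= 4)
    return 100 * favorable / len(scores)

# e.g. ten responses to "Copilot works well with my existing tools"
print(favorability([5, 4, 4, 3, 5, 2, 4, 5, 4, 1]))  # 70.0
```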
Speaker B:So during that trial period we collected this data via surveys, getting feedback from the people using Copilot. What did we generally find at the end of the trial?
Speaker A:In general, the feedback was super positive. Of those favorable responses, 93% of people came back and said that they preferred working with Copilot, and 87% said it worked well with their existing set of tools, because that was one of the things we were concerned about: did it start conflicting with things such as ReSharper or other IDE tools? I've noticed that a bit at times, but on the whole people found it acceptable. 91% said it helped them solve repetitive tasks, and 72% said the quality of the code it produces is higher. So you can see that on these quite important things we were asking about, there were some really favorable opinions and feedback. Based on that, we thought it was worthwhile to take this to a wider group and roll it out across engineering.
Speaker B:Yeah, it's really interesting, and I know a lot of the people I spoke to really did find the tool very useful. So, talking about how people found using Copilot: from your own point of view, how did you find it? Have you got any good examples of things it was really good at, and maybe things it struggled with more?
Speaker C:So I really enjoyed Copilot. For me, I think the biggest benefit was that you could use it for repetitive tasks. One of the things I was doing a lot at the time was creating classes using the builder pattern. For those of you that don't know, with the builder pattern you essentially have a class, and then you have methods called something like WithId or WithProductName, WithPrice, things like that, so that you can create a chain of method calls that will build out an object for you. That's really useful in testing, to create scenarios to put under test. But actually building out those classes can take ages. At the time, what I was doing was: you start the class, call it ProductBuilder, start typing one method of the chain you want to create, and if you keep pressing Tab, it'll just keep adding methods, and that was super useful for me. I think that was the best bit. The only downside I had with Copilot was teething problems, just getting to know the tool and knowing how to use it correctly. Honestly, I'm struggling to think of downsides, because it's just such a useful tool and it saved me so much time. So, yeah, I'm really happy with it.
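For anyone who hasn't met the builder pattern, here is a minimal sketch of what Aimee describes. It's in Python to keep this page's examples in one language (her day-to-day is a .NET codebase), and the class and field names are illustrative:

```python
class Product:
    def __init__(self, product_id: int, name: str, price: float):
        self.product_id = product_id
        self.name = name
        self.price = price

class ProductBuilder:
    """Chainable builder: exactly the repetitive shape Copilot is good at extending."""
    def __init__(self):
        self._id, self._name, self._price = 0, "", 0.0

    def with_id(self, product_id: int) -> "ProductBuilder":
        self._id = product_id
        return self  # returning self is what makes the calls chainable

    def with_name(self, name: str) -> "ProductBuilder":
        self._name = name
        return self

    def with_price(self, price: float) -> "ProductBuilder":
        self._price = price
        return self

    def build(self) -> Product:
        return Product(self._id, self._name, self._price)

# Typical test setup: chain the methods, then build the object under test.
product = ProductBuilder().with_id(42).with_name("Jacket").with_price(59.99).build()
```

Each with_... method follows the same template, which is why letting Copilot add the next one on Tab saves so much typing.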
Speaker B:The builder pattern sounds like such a good use case. Actually, that's something I need to try myself, I think. I like that one. Anyone else?
Speaker D:Yeah, I have been using Copilot for more than two months now, and I am really impressed with the tool and how it's helping me code better and faster as well. I totally agree with what Aimee said: it helps with defining classes and writing test cases and so on. I personally use Python for everything. One thing that makes my life easier is writing documentation for all the functions. I can just ask Copilot to literally write the documentation for the whole file, and it writes it correctly. And in our projects there's a lot of data preprocessing going on, so we write a lot of unit tests around what kind of data we're expecting and what the output is after transformation. This is helping us generate lots of unit test cases: I can just select a function and ask Copilot to write a test case for me, and I can even prompt it to write multiple test cases. That's amazing to see. Even the Copilot Chat application is really helpful: you can select a piece of code and ask it to fix a bug, or find a bug in the code, and it really helps in that case. One annoying thing I've seen is that sometimes it stops partway through. If it's trying to complete the function name and the parameters I passed it, it just stops in the middle, so I have to restart it again. It's also solving the problem that when you're using libraries like pandas or MLflow in machine learning, you don't always remember the syntax, like how to group by data. Whenever I write a comment to group by this data, it knows by default how to fill that in. So my time going back to Stack Overflow or the documentation of these libraries is reduced massively. I can just write it there.
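As a flavor of the completions Lakshmi describes, this is the kind of pandas group-by, docstring included, that Copilot will typically fill in from little more than a function name and a comment (the data and column names here are invented):

```python
import pandas as pd

def average_price_per_category(df: pd.DataFrame) -> pd.Series:
    """Group products by category and return the mean price for each category."""
    return df.groupby("category")["price"].mean()

orders = pd.DataFrame({
    "category": ["shoes", "shoes", "dresses"],
    "price": [40.00, 60.00, 35.00],
})
print(average_price_per_category(orders))  # dresses 35.0, shoes 50.0
```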
Speaker B:That's like regexes. It's like regex, right? I mean, I haven't tried it on regex, actually. Can it do Kusto queries, like it does regex? I always forget how to do stuff in Kusto. I need to use Copilot and see what it comes out with. That'd be very good.
Speaker A:As mentioned there, we enabled Copilot Chat quite recently. For the trial we were working with the chat-less version: it was working within your IDE, but as an autocomplete. You'd write some code and it would try to complete it for you, taking that as the prompt to give you some suggestions, but without that experience we'd gotten used to at the OpenAI chat service. With Copilot Chat, you've got a chat window directly in your IDE, and you can say: explain to me the code at line 60, fix the bug at line 70, write me some unit tests for the code at this line. You can ask it all sorts of things, and it will either explain or generate code for you. It's really brought that experience we got used to in the browser into your IDE, and that's super powerful. I think we're going to see a real benefit here.
Speaker D:To build on top of that, Dylan: when you mentioned there's a chat window, in the new Copilot version you don't even have to use the window. You can literally ask Copilot in the middle of your code, just invoking it inline like an interactive chat. I tried that and it's so nice. You don't even have to click on the chat window now.
Speaker A:It's moving really quickly, this area. There's new developments coming all the time in it. So keep your IDE up to date, keep your plugins up to date because there's new features coming all the time. It's good fun.
Speaker B:And I guess, does this give us opportunities for less experienced engineers to learn as well, by asking Copilot what a class does or how to do different things? I'm not sure if we've seen much feedback from more junior engineers on how they can learn with Copilot?
Speaker A:We didn't see much split on seniority levels in terms of favorability in the responses. That was one of the things we were interested to understand: does a junior engineer like it more than a senior engineer? What was the thinking there? On the whole, the scores and the favorability responses were really aligned across seniority levels. Though I think you've got to be careful with the results that come back from the code that's being suggested, right? You've got to not trust it implicitly. But it gets you to a good place quite quickly, particularly with generating classes: that repetitive boilerplate kind of work that we have to do, the plumbing of applications.
Speaker B:Yeah, you've got to do a quick review and keep an eye on it. One thing I looked at as well, and I don't know if it can do this, is looking at your Git commit history, so you could say, okay, what happened in that commit? Or generate the description for a PR from a commit. I don't know if it can integrate with your Git history yet; it's not really got access to that. I was also thinking about things like code coverage. If it could call something to get the code coverage, it could maybe go, "I've generated some tests for this area that wasn't covered." I don't know if there are things happening in that area.
Speaker A:There are indeed. So, what's been titled Copilot X: GitHub's product that will be coming soon. Release date, we're not sure; towards the end of this year or early next year, I think. But yeah, it's going to be AI-enabled across the entire GitHub experience eventually. So you've got it in the IDE, you've got that context there, you're getting the feedback loop locally where the engineer is, and then at PR time, at commit time, all the other times you're interacting with GitHub, you're going to have options to interact with AI models at that point as well: generate new PR comments, feedback on PRs. There are going to be all sorts of new experiences that are going to light up in GitHub as well.
Speaker B:Imagine getting feedback on your PR from the AI. "Ignore that! Don't tell me that, I know what I'm doing." Yeah, you can definitely see how there are going to be a lot more use cases for it. So it's just the beginning, really.
Speaker A:PRs take up a lot of a team's time, right? You open a PR and ask one of your colleagues to review it, so you're taking their time to do it. If you could have an automated review from the model first, just to pick up on all the basics, then once it's at a good standard your colleague can review it. So you could have one review pass from the model and one from your colleague, and you could really speed up that step, right?
Speaker D:That's an amazing improvement.
Speaker C:Yeah, absolutely.
Speaker B:That's going to be really interesting to try out, actually: giving Copilot some PRs to review that engineers have already seen, and comparing the difference in what they come back with. I wonder if Copilot is going to find things that we didn't even think about. That'd be really interesting to try.
Speaker C:I don't think I'm going to volunteer my PRs for that.
Speaker B:The training data.
Speaker C:I think for me in particular, I find PRs, if it's over X number of files, very difficult to sit down and take it all in. One of the things we talked about was that Copilot Chat can take a bit of code and sort of explain how it works. I think it'd be really good if it could also do that with a PR, and just say: this is what this code is doing, high level. I think that would help massively, just to get a base level of comprehension of what the actual change is.
Speaker B:Yeah, it's just getting into the context, isn't it? And that takes that little bit of time, just to understand what's going on. In addition to Copilot in our IDEs, I've heard we've started to look at Copilot for Office 365. Can you tell us more about that?
Speaker A:Absolutely. So this is a product that's going to be coming from Microsoft soon, and I think a lot of businesses are going to be interested in adopting it. That ChatGPT experience that's bringing us benefits in our IDE already is going to be available as part of the Office 365 suite, in things like Excel, PowerPoint, and Word. You're going to have access to these models, and you're going to have access to the context of the files that you're looking at. So it's going to understand your business terminology, and you'll be able to ask it all sorts of things that could be really useful. Imagine you've got a five-page document that's been produced for something, and you say, hey, an executive summary for this please, and let it boil it down. Or you can ask it to produce data to help you create information as well. So there are both sides of it: the explaining side as well as the generative side, right? There are some really interesting use cases for this. We're starting to look at how we can adopt it, so we're on the early access program with Microsoft, and we're going to look at adopting it in a couple of our different business areas. We've still got to finalize the plan for how we're going to adopt this and start rolling it out. But it comes with some things you've got to consider as well, around data privacy. Say you've got something running in a Teams call that's recording and learning from what you're saying, so at the end you can say, hey, produce a summary of what we talked about. Well, there are going to be some cases where you don't want it to do that, right? Or there's some information you don't want it to have. And you've got to be really careful with your permissions models, so that people don't get access to information they shouldn't have. So there are some technical concerns that I think a lot of people in the industry are thinking about now, and I'm sure Microsoft will help us get over those when we go to adopt it.
Speaker D:I'm really happy to see that being used in Excel, basically. I'm really bad at Excel, so analyzing data in Excel with Copilot, yeah, it's going to change how we use Excel, basically.
Speaker B:Yeah, brilliant. I'd love it to be able to generate a PowerPoint deck for a topic, just to get me going: maybe give it some points to consider and have it generate the structure for me. That would save me a lot of time, I think. And I know there was a tool, I don't know if you've used it before, based off Cortana, Microsoft's AI assistant, that you could use for organizing meetings. You wrote an email, copied in Cortana, added all the people you wanted in the meeting, and said, I want a 30-minute meeting on Teams in the next few weeks. It would automatically look at everyone's calendars, figure out the best time, drop the meeting in there, and send all the meeting requests out. It was really, really good. So I'm hoping there are going to be some tools like that that can help with that kind of admin. Organizing meetings takes time sometimes, right, when you've got a lot of busy people and you're looking through calendars and stuff. If you can automate that, that'd save me a lot of time as well. So what do we think the future holds in this area? Are there any things that we're excited about?
Speaker A:I think, as mentioned, it's only going to get better from here. It's continuously evolving and improving as a product. So as that technology progresses, how will our roles as engineers change? I think there is some apprehension, not just in engineering roles but in other business roles too, right? People are worried: does this replace me? I don't think it does. I think you can learn to control the tool and get better outputs from it, and I think that's what it's about: embracing the change and learning how to get the most out of the technology. Is it going to be about less of us writing the code? Eventually, do we become the copilot? At the minute it's our copilot; does that role reverse in the end? Can I just feed it a load of business requirements? Can I just say, in plain language, in prompt engineering: create me this fully functioning, tested, NFR-compliant API that meets all of my business requirements? Can I just point it at a ticket number in DevOps and say, generate me that? Could I do that? So there's so much interesting stuff that could come here, but it's going to be more about how we can get value to our customers as quickly as possible. These tools help us do that, and I think we should embrace that change. I think as well, fine-tuning that business context is something that I'm super interested in, as we give it more access to our business information. You've probably got a wiki somewhere in your business that's got all of your business's information. We've got a big Confluence at ASOS that has so much information in it, but sometimes it's hard to find. So if we can train models on Confluence, and I think Atlassian are bringing a product out soon, Atlassian Intelligence, that's going to do some of this, you'd basically be able to have a conversation around your business's context. Imagine there was a catalog conversion project some years ago and you don't know anything about it; you could just ask, what was the catalog conversion project? And it will come back with: it did this, it did that. You can say, who was the lead engineer on that? Can you set up a meeting with them? You could really get to the answer quickly. At the minute, you'd probably have to go and ask a load of people to get that level of information. Whereas once these models have access to our data and all of that information, you've got a PA, right, that's really helping you get to answers quickly.
Speaker B:You could generate onboarding information for a new engineer in a team, right, based on the information around what that team does, the projects they work on, the standards they have in place. That'd be quite interesting for me.
Speaker D:We have seen so many business use cases for AI, right? I would love to see AI helping with our day-to-day lives, not just business. Maybe in the home: we're doing so many chores, right? Cleaning, cooking. Introducing AI there could be interesting, making our lives much easier.
Speaker C:I think one of the things I'm interested in is how we can use AI to make things more accessible. In particular, there are people in my life who are dyslexic, and they find writing emails or correspondence to people really difficult sometimes. One of the things that we've found is that with ChatGPT, you don't even have to really think about that; you just tell it, I want to write an email to this person telling them XYZ, and pretty much all of the time it will spit out something that is spelled correctly and has correct grammar. I think it just makes everyday things that some people maybe take for granted a lot easier. So I'm really interested to see how we can use that to make things more accessible for other people as well.
Speaker B:Yeah, it's great. I love all those ideas. I can't wait to see how we can use it going forward. And maybe we need to do another episode on what's happening next year, or maybe in six months, because it moves so quickly. So, thanks to all of today's guests. It's been a really interesting conversation around the use of AI, especially in the engineering space and in ASOS Tech, and also just across the world, really: all the different use cases we're seeing and all the great benefits it has, but also some of the things we need to think about, like privacy and the use of your voice or your likeness by other people. It was a really interesting episode, and I look forward to seeing what happens in this space. So, thank you all.
Speaker A:Thank you.
Speaker C:Thank you.
Speaker D:Thank you.
Speaker B:Join us next time for more stories and insights from behind the screens at ASOS Tech.