The Sales Engagement Podcast

Episode · 3 years ago

Why Data Science Is Critical To Growth In Engagement w/ Pavel Dmitriev

ABOUT THIS EPISODE

We're talking all things data and testing with our VP of Data Science, Pavel Dmitriev. Pavel explains the how and why behind using data science and A/B testing to grow engagement without the guesswork. Tune in!

Welcome to The Sales Engagement Podcast. This podcast is brought to you by Outreach.io, the leading sales engagement platform, helping companies, sellers and customer success engage with buyers and customers in the modern sales era. Check out SalesEngagement.com for new episodes, resources and the book on sales engagement, coming soon. Now let's get into today's episode. Everybody, welcome to The Sales Engagement Podcast. I'm Mark Kosoglow, VP of Sales at Outreach. On this podcast we talk all things sales engagement, and one thing that's really interesting to me is how we get better at doing sales engagement, not just how we do more of it. With me today I have Pavel Dmitriev, who's our VP of Data Science at Outreach. Pavel, say hi to everybody. Hi everyone. So tell me a little bit about what you do. What does a VP of Data Science do? Is that what you do here? Yeah, so I lead data science. We have an awesome team here with some data scientists, some data engineers and some app developers, and we're trying to build features into Outreach that will make the lives of reps easier: automate some of the stuff that they don't like doing, make them more effective at the things that they are doing, as well as help to really improve the whole sales process. Right. So it's always interesting to me. Manny Medina, our CEO, told me one day, he's like, hey man, I'm about to hire some people that can go deep on data and on science and experimentation. I was like, okay, sounds cool. But then when I met you and Yifei, who were kind of our first two people on that team, it became clear to me how important this was. So, as you're looking at what you have done in the past with data science, how do you see it applying to sales engagement and what we do here at Outreach?

I think data science really will become critical to optimizing sales engagement, because without data science, what happens is that a VP of Sales comes in, kind of sets up the process, content and so on, and then that's what it is. It just stays like that. Things do not improve, or they improve in some very ad hoc kind of way, based on, you know, anecdotes and best practices and so on. In fact, it's not clear at all, when someone goes and changes certain sequences or some content, whether it actually helps or not. Yeah. So what data science can do is bring a kind of scientific rigor and measurement to that process, so that when we make changes, when we try to improve something, we can actually verify whether it really improves things, and then over time accumulate learning about which things are working, that we can do more of, and which things are not working, which we don't need to do. What that enables is that this whole sales process keeps improving, constantly improving over time, rather than just remaining static as it is now. You used to be at Microsoft, right? Yeah. How long were you there? For eight years. Eight years. And there's this interesting story that you told me, I think about the developers at Bing, that really opened your eyes to what data science can do. Or not just opened your eyes, but opened other people's eyes to the fact that, hey, this is important. Tell me that story again. Yeah, that was an amazing story and it actually really opened my eyes to the power of this specific technique called A/B testing. So what happened is, once, an ads developer at Microsoft had an idea about how to improve the way ads are displayed on the search results page, and the improvement was very simple.

He just wanted to take the first sentence from the text of the ad and put it into the title. It's very simple, really just a couple of lines of code, probably takes like an hour at most to develop. So he had that idea, but you know there are hundreds of developers and PMs, everyone has ideas, and it all goes into this backlog and gets prioritized. This idea did not make it to the top, and he waited a month, and two months, and three months, and after six months he was like, this is never going to make it. So he pretty much just did it over a weekend in his free time, and then started this scientific experiment called an A/B test. And then what happened is that immediately an alert fired. Our system at Bing generated alerts whenever there was some kind of strange movement, and this time the alert was that we are making too much money. This can't be true. I've never seen a feature which makes so much money, and this is a trivial change. So what was wrong? Well, it turns out nothing was wrong. This feature actually increased Bing's revenue by twelve percent, which was around a hundred million dollars a year at that time. The six-month delay cost Bing fifty million dollars, and no one would have imagined that this feature would do it. Everyone prioritized it low and it was somewhere down in the backlog. Yeah. I think we see that a lot with sales leaders. Bringing that back to sales engagement: a rep has an idea, or a sales leader has an idea, and it gets pushed to the bottom, or they have an idea that they think is important, they prioritize it, test it, and, you know, end up somewhere off kilter from where they should have been going. But it's interesting. You've been here for six months, you've talked to sales leaders, you've worked with our sales team.

What is something that you're seeing sales leaders do wrong with A/B testing? Because most sales leaders understand A/B testing: I should be testing something different against the thing that we've always done and compare the results. But give me a couple of things that you feel people have done wrong as sales leaders when it comes to A/B testing, from a scientist's perspective. Yeah, I think there are a few things. Some of it has to do with just being able to properly run an A/B test: it needs to have enough data, and you actually need to have statistical tests to determine what works and what doesn't. We can't just look at the difference, because that could be due to noise. So just properly configuring and running an A/B test is actually not as easy as it seems. We analyzed the data of Outreach and of Outreach customers, and very few A/B tests actually run correctly. Most of them are invalid for one reason or another. So that's one thing. Another thing is, I feel, a little bit of underappreciating the power of A/B testing. Everyone is aware of A/B testing; sales leaders know it exists and value it. However, they often think of it as just the thing you use to see which subject line on a certain email is better, or what kind of call to action to put into a specific email template. But the power of A/B testing is actually a lot more. You can use it to answer high-level business questions, such as whether video in emails is effective or not, something that we talked about before. You cannot do it with just one A/B test, but you can do it with a kind of A/B testing initiative, a set of coordinated A/B tests that you run across different scenarios and different aspects of the product.
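To make the "A/B testing initiative" idea a bit more concrete, here is a minimal, hypothetical sketch of how results from several coordinated tests of the same question (say, video versus no-video emails across different sequences) could be rolled up into one overall answer. The inverse-variance pooling shown here is one common approach, not a description of Outreach's actual method, and every count in the example is invented.

```python
# A hypothetical sketch of combining several coordinated A/B tests that all ask
# the same high-level question ("does adding video to emails help?") run across
# different sequences. Inverse-variance (fixed-effect) pooling of reply-rate
# differences; all counts below are invented for illustration.
import numpy as np
from scipy.stats import norm

# (replies_control, sent_control, replies_video, sent_video) for each sequence
tests = [
    (45, 500, 60, 500),   # e.g. outbound prospecting sequence
    (30, 400, 41, 400),   # e.g. follow-up sequence
    (22, 300, 24, 300),   # e.g. re-engagement sequence
]

diffs, weights = [], []
for rc, nc, rv, nv in tests:
    pc, pv = rc / nc, rv / nv
    var = pc * (1 - pc) / nc + pv * (1 - pv) / nv   # variance of the rate difference
    diffs.append(pv - pc)
    weights.append(1.0 / var)

pooled_lift = np.average(diffs, weights=weights)     # overall estimated lift from video
pooled_se = np.sqrt(1.0 / np.sum(weights))
p_value = 2 * norm.sf(abs(pooled_lift / pooled_se))  # two-sided test of "no effect"
print(f"pooled lift = {pooled_lift:+.3f}, p-value = {p_value:.3f}")
```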

That really enables, I think, sales leaders to answer the questions they really care about, those high-level business questions, not just small questions about whether, in a specific type of sequence, in a specific step, something is better than something else. Right. So this is one thing that really struck me when we met and we started talking: the idea of getting your sales team better and better through testing and experimentation, which is probably the more technical term. It's a lot of work. It's not easy, it's hard, and a lot of people are kind of dabbling in it and doing it sort of right, sort of wrong, maybe mostly wrong. What are the consequences of doing it incorrectly? The consequences can be very severe, because the results of an A/B test are generally used to make some decision about changing maybe the process or the strategy, perhaps taking the result of this A/B test and incorporating it across, you know, your whole company. And if that is based on wrong or incomplete data, then potentially that decision may be wrong, and depending on how widely it's going to be used, it may actually have really bad consequences. Yeah. And going back to stories, you told me another story from when you were at Microsoft, about developers' intentions of creating a good feature and how many of those turned out to be features that actually helped users, or not. Tell me that story. Yeah, that's kind of the other side of the Bing story that we just talked about. On the one hand, you know, often good features do not get prioritized very high. On the other hand, we looked at all of the features that actually got built and shipped to users, and we evaluated them using A/B tests.

We found that only a third of them were actually good, like really benefited users, another third were just neutral, and then another third were actually harmful, as in they would lose revenue or degrade the user experience. And that's really eye-opening in a sense, and very humbling, that when developers and PMs have ideas, they all believe those ideas are good, but actually the ideas are as likely to harm users as to help them. This is so crazy, right? Because on the sales floor you get a couple of reps talking, then they bring over a couple of their other buddies at lunch, then they talk to their manager, and they all have the best intention of doing something that helps the sales team. But actually their idea has just as much of a chance to harm the sales team as it does to help the sales team. And we just have to get away from our gut, right? We can't rely on our guts anymore as salespeople in order to get things right, because now we have data, data science, machine learning; we have these things that we can use. Making wrong decisions based on your gut, I don't think people are going to tolerate it much longer. Do you? Yeah, no, absolutely not. And I would say it is actually a very natural combination: using your intuition, your gut, your experience to come up with ideas. We don't want to throw those ideas away. The only real difference that a scientific technique like A/B testing makes is that it allows us to treat these ideas as hypotheses rather than absolute truth, and then actually test those hypotheses. And in the process of testing those hypotheses, we learn a lot more, which gives rise to new ideas, and that kind of virtuous circle is what A/B testing enables. Yeah, I think it's interesting.

I think some people think they're good at picking winning ideas. Like, if I'm a sales manager and five of my reps bring me an awesome idea, I feel like my gut tells me, boom, this is the right idea. And that's just not the case. There's no magic wand; somebody doesn't have some kind of magic intuition or magical wondrous gut that just tells you what's right. The only way to really know is the data, and as a data scientist, you appreciate that, I'm sure. Yeah, absolutely, the only reliable way to know is to test. So one thing that you taught me, and I remember our friend Yifei, who's one of the engineers on your team, did a presentation, and he got up there and wanted to talk about how big of a sample size you need in order to notice a specific size of effect that a variant in a test can have, or that making a change can have. So walk me through that thing that you guys taught me. If I want to see a change of X, my sample size has to be Y, but if I want to see a change of only A, my sample size has to be B. Can you explain that for us? Yeah, that's one of those aspects of setting up an A/B test correctly: we need the right amount of data, in statistical terms the sample size. And the interesting, and maybe a little bit counterintuitive, thing about it is that the bigger the difference you care about detecting, the fewer samples you need, and the smaller the difference you care about, the bigger the number of samples you need. So, for example... yeah, give me some examples. For example, if I rewrite an email and my email goes from a ten percent reply rate to an eleven percent reply rate, how big does my sample size need to be in order for the difference between those two tests to be relevant and for me to trust it?

Yeah, so there is actually an exact formula to calculate it. In this case it's about a ten percent relative difference, going from ten to eleven percent, and ten percent is kind of in the middle: it's not very small, it's not very large. It would probably end up being something in the tens of thousands of deliveries. But on the other hand, if you're testing something and you expect it to bring the reply rate from ten percent up to twenty percent, which is a hundred percent improvement, you really only need a few hundred deliveries. So if I'm a sales leader running an A/B test in a sales engagement platform, or I'm a marketer running it in marketing automation, and I run, say, two thousand people through a test, most people would say that's a good test. But if my reply rate or my engagement rate has gone from ten to twelve percent, we can't trust those results, because the sample size just isn't big enough to say that the change is certainly what made the difference. Yeah, most likely that is not going to be what you call a statistically significant difference in that case. But in the same vein, if I run a test and after a couple of hundred people I see the reply rate has gone from ten to twenty percent, that actually might be statistically significant, because the change is so large. The sample size can be super tiny and we can still know that we can trust those results.
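For readers who want to reproduce the rough numbers above, here is a minimal sketch of the kind of two-proportion power calculation Pavel is alluding to. The deliveries_per_variant helper, and the conventional 5% significance / 80% power defaults, are illustrative assumptions rather than Outreach's exact formula.

```python
# A minimal sketch of the power calculation: how many deliveries each variant
# needs to reliably detect a given change in reply rate. The 5% significance
# level and 80% power defaults are conventional assumptions.
from math import asin, sqrt, ceil
from scipy.stats import norm

def deliveries_per_variant(p_baseline, p_expected, alpha=0.05, power=0.80):
    """Approximate deliveries needed in EACH variant (two-sided test)."""
    h = abs(2 * asin(sqrt(p_expected)) - 2 * asin(sqrt(p_baseline)))  # Cohen's h
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return ceil(((z_alpha + z_power) / h) ** 2)

# Small lift, 10% -> 11% reply rate: thousands per variant (tens of thousands of emails).
print(deliveries_per_variant(0.10, 0.11))   # ~7,400 per variant

# Large lift, 10% -> 20% reply rate: only about a hundred per variant.
print(deliveries_per_variant(0.10, 0.20))   # ~100 per variant
```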

Yeah, exactly, and the reason this happens is that there is always noise in the data. The people we send deliveries to, say emails of one type or another, may have slightly different preferences, and just by chance it may happen that within the first couple of hundred there were a few customers who would just be more likely to reply, regardless of what was sent to them, compared to the other group. So that kind of noise can happen and it can cause some difference. However, you don't expect the noise to cause a very big difference. So if we see a big difference in less data, that will more quickly convince us that it's not noise, while if the difference is small, we want to observe more data to really convince us that this is real, that it's not noise, not just random variation that caused it. Now, one of the things that you've been tasked with here at Outreach is developing machine learning features inside our platform, which is what we call Amplify, that allow us, as sales leaders, to be more scientific. Right, we need help. I'm no scientist. I remember I had Chem 14 at Penn State University. It was a four-hour lab every Thursday evening, and I think it was a twelve-week semester, so I did probably ten experiments. Guess how many of my calculations for the number of grams that was supposed to result from the chemical reactions I was going to make turned out to be true? Zero? You got it in one guess. It was zero every time. It would be like, you're going to end up with twenty-three grams of sodium chloride, and I'd have zero grams, every time. Right, so I'm not a scientist at all, but I'm starting to understand and appreciate the value of science. So here at Outreach, tell us about one thing that you've created that is helping a sales leader become more scientific, so that they can actually get off of their gut and get onto the science bandwagon. Yeah, one of the things that we did is we developed this feature that makes A/B testing in Outreach more scientific. We always had A/B testing; we just didn't actually have the science behind it.
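As a rough illustration of what putting science behind an A/B test looks like, the sketch below runs a two-proportion z-test on the two scenarios just discussed. The check_winner helper, the 5% threshold and the exact counts are assumptions made up for the example, not Outreach's actual logic.

```python
# A minimal sketch of the statistical test behind declaring a "winner": a
# two-proportion z-test on reply rates. Counts are illustrative only.
from statsmodels.stats.proportion import proportions_ztest

def check_winner(replies_a, sent_a, replies_b, sent_b, alpha=0.05):
    """Return the p-value and whether the reply-rate difference is significant."""
    _, p_value = proportions_ztest([replies_a, replies_b], [sent_a, sent_b])
    return p_value, p_value < alpha

# ~2,000 deliveries, 10% vs 12% reply rate: the difference could easily be noise.
print(check_winner(100, 1000, 120, 1000))   # p ~= 0.15 -> no winner yet

# ~300 deliveries, 10% vs 20% reply rate: the big lift is unlikely to be noise.
print(check_winner(15, 150, 30, 150))       # p ~= 0.015 -> significant
```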

And you ran a study and found that only about one percent of all the A/B testing at Outreach was statistically significant or was run the right way? We actually found that it was less than one percent. So us, one of the experts in email, the company that developed one of the first sales-ready A/B testing features, we were only getting it right one percent of the time. Right. So you created something to help us with that, which helps everybody. Yeah, so what we did is we created this experience that we call guided A/B testing, and what it does is that behind the scenes it runs statistical tests, and it also does some other tests to ensure that the experiment is actually valid and correct, and then it will tell the user in the app when the experiment has a winner. It will also, in addition to that, try to prevent people from breaking experiments, because the reason we found many of those experiments weren't correct is that users would just break them. They would come in and stop a certain template from sending emails in the middle of the test and then start it again, or something like that. So now we have warnings whenever you try to break a currently running experiment, and a window will pop up with a red sign. And yes, that's never popped up on me... it still hasn't popped up on me. No, it actually has.
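One example of the kind of "other tests" a guided A/B testing feature could use to catch a broken experiment is a sample ratio mismatch check: if one template is paused mid-test, the intended 50/50 split drifts, and a simple chi-square test flags it. This is a common validity check from the experimentation literature, sketched here as a hypothetical illustration; the looks_broken helper, the counts and the 0.001 threshold are invented, not what Outreach ships.

```python
# A hypothetical sketch of a sample ratio mismatch (SRM) validity check:
# flag the experiment if the observed delivery counts drift too far from the
# intended traffic split. Counts and threshold are illustrative only.
from scipy.stats import chisquare

def looks_broken(sent_a, sent_b, expected_split=(0.5, 0.5), alpha=0.001):
    """True if the observed delivery counts deviate from the intended split."""
    total = sent_a + sent_b
    expected = [total * expected_split[0], total * expected_split[1]]
    _, p_value = chisquare([sent_a, sent_b], f_exp=expected)
    return p_value < alpha

print(looks_broken(1000, 1012))   # False: split looks consistent with 50/50
print(looks_broken(1000, 1450))   # True: one variant was likely paused or broken
```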

So I really think that for sales engagement, Pavel and his team are doing some unbelievable things to help us as sales leaders bring the science into the art of sales. We like to say that Amplify's job is to bring science to the art of sales, and I think he's doing a great job of it. I want to thank you for your time today on the podcast, Pavel. What's the best way for people to get in touch with you if they have questions? You can email me or connect with me on LinkedIn. Yep, great. So if you want to talk to the master, he doesn't have a lot of time because he's off running experiments to make us better, but he'll always get back to you and let you know what's going on, and of course we can keep you apprised of what we're doing here at Outreach to make things easier for you. But thanks for your time today, Pavel. Thanks, Mark, it's been a pleasure. All right, cool, and that's it for this one. We'll talk to you on the next Sales Engagement Podcast. This was another episode of The Sales Engagement Podcast. Join us at SalesEngagement.com for new episodes, resources and the book on sales engagement, coming soon. To get the most out of your sales engagement strategy, make sure to check out Outreach.io, the leading sales engagement platform. See you on the next episode.
