The Sales Engagement Podcast

Episode · 3 years ago

Why Data Science Is Critical To Growth In Engagement w/ Pavel Dmitriev

ABOUT THIS EPISODE

We're talking all things data and testing with our VP of Data Science, Pavel Dmitriev. Pavel explains the how and why behind using data science and A/B testing to grow engagement without the guesswork. Tune in!

Welcome to the Sales Engagement Podcast. This podcast is brought to you by outreach.io, the leading sales engagement platform helping companies, sellers, and customer success teams engage with buyers and customers in the modern sales era. Check out salesengagement.com for new episodes, resources, and the book on sales engagement coming soon. Now let's get into today's episode. Everybody, welcome to the Sales Engagement Podcast. I'm Mark Kosoglow, VP of Sales at Outreach. On this podcast we talk all things sales engagement, and one thing that's really interesting to me is how we get better at doing sales engagement, not just how we do more of it. With me today I have Pavel Dmitriev, who's our VP of Data Science at Outreach. Pavel, say hi to everybody. Hi everyone. So tell me a little bit about what you do. What does a VP of Data Science do here? Yeah, so I lead data science. We have an awesome team here with data scientists, data engineers, and developers, and we're trying to build features into Outreach that make life easier for reps: automate some of the stuff they don't like doing, make them more effective at the things they are doing, and help really improve the whole sales process. Right. So it's always interesting to me. Manny Medina, our CEO, told me one day, hey man, I'm about to hire some people who can go deep on data, science, and experimentation. I was like, okay, sounds cool. But when I met you and the other data scientist who were the first two people on that team, it became clear to me how important this was. So, looking at what you have done in the past with data science, how do you see it applying to sales engagement and what we do here at Outreach?

I think data science really will become critical to optimizing sales engagement. Because without data science, what happens is that a VP of Sales comes in, sets up the process, the content, and so on, and then that's what it is. It just stays like that. Things do not improve, or they improve in a very ad hoc way based on anecdotes and best practices. In fact, it's not clear at all, when someone goes and changes certain sequences or some content, whether it actually helps or not. Yeah. So what data science can do is bring scientific rigor and measurement to that process, so that when we make changes, when we try to improve something, we can actually verify whether it really improves things, and then over time accumulate learning about which things are working, that we can do more of, and which things are not working, that we don't need to do. What that enables is a sales process that keeps improving, constantly improving over time, rather than just remaining static as it is now. You used to be at Microsoft, right? Yeah. How long were you there? Eight years. Eight years. And there's this interesting story you told me, I think about a developer at Bing, that really opened your eyes, and not just your eyes but other people's eyes, to the fact that, hey, this is important. Tell me that story again. Yeah, that was an amazing story and it really opened my eyes to the power of this specific technique called A/B testing. What happened is, one day a developer at Microsoft had an idea about how to improve the way ads are displayed on the search results page, and the improvement was very simple.

He just wanted to take the first sentence from the text of the ad and put it into the title. It's very simple, literally a couple of lines of code, probably an hour at most to develop. So he had that idea, but there are hundreds of developers and PMs, and everyone has ideas. It all goes into the backlog and gets prioritized, and this idea did not make it to the top. He waited a month, and two months, and three months, and after six months he was like, this is never going to make it. So he pretty much just built it himself over a weekend in his free time and then started this scientific experiment called an A/B test. And what happened is that immediately an alert fired. Our system at Bing generated alerts whenever there was some kind of strange movement in the metrics, and this time the alert was, essentially, we are making too much money. This can't be true. I've never seen a feature that makes so much money, and this is a trivial change. So what was wrong? Well, it turns out nothing was wrong. This feature actually increased Bing's revenue by twelve percent, which was around a hundred million dollars a year at that time. The six-month delay cost Bing fifty million dollars, and no one would have imagined that this feature would do it. Everyone prioritized it low and it sat somewhere down in the backlog. Yeah, I think we see that a lot with sales leaders. Bringing that back to sales engagement: a rep has an idea, or a sales leader has an idea, and it gets pushed to the bottom. Or they have an idea they think is important, make it the priority without really testing it, and end up somewhere off kilter from where they should have been going. But it's interesting. You've been here for six months now, you've talked to sales leaders, you've worked with our sales team.

What is something that you're seeing sales leaders do wrong with A/B testing? Most sales leaders understand the idea: I should be testing something different against the thing we've always done and comparing the results. But give me a couple of things you feel people get wrong as sales leaders when it comes to A/B testing, from a scientist's perspective. Yeah, I think a few things. Some of it has to do with just being able to properly run an A/B test. You need to have enough data, and you actually need statistical tests to determine what works and what doesn't. You can't just look at the difference, because that could be due to noise. So just properly configuring and running an A/B test is actually not as easy as it seems. We analyzed the data of Outreach and of Outreach customers, and very few A/B tests are actually run correctly. Most of them are invalid for one reason or another. So that's one thing. The other thing is, I feel, a little bit of underappreciating the power of A/B testing. Everyone is aware of A/B testing; sales leaders know it exists and they value it. However, they often think of it as just the thing you use to see which subject line on a certain email is better, or what kind of call to action to put into a specific email template. But the power of A/B testing is actually a lot more than that. You can use it to answer high-level business questions, such as whether video in emails is effective or not, something we've talked about before. You cannot do that with just one A/B test, but you can do it with an A/B testing initiative, a set of coordinated A/B tests that you run across different scenarios and different aspects of the product.

And that really enables sales leaders, I think, to answer the questions they really care about, those high-level business questions, not just small questions about whether, in a specific step of a specific type of sequence, something is better than something else. Right. So this is one thing that really struck me when we met and started talking: the idea of getting your sales team better and better through testing and experimentation, which is probably the more technical term. It's a lot of work. It's not easy, it's hard, and a lot of people are dabbling in it and doing it sort of right, sort of wrong, maybe mostly wrong. What are the consequences of doing it incorrectly? The consequences can be very severe, because the results of an A/B test are generally used to make a decision about changing the process or the strategy. Perhaps you take the result of this A/B test and roll it out across your whole company. If that decision is based on wrong or incomplete data, then it may be the wrong decision, and depending on how widely it's applied, it may actually have really bad consequences. Yeah. And going back to Microsoft, you told me another story from your time there, about developers' intentions of creating good features, and how many of those actually turned out to be features that help users. Tell me that story. Yeah, that's kind of the other side of the Bing stories we just talked about. On the one hand, good features often do not get prioritized very high. On the other hand, when we looked at all of the features that actually got built and shipped to users and evaluated them with A/B tests, we found that only a third of them were actually good, meaning they really benefited users.

Another third were just neutral, and the last third were actually harmful, meaning they lost revenue or degraded the user experience. That was really eye-opening, and very humbling. Developers and PMs have ideas and they build them, and they all believe those ideas are good, but in reality the ideas are about as likely to hurt users as to help them. I thought, this is so crazy, right? Because on the sales floor you get a couple of reps talking, then they bring over a couple of their buddies at lunch, then they talk to their manager, and they all have the best intentions of doing something that helps the sales team. But actually their idea has just as much of a chance to harm the sales team as it does to help it. And we just have to get away from our gut, right? We can't rely on our guts anymore as salespeople in order to get things right, because now we have data, we have data science, we have machine learning, we have these things we can use. Making wrong decisions based on your gut, I don't think people are going to tolerate it much longer. Do you? Yeah, no, absolutely not. And I would say it's actually a very natural combination: you use your intuition and your experience to come up with ideas, and we don't want to throw those ideas away. The real difference that the scientific technique of A/B testing makes is that it allows us to treat those ideas as hypotheses rather than as the absolute truth, and then actually test those hypotheses. In the process of testing them, we learn a lot more, which gives rise to new ideas, and that kind of virtuous circle is what A/B testing enables. Yeah, I think it's interesting.

I think some people think they're good at picking winning ideas. If I'm a sales manager and five of my reps each bring me an idea, I feel like my gut tells me, boom, this one is the right one. And that's just not the case. There's no magic wand; nobody has some kind of magic intuition or magical wondrous gut that just tells you what's right. The only way to really know is the data. And as a data scientist, you appreciate that, I'm sure. Yeah, absolutely, the only reliable way to know is to test it. So one thing that you taught me, and I remember one of the engineers on your team did a presentation on it, is how big a sample size you need in order to notice a specific size of effect that a variant in a test, or a change you make, can have. So walk me through that thing you guys taught me: if I want to see a change of X, my sample size has to be Y, but if I want to see a change of only A, my sample size has to be B. Can you explain that for us? Yeah, that's one of those aspects of setting up an A/B test correctly: you need the right amount of data, in statistical terms the sample size. And the interesting, maybe slightly counterintuitive, way to think about it is that the bigger the difference you care about detecting, the fewer samples you need, and the smaller the difference you care about, the more samples you need. So, for example... Yeah, give me some examples. For example, if I rewrite an email and it goes from a ten percent reply rate to an eleven percent reply rate, how big does my sample size need to be for the difference between those two tests to be real and for me to trust it?

Yeah, so there is actually an exact formula to calculate it, but in this case it's about a ten percent relative difference, going from ten to eleven percent, and ten percent is kind of in the middle: not very small, not very large. It will probably end up being something in the tens of thousands of deliveries. On the other hand, if you're testing something that you expect to bring the reply rate from ten percent up to twenty percent, which is a hundred percent improvement, you really only need a few hundred deliveries. So if I'm a sales leader running an A/B test in a sales engagement platform, or a marketer running it in marketing automation, and I run, say, two thousand people through a test, most people would say that's a good test. But if my reply rate or my engagement rate has gone from ten to twelve percent, we can't trust those results, because the sample size just isn't big enough to say for certain what made the difference. Yeah, most likely that is not going to be what you'd call a statistically significant difference. But in the same vein, if I run a test and after a couple hundred people I see the reply rate has gone from ten to twenty percent, that actually might be statistically significant, because the change is so large the sample size can be quite small and we can still trust those results. Yeah, exactly.
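Pavel's exact formula isn't quoted in the episode, but the standard two-proportion power calculation lands in the same ballpark he describes. Here is a minimal sketch in Python using statsmodels; the helper name deliveries_per_variant and the 95% confidence / 80% power defaults are illustrative assumptions, not Outreach's implementation.

```python
# A minimal sketch, assuming the usual 5% significance level and 80% power;
# this is the textbook two-proportion power formula, not Outreach's feature.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

def deliveries_per_variant(baseline_rate, expected_rate, alpha=0.05, power=0.8):
    """Approximate deliveries needed in EACH variant to reliably detect a move
    from baseline_rate to expected_rate (reply rate, open rate, etc.)."""
    effect = proportion_effectsize(baseline_rate, expected_rate)  # Cohen's h
    n = NormalIndPower().solve_power(effect_size=effect, alpha=alpha,
                                     power=power, alternative="two-sided")
    return int(round(n))

# 10% -> 11% reply rate (small relative lift): roughly 7,400 per variant,
# i.e. on the order of 15,000 deliveries in total.
print(deliveries_per_variant(0.10, 0.11))

# 10% -> 20% reply rate (a 100% improvement): roughly 100 per variant,
# i.e. only a couple hundred deliveries in total.
print(deliveries_per_variant(0.10, 0.20))
```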

The reason the size of the difference matters so much is that there is always noise in the data. The different people we send emails of one type or the other to may have slightly different preferences, and just by chance it may happen that, within that first couple of hundred, there were a few customers who were simply more likely to reply regardless of what was sent to them, compared to the other group. So that kind of noise can happen and it can cause some difference. However, you don't expect noise to cause a very big difference. So if you see a big difference on less data, that can still convince us it's not noise, while if the difference is small, we want to observe more data to really convince ourselves that it's real and not just random variation.
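The statistical test Pavel alludes to, the one that decides whether an observed difference is larger than noise could plausibly produce, is typically a two-proportion test. A minimal sketch, again in Python with statsmodels, using illustrative numbers from the conversation; the helper is_significant is hypothetical, not the Outreach feature.

```python
# Sketch of a two-proportion z-test: is the observed lift in reply rate
# unlikely to be explained by random variation alone?
from statsmodels.stats.proportion import proportions_ztest

def is_significant(replies_a, sent_a, replies_b, sent_b, alpha=0.05):
    """Return (significant?, p-value) for the difference between two reply rates."""
    _, p_value = proportions_ztest([replies_a, replies_b], [sent_a, sent_b])
    return p_value < alpha, p_value

# 2,000 people split evenly, 10% vs 12% reply rate: p is roughly 0.15,
# so the lift could easily be noise -- we can't trust it yet.
print(is_significant(100, 1000, 120, 1000))

# A couple hundred people, 10% vs 20% reply rate: p is roughly 0.048,
# which just clears the usual 5% bar because the change is so large.
print(is_significant(10, 100, 20, 100))
```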

Now, one of the things you've been tasked with here at Outreach is developing machine learning features inside our platform, which is what we call Amplify, features that allow us as sales leaders to be more scientific. Right, we need help. I'm no scientist. I remember I had Chem 14 at Penn State University: a four-hour lab every Thursday evening, a twelve-week semester, so I did probably ten experiments. Guess how many of those experiments matched my calculation of the number of grams that was supposed to come out of the chemical reaction. How many of those calculations turned out to be right? Zero. Not even one. Every time I'd calculate that I was going to get, like, twenty-three grams of sodium chloride, and I'd end up with zero grams. So I'm not a scientist at all, but I'm starting to understand and appreciate the value of science. So here at Outreach, tell us about one thing you've created that is helping a sales leader become more scientific, so they can actually get off their gut and get onto the science bandwagon. Yeah, one of the things we did is develop a feature that makes A/B testing in Outreach more scientific. We always had A/B testing; we just didn't have the science behind it. And you ran a study: was even one percent of all the A/B testing at Outreach statistically significant or run the right way? We actually found that it was less than one percent. So we, one of the first platforms to build A/B testing for sales email, were getting less than one percent of our tests right. Right. So you created something to help us with that, which helps everybody. Yeah. What we did is create an experience that we call guided A/B testing. Behind the scenes it runs statistical tests, and it also runs some other checks to make sure the experiment is actually valid and correct, and then it tells the user in the app when the experiment has a winner. In addition to that, it tries to prevent people from breaking experiments, because one reason we found so many experiments weren't correct is that users would just break them: they would stop a certain template from sending emails in the middle of the test and then start it again, or something like that. So now, whenever you try to break a currently running experiment, a window pops up with a red warning. And that warning has never popped up for me; it still hasn't popped up on me. Oh, it actually has. Yeah, fair enough.
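Pavel doesn't detail which "other checks" guided A/B testing runs, but one common validity check in experimentation systems is a sample ratio mismatch (SRM) test: if an intended 50/50 split comes back badly lopsided, for example because one template was paused mid-test, the results shouldn't be trusted. A hypothetical sketch of such a check, not Outreach's implementation:

```python
# Hypothetical sample ratio mismatch (SRM) check: compare the observed traffic
# split against the intended split with a chi-squared test.
from scipy.stats import chisquare

def sample_ratio_ok(sent_a, sent_b, expected_share_a=0.5, alpha=0.001):
    """Return True if the observed split is consistent with the planned split."""
    total = sent_a + sent_b
    expected = [total * expected_share_a, total * (1 - expected_share_a)]
    _, p_value = chisquare([sent_a, sent_b], f_exp=expected)
    return p_value >= alpha  # a tiny p-value suggests the experiment is broken

print(sample_ratio_ok(1000, 1005))  # True: close to the planned 50/50 split
print(sample_ratio_ok(1000, 700))   # False: something interfered with delivery
```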

So I really think that, for sales engagement, Pavel and his team are doing some unbelievable things to help us as sales leaders bring science into the art of sales. We like to say Amplify's job is to bring science to the art of sales, and I think he's doing a great job of it. I want to thank you for your time on the podcast today, Pavel. What's the best way for people to get in touch with you if they have questions? They can email me, find me through the Outreach web page, or connect with me on LinkedIn. Great. So if you want to talk to the master, he doesn't have a lot of time because he's off building experiments to make us better, but he'll always get back to you, and of course we'll keep you apprised of what we're doing here at Outreach to make things easier for you. Thanks for your time today, Pavel. Thanks, Mark, it was a pleasure. All right, cool, and that's it for this one. We'll talk to you on the next Sales Engagement Podcast. This was another episode of the Sales Engagement Podcast. Join us at salesengagement.com for new episodes, resources, and the book on sales engagement coming soon. To get the most out of your sales engagement strategy, make sure to check out outreach.io, the leading sales engagement platform. See you on the next episode.
