Case Study: Subway, a journey to digital transformation through personalization

Categories: Adobe Summit 2018, Case Studies

 

Most ideas I hear are not based on data, but rather on what people believe. You need to have a model to make those big decisions objective.

 

Subway is undergoing a major digital transformation in a world where personalizing your customers' experience is paramount. When each and every one of your customers has hundreds of options, you need to be as precise and accurate as possible when you offer them a personalized experience, one tailored specifically to their needs.

 

The Subway digital transformation was powered, in part, by Accenture Interactive, which helped Subway turn A/B testing into personalization.

 

“We did that by creating the whole structure of the operating model. We also brought with us our philosophy for personalization: personalization uses customer data to improve the customer experience at every touchpoint,” said Jeff Larche, Sr. Manager, Personalization & Customer Analytics at Accenture Interactive.

 

 

One of the conclusions drawn after the digital transformation began and came to fruition was that you need a system to discover which elements of your site are strong; those are the ones that should be personalized. Of all the great ideas you could possibly launch, which should be launched first?

 

“You need to have a system to analyze your data to take advantage of all the work you’ve done. If your analysis is not good, it’s all pointless. The most important part of successful testing is a scalable, repeatable process. You’re not going to implement a strong personalization system and make millions of dollars right away. It’s a process,” said Chad Sanderson, Digital Optimization & Experimentation Lead at Subway.

 

Jeff Larche believes that, in some ways, we’re all collections of cognitive biases invisibly influencing our actions.

 

 

“The optimizations we make with Adobe Target are based on math and are agnostic towards these biases, which helps improve user satisfaction. We believe an idea will result in a measurable outcome for reasons backed by data, and that outcome can be measured by a concrete metric.”

 

 

Studies suggest that loss-averse behavior is a very general feature of economic choice. These results suggest that loss-aversion extends beyond humans, and may be innate.

 

One of the most important things that we, as analysts, should consider is that “most ideas we hear are not based on data, but rather on what people believe” (Chad Sanderson). This matters enormously for our industry, because it’s a huge opportunity to help companies and managers understand that data-driven decisions are what drive the business forward.

 

Chad explained that of all the data we have, most of it is quantitative. However, qualitative data answers the “why” question, which is just as important. “In any business, you need to have a model that makes those big decisions objective: a prioritization model. This is a very simplified version of the model we use at Subway.”

 

 

Any test above the fold gets a value of 1; any test below the fold gets a value of 0. If you’re testing something that’s harder to notice, you give it a lower rating.

 

Some examples of business metrics: ROI, and the predicted revenue or engagement for the test. If you run a minor test, the result probably won’t be a 50-million-dollar change. But if you’re changing the very nature of a funnel, there’s a strong chance it will affect something strongly.
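To make the idea concrete, here is a minimal sketch of a test-prioritization model in the spirit of the one described above. The field names, factor weights, and sample tests are all illustrative assumptions, not Subway's actual model.

```python
# Hypothetical prioritization model: each candidate test gets a score built
# from simple factors, and tests are launched in descending score order.

def priority_score(test):
    """Sum simple graded factors into a single priority score (assumed scheme)."""
    noticeability = 1 if test["above_fold"] else 0          # above the fold -> 1, below -> 0
    effort = {"low": 2, "medium": 1, "high": 0}[test["effort"]]  # cheaper tests rank higher
    business_impact = test["predicted_revenue_impact"]      # e.g. a 0-3 analyst rating
    return noticeability + effort + business_impact

tests = [
    {"name": "hero banner copy", "above_fold": True, "effort": "low",
     "predicted_revenue_impact": 3},
    {"name": "footer link order", "above_fold": False, "effort": "low",
     "predicted_revenue_impact": 0},
    {"name": "checkout funnel redesign", "above_fold": True, "effort": "high",
     "predicted_revenue_impact": 3},
]

# Highest score first: these are the "sweet spot" tests to launch first.
ranked = sorted(tests, key=priority_score, reverse=True)
for t in ranked:
    print(t["name"], priority_score(t))
```

A noticeable, low-effort, high-impact test (the hero banner here) naturally floats to the top, which is exactly the ranking behavior the model is meant to produce.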

 

 

Jeff Larche says that, as you prioritize, if each of the circles above were a test, those in the upper right would be the sweet spots you should start with. By the time you’ve finished those tests, others will have floated there as well.

 

“Just because you build a model doesn’t mean it should be static. You should always change it, keep it dynamic, and update it. If we run 10 tests and think that being above or below the fold means something, but when we run 100 we don’t see anything important, we might remove that column. You can turn this into a predictive model fairly easily and start making predictions based on your inputs,” said Chad Sanderson.
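One way to act on that advice is to check, after many tests, whether a factor in the model still relates to observed results. The sketch below does this for an above-the-fold column using a plain Pearson correlation; the test history is simulated, and the drop threshold is an assumption for illustration.

```python
# Hypothetical model-maintenance check: does the "above the fold" factor
# still correlate with observed lift across past tests? The history is
# simulated here with placement deliberately unrelated to lift.
import random

random.seed(0)

# (above_fold 0/1, observed lift %) for 100 simulated past tests.
history = [(random.randint(0, 1), random.gauss(2.0, 1.0)) for _ in range(100)]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

folds = [f for f, _ in history]
lifts = [l for _, l in history]
r = pearson(folds, lifts)

# If the factor shows no meaningful relationship, drop the column.
keep_column = abs(r) > 0.2   # illustrative threshold, an assumption
print(f"correlation={r:.3f}, keep above-fold column: {keep_column}")
```

Because the simulated placement carries no real signal, the correlation comes out near zero, which is the situation where Sanderson suggests removing the column.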

 

“Sometimes we can look at data too hard. We can look until we find something that’s not really there,” he added.

 

One of the most frequent questions in analytics is: if an optimization works during the A/B test, why do the results sometimes go away when I implement it? The answer, as Sanderson explains, is simple: if you’re looking at a huge number of segments and datasets, you’re going to find some results that were never really there.

 

 

Before you run a test or a personalization, you say: “This is my hypothesis; these are the metrics and segments I’m going to use and observe.”

 

“I encourage you to do exploratory analysis. But I don’t want you to regard those results as amazing and share them with your stakeholders. Look at them as ‘interesting’, formulate a hypothesis, and run the test again.”

 

Sebastian is a journalist and digital strategist with years of experience in the news industry, social media, content creation & management and web analytics.