- Stig Rosenlind
Removing the friction
You've all experienced it, haven't you? You're browsing a web page, you find a product or an article of interest, and you decide you want to learn more. Your eyes start scanning the page, but the only thing you find is a CTA button telling you to buy or order here. Insecurity comes crawling. You're not ready to order! Not yet, anyway.
As a customer you experience friction that ultimately makes you stop. The CTA button does not clarify the next step in a satisfying way, so you break off and move on elsewhere instead. Slightly annoyed. For the business: customer dissatisfaction and another possible conversion down the drain. Double trouble. Money lost.
Over the last few years we have built up the expertise needed to continuously test our customer journeys: the technical competency to use our A/B testing tool, Adobe Target, and, equally important, the ability to use behavioral insight as a key factor when designing tests and experiments. All to achieve the best possible customer experience and conversion.
By the way, did you know that we humans in the modern world make over 700 choices during a normal day? That's a whole lot of choices and a whole lot of decisions to be made. A whole lot of thinking to do. What's worse, only 5% of the decisions we make are based on deeper reflection and rationality, on an actual thinking process if you like. That leaves a staggering 95% of our decisions based purely on intuition and instinct. Frightening when you think about it. Let's hope our politicians have learned to raise that percentage for their decisions. Anyway, we will return to this topic further down.
A rocky road
DNB acquired Adobe Target as its A/B testing and experimentation tool five or six years ago, but it has been a long and rocky road so far. When creating the new DNB open pages over the last couple of years, Adobe Target hasn't worked well with the new technical setup, and we have run into a wide variety of challenges getting it to work properly.
I will not try to document all the technical issues here, but I need to mention that one of the main problems has been the flickering issue: the B version in an A/B test can take up to a full second to appear on the page where the test is set up. This can confuse customers, who might suspect some kind of fraud is going on when elements on the page they just entered suddenly change.
The flickering issue has not been solved entirely, but by implementing an "anti-flickering script" on the pages where we run tests, we seem to have it under control. The script actually delays the whole page, so that both the original content and the test variant appear at approximately the same time when the page loads. The solution is not perfect, but over the last 10 months we have learned that customers don't seem too alarmed by it.
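For the technically curious, here is a minimal sketch of the general anti-flicker pattern (not DNB's or Adobe's actual snippet): hide the page while the testing tool loads, then reveal it once the variant has been applied or a safety timeout expires. The timeout value is an assumption and would be tuned per page.

```ts
// Minimal anti-flicker sketch (illustrative, not the real Adobe Target snippet).
// Hide the whole page up front so visitors never see the A version
// being swapped for the B version.
const HIDE_TIMEOUT_MS = 1000; // assumed safety limit; tune to your page

const style = document.createElement("style");
style.id = "anti-flicker";
style.textContent = "body { opacity: 0 !important; }";
document.head.appendChild(style);

// Reveal the page: called when the testing tool has applied its changes,
// or by the timeout below as a fallback.
function reveal(): void {
  document.getElementById("anti-flicker")?.remove();
}

// Never keep the page hidden longer than the timeout, even if the
// testing tool fails to load at all.
window.setTimeout(reveal, HIDE_TIMEOUT_MS);
```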
What to test?
When it comes to A/B testing, there are different types of tests. First there is regular A/B testing, also known as split testing. This is a method of website optimization in which the conversion rates of two versions of a page, version A and version B, are compared to one another using random live traffic. By tracking the way visitors interact with the page they are shown (the videos they watch, the buttons they click), you can determine which version of the page is most effective.
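As a side note, the "random live traffic" part is usually just deterministic bucketing on a visitor ID. Adobe Target handles this for you; the sketch below only illustrates the idea, and the hash is a simple stand-in for the more robust ones real tools use.

```ts
// Deterministic 50/50 traffic split, assuming a stable visitor ID
// (for example from a cookie). Hashing the ID means a returning
// visitor always sees the same variant.
function hashToUnit(id: string): number {
  // FNV-1a hash mapped to [0, 1); fine for illustration only.
  let h = 2166136261;
  for (let i = 0; i < id.length; i++) {
    h ^= id.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return (h >>> 0) / 4294967296;
}

function assignVariant(visitorId: string): "A" | "B" {
  return hashToUnit(visitorId) < 0.5 ? "A" : "B";
}

console.log(assignVariant("visitor-123")); // stable per visitor ID
```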
Within DNB we have defined "simple A/B testing" as tests where we make small optimization changes using elements that already exist within Eufemia. This includes changing and cropping images, changing text and headers, changing the CTA button, adding an extra CTA button, and rearranging sections.
"Advanced A/B testing", or experimentation, is when we want to change existing UX elements to deliver on our hypothesis. This requires both a developer and UX resources; typical examples of advanced testing are the menu, navigation card icons, and the login button.
How to do tests – the battle between feelings and knowledge
We are going to look more closely at simple A/B testing, since this is what we currently do the most. It is the first step in our A/B testing journey.
So, do we really need this A/B testing stuff? Or is A/B testing just another buzzword that gives us little to nothing in the long run? Well, the answer is quite obvious, isn't it? If we implement new pages or elements without testing first, how are we supposed to know whether we are keeping our customers happy? Or whether we earn or lose money?
We now have the perfect tool for measuring our traffic, Customer Journey Analytics from Adobe, and a rapidly growing number of skilled people using it. We must constantly monitor our pages and processes to know what level each step in a process or page is expected to perform at, set goals, and find the places where customer friction is preventing us from reaching those goals. What exactly makes this page convert below expectations? Only then do we decide to make changes.
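To make that concrete, here is a hypothetical sketch of the kind of step-by-step funnel check described above, comparing each step's conversion against a target to spot where friction hides. All names and numbers are invented.

```ts
// Hypothetical funnel check: flag steps converting below target.
interface FunnelStep {
  name: string;
  visitors: number;     // visitors who reached this step
  expectedRate: number; // target conversion rate to the next step
}

const funnel: FunnelStep[] = [
  { name: "Product page", visitors: 10000, expectedRate: 0.40 },
  { name: "Order form",   visitors: 3200,  expectedRate: 0.55 },
  { name: "Confirmation", visitors: 1900,  expectedRate: 1.00 },
];

for (let i = 0; i < funnel.length - 1; i++) {
  const rate = funnel[i + 1].visitors / funnel[i].visitors;
  const flag = rate < funnel[i].expectedRate ? " <-- friction?" : "";
  console.log(
    `${funnel[i].name} -> ${funnel[i + 1].name}: ` +
      `${(rate * 100).toFixed(1)}% (target ${(funnel[i].expectedRate * 100).toFixed(0)}%)${flag}`,
  );
}
```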
So then. Being data driven is the first thing we need to wrap our heads around. It is tiresome to be "data driven". It kind of means you are no longer allowed to randomly shout out your personal views and opinions on this stuff. You actually have to support your opinions with existing data and turn the findings into some sort of knowledge you can base a hypothesis on. Duh...! No fun anymore, then. When did knowledge suddenly become a thing?
So, a hypothesis? What is that? There is a difference between generating a random test based on no findings whatsoever (only what you "believe" the problem is) and running a test scientifically to prove something based on analytical observations. That something is your hypothesis.
You should never change anything without having an idea of why the current site isn't converting. A change should be based on a hypothesis, and the hypothesis should be based on the data you collect, on insight, and on understanding our customers and why they hesitate. The change should then influence our customers' behavior and hopefully increase conversion rates.
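One common way to keep hypotheses honest is a fixed template along the lines of "because we observed X, we believe change Y will influence behavior Z, measured by metric M". A sketch of that structure (the field names and the example are our own invention, not a DNB standard):

```ts
// A simple, hypothetical template for writing down a test hypothesis.
interface TestHypothesis {
  observation: string;    // what the data shows today
  change: string;         // the single change we will make
  expectedEffect: string; // the behavior we expect to influence
  metric: string;         // how we measure success
}

const example: TestHypothesis = {
  observation: "Many visitors leave the page without clicking the CTA",
  change: "Add a short 'what happens next' text under the CTA button",
  expectedEffect: "Fewer visitors hesitate at the CTA",
  metric: "Click-through rate on the CTA button",
};

console.log(example);
```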
When your hypothesis is ready and you are setting up the A/B test, there are some other issues you should be aware of:
- How long should the test run? You need enough traffic on the page, and the test needs to run until the results are statistically valid. At minimum a week.
- Never stop a test after two or three days, even if it looks like you have a winner. Your A/B testing tool (Adobe Target) will tell you when the test is significant and the result is valid (the sketch after this list shows the statistics behind that).
- Try to avoid running more than one test on a page in the same period. If you test several changes simultaneously, you will not be able to tell which change is producing good results and which is not. Do several iterations and make only one change at a time.
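For the curious, here is a rough sketch of the statistics behind "wait until the result is significant": a two-proportion z-test comparing the conversion rates of A and B. Adobe Target does this (and more) for you; the numbers below are invented.

```ts
// Two-proportion z-test: is B's conversion rate significantly
// different from A's?
function twoProportionZ(
  convA: number, nA: number,
  convB: number, nB: number,
): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pooled = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return (pB - pA) / se;
}

// Invented numbers: 5000 visitors per variant, B converts 460 vs A's 400.
const z = twoProportionZ(400, 5000, 460, 5000);
// |z| > 1.96 roughly corresponds to 95% confidence (two-sided).
console.log(`z = ${z.toFixed(2)}, significant: ${Math.abs(z) > 1.96}`);
```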
When the test is valid, you'll have a result. The test might give you any result; your hypothesis might be wrong, even if you have put great effort into it. As data-driven marketers, it can sometimes be difficult to make your peace with negative lifts.
But honesty is important. Not sharing a failure and learning from it is the greatest sin. Embrace and share the result, whether you fail or succeed. Start over, make a new hypothesis, run a new iteration. And then once again. And again. What's important is to analyze why your visitors behaved in a certain way. If a test reveals significant customer learning, it can pave the way for huge conversion lifts in the future. You and your colleagues often learn more from a failure than from a success.
To document our tests, both goals and measurements, we use the Miaprova tool. Miaprova is seamlessly integrated with Adobe Target and has a user interface our testers find easy to use. This is of course important: if we don't document our work, we are bound to repeat the same mistakes.
The twisted human mind
And so we return to the human mind and its many choices. We have brilliant UX designers, great service designers, and excellent editors, all contributing to creating these ultimate customer journeys. But still, with all this expertise aboard, we see that some of our pages and customer journeys are not converting at the expected level.
Behavioral insight research teaches us, as mentioned earlier, that we humans make 95% of our daily-life decisions based on intuition and instinct, and only 5% based on deeper reflection and rationality. Since we have established that humans are pretty lousy at making rational decisions, having more than 700 decisions to make every single day can be quite overwhelming. Both for the humans (who basically are our customers) and for our marketeers.
This documented scientific knowledge shows how difficult it is to create good customer experiences, both on the web and in other areas of life. Luckily, there is a lot of insight and documentation on methods that help us reach our main goal: to create "world-class customer experiences".
You can find countless behavioral science reports to learn more about the human brain and how it works. The biggest and most thorough might be the MINDSPACE report, which the British government commissioned to find the best ways to communicate with and influence the British people. The report is freely available online and identifies nine of the most robust factors influencing behavior. MINDSPACE is an acronym for these effects: Messenger, Incentives, Norms, Defaults, Salience, Priming, Affect, Commitments, and Ego.
Within each of these nine factors there is a lot of interesting learning on how to communicate effectively with our customers, and any marketeer could find it very useful to apply the tools in the report to understand our customers and why they make certain choices. And then try to use these findings on our web pages to nudge our customers' choices in a specific direction.
One example uses the "S" in MINDSPACE, which stands for Salience. The Salience chapter covers a lot of different angles, but one point is that our attention is drawn to communication we can relate to. So when, during the pandemic, the government told us to "practice social distancing", it didn't have the desired effect. Social distancing is not a term we could relate to. But when they changed it to "Stay at home, shop once a week", the understanding was much better.
There are of course some "sunshine" A/B testing stories about minor changes making a huge impact. Danish CRO specialist Michael Aagaard was working with a client, and simply changing the original CTA from "Start YOUR 30-day free trial" to "Start MY 30-day free trial" resulted in a 90% increase in sign-ups. Exploding!
Kim Goodwin, VP of Design and General Manager at Cooper, stated in 2019 that "Internet products & services are the largest human-subjects experiment ever conducted". With that in mind, we should not expect such exceptional results all the time, but we should strive to build a culture where testing is the new normal. If "Internet products & services" are the largest experiment ever, we surely need a lot of testing to succeed.
If we do it right, identifying the relevant customer frictions and finding good hypotheses, we should be able to increase our conversion rates. And by that, get closer to delivering "world-class customer experiences". And, in the end, increase our revenue. That's quite cool when you think about it! And it's pretty much what our life in DNB is all about.
Our DNB A/B testing journey starts for real now, and the future is looking bright. Be sure to bring your sunglasses and a positive mood!
I will leave you with an example using another letter from the MINDSPACE model: the P, which stands for Priming. The two questions below ask essentially the same thing. So, I simply ask you:
- How important is it to you to do A/B tests?
- How important is it to you to be an A/B tester?
Behavioral insight teaches us that one of these two questions can be expected to convert more than twice as well as the other. Which one makes you want to engage in A/B testing? (Hint: people tend to like being part of something...)