Source: bigcommerce.com via Google Images

Conversion Rate Optimization Minidegree — CXL Institute Review #9

Introduction to experimentation

Mohit Singh Panesir
6 min read · Nov 16, 2020


As I mentioned in my first few blog posts, in this series I am going to talk about conversion rate optimization (CRO). This blog is part 9 of the 12 reviews that I will be publishing based on my learnings from the CXL Institute’s Conversion Rate Optimization Minidegree program. This post is all about the introduction to experimentation.

CXL Institute offers some of the best online courses, minidegrees, and certifications in digital marketing, product analytics, conversion rate optimization, growth marketing, and related fields. I am a part of the Conversion Rate Optimization Minidegree program. Throughout the series, I will be discussing the content of the course as well as my learnings and thoughts about it.

If you are unemployed, underemployed, or interested in learning more about marketing from some of the best in the industry, look into the 12-week Minidegree scholarship program from the CXL Institute while the offer is still available.

Who should apply for the CXL Institute Scholarship?

If any of these describe you, you should apply:

  • You are looking for a serious transformation in your career, and are willing to put in the hours to accomplish that transformation.
  • You are not afraid of hard work.
  • You embrace any challenge you face and are determined to do whatever it takes to succeed.
  • You are a self-driven individual who takes the initiative to learn.

Introduction to experimentation

During Barack Obama’s 2008 presidential campaign, his campaign committee had the goal of optimizing every aspect of the campaign. They figured that the right web design could mean raising millions of dollars in campaign funds. But how did they decide on the right design? They tried as many as 24 different variations of the web page, using a mixture of images and CTA buttons, and finally arrived at the combination that brought the best results: a 40% increase in signup rates, which is estimated to have generated $60 million in additional fundraising! The experiments they conducted on the website are exactly what A/B testing is all about.

Source: Optimizely

First things first: what is an A/B test? It’s all in the name. An A/B test is where you test two or more different versions of your product to find out which one performs best. But this doesn’t mean the two versions (A and B) are poles apart; they need to be identical except for a few minor tweaks that we expect will change the user’s behavior. Version A (the control) is the currently used version, and version B (the treatment) is the one with the minor modification.

Planning

In what cases do you want to run an A/B test?

  • When you deploy a change, you want to learn if it has a negative impact on your measurement KPIs.
  • For conversion signal map research: the idea is to remove an element and see if that has a negative impact. If it doesn’t, the element is useless and you simply don’t need it.
  • On the other hand, you can add an element (such as social proof) and see if it has a positive impact. If so, implement it. You can also test this on a specific segment of users.
  • For optimization purposes, apply the change client-side. If it is a win, you want to deploy it permanently right away.

Do you have enough data to run an A/B test?

  • If you have fewer than 1,000 conversions (transactions, clicks, leads…) per month, do not A/B test: your tests would lack the statistical power to reach significance. At this stage, just focus purely on growing; optimization will come later. Above 1,000 conversions per month, you can start A/B testing.
  • If you have above 10,000 conversions per month, you can run four A/B tests per week. 10,000 conversions is the “DNA border”: if you are below it, take more risks, grow the company above 10,000 conversions, and then build a real, proper structure. Once you reach that mark, you will need more optimization teams, and A/B testing will become part of the DNA of your company.
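These traffic thresholds exist because sample size determines what a test can detect. As a rough illustration of my own (not from the course), here is the standard two-proportion sample-size approximation in Python, assuming 95% confidence (two-sided) and 80% power; the 3% baseline rate and 10% lift below are made-up example values:

```python
import math

def sample_size_per_variant(baseline_rate, relative_lift,
                            z_alpha=1.96, z_beta=0.84):
    """Visitors needed per variant, via the two-proportion normal
    approximation (z_alpha=1.96 for 95% confidence two-sided,
    z_beta=0.84 for 80% power)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# e.g. a 3% baseline conversion rate and a hoped-for 10% relative lift
# already require tens of thousands of visitors per variant
n_needed = sample_size_per_variant(0.03, 0.10)
```

A site with only a few hundred conversions per month would take many months to collect that sample, which is why low-traffic sites are told to grow first.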

How to do proper research?

The CXL course gives us a framework for proper A/B testing research called the 6V model. It’s quite similar to CXL’s ResearchXL framework.

The 6V model, as the name suggests, has six stages, all starting with a “V”. They are:

  1. Value — company and customer goals
  2. View — web-analytics and web-behavior data
  3. Versus — competitors
  4. Voice — surveys, customer support, feedback
  5. Verified — scientific research, insights
  6. Validated — insights validated in previous testing and analyses

How to set a hypothesis?

Before you start the experiment, it’s essential to write a hypothesis. This gets everyone aligned: everyone will know what you will be doing, why you will be doing it, and what the proposed outcome is.

Your hypothesis should follow this framework:

If I apply this (psychology), then this behavioral change (data) will happen, among this group (data), because of this reason (psychology).
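To make the template concrete, here is a tiny helper of my own that renders the four parts into one sentence; the example values are hypothetical, not from the course:

```python
def hypothesis(change, behavioral_change, group, reason):
    """Render the course's four-part hypothesis template as one sentence."""
    return (f"If I apply {change}, then {behavioral_change} will happen, "
            f"among {group}, because of {reason}.")

example = hypothesis(
    "social proof next to the CTA",   # psychology
    "a higher click-through rate",    # data
    "first-time visitors",            # data
    "reduced perceived risk",         # psychology
)
```

Writing the hypothesis down in this fixed shape forces you to name the segment and the expected metric change before the test starts.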

How to design and develop the test?

The first thing I would like to emphasize is that when designing the test, you should use only A and B: just a default and one challenger. Adding more variations splits your traffic, which inflates your minimal detectable effect; you might then only be able to detect impacts of more than 20%. It’s better to run two simultaneous A/B experiments on the full population of the website in the same location.
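To see why extra variants hurt, here is a rough sketch of my own (using the same normal approximation as before, at assumed 95% confidence and 80% power) showing how splitting a fixed amount of traffic across more variants inflates the smallest relative lift you can detect; the 40,000 visitors and 3% baseline are made-up figures:

```python
import math

def min_detectable_lift(baseline_rate, visitors_per_variant,
                        z_alpha=1.96, z_beta=0.84):
    """Rough smallest *relative* lift detectable per variant."""
    p = baseline_rate
    se = math.sqrt(2 * p * (1 - p) / visitors_per_variant)  # SE of the difference
    return (z_alpha + z_beta) * se / p

# splitting 40,000 monthly visitors across more variants shrinks each bucket
mde_by_variants = {k: min_detectable_lift(0.03, 40_000 // k) for k in (2, 3, 4)}
```

With four variants instead of two, the detectable relative lift in this toy setup climbs above 20%, which matches the warning above.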

When developing an experiment, one rule of thumb is to avoid the code editor built into the testing solution. For some small changes it is okay, but it’s always better to write proper code and inject it into the website. That way you won’t run into unpredictable problems, and you aren’t limiting yourself to drag-and-drop in the testing solution.

One more piece of advice: inject the same code into the default that you put into the challenger. This way, both versions will have the same loading time, so load speed won’t interfere with the outcome of the experiment.

How do you know when the test is done? Here are the main indicators:

  1. Enough sample size
  2. Multiple business cycles
  3. Statistical significance reached
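Checking the third indicator on conversion counts is typically done with a two-proportion z-test. Here is a minimal, stdlib-only sketch of my own (the counts are made up for illustration):

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test on conversion counts, using the pooled
    normal approximation. Returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# made-up counts: 300/10,000 conversions on control, 360/10,000 on treatment
z, p_value = two_proportion_z_test(300, 10_000, 360, 10_000)
reached_significance = p_value < 0.05  # alpha = 0.05
```

Note that significance alone is not enough: you still need the full planned sample size and complete business cycles before calling the test.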

Some tips to help you avoid some common mistakes of A/B testing:

1. Don’t waste time on stupid tests

2. Don’t think you know what will work

3. Don’t copy other people’s tests

4. Don’t use too small a sample size

5. Don’t run tests on pages with very little traffic

6. Run tests long enough

7. Test full weeks at a time

8. Send test data to third-party analytics

9. Don’t give up after the first test of a hypothesis fails.

10. Be aware of validity threats

11. Don’t ignore small gains

12. Run tests at all times

And always remember to track the next key metrics:

  • Number of variants tested per week or per month;
  • The win rate
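Tracking these two metrics can be as simple as aggregating a test log. A small sketch of my own, with hypothetical log entries:

```python
from collections import Counter

# hypothetical log of concluded experiments
test_log = [
    {"week": 45, "outcome": "win"},
    {"week": 45, "outcome": "loss"},
    {"week": 46, "outcome": "inconclusive"},
    {"week": 46, "outcome": "win"},
]

outcomes = Counter(t["outcome"] for t in test_log)
tests_per_week = len(test_log) / len({t["week"] for t in test_log})  # 2.0
win_rate = outcomes["win"] / len(test_log)                           # 0.5
```

Watching both numbers together tells you whether your program is shipping enough tests and whether your research is producing good hypotheses.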

Review —

I find the CXL CRO Minidegree very insightful. The instructors are champions in their fields and know exactly what they are talking about. Being an experimentation analyst, I understand the importance of experiments (A/B testing), and I have seen numerous examples where the outcome of a test contradicted popular opinion. The emphasis on testing and learning from it is what I admire most about the course.

The material that I went through for the ninth week helped me understand how the basics of experimentation play a vital role in optimizing a website’s conversions. Everything about the course is so descriptive, and it can easily be applied to real-world scenarios and business problems.

The detailed walkthrough of performing experiments and quantifying the results into actionable insights was especially valuable. I am eager to learn more about experimentation statistics and advanced experimentation during week #10.

That’s all folks. See you next week!

