
Conversion Rate Optimization Minidegree — CXL Institute Review #10

Introduction to experimentation statistics

Mohit Singh Panesir
6 min read · Nov 23, 2020


As I mentioned in my first few blog posts, in this series I am going to talk about conversion rate optimization (CRO). This blog is part 10 of the 12 reviews that I will be publishing based on my learnings from the CXL Institute’s Conversion Rate Optimization Minidegree program. This post is all about experimentation statistics.

CXL Institute offers some of the best online courses, minidegrees, and certifications in the fields of digital marketing, product analytics, conversion rate optimization, growth marketing, etc. I am a part of the Conversion Rate Optimization Minidegree program. Throughout the series, I will be discussing the content of the course as well as my learnings and thoughts about it.

If you are unemployed, underemployed, or interested in learning more about marketing from some of the best in the industry, look into the 12-week Minidegree scholarship program from the CXL Institute while the offer is still available.

Who should apply for the CXL Institute Scholarship?

If any of these describe you, you should apply:

  • You are looking for a serious transformation in your career, and are willing to put in the hours to accomplish that transformation.
  • You are not afraid of hard work.
  • You embrace any challenge you face and are determined to do whatever it takes to succeed.
  • You are a self-driven individual who takes the initiative to learn.

Introduction to experimentation

During Barack Obama’s 2008 presidential campaign, his campaign committee had the goal of optimizing every aspect of the campaign. They figured that the right web design could mean raising millions of dollars in campaign funds. But how did they decide on the right design? They tried as many as 24 different variations of the web page, using a mixture of images and CTA buttons, and finally arrived at the combination that brought the best results. The result? A 40% increase in signup rates, which led to $60 million in fundraising! The experiments they conducted on the website are exactly what A/B testing is all about.


First things first — what is an A/B test? It’s all in the name — an A/B test is where you test two or more different versions of your product to find out which one performs best. But this doesn’t mean the two versions (A and B) are poles apart. They need to be identical except for a few minor tweaks, which we hypothesize will change the user’s behavior. Version A (the control) is the currently used version, and version B (the treatment) is the one with the minor modification.
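
To make the control/treatment split concrete, here is a minimal Python sketch of randomized assignment. The assign_variant helper and the seed-by-user-ID trick are my own illustration, not any specific testing tool’s API:

```python
import random

def assign_variant(user_id: str, variants=("control", "treatment")) -> str:
    """Hypothetical helper: bucket a user into one variant.

    Seeding the RNG with the user ID keeps the split random
    across users but stable for any single user."""
    rng = random.Random(user_id)
    return rng.choice(variants)

print(assign_variant("user-42"))  # same answer every time for "user-42"
```

In practice, testing tools usually hash the user ID into a bucket so that assignments stay sticky across sessions and devices.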

Statistics for A/B Testing

Theodor Andrei, in one of his blogs on statistics for A/B testing (CXL), explained that data is a proxy for reality. Before we decide to make a change to a website, we need to understand whether the change will hurt or improve our KPIs. We also need to be aware of the danger of misinterpreting our data. For example, let’s say you make a change to your website and your bounce rate almost halves. You might assume this amazing effect is due to the change you made, but what if the tracking got messed up in the process? Correlation does not imply causation. What we observe might not be representative of the truth.

There are several ways of addressing variability issues in A/B testing. Here are Georgiev’s tips:

  • Use a proper experimental design with a randomized selection which is meant to deal with issues such as confounding variables.
  • Run A/A tests (instances where you compare the same version of a website against itself) so you can spot variability in testing tools (see the simulation sketch after this list).
  • Observe the performance of the versions at selected points in time, and check whether the difference between the two converges towards zero. If it doesn’t, that may indicate an issue.
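
To see why A/A tests matter, here is a small simulation sketch, assuming equal-sized groups, a 10% true conversion rate for both (all numbers invented), and a two-sided z-test at the 0.05 level:

```python
import random

random.seed(1)
true_rate, n, trials = 0.10, 1000, 2000
false_positives = 0

for _ in range(trials):
    # Both groups draw from the SAME distribution, so any
    # "significant" difference is pure noise (a false positive).
    a = sum(random.random() < true_rate for _ in range(n))
    b = sum(random.random() < true_rate for _ in range(n))
    pooled = (a + b) / (2 * n)
    se = (pooled * (1 - pooled) * 2 / n) ** 0.5  # SE of the difference
    z = (b / n - a / n) / se
    if abs(z) > 1.96:  # two-sided test at alpha = 0.05
        false_positives += 1

print(f"False positive rate: {false_positives / trials:.3f}")  # ~0.05
```

If everything is working correctly, about 5% of A/A tests will still come out “significant”; a rate well above that hints at a problem with the tool or the randomization.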

You will need to formulate a hypothesis based on a business question and test whether certain variations are worse than or equal to the control (this is called the null hypothesis, H0). Rejecting the null hypothesis means that a variation is significantly better than the control (the alternative hypothesis, H1), from a statistical standpoint.
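
As an illustration, here is a minimal one-sided two-proportion z-test in Python. The conversion numbers are invented, and this is the textbook pooled-proportion version, not necessarily what any given testing tool runs:

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """H0: B is no better than A. A small p-value means reject H0."""
    pa, pb = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (pb - pa) / se
    p_value = 1 - NormalDist().cdf(z)                  # one-sided: P(Z >= z) under H0
    return z, p_value

z, p = two_proportion_z_test(conv_a=200, n_a=2000, conv_b=245, n_b=2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # z ~ 2.26, p ~ 0.012: reject H0 at 0.05
```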

Running tests is not only about choosing a winner decided by a testing tool. It is about recognizing why certain variants are winning, taking variability into account, and determining whether there is an actual effect. Understanding this is key to seeing the real picture through your data. How can you best understand variability? You calculate the standard deviation, which is the square root of the variance. The variance is the sum of the squared differences between each observation and the sample mean, divided by the number of observations. A larger standard deviation means larger variability in the data points. The z-score indicates how many standard deviations a given value lies from the mean, and the p-value builds on it to judge statistical significance.
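
A quick sketch of those calculations, using the same divide-by-n definition of variance given above (the daily conversion rates are made up):

```python
import statistics

# Fourteen (made-up) daily conversion rates.
samples = [0.041, 0.038, 0.045, 0.052, 0.039, 0.047, 0.043,
           0.040, 0.044, 0.049, 0.036, 0.042, 0.046, 0.050]

mean = statistics.mean(samples)
variance = statistics.pvariance(samples)  # sum of squared deviations / n
std_dev = variance ** 0.5                 # standard deviation = sqrt(variance)

observation = 0.058
z_score = (observation - mean) / std_dev  # how many std devs from the mean

print(f"mean={mean:.4f}, std={std_dev:.4f}, z={z_score:.2f}")
```

(Many tools use the n − 1 “sample” variance instead; with the sample sizes typical of A/B tests, the difference is negligible.)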

A p-value of less than 0.05 is commonly used to establish statistical significance in A/B tests. However, a significant result can reflect three different situations:

  • The null hypothesis is not true
  • The null hypothesis is true, but a rare outcome affected the results
  • The statistical model is inadequate, which means that the nominal p-value is not an actual p-value

Statistical power is also a tricky concept. Georgiev offers a great metaphor to make it clearer. If statistical power were a fishing net, low power would mean a net with big holes, and a high-power net would be more tightly knit, with smaller holes. Naturally, the low-power net will only be able to catch big fish, while the high-power net will catch big and medium-sized fish while still missing very small ones. The same goes for a test’s ability to capture an effect based on its statistical power. The right power for tests is somewhere in the middle.
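
To put rough numbers on the fishing-net metaphor, here is an approximate power calculation for detecting a lift from a 10% to a 12% conversion rate, using the standard normal approximation; the function and all figures are illustrative, not from the course:

```python
from statistics import NormalDist

def power_two_proportions(p_a, p_b, n, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test,
    assuming equal group sizes of n."""
    norm = NormalDist()
    z_alpha = norm.inv_cdf(1 - alpha / 2)
    pooled = (p_a + p_b) / 2
    se0 = (2 * pooled * (1 - pooled) / n) ** 0.5              # SE under H0
    se1 = (p_a * (1 - p_a) / n + p_b * (1 - p_b) / n) ** 0.5  # SE under H1
    z = (abs(p_b - p_a) - z_alpha * se0) / se1
    return norm.cdf(z)

for n in (500, 2000, 8000):
    print(n, f"{power_two_proportions(0.10, 0.12, n):.2f}")
# ~0.17 at n=500, ~0.52 at n=2000, ~0.98 at n=8000
```

Bigger samples knit a tighter net: the same two-point lift that is roughly a coin flip at 2,000 users per arm is almost certain to be caught at 8,000.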

The rest of Georgiev’s course is filled with examples of common statistics “traps”, explanations of methods on how to deal with them, and examples of how you can run different types of tests while also accounting for these problems. It is one of the more complex and theory-dense courses in the CRO Minidegree (surprised?), which is why I couldn’t include every detail in this week’s short review. That being said, it is definitely an essential course that every optimizer should go through (repeatedly!) in order to understand the statistics that will influence growth decision-making.

Review —

I find the CXL CRO Minidegree very insightful. The instructors are champions in their fields and know exactly what they are talking about. Being an experimentation analyst, I understand the importance of experiments (A/B testing), and I have seen numerous examples where the outcome of a test contradicted popular opinion. The emphasis on testing and learning from it is what I admire most about the course.

Proper knowledge of statistics is essential for spotting mistakes, understanding your data, and getting closer to the truth. Running A/B tests without accounting for several of the pitfalls associated with statistics and running experiments can be incredibly costly.

The course offers a detailed walkthrough of running experiments and quantifying the results into actionable insights. I am eager to learn more about advanced experimentation during week #11.

That’s all folks. See you next week!
