A/B testing refers to a randomized experiment that determines which of two or more variants (e.g., a web page, a page element) performs better for a given conversion goal. It is used not only for web pages, but also for mobile apps, games, and more.
The purpose of an A/B test is to distinguish, among correlated factors, the ones that actually cause the outcome. Knowing whether the variant we are considering has a causal effect on the result lets us steer that result in the direction we want.
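As a minimal sketch of how such a randomized experiment is set up, the snippet below deterministically buckets users into two variants. The experiment name, the hash-based assignment, and the 50/50 split are illustrative assumptions, not a specific product's implementation:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "new_design") -> str:
    """Deterministic 50/50 bucketing: the same user always sees the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Each user is randomly but consistently assigned to one variant
for uid in ("alice", "bob", "carol"):
    print(uid, "->", assign_variant(uid))
```

Hashing the user ID (instead of flipping a coin per page view) keeps the assignment stable across visits, so one user never sees both designs.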
Let’s say our goal is to reduce accidents at a swimming pool. After some research, we find that ice cream sales and the frequency of accidents are strongly (positively) correlated. If we assume a causal relationship between these two factors, it becomes:
“Sales of ice cream → Frequency of accidents”
However, this does not make sense logically, and it is a typical example of why we say “correlation does not imply causation.” Thinking more carefully, there is likely a third factor, such as the weather or temperature, that has a causal relationship with both ice cream sales and the frequency of accidents.
The above is a textbook example; in practice, data analysts usually face more complex situations.
Let’s consider a more practical situation. An e-commerce company ran a project to redesign its website and finally shipped the new design. The next day, revenue increased by 10%. Was this increase in sales caused by the new design?
“New design of a web page → Increase in sales”
It is dangerous to infer causation just because two events happened around the same time: other external factors might have caused the increase in sales, such as a competitor’s site going down. There can be many other possible reasons why sales increased.
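This is precisely the situation an A/B test addresses: by randomly splitting traffic between the old and new designs at the same time, external factors affect both groups equally. A standard way to judge the result is a two-proportion z-test on the conversion rates; the visitor and conversion counts below are purely illustrative:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: variant A (old design) vs. variant B (new design)
z, p = two_proportion_z_test(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers the p-value falls below 0.05, so we would call the lift significant; the key point is that both variants ran under the same external conditions, which a before/after comparison cannot guarantee.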