Imagine you have a cookie jar with different types of cookies. You want to taste them all and make sure they are yummy, but your mom said you can only eat three cookies. You decide to pick three cookies randomly and eat them.
Now, let's say you have a bunch of statistical tests to perform on a dataset, each one checking whether a relationship exists between different aspects of the data. Like a careful scientist, in each test you're checking whether something really seems to be there.
However, when performing multiple tests, it becomes increasingly likely that at least one of them will show a relationship by chance alone. This is why scientists use the family-wise error rate (FWER): the probability of getting at least one false positive result across the whole set of tests.
Setting a significance level of 0.05 for a single test is like accepting a small chance (5%) that the cookie you pick is rotten. If you perform ten tests on the dataset, it is like picking ten cookies from the jar: the chance that at least one of them is rotten (a false positive) adds up quickly. The goal is to keep the chance of eating even one rotten cookie, across all of your picks, below 5%.
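The cookie arithmetic above can be sketched directly. This is a minimal example, assuming the tests are independent (a simplifying assumption), showing how the chance of at least one false positive grows with the number of tests:

```python
def fwer(alpha: float, m: int) -> float:
    """Family-wise error rate for m independent tests at level alpha.

    FWER = 1 - P(no false positives in any of the m tests).
    Assumes independence between tests -- a simplifying assumption.
    """
    return 1 - (1 - alpha) ** m

print(round(fwer(0.05, 1), 3))   # one test:  0.05
print(round(fwer(0.05, 10), 3))  # ten tests: 0.401
```

With ten tests at the usual 0.05 level, the chance of at least one "rotten cookie" is about 40%, not 5% -- which is exactly why multiple testing needs special care.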
The family-wise error rate (FWER) is the probability of making one or more false discoveries when performing multiple hypothesis tests at once. Controlling the FWER lets you reject null hypotheses while keeping the probability of any false positive low. In cookie terms, it is the chance that at least one of the cookies you picked from the jar is rotten, and the whole point is to keep that chance small.
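One standard way to control the FWER (not described above, but widely used) is the Bonferroni correction: divide the significance level by the number of tests. A minimal sketch, again assuming independent tests:

```python
# Bonferroni correction: run each of the m tests at alpha / m so that
# the family-wise error rate stays at or below the original alpha.
alpha, m = 0.05, 10
alpha_per_test = alpha / m                 # 0.005 per test

# Resulting FWER, assuming independent tests (simplifying assumption).
fwer = 1 - (1 - alpha_per_test) ** m
print(round(alpha_per_test, 4))            # 0.005
print(round(fwer, 4))                      # ~0.0489, back below 0.05
```

The trade-off is that each individual test becomes stricter, so real relationships are harder to detect -- like tasting each cookie far more carefully so you almost never swallow a rotten one.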