I’m excited to announce the launch of my newest book, Elements of Data Science. As the subtitle suggests, it is about “Getting started with Data Science and Python”.
I am publishing this book myself, which has one big advantage: I can print it with a full color interior without increasing the cover price. In my opinion, the code is more readable with syntax highlighting, and the data visualizations look great!
In addition to the printed edition, all chapters are available to read online, and they are in Jupyter notebooks, where you can read the text, run the code, and work on the exercises.
Description
Elements of Data Science is an introduction to data science for people with no programming experience. My goal is to present a small, powerful subset of Python that allows you to do real work with data as quickly as possible.
Part 1 includes six chapters that introduce basic Python with a focus on working with data.
Part 2 presents exploratory data analysis using Pandas and empiricaldist — it includes a revised and updated version of the material from my popular DataCamp course, “Exploratory Data Analysis in Python.”
Part 3 takes a computational approach to statistical inference, introducing resampling methods, bootstrapping, and randomization tests.
Part 4 is the first of two case studies. It uses data from the General Social Survey to explore changes in political beliefs and attitudes in the U.S. in the last 50 years. The data points on the cover are from one of the graphs in this section.
Part 5 is the second case study, which introduces classification algorithms and the metrics used to evaluate them — and discusses the challenges of algorithmic decision-making in the context of criminal justice.
This project started in 2019, when I collaborated with a group at Harvard to create a data science class for people with no programming experience. We discussed some of the design decisions that went into the course and the book in this article.
I’m a math graduate and am partially self taught. I am really frustrated with likelihood and probability density, two concepts that I personally think are explained so disastrously that I’ve been struggling with them for an embarrassingly long time. Here’s my current understanding and what I want to understand:
probability density is the ‘concentration of probability’ or probability per unit and the value of the density in any particular interval depends on the density function used. When you integrate the density curve over all outcomes x in X where X is a random variable and x are its realizations then the result should be all the probability or 1.
likelihood is the joint probability, in the discrete case, of observing fixed and known data depending on what parameter(s) value we choose. In the continuous case we do not have a nonzero probability of any single value but we do have nonzero probability within some infinitely small interval (containing infinite values?) [x, x+h] and maximizing the likelihood of observing this data is equivalent to maximizing the probability of observing it, which we can do by maximizing the density at x.
My questions are:
Is what I wrote above correct? Probability density and likelihood are not the same thing. But what the precise distinction is in the continuous case is not completely cut and dry to me. […]
I agree with OP — these topics are confusing and not always explained well. So let’s see what we can do.
I’ll start with a discrete distribution, so we can leave density out of it for now and focus on the difference between a probability mass function (PMF) and a likelihood function.
As an example, suppose we know that a hockey team scores goals at a rate of 3 goals per game on average.
If we model goal scoring as a Poisson process — which is not a bad model — the number of goals per game follows a Poisson distribution with parameter mu=3.
The PMF of the Poisson distribution tells us the probability of scoring k goals in a game, for non-negative values of k.
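For example, here's how we might evaluate the PMF over a range of values of k (the range and the variable names are my choices for illustration):

from scipy.stats import poisson
import numpy as np

ks = np.arange(11)                  # numbers of goals from 0 to 10
pmf_goals = poisson.pmf(ks, mu=3)   # probability of scoring k goals, given mu=3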
Now suppose we don’t know the goal scoring rate, but we observe 4 goals in one game.
There are several ways we can use this data to estimate mu.
One is to find the maximum likelihood estimator (MLE), which is the value of mu that makes the observed data most likely.
To find the MLE, we need to maximize the likelihood function, which is a function of mu with a fixed number of observed goals, k.
To evaluate the likelihood function, we can use the PMF of the Poisson distribution again, this time with a single value of k and a range of values for mu.
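Here's one way to compute those likelihoods. The grid of values for mus is an arbitrary choice, but something like np.linspace(0, 10, 101) is consistent with the results below.

mus = np.linspace(0, 10, 101)   # possible goal-scoring rates
ls = poisson.pmf(4, mus)        # likelihood of scoring 4 goals, for each mu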
To find the value of mu that maximizes the likelihood of the data, we can use argmax to find the index of the highest value in ls, and then look up the corresponding element of mus.
In [10]:
i = np.argmax(ls)
mus[i]
Out[10]:
4.0
In this case, the maximum likelihood estimator is equal to the number of goals we observed.
That’s the answer to the estimation problem, but now let’s look more closely at those likelihoods.
Here’s the likelihood at the maximum of the likelihood function.
In [11]:
np.max(ls)
Out[11]:
0.19536681481316454
This likelihood is a probability mass — specifically, it is the probability of scoring 4 goals, given that the goal-scoring rate is exactly 4.0.
In [12]:
poisson.pmf(4,mu=4)
Out[12]:
0.19536681481316454
So, some likelihoods are probability masses — but not all.
Now suppose, again, that we know the goal scoring rate is exactly 3, but this time we want to know how long it will be until the next goal.
If we model goal scoring as a Poisson process, the time until the next goal follows an exponential distribution with a rate parameter, lam=3.
Because the exponential distribution is continuous, it has a probability density function (PDF) rather than a probability mass function (PMF).
We can approximate the distribution by evaluating the exponential PDF at a set of equally-spaced times, ts.
SciPy’s implementation of the exponential distribution does not take lam as a parameter, so we have to set scale=1/lam.
The PDF is a function of t with lam as a fixed parameter.
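Here's a sketch of that computation. The grid of times is my choice, but np.linspace(0, 2.5, 201) is consistent with the indices used below (for example, ts[40] is 0.5).

from scipy.stats import expon

lam = 3
ts = np.linspace(0, 2.5, 201)     # times, in units of games
ps = expon.pdf(ts, scale=1/lam)   # probability densities at those times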
Here’s what it looks like.
In [14]:
plt.plot(ts, ps)
decorate(xlabel='Games until next goal', ylabel='Density')
Notice that the values on the y-axis extend above 1. That would not be possible if they were probability masses, but it is possible because they are probability densities.
By themselves, probability densities are hard to interpret.
As an example, we can pick an arbitrary element from ts and the corresponding element from ps.
In [15]:
ts[40],ps[40]
Out[15]:
(0.5, 0.6693904804452895)
So the probability density at t=0.5 is about 0.67. What does that mean? Not much.
To get something meaningful, we have to compute an area under the PDF.
For example, if we want to know the probability that the first goal is scored during the first half of a game, we can compute the area under the curve from t=0 to t=0.5.
We can use a slice index to select the elements of ps and ts in this interval, and NumPy’s trapz function, which uses the trapezoid method to compute the area under the curve.
In [16]:
np.trapz(ps[:41],ts[:41])
Out[16]:
0.7769608771522626
The probability of a goal in the first half of the game is about 78%.
To check that we got the right answer, we can compute the same probability using the exponential CDF.
In [17]:
expon.cdf(0.5,scale=1/lam)
Out[17]:
0.7768698398515702
Considering that we used a discrete approximation of the PDF, our estimate is pretty close.
This example provides an operational definition of a probability density: it’s something you can add up over an interval — or integrate — to get a probability mass.
Now let’s suppose that we don’t know the parameter lam and we want to use data to estimate it.
And suppose we observe a game where the first goal is scored at t=0.5.
As we did when we estimated the parameter mu of the Poisson distribution, we can find the value of lam that maximizes the likelihood of this data.
First we’ll define a range of possible values of lam.
In [18]:
lams=np.linspace(0,20,201)
Then for each value of lam, we can evaluate the exponential PDF at the observed time t=0.5 — using errstate to ignore the “division by zero” warning when lam is 0.
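Here's a sketch of that computation (the variable name ls is mine):

with np.errstate(divide='ignore'):
    ls = expon.pdf(0.5, scale=1/lams)   # density at t=0.5, for each lam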
In the first example, we evaluated a Poisson PMF at discrete values of k with a fixed parameter, mu. The results were probability masses.
In the second example, we evaluated the same PMF for possible values of a parameter, mu, with a fixed value of k. The result was a likelihood function where each point is a probability mass.
In the third example, we evaluated an exponential PDF at possible values of t with a fixed parameter, lam. The results were probability densities, which we integrated over an interval to get a probability mass.
In the fourth example, we evaluated the same PDF at possible values of a parameter, lam, with a fixed value of t. The result was a likelihood function where each point is a probability density.
A PDF is a function of an outcome — like the number of goals scored or the time until the first goal — given a fixed parameter.
If you evaluate a PDF, you get a probability density.
If you integrate density over an interval, you get a probability mass.
A likelihood function is a function of a parameter, given a fixed outcome.
If you evaluate a likelihood function, you might get a probability mass or a density, depending on whether the outcome is discrete or continuous.
Either way, evaluating a likelihood function at a single point doesn’t mean much by itself.
A common use of a likelihood function is finding a maximum likelihood estimator.
As OP says, “Probability density and likelihood are not the same thing”, but the distinction is not clear because they are not completely distinct things, either.
A probability density can be a likelihood, but not all densities are likelihoods.
A likelihood can be a probability density, but not all likelihoods are densities.
I met a basic problem in pdf and pmf. I perform a grid approximation on bayesian problem. After normalizing the vector, I got a discretized pmf. Then I want to draw pmf and pdf on a plot to check if they are similar distribution. However, the pmf doesn’t resemble the pdf at all. The instruction told me that I need to sample from my pmf then draw a histogram with density for comparison. It really works, but why can’t I directly compare them?
In Think Bayes I used this kind of discrete approximation in almost every chapter, so this issue came up a lot! There are at least two good solutions:
Normalize the PDF and PMF so they are on the same scale, or
Convert the PMF to a CDF and compare it to the CDF of the distribution, which is always on a probability scale.
As an example, I’ll solve an exercise from Chapter 4 of Think Bayes:
In Major League Baseball, most players have a batting average between .230 and .300, which means that their probability of getting a hit is between 0.23 and 0.30.
Suppose a player appearing in their first game gets 3 hits out of 3 attempts. What is the posterior distribution for their probability of getting a hit?
To represent the prior distribution, I’ll use a beta distribution with parameters I chose to be consistent with the expected range of batting averages.
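Here's a sketch of the prior. The parameters a and b are illustrative choices, but they are consistent with the prior and posterior means reported below; the grid size is also my choice.

from scipy.stats import beta

a, b = 40, 110                 # chosen so the prior mean is about 0.267
qs = np.linspace(0, 1, 101)    # grid of possible batting averages
ps = beta(a, b).pdf(qs)        # prior probability densities on the grid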
Unlike probability masses, probability densities can exceed 1.
But the area under the curve should be 1, which we can check using trapz, which is a NumPy function that computes numerical integrals using the trapezoid rule.
In [6]:
area = np.trapz(ps, qs)
area
Out[6]:
1.0
Within floating-point error, the area under the PDF is 1.
To do the Bayesian update, I’ll put these values in a Pmf object and normalize it so the sum of the probability masses is 1.
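Here's a sketch of the update, using the grid values from above. The likelihood of 3 hits in 3 attempts is qs**3; the names pmf_prior and pmf_posterior appear in the results below.

from empiricaldist import Pmf

pmf_prior = Pmf(ps, qs)
pmf_prior.normalize()

likelihood = qs**3                      # probability of 3 hits in 3 attempts
pmf_posterior = pmf_prior * likelihood
pmf_posterior.normalize()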
The result is a discrete approximation of the actual posterior distribution — so let’s see how close it is.
Because the beta distribution is the conjugate prior of the binomial likelihood function, the posterior is also a beta distribution, with parameters updated to reflect the data: 3 successes and 0 failures.
In [10]:
beta_posterior=beta(a+3,b)
Here’s what this theoretical posterior distribution looks like compared to our numerical approximation.
Oops! Something has gone wrong. I assume this is what OP meant by “the pmf doesn’t resemble the pdf at all”.
The problem is that the PMF is normalized so the total of the probability masses is 1, and the PDF is normalized so the area under the curve is 1.
They are not on the same scale, so the y-axis in this figure is different for the two curves.
To fix the problem, we can find the area under the PMF.
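Here's one way to do that, rescaling the probability masses so they are on a density scale (the name pdf_posterior appears in the comparison below; the rest is my sketch):

# recompute the theoretical posterior densities for comparison
ps = beta_posterior.pdf(qs)

# divide the probability masses by the area under them
area_pmf = np.trapz(pmf_posterior.ps, pmf_posterior.qs)
pdf_posterior = pmf_posterior / area_pmf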
The curves are visually indistinguishable, and the numerical differences are small.
In [15]:
diff = ps - pdf_posterior.ps
np.max(np.abs(diff))
Out[15]:
4.973799150320701e-14
As an aside, note that the posterior and prior distributions are not very different.
The prior mean is 0.267 and the posterior mean is 0.281.
If a rookie goes 3 for 3 in their first game, that’s a good start, but it doesn’t mean they are the next Ted Williams.
In [16]:
pmf_prior.mean(),pmf_posterior.mean()
Out[16]:
(0.2666666666666667, 0.28104575163398693)
As an alternative to comparing PDFs, we could convert the PMF to a CDF, which contains the cumulative sum of the probability masses.
In [17]:
cdf_posterior=pmf_posterior.make_cdf()
And compare to the mathematical CDF of the beta distribution.
In this example, I plotted the discrete approximation of the CDF as a step function — which is technically what it is — to highlight how it overlaps with the beta distribution.
Probability density is hard to define — the best I can do is usually, “It’s something that you can integrate to get a probability mass”.
And when you work with probability densities computationally, you often have to discretize them, which can add another layer of confusion.
One way to distinguish PMFs and PDFs is to remember that PMFs are normalized so the sum of the probability masses is 1, and PDFs are normalized so the area under the curve is 1.
In general, you can’t plot PMFs and PDFs on the same axes because they are not in the same units.
I want to write a research article that has regression analysis in it. I normalized my independent variables and want to include in the article the results of a statistical test showing that all variables are normal. I normalized using the scale function in R and some custom normalization functions I found online but whatever I do the new data fails the Shapiro Wilkinson and KS test on some columns? What to do?
There might be a few points of confusion here:
One is the idea that the independent variables in a regression model have to follow a normal distribution. This is a common misconception. Ordinary least squares regression assumes that the residuals follow a normal distribution — the dependent and independent variables don’t have to.
Another is the idea that statistical tests are useful for checking whether a variable follows a normal distribution. As I’ve explained in other articles, these tests don’t do anything useful.
The question might also reveal confusion about what “normalization” means. In the context of regression models, it usually means scaling a variable so its range is between 0 and 1. It sounds like OP might be using it to mean transforming a variable so it follows a normal distribution.
In general, transforming an independent variable to normal is not a good idea. It has no benefit, and it changes the estimated parameters for no justified reason. Let me explain.
As an example, I’ll use data from the World Happiness Report, which uses survey data from 153 countries to explore the relationship between self-reported happiness and six potentially predictive factors:
Income as represented by per capita GDP.
Social support
Healthy life expectancy at birth
Freedom to make life choices
Generosity
Perceptions of corruption
The dependent variable, happiness, is the national average of responses to the “Cantril ladder question” used by the Gallup World Poll:
Please imagine a ladder with steps numbered from zero at the bottom to 10 at the top.
The top of the ladder represents the best possible life for you and the bottom of the ladder represents the worst possible life for you.
On which step of the ladder would you say you personally feel you stand at this time?
I’ll select the columns we’re interested in and give them shorter names.
In [5]:
data = pd.DataFrame()
data['ladder'] = df['Ladder score']
data['log_gdp'] = df['Logged GDP per capita']
data['social'] = df['Social support']
data['life_exp'] = df['Healthy life expectancy']
data['freedom'] = df['Freedom to make life choices']
data['generosity'] = df['Generosity']
data['corruption'] = df['Perceptions of corruption']
Notice that GDP is already on a log scale, which is generally a good idea.
The other variables are in a variety of units and different scales.
Now, contrary to the advice OP was given, let’s run a regression model without even looking at the data.
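Here's a sketch of one way to fit the model with statsmodels (the variable name results is mine):

import statsmodels.formula.api as smf

formula = ('ladder ~ log_gdp + social + life_exp + '
           'freedom + generosity + corruption')
results = smf.ols(formula, data=data).fit()
results.params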
Because the independent variables are on different scales, the estimated parameters are in different units.
For example, life_exp is healthy life expectancy at birth in units of years, so the estimated parameter, 0.035, indicates that a difference of one year of life expectancy between countries is associated with an increase of 0.035 units on the Cantril ladder.
Now let’s check retroactively whether this dataset is consistent with the assumption that the residuals follow a normal distribution.
I’ll use the following functions to compare the CDF of the residuals to a normal model with the same mean and standard deviation.
In [7]:
from scipy.stats import norm
from empiricaldist import Cdf

def make_normal_model(sample):
    """Make a Cdf of a normal distribution.

    sample: sequence of numbers

    returns: Cdf object
    """
    m, s = sample.mean(), sample.std()
    qs = np.linspace(m - 4*s, m + 4*s, 101)
    ps = norm.cdf(qs, m, s)
    return Cdf(ps, qs)
In [8]:
def plot_normal_model(sample, **options):
    """Plot the Cdf of a sample and a normal model.

    sample: sequence of numbers
    """
    cdf_model = make_normal_model(sample)
    cdf_data = Cdf.from_seq(sample)
    cdf_model.plot(color='gray')
    cdf_data.plot(**options)
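Assuming the results object from the regression sketch above, the usage looks something like this:

plot_normal_model(results.resid, label='residuals')
decorate(xlabel='Residual', ylabel='CDF')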
The normal model fits the residuals well enough that the deviations are not a concern.
This regression model is just fine with no transformations required.
But suppose we follow the advice OP received and check whether the dependent and independent variables follow normal distributions.
If we ran a statistical test, most of these would fail. But we don’t care — regression models don’t require these variables to follow normal distributions. And when we make this kind of comparison across countries, there is no reason to expect them to.
Nevertheless, there are reasons we might want to transform the independent variables before running a regression model — one is to quantify the “importance” of the different factors. We can do that by standardizing the variables, which means transforming them to have mean 0 and standard deviation 1.
In the untransformed variables, the means vary in magnitude from about 0.1 to 64.
And, as I’ve already noted, they are expressed in different units.
There is no meaningful way to compare the parameter of life_exp, which is steps of the Cantril ladder per year of life expectancy, with the parameter of corruption, which is in steps of the ladder per percentage point.
But we can make these comparisons meaningful by standardizing the variables — that is, subtracting off the mean and dividing by the standard deviation.
In [19]:
standardized=(data-data.mean())/data.std()
After transformation, the means are all close to 0.
The intercept is close to 0 — that’s a consequence of how the math works out.
And now the magnitudes of the parameters indicate how much a change in each independent variable, expressed as a multiple of its standard deviation, is associated with a change in the dependent variables, also as a multiple of its standard deviation.
By this metric, social is the most important factor, followed closely by log_gdp and life_exp. generosity is the least important factor, at least as it is quantified by this variable.
An alternative to standardization is normalization, which transforms a variable so its range is between 0 and 1.
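For a DataFrame, a minimal version of that transform looks like this:

normalized = (data - data.min()) / (data.max() - data.min())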
To be honest, I’m not sure what the point of normalization is — it makes the results a little harder to interpret, and it doesn’t have any advantages I can think of.
We’ll look more closely at those results soon, but first let’s look at one more preprocessing option, transforming values to follow a normal distribution.
The ps are the cumulative probabilities; the zs are the corresponding quantiles in a standard normal distribution (ppf computes the “percent point function” which is an unnecessary name for the inverse CDF).
Inside the loop, we sort transformed by one of the columns and then assign the zs in that order.
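Here's a sketch of that transform. The exact choice of cumulative probabilities and the variable names are mine.

from scipy.stats import norm

n = len(data)
ps = (np.arange(n) + 0.5) / n    # cumulative probabilities, avoiding 0 and 1
zs = norm.ppf(ps)                # corresponding standard normal quantiles

transformed = data.copy()
for col in transformed.columns:
    # sort by this column, then assign the zs in that order
    order = transformed[col].sort_values().index
    transformed.loc[order, col] = zs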
The results with standardized variables are the easiest to interpret, so in that sense they are “right”. The other transformations have no advantages over standardization, and they produce somewhat different parameters.
Regression does not require the dependent or independent variables to follow a normal distribution.
It only assumes that the distribution of residuals is approximately normal, and even that requirement is not strict — regression is quite robust.
You do not need to check whether any of these variables follows a normal distribution, and you definitely should not use a statistical test.
As part of your usual exploratory data analysis, you should look at the distribution of all variables, but that’s primarily to detect any anomalies, outliers, or really extreme distributions — not to check whether they fit normal distributions.
If you want to use the parameters of the model to quantify the importance of the independent variables, use standardization. There’s no reason to use normalization, and definitely no reason to transform to a normal distribution.
Last month Ryan Burge published “The Nones Have Hit a Ceiling”, using data from the 2023 Cooperative Election Study to show that the increase in the number of Americans with no religious affiliation has hit a plateau. Comparing the number of Atheists, Agnostics, and “Nothing in Particular” between 2020 and 2023, he found that “the share of non-religious Americans has stopped rising in any meaningful way.”
When I read that, I was frustrated that the HERI Freshman Survey had not published new data since 2019. I’ve been following the rise of the “Nones” in that dataset since one of my first blog articles.
As you might guess, the Freshman Survey reports data from incoming college students. Of course, college students are not a representative sample of the U.S. population, and as rates of college attendance have increased, they represent a different slice of the population over time. Nevertheless, surveying young adults over a long interval provides an early view of trends in the general population.
Well, I have good news! I got a notification today that HERI has published data tables for the 2020 through 2023 surveys. They are in PDF, so I had to do some manual data entry, but I have results!
Religious preference
Among other questions, the Freshman Survey asks students to select their “current religious preference” from a list of seventeen common religions, “Other religion,” “Atheist”, “Agnostic”, or “None.”
The options “Atheist” and “Agnostic” were added in 2015. For consistency over time, I compare the “Nones” from previous years with the sum of “None”, “Atheist” and “Agnostic” since 2015.
The following figure shows the fraction of Nones from 1969, when the question was added, to 2023, the most recent data available.
The blue line shows data until 2015; the orange line shows data from 2015 through 2019. The gray line shows a quadratic fit. The light gray region shows a 95% predictive interval.
The quadratic model continues to fit the data well and the recent trend is still increasing, but if you look at only the last few data points, there is some evidence that the rate of increase is slowing.
But not for women
Now here’s where things get interesting. Until recently, female students have been consistently more religious than male students. But that might be changing. The following figure shows the percentages of Nones for male and female students (with a missing point in 2018, when this breakdown was not available).
Since 2019, the percentage of Nones has increased for women and decreased for men, and it looks like women may now be less religious. So the apparent slowdown in the overall trend might be a mix of opposite trends in the two groups.
The following graph shows the gender gap over time, that is, the difference in percentages of male and female students with no religious affiliation.
The gap was essentially unchanged from 1990 to 2020. But in the last three years it has changed drastically. It now falls outside the predictive range based on past data, which suggests a change this large would be unlikely by chance.
Similarly, for attendance at religious services, the gender gap has closed and possibly reversed.
The survey also asks students how often they “attended a religious service” in the last year. The choices are “Frequently,” “Occasionally,” and “Not at all.” Respondents are instructed to select “Occasionally” if they attended one or more times, so a wedding or a funeral would do it.
The following figure shows the fraction of students who reported any religious attendance in the last year, starting in 1968. I discarded a data point from 1966 that seems unlikely to be correct.
There is a clear dip in 2021, likely due to the pandemic, but the last two data points have returned to the long-term trend.
Data Source
The data reported here are available from the HERI publications page. Since I entered the data manually from PDF documents, it’s possible I have made errors.
“The Christian right is coming for divorce next,” according to this recent Vox article, and “Some conservatives want to make it a lot harder to dissolve a marriage.”
As always when I read an article like this, I want to see data — and the General Social Survey has just the data I need. Since 1974, they have asked a representative sample of the U.S. population, “Should divorce in this country be easier or more difficult to obtain than it is now?” with the options to respond “Easier”, “More difficult”, or “Stay as is”.
Here’s how the responses have changed over time:
Since the 1990s, the percentage saying divorce should be more difficult has dropped from about 50% to about 30%. [The last data point, in 2022, may not be reliable. Due to disruptions during the COVID pandemic, the GSS changed some elements of their survey process — in the 2021 and 2022 data, responses to several questions have deviated from long-term trends in ways that might not reflect real changes in opinion.]
If we break down the results by political alignment, we can see whether these changes are driven by liberals, conservatives, or both.
Not surprisingly, conservatives are more likely than liberals to believe that divorce should be more difficult, by a margin of about 20 percentage points. But the percentages have declined in all groups — and fallen below 50% even among self-described conservatives.
As the Vox article documents, conservatives in several states have proposed legislation to make divorce more difficult. Based on the data, these proposals are likely to be unpopular.
When do we use N and when N-1 for computation of sd and cov?
So I was doing a task, where I had a portfolio of shares and I was given their yields. Then I had to calculate [covariance], [standard deviation] etc. But I simply do not know when to use N and when N-1. I only know that it has to do something with degrees of freedom. Can someone explain it to me like to a 10 year old? Thanks!
If you look up the formula for standard deviation, you are likely to find two versions. One has the sample size, N, in the denominator; the other has N-1.
Sometimes the explanation of when you should use each of them is not clear. And to make it more confusing, some software packages compute the N version by default, and some the N-1 version.
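To demonstrate, we need a small sample to work with. The actual values don't matter; np.arange(10) is a hypothetical choice that happens to reproduce the results below.

import numpy as np

data = np.arange(10)
N = len(data)
deviations = data - np.mean(data)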
Next we compute the sum of the squared deviations.
In [6]:
ssd = np.sum(deviations**2)
ssd
Out[6]:
82.5
Then we compute the variance — we’ll start with the version that has N in the denominator.
In [7]:
var=ssd/N
Finally, the standard deviation is the square root of variance.
In [8]:
std = np.sqrt(var)
std
Out[8]:
2.8722813232690143
And here’s the version with N-1 in the denominator.
In [9]:
var = ssd / (N-1)
std = np.sqrt(var)
std
Out[9]:
3.0276503540974917
With N=10, the difference between the versions is non-negligible, but this is pretty much the worst case. With larger sample sizes, the difference is smaller — and with smaller sample sizes, you probably shouldn’t compute a standard deviation at all.
By default, NumPy computes the N version.
In [10]:
np.std(data)
Out[10]:
2.8722813232690143
But with the optional argument ddof=1, it computes the N-1 version.
In [11]:
np.std(data,ddof=1)
Out[11]:
3.0276503540974917
By default, Pandas computes the N-1 version.
In [12]:
pd.Series(data).std()
Out[12]:
3.0276503540974917
But with the optional argument ddof=0, it computes the N version.
In [13]:
pd.Series(data).std(ddof=0)
Out[13]:
2.8722813232690143
It is not ideal that the two libraries have different default behavior.
And it might not be obvious why the parameter that controls this behavior is called ddof.
The answer is related to OP’s question about “degrees of freedom”.
To understand that term, suppose I ask you to think of three numbers.
You are free to choose any first number, any second number, and any third number.
In that case, you have three degrees of freedom.
Now suppose I ask you to think of three numbers, but they are required to add up to 10.
You are free to choose the first and second numbers, but then the third number is determined by the requirement. So you have only two degrees of freedom.
If we are given a dataset with N elements, we generally assume that it has N degrees of freedom unless we are told otherwise.
That’s what the ddof parameter does.
It stands for “delta degrees of freedom”, where “delta” indicates a change or a difference.
In this case, it is the difference between the presumed degrees of freedom, N, and the degrees of freedom that should be used for the computation, N - ddof.
So, with ddof=0, the denominator is N. With ddof=1, the denominator is N-1.
So when should you use one or the other? It depends on whether you are describing data or making a statistical inference.
The N version is a descriptive statistic — it quantifies the variability of the data.
The N-1 version is an estimator — if data is a sample from a population, we can use it to infer the standard deviation of the population.
To be more specific, the N-1 version is an almost unbiased estimator, which means that it gets the answer almost right, on average, in the long run.
Let’s see how that works.
Consider a uniform distribution from 0 to 10.
In [14]:
from scipy.stats import uniform

dist = uniform(0, 10)
Here are the actual mean and standard deviation of this distribution, computed analytically.
In [15]:
dist.mean(),dist.std()
Out[15]:
(5.0, 2.8867513459481287)
If we generate a large random sample from this distribution, we expect its standard deviation to be close to 2.9.
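Here's a sketch of the experiment; the sample sizes, number of iterations, and seed are my choices. First we check a large sample, then we generate many small samples, compute both versions for each, and ask which version comes closer to the actual value on average.

np.random.seed(17)

sample = dist.rvs(1000)
np.std(sample), np.std(sample, ddof=1)   # both should be close to 2.89

# now repeat with many small samples and average the estimates
standard_deviations_N = [np.std(dist.rvs(10), ddof=0) for _ in range(10001)]
standard_deviations_Nminus1 = [np.std(dist.rvs(10), ddof=1) for _ in range(10001)]

np.mean(standard_deviations_N), np.mean(standard_deviations_Nminus1)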
Yes! The N-1 version is a much less biased estimator of the standard deviation.
That’s why the N-1 version is called the “sample standard deviation”, because it is appropriate when we are using a sample to estimate the standard deviation of a population.
If, instead, we are able to measure an entire population, we can use the N version — which is why it is called the “population standard deviation”.
On one hand, it is impressive that such a simple correction yields a much better estimator. On the other hand, it almost never matters in practice.
If you have a large sample, the difference between the two versions is negligible.
If you have a small sample, you can’t make a precise estimate of the standard deviation anyway.
To demonstrate the second point, let’s look at the distributions of the estimates using the two versions.
In [20]:
sns.kdeplot(standard_deviations_N, label='N version')
sns.kdeplot(standard_deviations_Nminus1, label='N-1 version')
decorate(xlabel='Estimated standard deviation')
Compared to the variation in the estimates, the difference between the versions is small.
In summary, if your sample size is small and it is really important to avoid underestimating the standard deviation, use the N-1 correction.
Otherwise it doesn’t matter.
To me, the last [column], the “cumulative frequency” expressed in percentages is a percentile: it’s the percentage of scores lower then or the same as the score it’s calculated for.
However, in my class it’s explained that the percentile for score 14 would be 46, and that the percentile for score 17 would be 90, not 94.
The way they arrive at this is that upper and lower limits for the percentile calculation have to be set, in the case of score 14 that would be 14 and 13, and then the percentile would be calculated as (upper limit+lower limit)/2.
In the case of 14: (38+54)/2 = 46
This makes absolutely no sense to me: why are we introducing arbitrary limits to average out when the cumulative frequency in percentages, to me, seems to meet the definition of a percentile perfectly?
What’s at issue here is the definition of percentile and percentile rank.
As an example, let’s consider the distribution of height, and suppose the median is 168 cm.
In that case, 168 cm is the 50th percentile, and if someone is 168 cm tall, their percentile rank is 50%.
By this definition, the percentile rank for a particular quantity is its cumulative frequency expressed as a percentage, exactly as OP suggests.
If you are taller than, or the same height as, 50% of the population, your percentile rank is 50%.
However, some classes teach the alternative definition OP presents.
So, let me explain the two definitions, and we can discuss the pros and cons.
At the risk of giving away the ending, here is my conclusion:
Percentiles and percentile ranks are perfectly well defined, and I don’t think we should encourage variations on the definitions.
In a dataset with many repeated values, percentile ranks might not measure what we want — in that case we might want to compute a different statistic, but then we should give it a different name.
Now suppose we want to compute the median score and the interquartile range (IQR).
We can use the NumPy function percentile to compute the 25th, 50th, and 75th percentiles.
In [6]:
np.percentile(sample,[25,50,75])
Out[6]:
array([13., 14., 16.])
So the median score is 14 and the IQR is 16 - 13 = 3.
Going in the other direction, if we are given a score, q, we can compute its percentile rank by computing the fraction of scores less than or equal to q, expressed as a percentage.
In [7]:
q = 14
np.mean(sample <= q) * 100
Out[7]:
54.166666666666664
If someone gets the median score of 14, their percentile rank is about 54%.
Here we see the first thing that bothers people about percentiles and percentile ranks: with a finite dataset, they are not invertible.
If you compute the percentile rank of the 50th percentile, the answer is not always 50%.
To see why, let’s consider the CDF.
Percentiles and percentile ranks are closely related to the cumulative distribution function (CDF).
To demonstrate, we can use empiricaldist to make a Cdf object from the reconstituted sample.
A Cdf object is a Pandas Series that contains the observed quantities as an index and their cumulative probabilities as values.
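Making the Cdf looks like this:

from empiricaldist import Cdf

cdf = Cdf.from_seq(sample)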
We can use square brackets to look up a quantity and get a cumulative probability.
In [9]:
cdf[14]
Out[9]:
0.5416666666666666
If we multiply by 100, we get the percentile rank.
But square brackets only work with quantities in the dataset.
If we look up any other quantity, that’s an error.
However, we can use parentheses to call the Cdf object like a function.
In [10]:
cdf(14)
Out[10]:
array(0.54166667)
And that works with any numerical quantity.
In [11]:
cdf(13.5)
Out[11]:
array(0.375)
Cdf provides a step method that plots the CDF as a step function, which is what it technically is.
In [12]:
cdf.step(label='')
decorate(ylabel='CDF')
To be more explicit, we can put markers at the top of each vertical segment to indicate how the function is evaluated at one of the observed quantities.
The Cdf object provides a method called inverse that computes the inverse CDF — that is, if you give it a cumulative probability, it computes the corresponding quantity.
In [14]:
p1 = 0.375
cdf.inverse(p1)
Out[14]:
array(13.)
The result from an inverse lookup is sometimes called a “quantile”, which is the name of a Pandas method that computes quantiles.
If we put the sample into a Series, we can use the quantile method to look up the cumulative probability p1 and get the corresponding quantity.
By default, quantile interpolates linearly between the observed values.
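Here's the default behavior (sample_series is the Series version of the sample):

sample_series = pd.Series(sample)
sample_series.quantile(p1)     # linear interpolation by default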
If we want this function to behave according to the definition of the CDF, we have to specify a different kind of interpolation.
In [16]:
sample_series.quantile(p1,interpolation='lower')
Out[16]:
13
If the result from the inverse CDF is called a quantile, you might wonder what we call the result from the CDF.
By analogy with percentile and percentile rank, I think it should be called a quantile rank, but no one calls it that.
As far as I can tell, it’s just called a cumulative probability.
Percentiles and percentile ranks have a perfectly good definition, which follows from the definition of the CDF.
So what’s the problem?
Well, I have to admit — the definition is a little arbitrary.
In the example, suppose your score is 14.
Your score is strictly better than 37.5% of the other scores.
In [17]:
less=np.mean(sample<14)less
Out[17]:
0.375
It’s strictly worse than 45.8% of them.
In [18]:
more=np.mean(sample>14)more
Out[18]:
0.4583333333333333
And equal to 16.7%.
In [19]:
same=np.mean(sample==14)same
Out[19]:
0.16666666666666666
So if we want a single number that quantifies your performance relative to the rest of the class, which number should we use?
The definition of the CDF suggests we should report the fraction of the class whose score is less than or equal to yours.
But that is an arbitrary choice.
We could just as easily report the fraction whose score is strictly less — or the midpoint of these extremes.
For a score of 14, here’s the midpoint:
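Using the fractions we computed above:

midpoint = (less + (less + same)) / 2 * 100
midpoint    # about 45.8, close to the 46 OP's class computed from rounded percentages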
I prefer the CDF-based definition of percentile rank because it’s consistent with the way most computational tools work.
The midpoint-based definition feels like a holdover from the days of working with small datasets by hand.
That’s just my preference — if people want to compute midpoints, I won’t stop them.
But for the sake of clarity, we should give different names to different statistics.
Historically, I think the CDF-based definition has the stronger claim on the term “percentile rank”.
For the midpoint-based definition, ChatGPT suggests “midpoint percentile rank” or “average percentile rank”.
Those seem fine to me, but it doesn’t look like they are widely used.
Is it correct to use logarithmic transformation in order to mitigate heteroskedasticity?
For my studies I gathered data on certain preferences across a group of people. I am trying to figure out if I can pinpoint preferences to factors such as gender in this case.
I used mixed ANOVA analysis with good success however one of my hypothesis came up with heteroskedasticity when doing Levene’s test. [I’ve been] breaking my head all day on how to solve this. I’ve now used logarithmic transformation to all 3 test results and run another Levene’s. When using the media value the test now results [in] homoskedasticity, however interaction is no longer significant?
Is this the correct way to deal with this problem or is there something I am missing? Thanks in advance to everyone taking their time to help.
Although the question is about ANOVA, I’m going to reframe it in terms of regression, for two reasons:
Discussion of heteroskedasticity is clearer in the context of regression.
For many problems, a regression model is better than ANOVA anyway.
Linear regression is based on a model of the data-generating process where the dependent variable y is the sum of
A linear function of x with an unknown slope and intercept, and
Random values drawn from a Gaussian distribution with mean 0 and unknown standard deviation, sigma, that does not depend on x.
If, contrary to the second assumption, sigma depends on x, the data-generating process is heteroskedastic.
Some amount of heteroskedasticity is common in real data, but most of the time it’s not a problem, because:
Heteroskedasticity doesn’t bias the estimated parameters of the model, only the standard errors,
Even very strong heteroskedasticity doesn’t affect the standard errors by much, and
For practical purposes we don’t need standard errors to be particularly precise.
To demonstrate, let’s generate some data.
First, we’ll draw xs from a normal distribution.
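Here's a sketch of one way to do that. The specific parameters and seed are my choices, so they won't reproduce the exact numbers below, but ols_results plays the same role as in the results that follow.

import numpy as np
import statsmodels.api as sm

np.random.seed(17)
n = 200
xs = np.random.normal(30, 5, size=n)

# make sigma depend on x, so the data-generating process is heteroskedastic
sigmas = xs / 10
ys = 2 + 0.5 * xs + np.random.normal(0, sigmas)

X = sm.add_constant(xs)
ols_results = sm.OLS(ys, X).fit()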
OP mentions using the Levene test for heteroskedasticity, which is used to test whether sigma is different between groups.
For continuous values of x and y, we can use the Breusch-Pagan Lagrange Multiplier test:
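For example, using the ols_results object from the sketch above:

from statsmodels.stats.diagnostic import het_breuschpagan

lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(ols_results.resid,
                                                        ols_results.model.exog)
lm_pvalue, f_pvalue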
Both tests produce small p-values, which means that if we generate a dataset by a homoskedastic process, there is almost no chance it would have as much heteroskedasticity as the dataset we generated.
If you have never heard of either of these tests, don’t panic — neither had I until I looked them up for this example.
And don’t worry about remembering them, because you should never use them again.
Like testing for normality, testing for heteroskedasticity is never useful.
Why? Because in almost any real dataset, you will find some heteroskedasticity.
So if you test for it, there are only two possible results:
If the heteroskedasticity is small and you don’t have much data, you will fail to reject the null hypothesis.
If the heteroskedasticity is large or you have a lot of data, you will reject the null hypothesis.
Either way, you learn nothing — and in particular, you don’t learn the answer to the question you actually care about, which is whether the heteroskedasticity is large enough to affect the standard errors by an amount you should care about.
And the answer to that question is almost always no.
The dataset we generated has very large heteroskedasticity.
Let’s see how much effect that has on the results.
Here are the standard errors from simple linear regression:
In [14]:
ols_results.bse
Out[14]:
array([6.06159018, 0.20152957])
Now, there are several ways to generate standard errors that are robust in the presence of heteroskedasticity.
One is the Huber-White estimator, which we can compute like this:
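Here's a sketch using statsmodels, where HC0 is the classic Huber-White (“sandwich”) estimator:

robust_results = ols_results.get_robustcov_results(cov_type='HC0')
robust_results.bse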
And one more option is a wild bootstrap, which resamples the residuals by multiplying them by a random sequence of 1 and -1.
This way of resampling preserves heteroskedasticity, because it only changes the sign of the residuals, not the magnitude, and it maintains the relationship between those magnitudes and x.
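Here's a minimal wild-bootstrap sketch, reusing ols_results from above (the number of iterations is arbitrary):

fitted = ols_results.fittedvalues
resid = ols_results.resid

boot_slopes = []
for _ in range(1001):
    # flip the sign of each residual at random, keeping its magnitude and its x
    signs = np.random.choice([-1, 1], size=len(resid))
    boot_ys = fitted + resid * signs
    boot_results = sm.OLS(boot_ys, ols_results.model.exog).fit()
    boot_slopes.append(boot_results.params[1])

np.std(boot_slopes)    # bootstrap standard error of the slope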
The standard errors we get from different methods are notably different, but the differences probably don’t matter.
First, I am skeptical of the results from Huber regression.
With this kind of heteroskedasticity, the standard errors should be larger than what we get from OLS.
I’m not sure what the problem is, and I haven’t bothered to find out, because I don’t think Huber regression is necessary in the first place.
The results from bootstrapping and the Huber-White estimator are the most reliable — which suggests that the standard errors from quantile regression are too big.
In my opinion, we don’t need esoteric methods to deal with heteroskedasticity.
If heteroskedasticity is extreme, consider using wild bootstrap.
Otherwise, just use ordinary least squares.
Now let’s address OP’s headline question, “Is it correct to use logarithmic transformation in order to mitigate heteroskedasticity?”
In some cases, a log transform can reduce or eliminate heteroskedasticity.
However, there are several reasons this is not a good idea in general:
As we’ve seen, heteroskedasticity is not a big problem, so it usually doesn’t require any mitigation.
Taking a log transform of one or more variables in a regression model changes the meaning of the model — it hypothesizes a relationship between the variables that might not make sense in context.
Anyway, taking a log transform doesn’t always help.
To demonstrate the last point, let’s see what happens if we apply a log transform to the dependent variable:
In [22]:
log_ys=np.log10(ys)
Here’s what the scatter plot looks like after the transform.
The p-values are bigger, which suggests that the log transform mitigated the heteroskedasticity a little.
But if the goal was to eliminate heteroskedasticity, the log transform didn’t do it.
Heteroskedasticity is common in real datasets — if you test for it, you will often find it, provided you have enough data.
Either way, testing does not answer the question you really care about, which is whether the heteroskedasticity is extreme enough to be a problem.
Plain old linear regression is robust to heteroskedasticity, so unless it is really extreme, it is probably not a problem.
Even in the worst case, heteroskedasticity does not bias the estimated parameters — it only affects the standard errors — and we don’t need standard errors to be particularly precise anyway.
Although a log transform can sometimes mitigate heteroskedasticity, it doesn’t always succeed, and even if it does, it’s usually not necessary.
A log transform changes the meaning of the regression model in ways that might not make sense in context.
So, use a log transform if it makes sense in context, not to mitigate a problem that’s not much of a problem in the first place.
Bit of a weird one but I’m hoping you’re the community to help. I work in children’s residential care and I’m trying to find a way of better matching potential young people together.
The way we calculate individual risk for a child is
risk = likelihood + impact (R=L+I), so
L4 + I5 = R9
That works well for individuals but I need to work out a good way of calculating a combined risk to place children [in] the home together. I’m currently using the [arithmetic] average but I don’t feel that it works properly as the average is always lower then the highest risk.
I’ll use a fairly light risk as an example, running away from the home. (We call this MFC missing from care) It’s fairly common that one of the kids will run away from the home at some point or another either out of boredom or frustration. If young person A has a risk of 9 and young person B has a risk of 12 the the average risk of MFC in the home would be 10.5
HOWEVER more often then not having two young people that go MFC will often result in more episodes as they will run off together, so having a lower risk rating doesn’t really make sense. Adding the two together to 21 doesn’t really work either though as the likelihood is the thing that increases not necessarily the impact.
I’m a lot better at chasing after run away kids then I am mathematics so please help 😂.
Here’s one way to think about this question: based on background knowledge and experience, OP has qualitative ideas about what happens when we put children at different risks together, and he is looking for a statistical summary that is consistent with these ideas.
The arithmetic mean probably makes sense as a starting point, but it clashes with the observation that if you put two children together who are high risk, they interact in ways that increase the risk.
For example, if we put together children with risks 9 and 12, the average is 10.5, and OP’s domain knowledge says that’s too low — it should be more than 12.
At the other extreme, I’ll guess that putting together two low risk children might be beneficial to both — so the combined risk might be lower than either.
And that implies that there is a neutral point somewhere in the middle, where the combined risk is equal to the individual risks.
To construct a summary statistic like that, I suggest a weighted sum of the arithmetic and geometric means.
That might sound strange, but I’ll show that it has the properties we want.
And it might not be as strange as it sounds — there’s a reason it might make sense.
The following function computes the arithmetic mean of a sequence of values, which is the sum divided by n.
In [20]:
def amean(xs):
    n = len(xs)
    return np.sum(xs) / n
The following function computes the geometric mean of a sequence, which is the product raised to the power 1/n.
In [21]:
def gmean(xs):
    n = len(xs)
    return np.prod(xs)**(1/n)
And the following function computes the weighted sum of the arithmetic and geometric means.
The constant k determines how much weight we give the geometric mean.
The value 4 determines the neutral point. So if we put together two people with risk 4, the combined average is 4.
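The exact form of the function is my reconstruction, but with k=1 and a neutral point of 4 it reproduces the results below:

def mean_plus_gmean(*xs, k=1, neutral=4):
    """Weighted sum of the arithmetic and geometric means."""
    return amean(xs) + k * (gmean(xs) - neutral)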
In [23]:
mean_plus_gmean(4,4)
Out[23]:
4.0
Above the neutral point, there is a penalty if we put together two children with higher risks.
In [24]:
mean_plus_gmean(5,5)
Out[24]:
6.0
In that case, the combined risk is higher than the individual risks.
Below the neutral point, there is a bonus if we put together two children with low risks.
In [25]:
mean_plus_gmean(3,3)
Out[25]:
2.0
In that case, the combined risk is less than the individual risks.
If we combine low and high risks, the discrepancy brings the average down a little.
In [26]:
mean_plus_gmean(3,5)
Out[26]:
3.872983346207417
In the example OP presented, where we put together two people with high risk, the penalty is substantial.
In [27]:
mean_plus_gmean(9,12)
Out[27]:
16.892304845413264
If that penalty seems too high, we can adjust the weight, k, and the neutral point accordingly.
This behavior extends to more than two people.
If everyone is neutral, the result is neutral.
In [28]:
mean_plus_gmean(4,4,4)
Out[28]:
3.9999999999999996
If you add one person with higher risk, there’s a moderate penalty, compared to the arithmetic mean.
In [29]:
mean_plus_gmean(4,4,5),amean([4,4,5])
Out[29]:
(4.6422027133971, 4.333333333333333)
With two higher risk people, the penalty is bigger.
The idea behind this suggestion is logistic regression with an interaction term.
Let me explain where that comes from.
OP explained:
The way we calculate individual risk for a child is
risk = likelihood + impact (R=L+I), so
L4 + I5 = R9
At first I thought it was strange to add a likelihood and an impact score.
Thinking about expected value, I thought it should be the product of a probability and a cost.
But if both are on a log scale, adding these logarithms is like multiplying probability by impact on a linear scale, so that makes more sense.
And if the scores are consistent with something like a log-odds scale, we can see a connection with logistic regression.
If r1 and r2 are risk scores, we can imagine a regression equation that looks like this, where p is the probability of an outcome like “missing from care”:
logit(p) = a r1 + b r2 + c r1 r2
In this equation, logit(p) is the combined risk score, a, b, and c are unknown parameters, and the product r1 r2 is an interaction term that captures the tendency of high risks to amplify each other.
With enough data, we could estimate the unknown parameters.
Without data, the best we can do is choose values that make the results consistent with expectations.
Since r1 and r2 are interchangeable, they have to have the same parameter.
And since the whole risk scale has an unspecified zero point, we can set a and b to 1/2.
Which means there is only one parameter left, the weight of the interaction term.
logit(p) = (r1 + r2) / 2 + k r1 r2
Now we can see that the first term is the arithmetic mean and the second term is close to the geometric mean, but without the square root.
So the function I suggested — the weighted sum of the arithmetic and geometric means — is not identical to the logistic model, but it is motivated by it.
With this rationale in mind, we might consider a revision: rather than add the likelihood and impact scores, and then compute the weighted sum of means, it might make more sense to separate likelihood and impact, compute the weighted sum of the means of the likelihoods, and then add back the impact.
This question got my attention because OP is working on a challenging and important problem — and they provided useful context.
It’s an intriguing idea to define something that is intuitively like an average, but is not always bounded between the minimum and maximum of the data.
If we think strictly about generalized means, that’s not possible. But if we think in terms of logarithms, regression, and interaction terms, we find a way.