
What size is that correlation?

This article is related to Chapter 6 of Probably Overthinking It, which is available for preorder now. It is also related to a new course at Brilliant.org, Explaining Variation.

Suppose you find a correlation of 0.36. How would you characterize it? I posed this question to the stalwart few still floating on the wreckage of Twitter, and here are the responses.

It seems like there is no agreement about whether 0.36 is small, medium, or large. In the replies, nearly everyone said it depends on the context, and of course that’s true. But there are two things they might mean, and I only agree with one of them:

  • In different areas of research, you typically find correlations in different ranges, so what’s “small” in one field might be “large” in another.
  • It depends on the goal of the project — that is, what you are trying to predict, explain, or decide.

The first interpretation is widely accepted in the social sciences. For example, this highly cited paper proposes as a guideline that “an effect-size r of .30 indicates an effect that is large and potentially powerful in both the short and the long run.” This guideline is offered in light of “the average sizes of effects in the published literature of social and personality psychology.”

I don’t think that’s a good argument. If you study mice, and you find a large mouse, that doesn’t mean you found an elephant.

But the same paper offers what I think is better advice: “Report effect sizes in terms that are meaningful in context”. So let’s do that.

What is the context?

I asked about r = 0.36 because that’s the correlation between general mental ability (g) and the general factor of personality (GFP) reported in this paper, which reports meta-analyses of correlations between a large number of cognitive abilities and personality traits.

Now, for purposes of this discussion, you don’t have to believe that g and GFP are valid measures of stable characteristics. Let’s assume that they are — if you are willing to play along — just long enough to ask: if the correlation between them is 0.36, what does that mean?

I propose that the answer depends on whether we are trying to make a prediction, explain a phenomenon, or make decisions that have consequences. Let’s take those in turn.

Prediction

Thinking about correlation in terms of predictive value, let’s assume that we can measure both g and GFP precisely, and that both are expressed as standardized scores with mean 0 and standard deviation 1. If the correlation between them is 0.36, and we know that someone has g=1 (one standard deviation above the mean), we expect them to have GFP=0.36 (0.36 standard deviations above the mean), on average.

In terms of percentiles, someone with g=1 is in the 84th percentile, and we would expect their GFP to be in the 64th percentile. So in that sense, g conveys some information about GFP, but not much.
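Here’s a quick check of those percentiles, assuming both scores are standard normal (a sketch, not code from the original analysis):

```python
from scipy.stats import norm

r = 0.36
g = 1.0
predicted_gfp = r * g           # the regression prediction

print(norm.cdf(g))              # ~0.84, the percentile rank of g=1
print(norm.cdf(predicted_gfp))  # ~0.64, the expected percentile rank of GFP
```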

To quantify predictive accuracy, we have several metrics to choose from — I’ll use mean absolute error (MAE) because I think it is the most interpretable metric of accuracy for a continuous variable. In this scenario, if we know g exactly, and use it to predict GFP, the MAE is 0.75, which means that we expect to be off by 0.75 standard deviations, on average.

For comparison, if we don’t know g, and we are asked to guess GFP, we expect to be off by 0.8 standard deviations, on average. Compared to this baseline, knowing g reduces MAE by about 6%. So a correlation of 0.36 doesn’t improve predictive accuracy by much, as I discussed in this previous blog post.
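Here’s a sketch of that comparison, using two facts about this model: the residual standard deviation of the prediction is sqrt(1 - r^2), and the MAE of a zero-mean normal is its standard deviation times sqrt(2/pi):

```python
import numpy as np

r = 0.36
residual_sd = np.sqrt(1 - r**2)                # sd of GFP given g
mae_with_g = residual_sd * np.sqrt(2 / np.pi)  # ~0.745
mae_baseline = np.sqrt(2 / np.pi)              # ~0.798, guessing the mean

print(1 - mae_with_g / mae_baseline)           # ~0.07, roughly a 6% reduction
```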

Another metric we might consider is classification accuracy. For example, suppose we know that someone has g>0 — so they are smarter than average. We can compute the probability that they also have GFP>0 — informally, they are nicer than average. This probability is about 0.62.

Again, we can compare this result to a baseline where g is unknown. In that case the probability that someone is nicer than average is 0.5. Knowing that someone is smart moves the needle from 0.5 to 0.62, which means that it contributes some information, but not much.

Going in the other direction, if we think of low g as a risk factor for low GFP, the risk ratio would be 1.2. Expressed as an odds ratio it would be 1.6. In medicine, a risk factor with RR=1.2 or OR=1.6 would be considered a small increase in risk. But again, it depends on context — for a common condition with large health effects, identifying a preventable factor with RR=1.2 could be a very important result!
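If you want to check these numbers, here’s a minimal sketch under the same bivariate normal assumption:

```python
from scipy.stats import multivariate_normal

r = 0.36
mvn = multivariate_normal(mean=[0, 0], cov=[[1, r], [r, 1]])

# P(GFP>0 and g>0) equals P(GFP<0 and g<0) by symmetry
p_both = mvn.cdf([0, 0])
p_cond = p_both / 0.5          # P(GFP>0 | g>0), ~0.62

# risk of low GFP given low g, relative to the base rate of 0.5
print(p_cond / 0.5)            # risk ratio, ~1.2

# the same comparison as an odds ratio (the baseline odds are 1)
print(p_cond / (1 - p_cond))   # ~1.6
```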

Explanation

Instead of prediction, suppose you are trying to explain a particular phenomenon and you find a correlation of 0.36 between two relevant variables, A and B. On the face of it, such a correlation is evidence that there is some kind of causal relationship between the variables. But by itself, the correlation gives no information about whether A causes B, B causes A, or any number of other factors cause both A and B.

Nevertheless, it provides a starting place for a hypothetical question like, “If A causes B, and the strength of that causal relationship yields a correlation of 0.36, would that be big enough to explain the phenomenon?” or “What part of the phenomenon could it explain?”

As an example, let’s consider the article that got me thinking about this, which proposes in the title the phenomenon it promises to explain: “Smart, Funny, & Hot: Why some people have it all…”

Supposing that “smart” is quantified by g and that “funny” and other positive personality traits are quantified by GFP, and that the correlation between them is 0.36, does that explain why “some people have it all”?

Let’s say that “having it all” means g>1 and GFP>1. If the factors were uncorrelated, only 2.5% of the population would exceed both thresholds. With correlation 0.36, it would be 5%. So the correlation could explain why people who have it all are about twice as common as they would be otherwise.
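Here’s how those percentages can be computed under a bivariate normal model (a sketch, using the threshold and correlation from the text):

```python
from scipy.stats import multivariate_normal

def p_both_exceed(r, t=1.0):
    """P(X > t and Y > t) for standard bivariate normals with correlation r."""
    mvn = multivariate_normal(mean=[0, 0], cov=[[1, r], [r, 1]])
    # by symmetry, P(X > t, Y > t) = P(X < -t, Y < -t)
    return mvn.cdf([-t, -t])

print(p_both_exceed(0.0))    # ~0.025 if the factors were uncorrelated
print(p_both_exceed(0.36))   # ~0.05, about twice as common
```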

Again, you don’t have to buy any part of this argument, but it is an example of how an observed correlation could explain a phenomenon, and how we could report the effect size in terms that are meaningful in context.

Decision-making

After prediction and explanation, a third use of an observed correlation is to guide decision-making.

For example, in a 2016 article, ProPublica evaluated COMPAS, an algorithm used to inform decisions about bail and parole. They found that its classification accuracy was 0.61, which they characterized as “somewhat better than a coin toss”. For decisions that affect people’s lives in such profound ways, that accuracy is disturbingly low.

But in another context, “somewhat better than a coin toss” can be a big deal. In response to my poll about a correlation of 0.36, one stalwart replied, “In asset pricing? Say as a signal of alpha? So implausibly large as to be dismissed outright without consideration.”

If I understand correctly, this means that if you find a quantity known in the present that correlates with future prices with r = 0.36, you can use that information to make decisions that are substantially better than chance and outperform the market. But it is extremely unlikely that such a quantity exists.

However, if you make a large number of decisions, and the results of those decisions accumulate, even a very small correlation can yield a large effect. The paper I quoted earlier makes a similar observation in the context of individual differences:

“If a psychological process is experimentally demonstrated, and this process is found to appear reliably, then its influence could in many cases be expected to accumulate into important implications over time or across people even if its effect size is seemingly small in any particular instance.”

I think this point is correct, but incomplete. If a small effect accumulates, it can yield big differences, but if that’s the argument you want to make, you have to support it with a model of the aggregation process that estimates the cumulative effect that could result from the observed correlation.

Predict, Explain, Decide

Whether a correlation is big or small, important or not, and useful or not, depends on the context, of course. But to be more specific, it depends on whether you are trying to predict, explain, or decide. And what you report should follow:

  • If you are making predictions, report a metric of predictive accuracy. For continuous quantities, I think MAE is most interpretable. For discrete values, report classification accuracy — or recall and precision, or AUC.
  • If you are explaining a phenomenon, use a model to show whether the effect you found is plausibly big enough to explain the phenomenon, or what fraction it could explain.
  • If you are making decisions, use a model to quantify the expected benefit — or the distribution of benefits would be even better. If your argument is that small correlations accumulate into big effects, use a model to show how and quantify how much.

As an aside, thinking of modeling in terms of prediction, explanation, and decision-making is the foundation of Modeling and Simulation in Python, now available from No Starch Press and Amazon.com.

The Overton Paradox in Three Graphs

Older people are more likely to say they are conservative.

And older people believe more conservative things.

But if you group people by decade of birth, most groups get more liberal as they get older.

So if people get more liberal, on average, why are they more likely to say they are conservative?

Now there are three ways to find out!

Since some people have asked, I should say that “Overton Paradox” is the name I am giving this phenomenon. It’s named after the Overton window, for reasons that will be clear if you read my explanation.

How Principal Are Your Components?

This post is an offshoot from Chapter 1 of Probably Overthinking It, which is available for pre-order now!

In a previous post I explored the correlations between measurements in the ANSUR-II dataset, which includes 93 measurements from a sample of U.S. military personnel. I found that measurements of the head were weakly correlated with measurements from other parts of the body – and in particular the protrusion of the ears is almost entirely uncorrelated with anything else.

A friend of mine, and co-developer of the Modeling and Simulation class I taught at Olin, asked whether I had tried running principal component analysis (PCA). I had not, but now I have. Let’s look at the results.

Click here to run this notebook on Colab.

The ANSUR data is available from The OPEN Design Lab.

Explained Variance

Here’s a visualization of explained variance versus number of components.

With one component, we can capture 44% of the variation in the measurements. With two components, we’re up to 62%. After that, the gains are smaller (as we expect), but with 10 components, we get up to 78%.
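Here’s a sketch of how those numbers can be computed with scikit-learn, assuming a hypothetical DataFrame ansur that contains the 93 measurement columns:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# `ansur` is a hypothetical DataFrame with the 93 measurement columns
X = StandardScaler().fit_transform(ansur)
pca = PCA().fit(X)

cumulative = np.cumsum(pca.explained_variance_ratio_)
print(cumulative[[0, 1, 9]])   # ~0.44, ~0.62, ~0.78
```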

Loadings

Looking at the loadings, we can see which measurements contribute the most to each of the components, so we can get a sense of which characteristics each component captures.

I won’t explain all of the measurements, but if there are any you are curious about, you can look them up in The Measurer’s Handbook, which includes details on “sampling strategy and measuring techniques” as well as descriptions and diagrams of the landmarks and measurements between them.
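The lists below were generated along these lines, continuing the sketch above (pca and ansur are carried over):

```python
import pandas as pd

# rows of `pca.components_` are the components, in order
loadings = pd.DataFrame(pca.components_, columns=ansur.columns)

for i in range(10):
    print(f"Principal Component {i + 1}:")
    print(loadings.iloc[i].sort_values(key=abs, ascending=False).head(5))
```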

Principal Component 1:
0.135 	 suprasternaleheight
0.134 	 cervicaleheight
0.134 	 buttockkneelength
0.134 	 acromialheight
0.133 	 kneeheightsitting

Principal Component 2:
0.166 	 waistcircumference
-0.163 	 poplitealheight
0.163 	 abdominalextensiondepthsitting
0.161 	 waistdepth
0.159 	 buttockdepth

Principal Component 3:
0.338 	 elbowrestheight
0.31 	 eyeheightsitting
0.307 	 sittingheight
0.228 	 waistfrontlengthsitting
-0.225 	 heelbreadth

Principal Component 4:
0.247 	 balloffootcircumference
0.232 	 bimalleolarbreadth
0.22 	 footbreadthhorizontal
0.218 	 handbreadth
0.212 	 sittingheight

Principal Component 5:
0.319 	 interscyeii
0.292 	 biacromialbreadth
0.275 	 shoulderlength
0.273 	 interscyei
0.184 	 shouldercircumference

Principal Component 6:
-0.34 	 headcircumference
-0.321 	 headbreadth
0.316 	 shoulderlength
-0.277 	 tragiontopofhead
-0.262 	 interpupillarybreadth

Principal Component 7:
0.374 	 crotchlengthposterioromphalion
-0.321 	 earbreadth
-0.298 	 earlength
-0.284 	 waistbacklength
0.253 	 crotchlengthomphalion

Principal Component 8:
0.472 	 earprotrusion
0.346 	 earlength
0.215 	 crotchlengthposterioromphalion
-0.202 	 wristheight
0.195 	 overheadfingertipreachsitting

Principal Component 9:
-0.299 	 tragiontopofhead
0.294 	 crotchlengthposterioromphalion
-0.253 	 bicristalbreadth
-0.228 	 shoulderlength
0.189 	 neckcircumferencebase

Principal Component 10:
0.406 	 earbreadth
0.356 	 earprotrusion
-0.269 	 waistfrontlengthsitting
0.239 	 earlength
-0.228 	 waistbacklength

Here’s my interpretation of the first few components.

  • Not surprisingly, the first component is loaded with measurements of height. If you want to predict someone’s measurements, and can only use one number, choose height.
  • The second component is loaded with measurements of girth. No surprises so far.
  • The third component seems to capture torso length. That makes sense — once you know how tall someone is, it helps to know how that height is split between torso and legs.
  • The fourth component seems to capture hand and foot size (with sitting height thrown in just to remind us that PCA is not obligated to find components that align perfectly with the axes we expect).
  • Component 5 is all about the shoulders.
  • Component 6 is mostly about the head.

After that, things are not so neat. But two things are worth noting:

  • Component 7 is mostly related to the dimensions of the pelvis, but…
  • Components 7, 8, and 10 are surprisingly loaded up with ear measurements.

As we saw in the previous article, there seems to be something special about ears. Once you have exhausted the information carried by the most obvious measurements, the dimensions of the ear seem to be strangely salient.

Taming Black Swans

At SciPy 2023 I presented a talk called “Taming Black Swans: Long-tailed distributions in the natural and engineered world”. Here’s the abstract:

Long-tailed distributions are common in natural and engineered systems; as a result, we encounter extreme values more often than we would expect from a short-tailed distribution. If we are not prepared for these “black swans”, they can be disastrous.

But we have statistical tools for identifying long-tailed distributions, estimating their parameters, and making better predictions about rare events.

In this talk, I present evidence of long-tailed distributions in a variety of datasets — including earthquakes, asteroids, and stock market crashes — discuss statistical methods for dealing with them, and show implementations using scientific Python libraries.

The video from the talk is on YouTube now:

I didn’t choose the thumbnail, but I like it.

Here are the slides, which have links to the resources I mentioned.

Don’t tell anyone, but this talk is part of my stealth book tour!

  • It started in 2019, when I presented a talk at PyData NYC based on Chapter 2: Relay Races and Revolving Doors.
  • In 2022, I presented another talk at PyData NYC, based on Chapter 12: Chasing the Overton Window.
  • In May I presented a talk at ODSC East based on Chapter 7: Causation, Collision, and Confusion.
  • And this talk is based on Chapter 8: The Long Tail of Disaster.

If things go according to plan, I’ll present Chapter 1 at a book event at the Needham Public Library on December 7.

More chapters coming soon!

How Correlated Are You?

This post is an offshoot from Chapter 1 of Probably Overthinking It, which is available for pre-order now!

Suppose you measure the arm and leg lengths of 4082 people. You would expect those measurements to be correlated, and you would be right. In the ANSUR-II dataset, among male members of the armed forces, this correlation is about 0.75 — people with long arms tend to have long legs.

And how about arm length and chest circumference? You might expect those measurements to be correlated too, but not as strongly as arm and leg length, and you would be right again. The correlation is about 0.47.

So some pairs of measurements are more correlated than others. There are a total of 93 measurements in the ANSUR-II dataset, which means there are 93 * 92 / 2 = 4278 correlations between pairs of measurements. So here’s a question that caught my attention: Are there measurements that are uncorrelated (or only weakly correlated) with the others?

To answer that, I computed the average magnitude (positive or negative) of the correlation between each measurement and the other 92. The most correlated measurement is weight, with an average of 0.56. So if you have to choose one measurement, weight seems to provide the most information about all of the others.

The least correlated measurement turns out to be ear protrusion — its average correlation with the other measurements is only 0.03, which is not just small, it is substantially smaller than the next smallest, which is ear breadth, with an average correlation of 0.13.
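Here’s a sketch of that computation with pandas, again assuming a hypothetical DataFrame ansur with the 93 measurement columns:

```python
import numpy as np

corr = ansur.corr()                       # 93x93 correlation matrix
np.fill_diagonal(corr.values, np.nan)     # ignore self-correlations

mean_abs = corr.abs().mean()              # average |r| with the other 92
print(mean_abs.idxmax(), mean_abs.max())  # weight, ~0.56
print(mean_abs.idxmin(), mean_abs.min())  # ear protrusion, ~0.03
```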

Diagram showing where ear protrusion is measured, from The Measurer’s Handbook.
Diagram showing where ear breadth is measured, from The Measurer’s Handbook.

So it seems like there is something special about ears.

Beyond the averages

We can get a better sense of what’s going on by looking at the distribution of correlations for each measurement, rather than just the averages. I’ll use my two favorite data visualization tools: CDFs, which make it easy to identify outliers, and spaghetti plots, which make it easy to spot oddities.

This figure shows the CDF of correlations for each of the 93 measurements.
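A figure like this takes only a few lines of Matplotlib. Here’s a sketch, reusing the correlation matrix from the previous computation:

```python
import matplotlib.pyplot as plt
import numpy as np

for name in corr.columns:
    rs = np.sort(corr[name].dropna().values)      # correlations with the other 92
    cdf = np.arange(1, len(rs) + 1) / len(rs)     # empirical CDF
    plt.plot(rs, cdf, color="gray", alpha=0.3)

plt.xlabel("correlation with the other measurements")
plt.ylabel("CDF")
plt.show()
```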

Here are the conclusions I draw from this figure:

Correlations are almost all positive

Almost all of the correlations are positive, as we’d expect. The exception is elbow rest height, which is negatively correlated with almost half of the other measurements. This oddity is explainable if we consider how the measurement is defined:

Diagram showing where elbow rest height is measured, from The Measurer’s Handbook.

All of the other measurements are based on the distance between two parts of the body; in contrast, elbow rest height is the distance from the elbow to the chair. It is negatively correlated with other measurements because it measures a negative space — in effect, it is the difference between two other measurements: torso length and upper arm length.

Many distributions are multimodal

Overall, most correlations are moderate, between 0.2 and 0.6, but there are a few clusters of higher correlations, between 0.6 and 1.0. Some of these high correlations are spurious because they represent multiple measurements of the same thing — for example when one measurement is the sum of another two, or nearly so.

A few distributions have low variance

The distributions I’ve colored and labeled have substantially lower variance than the others, which means that they are about equally correlated with all other measurements. Notably, all of them are located on the head. It seems that the dimensions of the head are weakly correlated with the dimensions of the rest of the body, and that correlation is remarkably consistent.

And finally…

Ear protrusion isn’t correlated with anything

Among the unusual measurements with low variance, ear protrusion is doubly unusual because its correlations are so consistently weak. The exceptions are ear length (0.22) and ear breadth (0.08) — which make sense — and posterior crotch length (0.11), shown here:

The others are small enough to be plausibly due to chance.

I have a conjecture about why: ear protrusion might depend on details of how the ear develops, which might depend on idiosyncratic details of the developmental environment, with little or no genetic contribution. In that sense, ear protrusion might be like fingerprints.

All of these patterns are the same for women

Here’s the same figure for the 1986 female ANSUR-II participants:

The results are qualitatively the same. The variance in correlation with ear protrusion is higher, but that is consistent with random chance and a smaller sample size.

In conclusion, when we look at correlations among human measurements, the head is different from the rest of the body, the ear is different from the head, and ear protrusion is uniquely uncorrelated with anything else.

Homophobia and Religion

Two weeks ago I published an excerpt from Probably Overthinking It where I presented data from the General Social Survey showing a steep decrease in the percentage of people in the U.S. who think homosexuality is wrong.

Last week I followed up to answer a question about data from Pew Research showing a possible reversal of that trend.

Now I want to answer a question posed (or at least implied) on Twitter, “I’d love to see all this, including other less-salient changes, through the lens of the decline of religion.” If religious people are more likely to disapprove of homosexuality, and if religious affiliation is declining, how much of the decrease in homophobia is due to the decrease in religion?

To answer that question, I’ll use the most recent GSS data, released in May 2023. Here’s the long-term trend again:

The most recent point is a small uptick, but it follows an unusually large drop and returns to the long-term trend.

Here are the same results divided by strength of religious affiliation.

As expected, people who say they are strongly religious are more likely to disapprove of homosexuality, but levels of disapprobation have declined in all three groups.

Now here are the fractions of people in each group:

The fraction of people with no religious affiliation has increased substantially. The fraction with “not very strong” affiliation has dropped sharply. The fraction with strong affiliation has dropped more modestly. The most recent data points are out of line with the long-term trends in all three groups. Discrepancies like this are common in the 2021 data, due in part to the pandemic and in part to changes in the way the survey was administered. So we should not take them too seriously.

Now, to see how much of the decline in homophobia is due to the decline of religion, we can compute two counterfactual models:

  • What if the fraction of people in each group was frozen in 1990 and carried forward to the present?
  • What if the fraction of people in each group was frozen in 2021 (using the long-term trend line) and carried back to the past?
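In code, the comparison might look like this. It’s a sketch, assuming hypothetical DataFrames rates and fractions, indexed by year, with one column per affiliation group:

```python
# `rates` holds the fraction of each group that disapproves;
# `fractions` holds the relative size of each group, per year.
observed = (rates * fractions).sum(axis=1)

# freeze group sizes at their 1990 values, carry them forward
frozen_1990 = (rates * fractions.loc[1990]).sum(axis=1)

# freeze group sizes at their 2021 values, carry them back
frozen_2021 = (rates * fractions.loc[2021]).sum(axis=1)
```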

The following figure shows the results:

The orange line shows the long-term trend (smoothed by LOWESS). The green line shows the first counterfactual, with the levels of religious affiliation unchanged since 1990. The purple line shows the second counterfactual, with affiliation from 2021 carried back to the past.

The difference between the counterfactuals indicates the part of the decline of homophobia that is due to the decline of religion, and it turns out to be small. A large majority of the change since 1990 is due to changes within the groups — only a small part is due to shifts between the groups.

This result surprised me. But I have checked it carefully and I think I have an explanation.

  • First, notice that the biggest shifts between the groups are (1) the decrease in “not so strong” and (2) the increase in “no religion”. The decrease in strong affiliation is relatively small.
  • Second, notice that the decrease in homophobia is steepest among those with “not so strong” affiliation.

Taken together, these results indicate that there was a net shift away from the group with the fastest decline in disapprobation and toward a group with a somewhat slower decline. As a result, the decrease in religious affiliation makes only a modest contribution to the decrease in homophobia. Most of the change, as I argued previously, is due to changed minds and generational replacement.

Backlash of Homophobia?

Last week I published an excerpt from Probably Overthinking It that showed a long-term decline in homophobic responses to questions in the General Social Survey, starting around 1990 and continuing in the most recent data.

Then I heard from a friend that Gallup published an article just a few weeks ago, with the title “Fewer in U.S. Say Same-Sex Relations Morally Acceptable”.

It features this graph, which shows that after a consistent increase from 2001 to 2022, the percentage of respondents who said same-sex relations are morally acceptable declined from 71% to 64% in 2023.

Looking at the whole time series, there are several reasons I don’t think this change reflects a long-term reversal in the population:

1) The variation from year to year is substantial. This year’s drop is bigger than most, but not an outlier. I conjecture that some of the variation from year to year is due to short-term period effects — like whatever people were reading about in the news in the interval before they were surveyed.

2) Even with the drop, the most recent point is not far below the long-term trend.

3) Last year was a record high, so a part of the drop is regression to the mean.

4) A large part of the trend is due to generational replacement, so unless young people die and are replaced by old people, that can’t go into reverse.

5) The other part of the trend is due to changed minds. While it’s possible for that to go into reverse, I start with a strong prior that it will not. In general, the moral circle expands.

Taken together, I would make a substantial bet that next year’s data point will be 3 or more percentage points higher, and I would not be surprised by 7-10.

The Data

Gallup makes it easy to download the data from the article, so I’ll use it to make my argument more quantitative. Here’s the time series.

The responses vary from year to year. Here is the distribution of the differences in percentage points.

Changes of 4 percentage points in either direction are not unusual. This year’s decrease of 7 points is bigger than what we’ve seen in the past, but not by much.

This figure shows the time series again, along with a smooth curve fit by local regression (LOWESS).

Since last year’s point was above the long term trend, we would have expected this year’s point to be lower by about 1 percentage point, just by returning to the trend line.

That leaves 6 points unaccounted for. To get a sense of how unexpected a drop that size is, we can compute the average and standard deviation of the distances from the points to the regression line. The mean is 1.7 points, and the standard deviation is 1.3.

So a two-sigma event is a 4.2 point distance, and a three-sigma event is a 5.4 point distance.
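Here’s a sketch of that calculation. The variables years and percents are assumed to come from the Gallup download, and the LOWESS fraction is a guess:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# `years` and `percents` are assumed arrays from the Gallup download
smooth = lowess(percents, years, frac=0.5, return_sorted=False)
distances = np.abs(percents - smooth)

mean, sd = distances.mean(), distances.std()
print(mean, sd)          # ~1.7 and ~1.3 in this analysis
print(mean + 2 * sd)     # the two-sigma distance, ~4.2
print(mean + 3 * sd)     # the three-sigma distance, ~5.4
```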

Of the 7-point drop:

  • 1 point is what we’d expect from a return to the long-term trend.
  • 4-5 points are within the range of random variation we’ve seen from year to year.

Which leaves 1-2 points that could be a genuine period effect.

But I think it’s likely to be short term. As the Gallup article notes, “From a longer-term perspective, Americans’ opinions of most of these issues have trended in a more liberal direction in the 20-plus years Gallup has asked about them.”

And there are two reasons I think they are likely to continue.

One reason is the expansion of the moral circle, an idea proposed by historian William Lecky in 1867. He wrote:

“At one time the benevolent affections embrace merely the family, soon the circle expanding includes first a class, then a nation, then a coalition of nations, then all humanity, and finally, its influence is felt in the dealings of man with the animal world.”

Lecky, A History of European Morals from Augustus to Charlemagne

Historically, the expansion of the moral circle seldom goes in reverse, and never for long.

The other reason is generational replacement. Older people are substantially more likely to think homosexuality is not moral. As they die, they are replaced by younger people who have no problem with it.

The only way for that trend to go in reverse is if a very large, long-term period effect somehow convinces Gen Z and their successors that they were mistaken and — actually — homosexuality is wrong.

I predict that next year’s data point will be substantially higher than this year’s.

Here’s the notebook where I created these plots.

Go Get the Data

My mantra when I was working on Probably Overthinking It was “Go Get the Data.” If I wanted to use a result from prior work, I would get the data whenever possible and make my own visualization. Of course, that’s more work than copying and pasting a figure, but there are a lot of benefits. One is that I can often get newer data. Another is that I can check the results.

I was reminded of these benefits a few months ago when I was reading a very good book that I won’t name because I’m about to point out an error, and I don’t want to be a jerk about it. The book includes the following figure:

Source: A book I’m not identifying because the point of this example is not to name and shame.

The top line got my attention because I have worked on several projects with data from the General Social Survey (GSS), and this particular question is one of the examples in my Political Alignment Case Study. Plotting the fraction of people who think homosexuality is wrong, I generated this figure:

Source: Downey, Political Alignment Case Study.

These results are not the same (even accounting for the inverted y-axis).

So I checked the source, which is Steven Pinker’s The Better Angels of Our Nature, specifically this figure:

Source: Page 452 of Pinker, The Better Angels of Our Nature

It looks like Pinker’s figure is consistent with mine. So what went wrong with the first figure?

Here’s what I think happened: they wanted to invert the y axis to show the percentage who think homosexuality is not wrong. But instead of flipping the y axis, they seem to have flipped the x axis, reversing the arrow of time! If you compare the other lines, it looks like they have all been reversed left-for-right.

Now, this is just one error in an otherwise excellent book. But it’s a reminder that this kind of thing can be avoided, if you Go Get the Data.

Abstracts and keywords

As Probably Overthinking It approaches the finish line, there are just a few more tasks: I am working on the index and — as I have recently learned — I also have to write a 200-word abstract, a list of keywords for each chapter, and a 250-word abstract for the whole book. I have come to two conclusions:

  • There is no good software for making an index from a PDF document.
  • ChatGPT is pretty good at writing chapter abstracts.

By converting each chapter to a simple text file (no figures and tables) I was able to get most of them under the ~4000 token limit. Then I asked, “write a one-paragraph abstract for this chapter with no more than 180 words. write about the chapter in the third person”.

Here’s the unedited abstract for the last chapter:

The chapter titled “Chasing the Overton Window” explores the phenomenon of people considering themselves more conservative as they age, despite their views actually becoming more liberal over time. The chapter begins by examining the trends of self-identification and political views across different age groups. It then delves into the concept of the Overton window, which represents the range of acceptable public opinion on a given issue. The chapter explains how the center of public opinion has shifted toward liberalism over the past 50 years and how the relative terms “liberal” and “conservative” depend on the perceived center. Furthermore, it demonstrates that both liberals and conservatives have become more liberal over time. The chapter concludes by proposing an explanation for why people think they are becoming more conservative, even though their views are becoming more liberal, which involves the interplay of generational effects, the shifting center of public opinion, and the connotations associated with political labels.

ChatGPT June 10, 2023

It’s not great prose, but I think I can revise it into something acceptable without much effort.

Three of the chapters exceeded the token limit, so I asked for a summary of the first half, then a summary of the second half, then I asked, “Combine the following two abstracts into a single paragraph with no more than 180 words”. Here’s the combined abstract of Chapter 8:

This chapter delves into the distribution of natural and human-caused disasters, investigating their sizes, costs, prevalence, and characteristics within long-tailed distributions. Understanding the probabilities of major disasters is crucial for effective preparedness and response, despite the challenge of comprehending rare and large-scale events. By analyzing a dataset of 125 disasters, including hurricanes, earthquakes, floods, nuclear disasters, and terror attacks, the author demonstrates a pattern where doubling the rank of a disaster corresponds to halving its costs when plotted on a logarithmic scale. While exploring the limitations of the lognormal distribution in predicting the probabilities of large disasters, the author introduces Student’s t-distribution as a more suitable model for estimating the probabilities of extreme events. The chapter also examines lunar craters and their abundance and sizes, revealing the prevalence of long-tailed distributions and their connection to asteroid sizes. Additionally, it explores the occurrence of long-tailed distributions in stock market crashes and introduces the concept of black swans to emphasize their relevance in understanding rare and impactful events. Concluding the chapter, it discusses the challenges associated with predicting and comprehending rare, large events in a long-tailed world, with a specific focus on earthquake magnitudes and a comparison of prediction models.

ChatGPT June 10, 2023

Again, I think that’s editing distance away from acceptable — and a near perfect 198 words.

It does pretty well with keywords, too:

  1. Disasters
  2. Long-tailed distributions
  3. Probabilities
  4. Preparedness
  5. Response
  6. Natural events
  7. Human-made incidents
  8. Lognormal distribution
  9. Student’s t-distribution
  10. Black swans

To generate the abstract for the whole book, I pasted the chapter abstracts and keywords into a single document and asked, “the following are chapter abstracts for a book called ‘Probably Overthinking It’. write a one-paragraph abstract for the whole book, more than 200 words, but not more than 250 words. write about the book in the third person. not too much like an ad”

Here’s the unedited result:

“Probably Overthinking It” is a thought-provoking book that delves into various statistical and cognitive paradoxes, challenging common intuitions and shedding light on the complexities of human behavior and measurement. Through a series of captivating chapters, the author explores topics such as normalcy and weirdness, inspection paradoxes, family size and population growth, lognormal distributions, expected remaining time, Berkson’s paradox, low birthweight paradox, long-tailed distributions, fairness and fallacy in algorithmic decision-making, Simpson’s paradox, the moral circle, the evolution of societal values, and the shifting perceptions of political ideologies. By analyzing real-world data and employing statistical models, the book uncovers the nuances and contradictions that emerge when multiple factors interact, leading to counterintuitive outcomes. Readers will gain a deeper understanding of statistical biases, the limitations of measurement, and the challenges inherent in making accurate predictions and drawing meaningful conclusions. “Probably Overthinking It” encourages readers to question their assumptions, approach complex problems with skepticism, and develop a more nuanced and informed perspective on the intricacies of human behavior and the world around us.

ChatGPT June 10, 2023

Surprisingly, the book abstract is only 169 words, despite my prompt. In general, the word counts are not precise, and more often over than under (which is why I asked for 180, not 200).

So, I have some editing to do, but I’d say ChatGPT saved me at least a few hours — and spared me from exactly the kind of writing I dislike the most.