Taming Black Swans

At SciPy 2023 I presented a talk called “Taming Black Swans: Long-tailed distributions in the natural and engineered world“. Here’s the abstract:

Long-tailed distributions are common in natural and engineered systems; as a result, we encounter extreme values more often than we would expect from a short-tailed distribution. If we are not prepared for these “black swans”, they can be disastrous.

But we have statistical tools for identifying long-tailed distributions, estimating their parameters, and making better predictions about rare events.

In this talk, I present evidence of long-tailed distributions in a variety of datasets — including earthquakes, asteroids, and stock market crashes — discuss statistical methods for dealing with them, and show implementations using scientific Python libraries.

The video from the talk is on YouTube now:

I didn’t choose the thumbnail, but I like it.

Here are the slides, which have links to the resources I mentioned.

Don’t tell anyone, but this talk is part of my stealth book tour!

  • It started in 2019, when I presented a talk at PyData NYC based on Chapter 2: Relay Races and Revolving Doors.
  • In 2022, I presented another talk at PyData NYC, based on Chapter 12: Chasing the Overton Window.
  • In May I presented a talk at ODSC East based on Chapter 7: Causation, Collision, and Confusion.
  • And this talk is based on Chapter 8: The Long Tail of Disaster.

If things go according to plan, I’ll present Chapter 1 at a book event at the Needham Public Library on December 7.

More chapters coming soon!

How Correlated Are You?

This post is an offshoot from Chapter 1 of Probably Overthinking It, which is available for pre-order now!

Suppose you measure the arm and leg lengths of 4082 people. You would expect those measurements to be correlated, and you would be right. In the ANSUR-II dataset, among male members of the armed forces, this correlation is about 0.75 — people with long arms tend to have long legs.

And how about arm length and chest circumference? You might expect those measurements to be correlated too, but not as strongly as arm and leg length, and you would be right again. The correlation is about 0.47.

So some pairs of measurements are more correlated than others. There are a total of 93 measurements in the ANSUR-II dataset, which means there are 93 * 92 / 2 = 4278 distinct correlations between pairs of measurements. So here’s a question that caught my attention: Are there measurements that are uncorrelated (or only weakly correlated) with the others?

To answer that, I computed the average magnitude (positive or negative) of the correlation between each measurement and the other 92. The most correlated measurement is weight, with an average of 0.56. So if you have to choose one measurement, weight seems to provide the most information about all of the others.
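
Here’s a minimal sketch of that computation, assuming the measurements are loaded into a pandas DataFrame called ansur (the file name and column names are hypothetical):

```python
import pandas as pd

# Hypothetical file: one row per participant, one numeric column
# per body measurement (93 columns in ANSUR-II)
ansur = pd.read_csv("ansur_male.csv")

# Magnitude of the correlation between each pair of measurements
corr = ansur.corr().abs()

# Average correlation of each measurement with the other 92,
# excluding the diagonal (each measurement's correlation of 1.0 with itself)
n = len(corr)
avg_corr = (corr.sum() - 1.0) / (n - 1)

print(avg_corr.sort_values(ascending=False).head(3))  # most correlated
print(avg_corr.sort_values().head(3))                 # least correlated
```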

The least correlated measurement turns out to be ear protrusion — its average correlation with the other measurements is only 0.03, which is not just small, it is substantially smaller than the next smallest, which is ear breadth, with an average correlation of 0.13.

Diagram showing where ear protrusion is measured, from The Measurer’s Handbook.
Diagram showing where ear breadth is measured, from The Measurer’s Handbook.

So it seems like there is something special about ears.

Beyond the averages

We can get a better sense of what’s going on by looking at the distribution of correlations for each measurement, rather than just the averages. I’ll use my two favorite data visualization tools: CDFs, which make it easy to identify outliers, and spaghetti plots, which make it easy to spot oddities.

This figure shows the CDF of correlations for each of the 93 measurements.
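
Here’s a sketch of how to make a figure like this, continuing with the hypothetical ansur DataFrame from above:

```python
import numpy as np
import matplotlib.pyplot as plt

# Signed correlations this time, so negative values
# (like elbow rest height) are visible
corr = ansur.corr()

# One empirical CDF per measurement: sort its 92 correlations with
# the other measurements and step through cumulative probabilities
for name in corr.columns:
    values = corr[name].drop(name).sort_values()
    cdf = np.arange(1, len(values) + 1) / len(values)
    plt.plot(values, cdf, color="gray", alpha=0.3)

plt.xlabel("Correlation with other measurements")
plt.ylabel("CDF")
plt.show()
```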

Here are the conclusions I draw from this figure:

Correlations are almost all positive

Almost all of the correlations are positive, as we’d expect. The exception is elbow rest height, which is negatively correlated with almost half of the other measurements. This oddity is explainable if we consider how the measurement is defined:

Diagram showing where elbow rest height is measured, from The Measurer’s Handbook.

All of the other measurements are based on the distance between two parts of the body; in contrast, elbow rest height is the distance from the elbow to the chair. It is negatively correlated with other measurements because it measures a negative space — in effect, it is the difference between two other measurements: torso length and upper arm length.

Many distributions are multimodal

Overall, most correlations are moderate, between 0.2 and 0.6, but there are a few clusters of higher correlations, between 0.6 and 1.0. Some of these high correlations are spurious because they represent multiple measurements of the same thing — for example when one measurement is the sum of another two, or nearly so.

A few distributions have low variance

The distributions I’ve colored and labeled have substantially lower variance than the others, which means that they are about equally correlated with all other measurements. Notably, all of them are located on the head. It seems that the dimensions of the head are weakly correlated with the dimensions of the rest of the body, and that correlation is remarkably consistent.

And finally…

Ear protrusion isn’t correlated with anything

Among the unusual measurements with low variance, ear protrusion is doubly unusual because its correlations are so consistently weak. The exceptions are ear length (0.22) and ear breadth (0.08) — which make sense — and posterior crotch length (0.11), shown here:

The others are small enough to be plausibly due to chance.

I have a conjecture about why: ear protrusion might depend on details of how the ear develops, which might depend on idiosyncratic details of the developmental environment, with little or no genetic contribution. In that sense, ear protrusion might be like fingerprints.

All of these patterns are the same for women

Here’s the same figure for the 1,986 female ANSUR-II participants:

The results are qualitatively the same. The variance in correlation with ear protrusion is higher, but that is consistent with random chance and a smaller sample size.

In conclusion, when we look at correlations among human measurements, the head is different from the rest of the body, the ear is different from the head, and ear protrusion is uniquely uncorrelated with anything else.

Homophobia and Religion

Two weeks ago I published an excerpt from Probably Overthinking It where I presented data from the General Social Survey showing a steep decrease in the percentage of people in the U.S. who think homosexuality is wrong.

Last week I followed up to answer a question about data from Gallup showing a possible reversal of that trend.

Now I want to answer a question posed (or at least implied) on Twitter, “I’d love to see all this, including other less-salient changes, through the lens of the decline of religion.” If religious people are more likely to disapprove of homosexuality, and if religious affiliation is declining, how much of the decrease in homophobia is due to the decrease in religion?

To answer that question, I’ll use the most recent GSS data, released in May 2023. Here’s the long-term trend again:

The most recent point is a small uptick, but it follows an unusually large drop and returns to the long-term trend.

Here are the same results divided by strength of religious affiliation.

As expected, people who say they are strongly religious are more likely to disapprove of homosexuality, but levels of disapprobation have declined in all three groups.

Now here are the fractions of people in each group:

The fraction of people with no religious affiliation has increased substantially. The fraction with “not very strong” affiliation has dropped sharply. The fraction with strong affiliation has dropped more modestly. The most recent data points are out of line with the long-term trends in all three groups. Discrepancies like this are common in the 2021 data, due in part to the pandemic and in part to changes in the way the survey was administered. So we should not take them too seriously.

Now, to see how much of the decline in homophobia is due to the decline of religion, we can compute two counterfactual models, sketched in code below:

  • What if the fraction of people in each group was frozen in 1990 and carried forward to the present?
  • What if the fraction of people in each group was frozen in 2021 (using the long-term trend line) and carried back to the past?
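
Here’s a minimal sketch of both counterfactuals, assuming a DataFrame gss with hypothetical columns year, group (strength of affiliation), and disapprove (1 if the respondent says homosexuality is wrong):

```python
import pandas as pd

# Fraction of respondents in each group, by year
shares = pd.crosstab(gss["year"], gss["group"], normalize="index")

# Rate of disapproval within each group, by year
rates = gss.pivot_table(index="year", columns="group", values="disapprove")

# Counterfactual 1: freeze the group shares at their 1990 values
# and carry them forward, letting within-group rates change
frozen_1990 = (rates * shares.loc[1990]).sum(axis=1)

# Counterfactual 2: freeze the group shares at their 2021 values
# and carry them back to the past
frozen_2021 = (rates * shares.loc[2021]).sum(axis=1)
```

In the actual figure, the group-level trends are smoothed with LOWESS before the weighted average is computed; this sketch uses the raw annual values.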

The following figure shows the results:

The orange line shows the long-term trend (smoothed by LOWESS). The green line shows the first counterfactual, with the levels of religious affiliation unchanged since 1990. The purple line shows the second counterfactual, with affiliation from 2021 carried back to the past.

The difference between the counterfactuals indicates the part of the decline of homophobia that is due to the decline of religion, and it turns out to be small. A large majority of the change since 1990 is due to changes within the groups — only a small part is due to shifts between the groups.

This result surprised me. But I have checked it carefully and I think I have an explanation.

  • First, notice that the biggest shifts between the groups are (1) the decrease in “not very strong” affiliation and (2) the increase in “no religion”. The decrease in strong affiliation is relatively small.
  • Second, notice that the decrease in homophobia is steepest among those with “not very strong” affiliation.

Taken together, these results indicate that there was a net shift away from the group with the fastest decline in disapprobation and toward a group with a somewhat slower decline. As a result, the decrease in religious affiliation makes only a modest contribution to the decrease in homophobia. Most of the change, as I argued previously, is due to changed minds and generational replacement.

Backlash of Homophobia?

Last week I published an excerpt from Probably Overthinking It that showed a long-term decline in homophobic responses to questions in the General Social Survey, starting around 1990 and continuing in the most recent data.

Then I heard from a friend that Gallup published an article just a few weeks ago, with the title “Fewer in U.S. Say Same-Sex Relations Morally Acceptable”.

It features this graph, which shows that after a consistent increase from 2001 to 2022, the percentage of respondents who said same-sex relations are morally acceptable declined from 71% to 64% in 2023.

Looking at the whole time series, there are several reasons I don’t think this change reflects a long-term reversal in the population:

1) The variation from year to year is substantial. This year’s drop is bigger than most, but not an outlier. I conjecture that some of the variation from year to year is due to short-term period effects — like whatever people were reading about in the news in the interval before they were surveyed.

2) Even with the drop, the most recent point is not far below the long-term trend.

3) Last year was a record high, so a part of the drop is regression to the mean.

4) A large part of the trend is due to generational replacement, so unless young people die and are replaced by old people, that can’t go into reverse.

5) The other part of the trend is due to changed minds. While it’s possible for that to go into reverse, I start with a strong prior that it will not. In general, the moral circle expands.

Taken together, I would make a substantial bet that next year’s data point will be 3 or more percentage points higher, and I would not be surprised by 7-10.

The Data

Gallup makes it easy to download the data from the article, so I’ll use it to make my argument more quantitative. Here’s the time series.

The responses vary from year to year. Here is the distribution of the differences in percentage points.

Changes of 4 percentage points in either direction are not unusual. This year’s decrease of 7 points is bigger than what we’ve seen in the past, but not by much.

This figure shows the time series again, along with a smooth curve fit by local regression (LOWESS).

Since last year’s point was above the long-term trend, we would have expected this year’s point to be lower by about 1 percentage point, just by returning to the trend line.

That leaves 6 points unaccounted for. To get a sense of how unexpected a drop that size is, we can compute the average and standard deviation of the distances from the points to the regression line. The mean is 1.7 points, and the standard deviation is 1.3.

So a two-sigma event is a 4.2 point distance, and a three-sigma event is a 5.4 point distance.
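
Here’s a sketch of those computations, assuming the downloaded data is in a DataFrame gallup with hypothetical columns year and percent, sorted by year (the LOWESS frac parameter is a guess):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Year-to-year changes in percentage points
diffs = gallup["percent"].diff()
print(diffs.describe())

# Smooth curve fit by local regression (LOWESS);
# returns an array of (year, fitted value) pairs sorted by year
smooth = lowess(gallup["percent"], gallup["year"], frac=0.5)
fitted = smooth[:, 1]

# Distances from the points to the regression line
distances = np.abs(gallup["percent"].to_numpy() - fitted)
print(distances.mean(), distances.std())   # about 1.7 and 1.3
```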

Of the 7-point drop:

  • 1 point is what we’d expect from a return to the long-term trend.
  • 4-5 points are within the range of random variation we’ve seen from year to year.

Which leaves 1-2 points that could be a genuine period effect.

But I think it’s likely to be short term. As the Gallup article notes, “From a longer-term perspective, Americans’ opinions of most of these issues have trended in a more liberal direction in the 20-plus years Gallup has asked about them.”

And there are two reasons I think they are likely to continue.

One reason is the expansion of the moral circle, an idea proposed by historian William Lecky in 1869. He wrote:

“At one time the benevolent affections embrace merely the family, soon the circle expanding includes first a class, then a nation, then a coalition of nations, then all humanity, and finally, its influence is felt in the dealings of man with the animal world.”

Lecky, A History of European Morals from Augustus to Charlemagne

Historically, the expansion of the moral circle seldom goes in reverse, and never for long.

The other reason is generational replacement. Older people are substantially more likely to think homosexuality is not moral. As they die, they are replaced by younger people who have no problem with it.

The only way for that trend to go in reverse is if a very large, long-term period effect somehow convinces Gen Z and their successors that they were mistaken and — actually — homosexuality is wrong.

I predict that next year’s data point will be substantially higher than this year’s.

Here’s the notebook where I created these plots.

Go Get the Data

My mantra when I was working on Probably Overthinking It was “Go Get the Data.” If I wanted to use a result from prior work, I would get the data whenever possible and make my own visualization. Of course, that’s more work than copying and pasting a figure, but there are a lot of benefits. One is that I can often get newer data. Another is that I can check the results.

I was reminded of these benefits a few months ago when I was reading a very good book that I won’t name because I’m about to point out an error, and I don’t want to be a jerk about it. The book includes the following figure:

Source: A book I’m not identifying because the point of this example is not to name and shame.

The top line got my attention because I have worked on several projects with data from the General Social Survey (GSS), and this particular question is one of the examples in my Political Alignment Case Study. Plotting the fraction of people who think homosexuality is wrong, I generated this figure:

Source: Downey, Political Alignment Case Study.

These results are not the same (even accounting for the inverted y-axis).

So I checked the source, which is Steven Pinker’s The Better Angels of Our Nature, specifically this figure:

Source: Page 452 of Pinker, The Better Angels of Our Nature

It looks like Pinker’s figure is consistent with mine. So what went wrong with the first figure?

Here’s what I think happened: they wanted to invert the y-axis to show the percentage who think homosexuality is not wrong. But instead of flipping the y-axis, they seem to have flipped the x-axis, reversing the arrow of time! If you compare the other lines, it looks like they have all been reversed left-for-right.

Now, this is just one error in an otherwise excellent book. But it’s a reminder that this kind of thing can be avoided, if you Go Get the Data.

Abstracts and keywords

As Probably Overthinking It approaches the finish line, there are just a few more tasks: I am working on the index and — as I have recently learned — I also have to write a 200-word abstract, a list of keywords for each chapter, and a 250-word abstract for the whole book. I have come to two conclusions:

  • There is no good software for making an index from a PDF document.
  • ChatGPT is pretty good at writing chapter abstracts.

By converting each chapter to a simple text file (no figures or tables) I was able to get most of them under the ~4000 token limit. Then I asked, “write a one-paragraph abstract for this chapter with no more than 180 words. write about the chapter in the third person”.
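
If you want to check a chapter against the limit before pasting it in, OpenAI’s tiktoken library counts tokens; here’s a sketch (the encoding name matches the ChatGPT models of that era, and the file name is hypothetical):

```python
import tiktoken

# cl100k_base is the encoding used by the ChatGPT-era models
enc = tiktoken.get_encoding("cl100k_base")

with open("chapter12.txt") as f:
    text = f.read()

print(len(enc.encode(text)))   # compare against the ~4000 token limit
```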

Here’s the unedited abstract for the last chapter:

The chapter titled “Chasing the Overton Window” explores the phenomenon of people considering themselves more conservative as they age, despite their views actually becoming more liberal over time. The chapter begins by examining the trends of self-identification and political views across different age groups. It then delves into the concept of the Overton window, which represents the range of acceptable public opinion on a given issue. The chapter explains how the center of public opinion has shifted toward liberalism over the past 50 years and how the relative terms “liberal” and “conservative” depend on the perceived center. Furthermore, it demonstrates that both liberals and conservatives have become more liberal over time. The chapter concludes by proposing an explanation for why people think they are becoming more conservative, even though their views are becoming more liberal, which involves the interplay of generational effects, the shifting center of public opinion, and the connotations associated with political labels.

ChatGPT June 10, 2023

It’s not great prose, but I think I can revise it into something acceptable without much effort.

Three of the chapters exceeded the token limit, so I asked for a summary of the first half, then a summary of the second half, then I asked, “Combine the following two abstracts into a single paragraph with no more than 180 words”. Here’s the combined abstract of Chapter 8:

This chapter delves into the distribution of natural and human-caused disasters, investigating their sizes, costs, prevalence, and characteristics within long-tailed distributions. Understanding the probabilities of major disasters is crucial for effective preparedness and response, despite the challenge of comprehending rare and large-scale events. By analyzing a dataset of 125 disasters, including hurricanes, earthquakes, floods, nuclear disasters, and terror attacks, the author demonstrates a pattern where doubling the rank of a disaster corresponds to halving its costs when plotted on a logarithmic scale. While exploring the limitations of the lognormal distribution in predicting the probabilities of large disasters, the author introduces Student’s t-distribution as a more suitable model for estimating the probabilities of extreme events. The chapter also examines lunar craters and their abundance and sizes, revealing the prevalence of long-tailed distributions and their connection to asteroid sizes. Additionally, it explores the occurrence of long-tailed distributions in stock market crashes and introduces the concept of black swans to emphasize their relevance in understanding rare and impactful events. Concluding the chapter, it discusses the challenges associated with predicting and comprehending rare, large events in a long-tailed world, with a specific focus on earthquake magnitudes and a comparison of prediction models.

ChatGPT June 10, 2023

Again, I think that’s editing distance away from acceptable — and a near-perfect 198 words.

It does pretty well with keywords, too:

  1. Disasters
  2. Long-tailed distributions
  3. Probabilities
  4. Preparedness
  5. Response
  6. Natural events
  7. Human-made incidents
  8. Lognormal distribution
  9. Student’s t-distribution
  10. Black swans

To generate the abstract for the whole book, I pasted the chapter abstracts and keywords into a single document and asked, “the following are chapter abstracts for a book called ‘Probably Overthinking It’. write a one-paragraph abstract for the whole book, more than 200 words, but not more than 250 words. write about the book in the third person. not too much like an ad”

Here’s the unedited result:

“Probably Overthinking It” is a thought-provoking book that delves into various statistical and cognitive paradoxes, challenging common intuitions and shedding light on the complexities of human behavior and measurement. Through a series of captivating chapters, the author explores topics such as normalcy and weirdness, inspection paradoxes, family size and population growth, lognormal distributions, expected remaining time, Berkson’s paradox, low birthweight paradox, long-tailed distributions, fairness and fallacy in algorithmic decision-making, Simpson’s paradox, the moral circle, the evolution of societal values, and the shifting perceptions of political ideologies. By analyzing real-world data and employing statistical models, the book uncovers the nuances and contradictions that emerge when multiple factors interact, leading to counterintuitive outcomes. Readers will gain a deeper understanding of statistical biases, the limitations of measurement, and the challenges inherent in making accurate predictions and drawing meaningful conclusions. “Probably Overthinking It” encourages readers to question their assumptions, approach complex problems with skepticism, and develop a more nuanced and informed perspective on the intricacies of human behavior and the world around us.

ChatGPT June 10, 2023

Surprisingly, the book abstract is only 169 words, despite my prompt. In general, the word counts are not precise, and more often over than under (which is why I asked for 180, not 200).

So, I have some editing to do, but I’d say ChatGPT saved me at least a few hours — and spared me from exactly the kind of writing I dislike the most.

Absence of Evidence

If anyone tells you that absence of evidence is not evidence of absence, you have my permission to slap them. Of course, my permission will not prevent you from getting slapped back or charged with assault. Regardless, absence of evidence is very often evidence of absence, and sometimes strong evidence.

To make this claim precise, I propose we use the Bayesian definition of evidence:

If an observation, D, would be more likely under a hypothesis, H, than under the alternative hypothesis, then D is evidence in favor of H. Conversely, if D is less likely under H than under the alternative, D is evidence against H.

As an example, suppose H is the hypothesis that unicorns exist. Since people have explored most of the world’s land mass, I’d say there’s a 99% chance we would have found unicorns if they existed.

So if D is the fact that we have not found unicorns, the probability of D is only 1% if unicorns exist, and 100% if they don’t. Therefore, D is evidence that unicorns don’t exist, with a likelihood ratio of 100:1.
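
Here’s that arithmetic as a sketch:

```python
# Likelihood of D (not finding unicorns) under each hypothesis
p_d_given_exist = 0.01   # 99% chance we'd have found them if they existed
p_d_given_not = 1.00     # certain to find nothing if they don't exist

# Likelihood ratio in favor of "unicorns don't exist"
print(p_d_given_not / p_d_given_exist)   # 100.0

# Bayesian update: even generous 1:1 prior odds that unicorns exist
# become 100:1 against after the observation
prior_odds_exist = 1.0
posterior_odds_exist = prior_odds_exist * p_d_given_exist / p_d_given_not
print(posterior_odds_exist)              # 0.01, that is, 100:1 against
```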

Let’s consider a more realistic example. In a recent article, The Economist discusses the hypothesis that social media use is a major cause of recent increases in rates of self-harm and suicide among teenage girls. To test this hypothesis, they propose an experiment:

Because smartphones were adopted at different rates in different countries, the timing of any increases they caused in suicides or self-harm should vary on this basis.

But their experiment came up empty:

[W]e could not find any statistical link between changes over time in the prevalence of either mobile-internet subscriptions or self-reported social-media use in a country, and changes over time in that country’s suicide or self-harm hospitalisation rates, for either boys or girls.

They conclude:

But if social media were the sole or main cause of rising levels of suicide or self-harm—rather than just one part of a complex problem—country-level data would probably show signs of their effect.

Since it did not, this negative result is evidence against the hypothesis. It may not be strong evidence; there are other reasons the experiment might have failed. And in light of other evidence, it is still plausible that social media is harmful to mental health.

Nevertheless, in this example, as in any reasonable experiment, absence of evidence is evidence of absence.

[In this 2015 article, I made a similar claim that we should stop saying correlation does not imply causation.]

Causation, Collision, and Confusion

Today I presented a talk about Berkson’s paradox at ODSC East 2023. If you missed it, the slides are here. When the video is available, I’ll post it here.

Abstract: Collision bias is the most treacherous error in statistics: it can be subtle, it is easy to induce it by accident, and the error it causes can be bigger than the effect you are trying to measure. It is the cause of Berkson’s paradox, the low birthweight paradox, and the obesity paradox, among other famous historical errors. And it might be the cause of your next blunder! Although it is best known in epidemiology, it appears in other fields of science, engineering, and business.

In this talk, I will present examples of collision bias and show how it can be caused by a biased sampling process or induced by inappropriate statistical controls; and I will introduce causal diagrams as a tool for representing causal hypotheses and diagnosing collision bias.

So, don’t tell anyone, but this talk is part of my stealth book tour!

  • It started in 2019, when I presented a talk at PyData NYC based on Chapter 2: Relay Races and Revolving Doors.
  • In 2022, I presented another talk at PyData NYC, based on Chapter 12: Chasing the Overton Window.
  • Today’s talk is based on Chapter 7: Causation, Collision, and Confusion.
  • In July I’m presenting a talk at SciPy based on Chapter 8: The Long Tail of Disaster.

And, if things go according to plan, I’ll present Chapter 1 at a book event at the Needham Public Library on December 7.

More chapters coming soon!

The Overton Paradox

Chapter 12 of Probably Overthinking It is about three trends that form what I’m calling the Overton Paradox:

  • Older people are more likely to say they are conservative.
  • And older people hold more conservative views.
  • But people don’t become more conservative as they get older — on average they get a little more liberal.

To demonstrate these trends, I used data from the General Social Survey.

Older people are more likely to say they are conservative:

And older people hold more conservative views:

But if we split people up by decade of birth, most cohorts don’t become more conservative as they get older; on average they become a little more liberal.
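
Here’s a sketch of that split, assuming a DataFrame gss with hypothetical columns year, cohort (year of birth), and conservatism (number of conservative responses):

```python
import matplotlib.pyplot as plt

# Group respondents by decade of birth
gss["decade"] = (gss["cohort"] // 10) * 10

# Average conservatism by survey year, one column per cohort
by_cohort = gss.pivot_table(index="year", columns="decade",
                            values="conservatism")

# One line per cohort; most slope gently downward (more liberal)
by_cohort.plot(alpha=0.6)
plt.xlabel("Year of survey")
plt.ylabel("Average conservative responses")
plt.show()
```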

So if people become more liberal as they age, why are they more likely to say they are conservative?

I think the reason is that the perceived center of mass changes over time. Here’s how the average number of conservative responses has changed over the ~50 years of the GSS:

And it’s not just liberals going off the rails — all three groups have changed:

Let’s compare these changes to the average for people born in the 1940s:

In 1970, when they were in their 20s, this cohort was about as liberal as the average liberal. In 1990, when they were in their 40s, they were indistinguishable from the average moderate. In 2020, when they were in their 70s, they found themselves substantially right of center.

On average, they are more liberal now than they were in 1970, but the world has moved faster. They are more likely to say they are conservative because, relative to the center of mass, they are.