Saturday, May 20, 2017

The NIMBY challenge


The other day I wrote a Bloomberg View post about how California is waking up to the problem of NIMBYism - development restrictions that limit economic activity and make cities less affordable. Ground zero for this struggle is the Bay Area, and San Francisco in particular. The pro-development activists known as the YIMBYs have been at the forefront of the fight. Economists have also been weighing in. Chang-Tai Hsieh and Enrico Moretti, and separately Andrii Parkhomenko, have come out with new theory papers showing negative impacts of housing development restrictions. Ed Glaeser and Joe Gyourko have a paper reaching similar conclusions after looking at data, theory, and institutional and legal details. And Richard Florida has a whole new book about the problem.

But the YIMBYs have faced a great deal of intellectual pushback from certain folks in the Bay Area. Even as I was writing my post, physicist Phil Price was writing an impassioned attack on YIMBYism over at Andrew Gelman's blog. He followed it up with a second post three days later, after getting a great deal of pushback in the comments. The commenters have made most of the points I would make in rebuttal to Price, but I think his posts are worth a close look, because they reveal a lot about the way NIMBYs think about the housing market. In order to understand and meet the NIMBY challenge, pro-housing activists should familiarize themselves with the arguments Price makes.

The first thing to note is that NIMBYs think that a house's price is defined when it's built - almost as if the price is built into the walls. Price writes:
[N]ew high-rise apartments are going in that have hundreds of apartments each, typically with a rent of $4000 – $8000 per month. If you let a developer build “market rate” apartments, that’s what they’ll build.
These numbers are a bit exaggerated, but that's not the point. What Price seems to ignore is the impact of construction on all the non-new units. Here's an example. I live in SF, in a market-rate apartment (though not one quite *that* pricey). But when my apartment was built, it didn't have the high rent it now has. It's a small, older apartment, once occupied by working-class families. The rent changed over time, turning an affordable home into a luxury home for a member of the upper middle class. In fact, when I moved into this apartment, I increased demand in this neighborhood, putting increased pressure on any working-class people who still happen to live here. What if, when I moved to SF, instead of moving into this apartment, I had moved into a nice fancy new "market-rate" unit in one of those towers that Price decries? I would not have increased demand in this neighborhood, and would not have put upward pressure on the rents of the families living nearby. 

Later, Price repeats the fixed-price idea when he writes:
Sorry, no. If the ‘market rate’ for newly developed apartments is substantially higher than the median rent of existing apartments, then building more market-rate apartments will make median rents go up, not down.
That sounds like simple math. And if the price of an apartment were somehow built into its walls and floors, it would just be simple math. In fact, though, it's wrong. Here's why. Suppose there are 2000 people in a city, living in 2000 apartments. One quarter of the people are rich, and rent apartments for $4000 apiece. Three quarters are poor, and rent apartments for $1000 apiece. The median rent is therefore $1000. Now build 400 fancy new luxury apartments that rent for $5000 each. And suppose no new people move to the city. Four hundred of the 500 rich people move into the fancy new $5000 places, leaving their old $4000 places vacant. Those previously-$4000 apartments fall in price to $2000, and 400 poor people move into them, leaving 400 of their old apartments vacant. These are used as second apartments, storage, or whatever. The rent of the 1500 apartments that used to all cost $1000 falls to $900 because of this drop in demand for low-end apartments. The median rent of the city's 2400 apartments is now $900, down from $1000 before.
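The arithmetic in this toy example is easy to check in a few lines of Python. The numbers below are the hypothetical ones from the example (with 400 rich households filling the 400 new units), not real data:

```python
# Toy check of the worked example: 2000 apartments, then 400 new
# luxury units, with the moving chain described in the text.
from statistics import median

# Before: 500 rich households at $4000, 1500 poor households at $1000.
before = [4000] * 500 + [1000] * 1500

# After: 400 new $5000 units absorb 400 rich households; their old
# units fall to $2000 and absorb 400 poor households; the remaining
# cheap units (400 of them now vacant) fall to $900 as demand drops.
after = [5000] * 400 + [4000] * 100 + [2000] * 400 + [900] * 1500

print(median(before), median(after))  # median falls from 1000 to 900
```

The median falls even though every new unit rents for more than the old median, because the new units set off a chain of moves that frees up cheaper stock.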

So the "simple math" is not necessarily correct.

NIMBYs do seem to recognize this on some level. So they intuitively turn to a phenomenon called "induced demand" (though they may not realize it's called that). The theory is that if you build more housing in SF, you encourage people to move into SF, preventing prices from going down, or even pushing them up. Price espouses a version of this theory when he writes:
Tens of thousands of high-income people who would like to live in San Francisco are living in Oakland and Fremont and Berkeley and Orinda because of lower rents in those places. As market rate housing is built in San Francisco, those people move into it...There is a cascade: some people move from Berkeley and Oakland to San Francisco, which allows replacements to move from Richmond and El Cerrito into Berkeley and Oakland, and so on. Ultimately, rents in San Francisco go up, and rents in some outlying communities go down. Yes, the increased supply of housing lead to decreased housing prices on average but they’ve gone up, not down, in San Francisco itself.
It's perfectly possible in theory that this happens. In fact, this is even possible in the simplest, Econ 101 type supply-and-demand theory - it's just the case where demand is infinitely price-elastic.

Is this realistic, though? Price cites Manhattan as a counterexample - a very dense place where rents are still high. I'm not sure this counterexample applies - I see a lot more poor Black people living in Manhattan than in SF, for example. But anyway, a counter-counterexample is Tokyo, where construction seems to have been successful in keeping rents low.

The question is what would happen to SF. As I wrote in a Bloomberg View post last December, there's OK evidence that more housing would ease the city's affordability crisis:
In 1987, economists Lawrence Katz and Kenneth Rosen looked at San Francisco communities that put development restrictions in place. They found that housing prices were higher in these places than in communities that let developers build... 
[Recently, blogger Eric] Fischer collected more than 30 years of data on San Francisco rents. He modeled them as a function of supply -- based on the number of available housing units -- and demand, measured by total employment and average wages. His model fit the historical curve quite nicely. 
Recent experience fits right in with this prediction. In response to the housing crisis, San Francisco recently allowed a small increase in market-rate housing. Lo and behold, rents in the city dropped slightly.
Admittedly, this data is not decisive. More SF construction might have pushed rents down a bit this year, but a big construction boom might suddenly induce a flood of rich people to decide to move to the city. It's not possible to know.
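Fischer's exercise can be sketched in miniature. The snippet below is not his code or data - it just fits the same kind of log-linear model (rent as a function of housing supply, employment, and wages) to synthetic numbers with assumed elasticities, to show that ordinary least squares recovers them:

```python
# A minimal sketch (synthetic data, assumed elasticities) of a
# Fischer-style model: log rent regressed on log housing units,
# log employment, and log wages.
import numpy as np

rng = np.random.default_rng(0)
n = 40  # roughly 40 annual observations

log_units = 12.0 + rng.normal(0, 0.1, n)  # housing supply
log_emp = 13.0 + rng.normal(0, 0.1, n)    # total employment
log_wage = 10.0 + rng.normal(0, 0.1, n)   # average wages

# Assumed "true" model: supply pushes rents down, demand pushes them up.
log_rent = -2.0 * log_units + 1.0 * log_emp + 0.8 * log_wage \
    + rng.normal(0, 0.01, n)

# Ordinary least squares with an intercept.
X = np.column_stack([log_units, log_emp, log_wage, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, log_rent, rcond=None)
print(coef[:3].round(2))  # close to the assumed [-2.0, 1.0, 0.8]
```

In a model like this, holding demand fixed, more supply means lower rents - which is the mechanism the data above seem consistent with.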

But if NIMBY theorists like Price really believe that induced demand determines SF rents, they should do the following thought experiment: Imagine destroying a bunch of luxury apartments in SF. Just find the most expensive apartment buildings you can and demolish them. 

What would happen to rents in SF if you did this? Would rents fall? Would rich people decide that SF hates them, and head for Seattle or the East Bay or Austin? Maybe. But maybe they would stay in SF, and go bid high prices for apartments currently occupied by the beleaguered working class. The landlords of those apartments, smelling profit, would find a way around anti-eviction laws, kick out the working-class people, and rent to the recently displaced rich. Those newly-displaced working-class people, having nowhere to live in SF, would move out of the city themselves, incurring all the costs and disruptions and stress of doing so. 

If you think that demolishing luxury apartments would have this latter result, then you should also think that building more luxury apartments would do the opposite. Price should think long and hard about what would happen if SF started demolishing luxury apartments. 

In any case, I think Price's posts have the following lessons for YIMBYs:

1. Econ 101 supply-and-demand theory is helpful in discussing these issues, but don't rely on it exclusively. Instead, use a mix of data, simple theory, thought experiments, and references to more complex theories.

2. Always remind people that the price of an apartment is not fixed, and doesn't come built into its walls and floors.

3. Remind NIMBYs to think about the effect of new housing on whole regions, states, and the country itself, instead of just on one city or one neighborhood. If NIMBYs say they only care about one city or neighborhood, ask them why.

4. Ask NIMBYs what they think would be the result of destroying rich people's current residences.

5. Acknowledge that induced demand is a real thing, and think seriously about how new housing supply within a city changes the location decisions of people not currently living in that city.

6. NIMBYs care about the character of a city, so it's good to be able to paint a positive, enticing picture of what a city would look and feel like with more development.

I believe the YIMBY viewpoint has the weight of evidence and theory on its side. But the NIMBY challenge is not one of simple ignorance. Nor is it purely driven by the selfishness of incumbent homeowners trying to feather their own nests, or by white people trying to exclude poor minorities from their communities while still appearing liberal (two allegations I often hear). NIMBYism is a flawed but serious package of ideas, deserving of serious argument.

Monday, May 15, 2017

Vast literatures as mud moats


I don't know why academic literatures are so often referred to as "vast" (the phrase goes back well over a century), but it seems like no matter what topic you talk about, someone is always popping up to inform you that there is a "vast literature" on the topic already. This often serves to shut down debate, because it amounts to a demand that before you talk about something, you need to go read voluminous amounts of what others have already written about it. Since vast literatures take many, many hours to read, this represents a significant demand of time and effort. If the vast literature comprises 40 papers, each of which takes an hour to read, that's one week of full-time work equivalent that people are demanding as a cost of entry just to participate in a debate! So the question is: Is it worth it?

Often, reading the literature seems like an eminently reasonable demand. Suppose I were to think about minimum wage for the first time, knowing nothing about all the research economists have done on the topic. I might very confidently say some very silly things. I would be unaware of the relevant empirical evidence. There would probably be theoretical considerations I hadn't yet considered. Reading the vast literature would make me aware of many of these. In fact, I think the minimum wage debate does suffer from a lack of knowledge of the literature.

But the demand to "go read the vast literature" could also be eminently unreasonable. Just because a lot of papers have been written about something doesn't mean that anyone knows anything about it. There's no law of the Universe stating that a PDF with an abstract and a standard LaTeX font contains any new knowledge, any unique knowledge, or, in fact, any knowledge whatsoever. So the same is true of 100 such PDFs, or 1000. 

There are actual examples of vast literatures that contain zero knowledge: Astrology, for instance. People have written so much about astrology that I bet you could spend decades reading what they've written and not even come close to the end. But at the end of the day, the only thing you'd know more about is the mindset of people who write about astrology. Because astrology is total and utter bunk.

But astrology generally isn't worth talking or thinking about, either. The real question is whether there are interesting, worthwhile topics where reading the vast literature would be counterproductive - in other words, where the vast literature actually contains more misinformation than information.

There are areas where I suspect this might be the case. Let's take the obvious example that everyone loves: Business cycles. Business cycles are obviously something worth talking about and worth knowing about. But suppose you were to go read all the stuff that economists had written about business cycles in the 1960s. A huge amount of it would be subject to the Lucas Critique. Everyone agrees now that a lot of that old stuff, probably most of it, has major flaws. It probably contains some real knowledge, but it contains so much wrong stuff that if you were to read it thinking "This vast literature contains a lot of useful information that I should know," you'd probably come out less informed than you went in. 

Of course, many would say the exact same about the business cycle theory literature that emerged in response to the Lucas Critique and continues to this day. But if so, that just makes my point stronger. The point is, a bunch of smart people can get very big things wrong for a very long period of time, and that period of time may include the present.

I have personally encountered situations where I felt that reading the vast literature didn't improve my knowledge of the real thing that the literature was about. For example, I read a lot of the macro models that came out in the years following the 2008 financial crisis. Obviously, the financial sector is very important for the macroeconomy (as more people should have realized before 2008, but which almost everyone realizes now). But the ways that macro papers have modeled financial frictions are pretty unsatisfying. They are hard to estimate, the mechanisms are often implausible, and I bet that most or all will have glaring inconsistencies with micro data. I could be wrong about this, of course, but I felt like reading this vast literature was setting me on the wrong track. I'm not the only one who feels this way, either.

The next question is: Can a misinformative vast literature be used intentionally as a tactic to win political debates? It seems to me that in principle it could. Suppose you and your friends wanted to push a weak argument for political purposes. You could all write a bunch of papers about it, with abstracts and numbered sections and bibliographies and everything. You could cite each other's papers. If you wanted to, you could even create a journal, and have a peer review system where you give positive reviews to each other's B.S. papers. Voila - a peer-reviewed literature chock full of misinformation.

In practice, I doubt anyone ever does this intentionally. It takes too much coordination and long-term planning. But I wonder if this sometimes happens by accident, due to the evolutionary pressures of the political, intellectual, and academic worlds. The academic world gives people an incentive to write lots of papers. The political world gives people an incentive to use papers to push their arguments. So if there's a fundamentally bad argument that many people embrace for political reasons, there's an incentive for academics (or would-be academics) to contribute to a vast literature that is used to push that bad argument.

And in the world of intellectual debate, this vast literature can function as a mud moat. That is a term I just made up, sticking with the metaphor of political arguments as medieval castles requiring a defense. A mud moat is just a big pit of mud surrounding your castle, causing an attacking army to get trapped in the mud while you pepper them with arrows.

If you and your buddies have a political argument, a vast literature can help you defend your argument even if it's filled with vague theory, sloppy bad empirics, arguments from authority, and other crap. If someone smart comes along and tries to tell you you're wrong about something, just demand huffily that she go read the vast literature before she presumes to get involved in the debate. Chances are she'll just quit the argument and go home, unwilling to pay the effort cost of wading through dozens of crappy papers. And if she persists in the argument without reading the vast literature, you can just denounce her as uninformed and willfully ignorant. Even if she does decide to pay the cost and read the crappy vast literature, you have extra time to make your arguments while she's so occupied. And you can also bog her down in arguments over the minute details of this or that crappy paper while you continue to advance your overall thesis to the masses.

So when I want to talk and think and argue about an issue, and someone says "How about you go read the vast literature on this topic first?", I'm presented with a dilemma. On one hand, reading the vast literature might in fact improve my knowledge. On the other hand, it might be a waste of time. And even worse, it might be a trap - I might be charging headlong into a rhetorician's mud moat. But choosing not to read the vast literature keeps me vulnerable to charges of ignorance. And I'll never really be able to dismiss those charges.

My solution to this problem is what I call the Two Paper Rule. If you want me to read the vast literature, cite me two papers that are exemplars and paragons of that literature. Foundational papers, key recent innovations - whatever you like (but no review papers or summaries). Just two. I will read them. 

If these two papers are full of mistakes and bad reasoning, I will feel free to skip the rest of the vast literature. Because if that's the best you can do, I've seen enough.

If these two papers contain little or no original work, and merely link to other papers, I will also feel free to skip the rest of the vast literature. Since you could have just referred me to the papers cited instead of making me go through an extra layer, I will assume your vast literature is likely to be a mud moat.

And if you can't cite two papers that serve as paragons or exemplars of the vast literature, it means that the knowledge contained in that vast literature must be very diffuse and sparse. Which means it has a high likelihood of being a mud moat.

The Two Paper Rule is therefore an effective counter to the mud moat defense. Castle defenders will of course protest, "But he only read two papers, and now he thinks he knows everything!" But that protest will ring hollow, because if you can show bystanders why the two exemplar papers are bad, few bystanders will expect you to read further.

If it proves as effective as I think, the widely implemented Two Paper Rule could make for much more productive public debate. The mud moat defense would be almost entirely neutralized, dramatically reducing the incentive for the production of vast low-quality literatures for political ends. It could allow educated outsiders and smart laypeople access to debates previously dominated by vested insiders. In other words, it could shine the light of reason on a lot of dark, unexplored corners of the intellectual universe.


Update

Some people seem to misunderstand the purpose of the Two Paper Rule. The Two Paper Rule is not about summarizing the literature's findings - for that, you'd want a survey paper or meta-analysis. It's about evaluating the quality of the literature's methodology.

Sometimes a lit review will reveal pervasive methodological weakness - for example, if a literature is mostly just a bunch of correlation studies with no attention to causal effects. But often, it won't. For example, if the literature has a lot of mathematical theory in it, a lit review will generally contain at most one stripped-down partial model. But that doesn't give you nearly as much info about the quality of the fully specified models as you'll get from looking at one or two flagship theory papers. Or suppose a literature consists mostly of literary theorizing; the quality of the best papers will depend on the clarity of the writing, which a lit review is unlikely to be able to reproduce. Sometimes, lit reviews simply report results, without paying attention to what turn out to be glaring methodological flaws.

In other words, if you suspect that a literature functions mainly as a mud moat, what you need to assess quickly is not what the literature claims to find, but whether those claims are generally credible. And that is why you need to see the best examples the literature has to offer. Hence the Two Paper Rule.

Meanwhile, Paul Krugman endorses the Two Paper Rule. Tyler Cowen is more skeptical of its universality.  

Sunday, May 14, 2017

Actually good Silicon Valley critiques?


Scott Alexander has a post with some pretty spectacular smackdowns of Silicon Valley's more exuberant critics. Some excerpts:
While Deadspin was busy calling Silicon Valley “awful nightmare trash parasites”, my girlfriend in Silicon Valley was working for a company developing a structured-light optical engine to manipulate single cells and speed up high-precision biological research. 
While FastCoDesign was busy calling Juicero “a symbol of the Silicon Valley class designing for its own, insular problems,” a bunch of my friends in Silicon Valley were working for Wave, a company that helps immigrants send remittances to their families in East Africa. 
While Gizmodo was busy writing that this “is not an isolated quirk” because Silicon Valley investors “don’t care that they do not solve problems [and] exist to temporarily excite the affluent into spending money”, Silicon Valley investors were investing $35 million into an artificial pancreas for diabetics. 
While Freddie deBoer was busy arguing that Silicon Valley companies “siphon money from the desperate throngs back to the employers who will use them up and throw them aside like a discarded Juicero bag and, of course, to themselves and their shareholders. That’s it. That’s all they are. That’s all they do”, Silicon Valley companies were busy inventing cultured meat products that could end factory farming and save millions of animals from horrendous suffering while also helping the environment.
Alexander then goes on to look at a bunch of venture-funded startups, and concludes that most of them are either run-of-the-mill computer-related businesses, or idealistic save-the-world type of stuff, not goofy overpriced trinkets.

Alexander's post is an entertaining and timely reminder that most of the tech industry's more ardent critics are probably just using the Valley as a misplaced whipping boy for their general frustration at the larger problems of the American economy. When you live in daily fear of losing your $40,000/year job, skate on the verge of bankruptcy from overpriced medical bills, lose your house to a bailed-out bank, and realize every day that you make less than your parents did, seeing some high-flying computer whiz getting handed $10 million or $50 million seems to just rub salt in the wounds. (Also, Silicon Valley badboy Peter Thiel sued Gawker out of existence, so Gawker-derived outlets like Gizmodo and Deadspin may have a bit of a chip on their shoulder about that.)

Alexander's post asks the right question, which is "What could Silicon Valley be doing to make the world better, that it's not currently doing?" The answer is: Probably not a lot. There are a few excesses here and there, but by and large, these are just people doing the best they can, trying to both make a buck and make the world a better place. They just happen to have gotten luckier than most over the last few years.

The American public, unlike the writers Alexander spotlights, believes in the tech industry. Gallup tracks the favorability ratings of U.S. industries. The "computer industry" is the second most favorably rated (behind restaurants), with an enormous positive-minus-negative gap of 53 points, and the "internet industry" comes in at #7 with a positive-minus-negative gap of 29 points. This stands in stark contrast to pharmaceuticals and health care, both of which garner significantly negative ratings.

In other words, angry Gawker writers and pugilistic lefty bloggers to the contrary, most Americans love the heck out of tech. But OK, just to play devil's advocate, suppose we did want to criticize Silicon Valley and not end up looking foolish. What would actually non-silly criticisms look like? Here are some candidates:


1. Silicon Valley culture is still too sexist.

I haven't worked in the tech industry, so I can't speak to this personally. Female friends' anecdotes range from "Oh my God, it's so sexist" to "I don't really see any sexism". And my male friends in the industry - who are, of course, a highly selected set - almost all try to go out of their way to create a supportive, inclusive, fair environment for women. But it's hard to deny that at some companies like Uber, there is a pervasive culture of sexism. And surveys say that sexism is still fairly common in the industry, though not overwhelmingly so. Since employment at the less sexist companies is limited, that means if you're a woman looking to work in tech, there's a good chance you're going to be forced to take a job at one of the more sexist ones, and endure unwanted sexual advances, lack of promotion, casual slights regarding your technical competence, etc. Female founders also probably have extra difficulty getting funding.

How could Silicon Valley mitigate this problem? One way is for non-sexist tech industry leaders to speak out more aggressively against workplace sexism. Just let everyone know it isn't OK. Another way is for venture capitalists to hire more women (some are doing this already). But I suspect that a lot of the change will have to come from big established companies like Google, Apple and Amazon. Big institutions are probably better at creating female-friendly cultures, are more susceptible to public pressure, and have a larger profit cushion to allow them to make deep organizational changes. 10-person startups are too busy trying to survive the month to examine their gender attitudes, and closely held behemoths like Uber are generally less transparent and accountable than their public counterparts. So I expect the big public companies to lead the way.


2. Silicon Valley is late coming out with the Next Big Thing.

Tech venture financing is a hit-driven business - a few big wins make up all of the returns. VC funding is tiny compared to private equity or hedge funds - a few tens of billions per year - but three out of five big companies got their start with venture financing.

But it's noticeable that really huge successes, at least as measured by stock performance, seem rare in recent years. Facebook seems like the last really successful behemoth to come out of the Valley, and it went public five years ago. Twitter's stock is down by half since its IPO more than three years ago, it hasn't managed to branch out beyond its core product, and it remains plagued by Nazis and bots. LinkedIn did well in the markets but was acquired last year by Microsoft. Stuff like Groupon and Zynga flamed out.

Meanwhile, the current crop of behemoth candidates seem like they could be on shaky ground. Uber, despite being valued in the tens of billions in private markets, still hasn't made any money, and if its pricing power doesn't improve it might never turn a profit; meanwhile, it's plagued by scandals, dysfunction, and an exodus of talent. Snap doesn't seem to be very ambitious as a company, and might already be getting outcompeted by Facebook even as it continues to lose money. The only real contenders for post-Facebook behemoths seem to be Netflix, which actually went public long before FB and only recently became an entertainment giant, and Tesla, whose long-term success remains to be seen.

This doesn't mean VCs themselves aren't making good returns. How much money venture funds are making depends on how you measure it, and is often something that isn't known until years after the fact. A VC fund's return on any company depends not just on where the company ends up, but on how much the VC paid for it.

But it's possible that the frontiers of technology are shifting toward capital-intensive things that favor large established players with deep pockets and long investment horizons. Machine learning, for example, might favor big companies with lots of data over plucky startups in their garages. If that's true, it would mean that tech is becoming a more mature, sedate industry - at least for now.

Anyway, I guess this only sort of counts as a "criticism", since if this is true there's not much anyone can do. Also, it was 8 years between Google's IPO and Facebook's, and 7 years between Amazon's and Google's, so I'd give it at least a few more years before we get impatient.


3. Peter Thiel is an evil man.

In the tech industry, there's a culture of not criticizing anyone publicly. I like that culture, but I'm not part of it, so I'm free to say that Silicon Valley badboy Peter Thiel looks like a bad guy. I'm kind of neutral on the Thiel vs. Gawker war - Gawker definitely had it coming, but having rich people be able to sue newspapers out of existence due to personal feuds seems like a scary precedent. But Thiel's support of Trump, his habit of making a buck off of government surveillance, and his promotion of nasty political ideas combine to make him the closest thing America has to a comic-book evil mastermind. Thiel's sort-of-reactionary ideas are confined to a small minority of techies, but the Valley's friendly culture means that even those who disagree with him are out there publicly singing his praises. I certainly wouldn't mind if tech industry people got more vocal about disagreeing with Thiel's values.


4. Silicon Valley is too blase about disruption.

Economists are rapidly learning they were wrong about something big - the economy is not as flexible and dynamic as many had assumed. People who lose their careers, to globalization or automation or regulation or whatever, often never find anything as good. Retraining has proven to be much harder than economists had hoped. Americans are moving less, too. Basically, a lot of people who lose their career jobs have crappy lives forever after.

It's not clear what Silicon Valley companies could do about this problem. If the frontiers of technology are shifting from things that complement human skills to things that substitute for human skills, or if technology is widening the skills gap and making inequality worse, it's not clear whether the boardroom decisions of Google, Amazon, etc. can do anything to alter that trend. No one really knows how much of tech progress is intentional, and how much just sort of happens automatically.

But it would be nice to see big tech companies actually worrying about this out loud. So far - and this is based on anecdote - there seems to be a general presumption in the tech industry that displaced workers will just find something better to do. It would be nice to see tech execs grapple more explicitly with the emerging realization that many displaced workers will in fact not find something better to do, but will sink into a lifetime of low-paid service work, government welfare, and unhealthy behavior.

Even if tech companies can't actually do anything about this problem, it would be nice to see more acknowledgment that the problem is real and significant.


5. Tech might be in the middle of a bust.

Venture financing is falling, indicating that some of the enthusiasm of 2013-2015 might have been overdone. But even if this continues, and tech has a bust, this is just not that worrying. Even if tech startups turn out to be overvalued, it represents almost zero danger to the U.S. economy or financial system. The dollar amounts are small, and stock in closely held tech companies is not a big percentage of any normal person's wealth. If a bunch of unicorns go bust, the vast bulk of the pain will be felt by tech workers and investors themselves, not by the broader public. So this criticism, while potentially true, is just not that big a deal.


So there you have my list of potentially valid critiques of Silicon Valley. Notice that this is pretty weak tea. #2 might not be anything the Valley could change at all, #5 is no biggie, and #3 and #4 are mostly just a matter of optics. Only #1 represents anything real and substantive that the tech industry could definitely be doing differently. All in all, Silicon Valley represents one of the least objectionable, most rightfully respected institutions in America today.

Saturday, May 13, 2017

How should theory and evidence relate to each other?


In response to the empirical revolution in econ, and especially to the rise of quasi-experimental methods, a lot of older economists are naturally sticking up for theory. For example, here's Dan Hamermesh, reviewing Dani Rodrik's "Economics Rules":
Economics Rules notes with some approbation the rise of concern among applied economists, and especially labor economists, about causality. It fails, though, to observe that this newfound concentration has been accompanied, as Jeff Biddle and I show (History of Political Economy, forthcoming 2017), by diminished attention to model-building and to the use of models, which Rodrik rightly views as the centerpiece of economic research. He recognizes, however, that the “causation über alles” approach (my term, not Rodrik’s) has made research in labor economics increasingly time- and place-specific. To a greater extent than in model-based research, our findings are likely to be less broadly applicable than those in the areas that Rodrik warns about. Implicit in his views is the notion that the work of labor and applied micro-economists might be more broadly relevant if the concern with causation were couched in economic modeling. If we thought a bit more about the “how” rather than paying attention solely to the “what,” the geographical and temporal applicability of our research might be enhanced...
In the end, the basic idea of the book—that models are our stock in trade—is one that we need to pay more attention to in our research, our teaching, and our public professional personae. Without economic modeling, labor and other applied economists differ little from sociologists who are adept at using STATA.
Oooh, Hamermesh used the s-word! Harsh, man. Harsh.

Anyway, it's easy to dismiss rhetoric like this as old guys defending the value of their own human capital. If you came up in the 80s when an economist's main job was proving Propositions 1 and 2, and now all the kids want to do is diff-in-diff-in-diff, it's understandable that you could feel a bit displaced.

But Hamermesh does make one very good point here. Without a structural model, empirical results are only locally valid. And you don't really know how local "local" is. If you find that raising the minimum wage from $10 to $12 doesn't reduce employment much in Seattle, what does that really tell you about what would happen if you raised it from $10 to $15 in Baltimore?

That's a good reason to want a good structural model. With a good structural model, you can predict the effects of policies far away from the current state of the world.

In lots of sciences, it seems like that's exactly how structural models get used. If you want to predict how the climate will respond to an increase in CO2, you use a structural, microfounded climate model based on physics, not a simple linear model based on some quasi-experiment like a volcanic eruption. If you want to predict how fish populations will respond to an increase in pollutants, you use a structural, microfounded model based on ecology, biology, and chemistry, not a simple linear model based on some quasi-experiment like a past pollution episode.

That doesn't mean you don't do the quasi-experimental studies, of course. You do them in order to check to make sure your structural models are good. If the structural climate model gets a volcanic eruption wrong, you know you have to go back and reexamine the model. If the structural ecological model gets a pollution episode wrong, you know you have to rethink the model's assumptions. And so on.

If you want, you could call this approach "falsification", though really it's about finding good models as much as it's about killing bad ones.

Economics could, in principle, do the exact same thing. Suppose you want to predict the effects of labor policies like minimum wages, liberalization of migration, overtime rules, etc. You could make structural models, with things like search, general equilibrium, on-the-job learning, job ladders, consumption-leisure complementarities, wage bargaining, or whatever you like. Then you could check to make sure that the models agreed with the results of quasi-experimental studies - in other words, that they correctly predicted the results of minimum wage hikes, new overtime rules, or surges of immigration. Those structural models that got the natural experiments wrong would be considered unfit for use, while those that got them right would stay on the list of usable models. As time goes on, more and more natural experiments will shrink the set of usable models, while methodological innovations enlarge it.
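The winnowing loop described above can be sketched in a few lines of code. Everything here is invented for illustration - the two toy "models," the quasi-experimental numbers, and the tolerance - but it shows the basic procedure: keep only those models whose predictions match the natural experiments.

```python
# Hypothetical sketch: winnowing structural models with natural experiments.
# Each "model" is a function predicting the employment effect of a policy;
# each "experiment" records the policy change and the measured outcome.
# All names and numbers below are made up for illustration.

def competitive_model(wage_hike_pct):
    # Textbook competitive labor market: employment falls a lot with the wage floor
    return -0.3 * wage_hike_pct

def search_model(wage_hike_pct):
    # Search-and-matching flavor: modest hikes have small employment effects
    return -0.05 * wage_hike_pct

models = {"competitive": competitive_model, "search": search_model}

# Stylized quasi-experimental results: (wage hike in %, measured employment change in %)
experiments = [(10, -0.4), (20, -1.1)]

TOLERANCE = 1.0  # how far off a prediction can be before the model is "unfit for use"

usable = {
    name: f
    for name, f in models.items()
    if all(abs(f(hike) - observed) <= TOLERANCE for hike, observed in experiments)
}

print(sorted(usable))  # the set of usable models shrinks as experiments accumulate
```

In this toy setup the competitive model badly over-predicts job losses and gets dropped, while the search model survives - and each new natural experiment added to `experiments` can only shrink the usable set further.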

But in practice, I think what often happens in econ is more like the following:

1. Some papers make structural models, observe that these models can fit (or sort-of fit) a couple of stylized facts, and call it a day. Economists who like these theories (based on intuition, plausibility, or the fact that their dissertation adviser made the model) then use them for policy predictions forever after, without ever checking them rigorously against empirical evidence.

2. Other papers do purely empirical work, using simple linear models. Economists then use these linear models to make policy predictions ("Minimum wages don't have significant disemployment effects").

3. A third group of papers do empirical work, observe the results, and then make one structural model per paper to "explain" the empirical result they just found. These models are generally never used or seen again.

A lot of young, smart economists trying to make it in the academic world these days seem to write papers that fall into Group 3. This seems true in macro, at least, as Ricardo Reis shows in a recent essay. Reis worries that many of the theory sections that young smart economists are tacking on to the end of fundamentally empirical papers are actually pointless:
[I have a] decade-long frustration dealing with editors and journals that insist that one needs a model to look at data, which is only true in a redundant and meaningless way and leads to the dismissal of too many interesting statistics while wasting time on irrelevant theories.
It's easy to see this pro-forma model-making as a sort of conformity signaling - young, empirically-minded economists going the extra mile to prove that they don't think the work of the older "theory generation" (who are now their advisers, reviewers, editors and senior colleagues) was for naught.

But what is the result of all this pro-forma model-making? To some degree it's just a waste of time and effort, generating models that will never actually be used for anything. It might also contribute to the "chameleon" problem, by giving policy advisers an effectively infinite set of models to pick and choose from.

And most worryingly, it might block smart young empirically-minded economists from using structural models the way other scientists do - i.e., from trying to make models with consistently good out-of-sample predictive power. If model-making becomes a pro-forma exercise you do at the end of your empirical paper, models eventually become a joke. Ironically, old folks' insistence on constant use of theory could end up devaluing it.

Paul Romer worries about this in his "mathiness" essay:
[T]he new equilibrium: empirical work is science; theory is entertainment. Presenting a model is like doing a card trick. Everybody knows that there will be some sleight of hand. There is no intent to deceive because no one takes it seriously. 
In addition, there are also paper groups 1 and 2 to think about - the purely theoretical and purely empirical papers. There seems to be a disconnect between these two. Pure theory papers seem to rarely get checked against data, leading to an accumulation on the shelves of models that support any and every conclusion. Meanwhile, pure empirical papers don't often get used as guides to finding good structural models, but are simply linearly extrapolated.

In other words, econ seems too focused on "theory vs. evidence" instead of using the two in conjunction. And when they do get used in conjunction, it's often in a tacked-on, pro-forma sort of way, without a real meaningful interplay between the two. Of course, this is just my own limited experience, and there are whole fields - industrial organization, environmental economics, trade - that I have relatively limited contact with. So I could be over-generalizing. Nevertheless, I see very few economists explicitly calling for the kind of "combined approach" to modeling that exists in other sciences - i.e., using evidence to continuously restrict the set of usable models.

Sunday, April 30, 2017

The siren song of homogeneity


The U.S. and Europe are in a time of great political change. Policies haven't changed that much yet, but the set of ideas that drive movements and activism and the public discussion have altered radically in the last few years. In the U.S., which of course I know the best, there have been new outpourings on the left - the resurgent socialist movement and the social justice movement chief among them. But as far as I can see, the biggest new thing is the alt-right. Loosely (we can argue about definitions all day, and I'm sure many of you will want to do so), the alt-right wants to make American society homogeneous. Most of the enthusiasm is for racial homogeneity, but religion seems to figure into it a bit as well.

The siren song of homogeneity is a powerful one. On Twitter and elsewhere, I am encountering more and more young people (mostly men) who openly yearn for a society where everyone is white. The more reasonable among these young people tell me that homogeneity reduces conflict, increases social trust, and has a number of other benefits. They often cite Japan as their paradigmatic homogeneous society; some explicitly say they want a white version of Japan.

Those are the reasonable ones - the less reasonable ones tend to communicate in memes, threats, and slurs ("Fuck you, Jew! How about open boarders for ISRAEL!!", etc.). But the fact that these men are dedicating so much time, effort, and passion into those memes, threats and slurs says something important. It says that there is passion in this movement.


Is the alt-right really a growing, rising movement?

Much of the passion for white homogeneity seems new to me - twenty years ago, despite the existence of Nazi-type websites like Stormfront, the idea of making America an all-white nation seemed like a fringe notion. Perhaps it still is a fringe notion - after all, social media acts as a force multiplier that allows a relatively small number of highly committed individuals to seem like a huge army. And perhaps this kind of sentiment was always reasonably common in America, but simply kept under wraps by the mainstream media before the internet emerged to make it more visible.

There is some evidence to support the contention that alt-right ideas are still highly unpopular in America. A 2016 Pew survey found that only 7 percent of Americans say that growing diversity makes the country a worse place to live:


Compare that to 31 percent in Britain and Germany and 36 percent in the Netherlands!

Meanwhile, recent polls find support for immigration:


That's a short time series, so here's a longer one from Gallup, which shows a gentle downtrend in anti-immigrant sentiment and pegs it at just under 40 percent:


As Gallup's racial breakdown shows, the decline in anti-immigrant sentiment is being driven by whites - anti-immigrant sentiment is actually slightly up among blacks and Hispanics. That implies that much of what anti-immigrant sentiment does exist is not due to a growing yearning for a homogeneous white nation. A substantial majority of white Americans supports letting undocumented immigrants stay, as long as certain conditions are met - that doesn't exactly seem like a vote for white homogeneity.

So it's certainly possible that the alt-right - even defined very generally, including the more moderate "alt-light" and the quietly sympathetic "alt-white" - is a shrinking, dying idea that is only becoming louder and more aggressive because it's under threat. It's possible that Trump's election was really driven more by people's economic hopes that he would bring back dying industries and bring American jobs back from overseas, or even just by a desire to roll the dice of change.

But I think that whether or not the alt-right is really a growing, burgeoning movement, it makes sense to take it and its ideas seriously. First, the presence of Trump in the White House will probably force much of the country to listen to what the alt-right has to say. Even though he isn't really their man, he has hired several people who at least loosely sympathize with the movement's ideas - Bannon, Miller, Anton and Gorka among them. That means that at least as long as Trump's butt is planted in a chair in the Oval Office, alt-right ideas have at least a chance of making it into government policy. That means the alt-right, and their ideas, matter.

And even beyond that, I feel an emotional desire to engage with the alt-right - at least, the more reasonable among them. I couldn't care less about the people in Europe supporting Le Pen or Geert Wilders, but alt-right Americans are my countrymen. I'm a nationalist at heart and I care about what my countrymen think.

And I think that there are a decent number of young (mostly) men out there whose intellectual lives will be defined by this stuff - who will spend their 20s and 30s entranced by the idea of a homogeneous white society. Just as there are old hippies who still look at the world through the lens of the 1960s anti-war movement, in a few decades there will be some aging white Millennial men for whom Pepe the Frog and r/thedonald and Kekistan and the Great Meme War were the climax of their youthful energy and imagination. I want to engage with those people, even if (as I predict) they ultimately lose.


Is the alt-right really a pro-homogeneity movement? Is Trumpism?

Every movement is...well, heterogeneous. Alt-right people talk a lot about homogeneity, but it's certainly not the only thing they talk about, or the only reason for their movement. Some may join the alt-right simply out of a fear of the social justice movement - banding together for mutual defense. Others may simply be opposed to some group of immigrants - someone who would be fine with a Cuban neighbor might be terrified of a Syrian one. Still others may be religious traditionalists looking for a home after the collapse of the Christian right, neo-Confederates allied to an Old South style of racial politics or just Trump fans looking for a cool club to join. For some, "homogeneity" might be simply a convenient rallying cry for expelling undesirable groups from the country, or for instituting one's chosen value system. As for Trumpism, that almost certainly has multiple causes - anything as big and all-encompassing as a presidential election will have multiple causes.

But I think research shows that fear of ethnic heterogeneity is a real driver of Trump support. For example, this study shows that reminding white people with strong white identification that America is getting less white (which might not actually be true, but we'll get to that later) increased support for Trump. And anecdotally, support for homogeneity pops up again and again in pro-Trump literature and discourse. Here's a quote from Trump advisor Michael Anton's famous essay "The Flight 93 Election," widely considered to be one of the basic Trumpist manifestos:
Third and most important, the ceaseless importation of Third World foreigners with no tradition of, taste for, or experience in liberty means that the electorate grows more left, more Democratic, less Republican, less republican, and less traditionally American with every cycle. As does, of course, the U.S. population[.]
So I'd say the case is fairly clear that the desire for a homogeneous society runs strong through both the alt-right and the broader Trump movement.


The data-based case for homogeneity

(Note: When I talk about "homogeneity" in this post, I'm only talking about the ethnic/racial type. I'm not talking about linguistic, religious, or other dimensions of homogeneity/diversity.)

The case for homogeneity comes down to the idea that a homogeneous society is a nicer place to live. Alt-right people cite Japan's stunningly low crime rate, for example, as evidence that ethnically similar people don't fight. They also claim that homogeneity increases social trust.

There is a reasonably large body of research that supports the "trust" idea. For a good list of links to those papers, check out this post by blogger James Weidmann, better known as Roissy. Roissy sums up the thesis in one simple equation: "Diversity + Proximity = War." I'm not going to replicate the whole list here, but here's a very small sampling:

1. A study in Denmark showing a negative correlation between reported trust and ethnic diversity at the municipality level from 1979 to 2009

2. A study in Britain finds that people who stay in communities after those communities become more diverse report more negative attitudes toward their communities afterward

3. A study in the Netherlands finds that increasing diversity in classrooms made kids more likely to choose friends of similar ethnicity

4. A study found that across Europe, different-ethnicity immigration tends to decrease social trust, while similar-ethnicity immigration tends to increase it.

Roissy didn't include econ papers on his list, but economists have also flagged the dangers of ethnic divisions. Alesina, Baqir, and Easterly (ironically, a rather diverse team of authors) famously found that ethnic divisions reduce public good provision. Alesina, Glaeser, and Sacerdote hypothesize that diversity is what prevents America from having a Europe-style welfare state.

There are lots of postulated mechanisms for how diversity reduces trust and leads to dysfunctional societies. Maybe people are genetically programmed to cooperate with those who are genetically more similar to them. Maybe people who belong to different groups have different interests. Maybe we just generally fear that which is different and strange.

On top of this appeal to evidence, however, there's an emotional appeal - as there always is for any really important political idea. There's the negative appeal of fear of diversity - the specter of becoming a minority, potentially hated, despised, and/or oppressed by other groups. But there also seems to be a yearning for a half-imagined utopia - a "Japan for white people", where shared whiteness produces a neighborly camaraderie, social cohesion, and peace that is unknown in much of modern America.


Caveats to the data-based case for homogeneity

Roissy is a polemical blogger; his aim is to advocate, not to educate. The academic case for homogeneity is not nearly as clear-cut as what he presents.

Many of the studies he cites have methodological issues. For example, one study finds that "neighborly exchange" is negatively correlated with diversity. But its data set doesn't allow it to compare recently diversified neighborhoods with neighborhoods that recently received a lot of internal in-migration - in other words, it may simply be that a flood of newcomers, be they the same race as the majority or not, tends to disrupt neighborly friendships. In fact most of the cited studies tend to have this problem - it's hard to distinguish between the impact of population mobility and the impact of diversity itself.

Other studies he cites show some cases in which ethnic diversity increases trust. For example, a study in America found a U-shaped relationship between ethnic fractionalization and trust, meaning that high and low diversity places tend to have more trust than medium-diversity places (which makes sense if medium-diversity places are places where a bunch of newcomers just showed up).

Also, it's worth noting that many of the studies Roissy cites are from Europe. It may be the case that Europe functions differently than America, and is not an appropriate comparison. Most Europeans may think of their societies as based on ethnicity - "blood and soil", as some say - while this may hold true for only a minority of Americans. Also, recent European nonwhite immigration may be very different from the type of nonwhite immigration America gets - where America has recently mostly taken in hard-working Hispanics and high-skilled Asians and Africans, Europe has tended to take a lot of lower-skilled Middle Easterners and North Africans. Not only might the latter tend to be a more fractious type of immigrant, but there's also an enmity between Europe and the MENA region that goes back further than reliably recorded history. That could contribute to the distrust. In other words, the kind of diversity you get probably matters a lot.

Then there are all of the contrary studies Roissy, as a polemical blogger, doesn't cite. It's a big literature, and there are lots of findings that go in the other direction. For example:

1. A recent study in Southern California found that ethnic diversity is associated with decreased crime and higher home values

2. A study in Britain showed no relationship between ethnic diversity and trust.

3. A study in Europe found a positive long-term effect of diversity on trust.

4. A 2014 literature survey finds that "ethnic diversity is not related to less interethnic social cohesion."

5. A 2008 study in Europe found that ethnic diversity didn't decrease social capital.

6. A 2007 study in Britain found that the negative effect of diversity on social cohesion disappears after controlling for economic variables.

7. There's also a big literature on diversity and group decision-making, most (but not all) of which concludes that ethnic diversity makes groups smarter.

I could go on - most of this is the result of me just doing Google Scholar searches for "diversity and trust" and "diversity and social capital" and picking out any studies on the first page or two that seem to contradict the "diversity decreases trust" conclusion. That's hardly a scientific way to proceed, but it does show that if you get your academic information from a polemicist, you're going to get a distorted picture of the academic literature.

My point here is not to say that the alt-right is wrong about homogeneity and trust. They might be right - my sense from reading literature surveys is that the correlation between homogeneity and trust is a common finding, but not overwhelmingly common. My point here is to say that the question of homogeneity and trust is not yet answered. This is not surprising, because both homogeneity and trust are big, expansive, vaguely defined concepts, which usually means clear-cut answers don't exist.

Another thing that bothers me about many of these studies is that I tend to be a bit skeptical of survey research. This is not to say survey research is worthless, but I guess like any good economist I instinctively put more stock in measures of actual behavior. Roissy's link list does include some studies showing diversity increases conflict, but to my knowledge, the academic consensus is that immigration reduces crime (including in Canada). That literature review is from a few years back, but recent research all seems to confirm the finding. To me, lower crime is a much more tangible result than people simply saying negative things on a survey.

But an even more important reason why you shouldn't put too much stock in this literature is that almost none of these studies are very good at dealing with endogeneity. Here are some examples of endogeneity issues:

* Suppose low-skilled immigrants tend to move to areas with low social trust, because businesses in places with low social cohesion tend to hire cheap labor.

* Suppose large empires tend to conquer lots of different ethnicities and encourage internal migration that increases local ethnic diversity, but suppose that large empires also tend to collapse, causing lots of local conflicts.

* Suppose exogenous events that cause waves of newcomers - conflicts, recessions, out-migration from declining areas - are also things that tend to reduce trust.

To control for these kinds of things, you really need natural experiments. Economists already do this for things like the impact of immigration on wages. But to isolate the effect of ethnic diversity from the effect of population mobility - i.e., to tell the difference between "newcomers of any race" and "newcomers of a minority race" - will require finding some situation where different ethnicities of newcomers are randomly assigned to different areas.
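To see concretely why endogeneity matters here, consider a minimal simulation (all numbers invented) of the mobility confound: population churn lowers trust and also raises diversity, so a naive regression of trust on diversity finds a sizable negative "effect" even though, by construction, diversity has no direct effect at all.

```python
# Toy simulation of omitted-variable bias: mobility (churn) drives BOTH
# diversity and distrust; diversity itself has zero direct effect on trust.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Confounder: population churn (newcomers of any ethnicity)
mobility = rng.normal(size=n)

# Diversity rises with mobility, plus noise - but it has NO effect on trust here
diversity = 0.8 * mobility + rng.normal(scale=0.6, size=n)

# Trust falls with mobility only
trust = -0.5 * mobility + rng.normal(scale=1.0, size=n)

# Naive regression of trust on diversity alone: picks up a spurious negative slope
X_naive = np.column_stack([np.ones(n), diversity])
naive = np.linalg.lstsq(X_naive, trust, rcond=None)[0][1]

# Controlling for mobility recovers the true effect of diversity (zero)
X_full = np.column_stack([np.ones(n), diversity, mobility])
controlled = np.linalg.lstsq(X_full, trust, rcond=None)[0][1]

print(f"naive slope: {naive:.2f}, controlled slope: {controlled:.2f}")
```

The naive slope comes out strongly negative while the controlled slope is near zero - which is exactly why random assignment (or a convincing natural experiment) is needed to separate "newcomers of any race" from "newcomers of a minority race."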

(Update: Someone forwarded me this paper showing that when housing is randomly assigned in France, diversity is correlated with "social anomie", which apparently increases vandalism but reduces violent conflict. Interesting! Keep in mind that this might be specific to the types of people who live in France.)

Anyway, so this is all important to think about. But to me, the really interesting question is whether ethnicity itself is endogenous.

More on that later, though. First, let's shift gears from data to anecdote, so I can talk about my experiences living in an ethnically homogeneous society.


My own experience in a homogeneous society

As regular blog readers know, I've lived in Japan (for a total of about 3.5 years). Though I'm of course not Japanese, the experience taught me much about how Japanese people live and think. So I have observed at least one good example of a homogeneous society up close. While that example might not generalize, here are my thoughts.

First of all, if you think Japanese people share a sense of camaraderie and togetherness from all being the same ethnicity, think again. Because Japan is homogeneous, ethnicity just isn't that salient to most Japanese people - when a Japanese person meets another Japanese person, they don't think "Japanese person," they just think "person". Ethnic identity isn't on their minds.

Because of this, ethnic homogeneity creates very little solidarity on a day-to-day basis in Japan. Japanese people are generally wary of striking up conversations with strangers - more wary than Americans of different races are of striking up conversations with each other, I find. Services like Craigslist that facilitate informal transactions between private parties are rarely used - when I ask Japanese people why, they say it's because they can't trust strangers. Some Japanese people have told me that they feel far less shy talking to a foreigner than they do talking to another Japanese person.

I suspect that the feeling of ethnic solidarity that many alt-right whites feel for other alt-right whites is something unique to minorities. People who have always been part of the overwhelming majority just don't think about ethnicity enough for it to create bonds of solidarity - except in extreme situations, like a foreign war.

Surveys corroborate my hunch. Japan has always reported relatively low levels of interpersonal trust - until recently, considerably lower than in the U.S.:


Now keep in mind, that's trust, which is very different from trustworthiness. Japanese people, as a rule, are some of the most scrupulously honest people I've ever met. I've had old Japanese women run to catch up with me on the street, handing me a penny I dropped. The one time I dropped a substantial amount of cash on the ground, it was a yakuza bodyguard who notified me. Japanese people generally deserve high trust, but don't necessarily give it to each other. 

Urban Japan also seems to me to have little tradition of "neighborly exchange" (I'm sure this is different in small towns, but Japan is very highly urbanized). I see very few people saying hello to their neighbors. One person I knew who did this was considered eccentric.

So if you think a homogeneous society means that people will tip their hat to you on the street and be your friend just because you're the same race as them, think again.

However, Japanese culture also has quite a lot of unwritten rules, which almost everyone follows. Some of these are speech rules - the famous Japanese "politeness". Some are rules about work - the famous Japanese "corporate culture". Some are rules about service in restaurants and shops. There are many others. 

These rules - which people sometimes mistakenly label "conformity" - would be harder to turn into universal norms in the diverse United States. Foreigners, or people from other parts of the country, might just not know the rules. And people from certain ethnic backgrounds might resent being pressured to follow those rules by people of other ethnic backgrounds, and so might intentionally disobey. The less other people follow a social rule, the less incentive there is for me to follow it.

So Japanese homogeneity seems to produce a society where minor, day-to-day interactions are a little more predictable.

How about politics? Japan has long been dominated by a single political party (the LDP), and politics is traditionally conducted via factions within that ruling party. There's little question in my mind that homogeneity is one of the causes of one-party dominance - there's no ethnic minority to form the core of an opposition party. 

So how does that work out? Japanese politics is famously dysfunctional - the debt is out of control, patronage politics is rife, and there's usually a dearth of leadership. This was as true before World War 2 as it is today - Japan in the 30s was afflicted with frequent coup attempts and plenty of extremism, and essentially bumbled its way into multiple disastrous wars. Nowadays, Japanese political dysfunctionality is more likely to manifest itself as wasteful spending and obstruction of needed economic reforms.

However, it's worth noting that Japan has not experienced a "populist backlash" like other countries. Shinzo Abe is a true nationalist leader, and a responsible one. He was quick to quell outbursts of racism against those minorities that do exist in Japan, and in general has a pretty progressive agenda. And overall, Japanese people are (so far) pretty happy with Abe. He's worlds away from a Trump or a Le Pen or an Erdogan or a Chavez. So it's possible that homogeneity exerts a stabilizing effect on Japanese politics, insulating it from periodic outbreaks of madness, while making it less responsive in normal times due to the lack of a credible opposition.

As for crime, everyone knows that Japan is an extraordinarily safe country. It's hard for people who've never lived there to wrap their heads around how safe it is - teenage girls walk the streets of major cities alone at night in schoolgirl skirts and fear absolutely nothing. Is Japan so nonviolent because of its homogeneity? It's hard to say. In America, immigration - which is usually nonwhite immigration - tends to decrease crime. The ultra-diverse New York City and Los Angeles are some of the lowest-crime cities in America. Also, Japan does have a few very diverse neighborhoods, and these are also quite safe. So my instinct is to say that Japan's secret safety sauce is something else. But I don't really know.

So overall, if I were to draw conclusions from my experience in Japan, I'd say that homogeneity has its advantages and disadvantages, but ultimately isn't clearly better or worse. Japan is one of the awesomest, nicest places I've ever been, but the other top contenders are diverse places like Vancouver, Austin, and the San Francisco Bay Area.

(As an aside, if I were making policy, I'd recommend that Japan not take in mass immigration. Maybe their society could handle it, maybe it couldn't - but I say, no need to mess with a good thing. But that's also why I recommend that America and Canada keep taking in lots of immigrants - we've got a different kind of good thing going. Anyway, that's my instinct.)


How racial is homogeneity?

But here's one coda, which leads into my next point. Are Japanese people all the same race? Maybe not. Japan was formed from the confluence of two groups, the Jomon (unusually densely populated hunter-gatherers) and the Yayoi (rice farmers). This genetic mixing is still very apparent in the genetic data. And perhaps as a result of this, you see a reasonably large diversity of features in Japanese people. For example, here are two Japanese guys:


Are those two guys the same race? Technically, yes. In America they'd both be "Asian", in Asia they'd both be "Japanese". Neither American culture nor Japanese culture recognizes any ethnic difference between these two men. And sure, they both have straight black hair, and their skin tones aren't that different. But the pretty big difference in physical appearance between those two guys - and between many people in Japan - makes me wonder whether our definitions of race aren't a little...elastic.


What if homogeneity is a choice?

In lefty circles, it's common to hear people say that "race is a social construct." What could that possibly mean? Obviously, physical differences are real. And obviously, those differences are going to be clustered, because for most of human history - and even now, really - there was only limited population mixing across areas. A clustering algorithm will pick out clusters of traits, and you can call those "races" if you want.
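To make that concrete, here's a toy sketch - my own illustration, with entirely made-up "trait" numbers - of a clustering algorithm (plain k-means) picking out two trait clusters from synthetic data drawn from two ancestral populations:

```python
import math
import random

random.seed(0)

# Synthetic "trait" data: two ancestral populations, each measured on two
# invented traits (say, height in cm and a pigmentation score). The numbers
# are made up purely for illustration.
pop_a = [(random.gauss(160, 4), random.gauss(2, 0.5)) for _ in range(100)]
pop_b = [(random.gauss(180, 4), random.gauss(6, 0.5)) for _ in range(100)]
points = pop_a + pop_b

def kmeans(points, k, iters=20):
    # Crude deterministic seeding: start from the two lexicographic extremes.
    centers = [min(points), max(points)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            groups[nearest].append(p)
        # Move each center to the mean of its assigned points.
        centers = [
            tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

centers, groups = kmeans(points, k=2)
print(sorted(len(g) for g in groups))  # two clusters of roughly 100 each
```

The algorithm recovers the two populations because the traits are clustered - which is exactly the sense in which "races" fall out of the data. Whether society's labels line up with those clusters is a separate question.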

But are the "races" we recognize the same ones that a clustering algorithm would pick out? Sometimes, sure. But not always. The two pictures above demonstrate that even in a supposedly super-homogeneous place like Japan, genetic differences exist that culture and society just don't recognize as representing different ethnicities.

Another important example is "Han Chinese". When you look at the genetics, Han Chinese people are actually pretty diverse. Another is "Turkish". Here are two Turkish actors I just found by Googling:



Wow. Compared to these guys, the two Japanese guys above look like twins. Obviously, these two men have ancestors from very different geographic locations, and yet somehow they're both Turks. Just like some British people have red hair and some have black, and just like some Japanese people have "sauce faces" and some have "soy faces", some Turkish people have dark skin and some have light. A difference in appearance need not translate to a difference in race, in the real world.

But the most interesting example might be "white." In America, we have a race called "white" that Europe just doesn't seem to have. In Europe, anecdotally, ethnicity is defined by language, and perhaps also by religion. While skin color differences are recognized, European ethnic definitions are usually much finer. In America, though, they're all just "white." 

In fact, who's included in "white" seems to change quite a lot over time. In 1751, Benjamin Franklin was arguing against North European immigration on the grounds that Swedes, French people, Russians, and most Germans weren't "white":
Which leads me to add one Remark: That the Number of purely white People in the World is proportionably very small. All Africa is black or tawny. Asia chiefly tawny. America (exclusive of the new Comers) wholly so. And in Europe, the Spaniards, Italians, French, Russians and Swedes, are generally of what we call a swarthy Complexion; as are the Germans also, the Saxons only excepted, who with the English, make the principal Body of White People on the Face of the Earth.
What a difference two and a half centuries make, eh? And the expanding definition of whiteness doesn't seem confined to the distant past, either. Twentieth-century immigrant groups like Italians, Jews, and Poles were initially not considered "white" (except by the legal system), but rather "white ethnics". Now, no one in America questions that Italians are white - or that they were white all along, from the very start. And the only people who question whether Ashkenazic Jews are white are a few screeching Nazis on Twitter (who may or may not reside in the U.S.). 

In fact, this may already be happening with Hispanics. More and more Hispanics are declaring themselves white.

"Black" and "Asian" are other examples. In America, "black" people are all assumed to be part of one big race, as are "Asian" people. But try telling Hutus and Tutsis in Africa that they're both part of the same ethnically homogeneous group. Or try going to a bar in Korea and telling some guys that they're the same race as Japanese people (my advice: be ready to duck). Ethnic differences whose existence Americans don't even recognize are the basis of genocide in other parts of the world.

"White" too. Hitler's plan for the Soviet Union involved genocide of Slavs on a scale so epic that it makes it clear the Holocaust was just a dress rehearsal:


Now that's some #whitegenocide, right there. Even though Germany lost, they made considerable headway toward making that plan a reality, slaughtering over 20 million Russians.

So you have blue-eyed Turks thinking they're the same race as black-haired Turks. You have pale Americans and swarthy Americans both calling themselves "white". And then you have Germans launching an all-out apocalyptic war to exterminate a group of people that they probably couldn't even tell from themselves if they all had the same clothes and haircuts. 

(Random anecdote: One time, in Germany, a German woman came up to me and started speaking rapid German. She was astonished to find that I was American, and said "But you look so German!")

OK, but suppose you don't buy all this stuff about the social definition of race. That's hippie-dippy bullshit, right? Genetic differences are real, end of story. OK, but even then you must admit the power of intermarriage.

Intermarriage was probably essential for the creation of the white race here in America. This is from a recent National Academy of Sciences report titled "The Integration of Immigrants into American Society":
Historically, intermarriage between racial- and ethnic minority immigrants and native-born whites has been considered the ultimate proof of integration for the former and as a sign of “assimilation” (Gordon, 1964; Alba and Nee, 2003). When the rate of interethnoracial or interfaith marriage is high (e.g., between Irish Americans and non-Irish European Americans or between Protestants and Catholics), as happened by the late 20th century for the descendants of the last great immigration wave, the significance of group differences generally wanes (Alba and Nee, 2003). Intermarriage stirs the ethnic melting pot and blurs the color lines.
When tons of people have Irish, German, and English ancestors, it's just very hard to keep those three ethnic categories separate in society. The same thing happened to Italians and Jews after World War 2. In the early 1960s, the outmarriage rate among Italian Americans was over 40 percent. Jews took a little longer, but got there eventually - the Jewish outmarriage rate is now 58 percent, and among the non-Orthodox it's 71 percent. 

(In case you were wondering, somewhere around 33% of native-born Hispanic and Asian Americans currently marry non-Hispanic whites.)

Whether you believe race is fundamentally about biology or sociology, intermarriage erases racial boundary lines. It's the final proof that ethnic homogeneity is not fixed, but changes depending on what people do.
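The compounding arithmetic behind that claim is easy to sketch. Here's a toy calculation - my own illustration, not drawn from the cited studies - of how fast the share of people with mixed ancestry grows when a constant fraction of each generation marries out:

```python
# Toy model: if a constant share of each generation's marriages cross the
# group boundary, what fraction of the population ends up with mixed
# ancestry? Children of any marriage involving a mixed or out-group
# partner count as mixed, so the unmixed share shrinks geometrically.
def mixed_share(outmarriage_rate, generations):
    mixed = 0.0
    for _ in range(generations):
        unmixed = (1 - mixed) * (1 - outmarriage_rate)
        mixed = 1 - unmixed
    return mixed

for rate in (0.1, 0.4):
    print(rate, [round(mixed_share(rate, g), 2) for g in (1, 3, 5)])
```

At a 40 percent outmarriage rate - roughly the Italian American figure from the early 1960s - the unmixed share falls below a quarter within three generations, which is why such categories become impossible to police.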


An alternate theory: Trust causes homogeneity

Once you realize that homogeneity can be produced, through redefinition and through intermarriage, an alternate theory presents itself for why there might be a correlation between homogeneity and trust: Places with high trust become more homogeneous over time. 

This could happen genetically. When people associate freely and don't harbor intergroup suspicions and hatreds, they probably tend to pair off and intermarry a lot more. Over time, the prevalence of trust leads to a genetically homogeneous group.

This could also happen socially. When people of disparate groups are bound together for a common purpose - fighting a war against a neighboring country, for example - the increased feeling of solidarity and commonality might cause them to start to consider themselves as one single race. 

So what produces trust? Perhaps another big, nebulous thing: institutions. Research shows that when organizations like the military, colleges, and public schools put people in close contact and make them cooperate, they start to trust people of other ethnic groups more. For example, here's the abstract of a 2006 American Economic Review paper called "Empathy or Antipathy? The Impact of Diversity":
Mixing across racial and ethnic lines could spur understanding or inflame tensions between groups. We find that white students at a large state university randomly assigned African American roommates in their first year were more likely to endorse affirmative action and view a diverse student body as essential for a high-quality education. They were also more likely to say they have more personal contact with, and interact more comfortably with, members of minority groups. Although sample sizes are too small to provide definitive evidence, these results suggest students become more empathetic with the social groups to which their roommates belong.
And here's the abstract from a very recent paper called "Trust, Ethnic Diversity, and Personal Contact: Experimental Field Evidence":
We combine a lab and a field experiment in the Norwegian Armed Forces to study how close personal contact with minorities affect in-group and outgroup trust. We randomly assign majority soldiers to rooms with or without ethnic minorities and use an incentivized trust game to measure trust. First, we show that close personal contact with minorities increases trust. Second, we replicate the result that individuals coming from areas with a high share of immigrants trust minorities less. Finally, the negative relationship between the share of minorities and out-group trust is reversed for soldiers who are randomly assigned to interact closely with minority soldiers. Hence, our study shows that social integration involving personal contact can reduce negative effects of ethnic diversity on trust.
Crucially, unlike most of the papers about diversity and trust cited above, these studies are randomized experiments.

Because they're randomized experiments, they're inevitably small-scale. These are moderate, short-run effects - to really know whether institutions like schools and the military can erase racial boundaries over many decades is beyond the scope of controlled experimentation. So these papers are really just suggestive.

But the notion seems to fit with American history. The Civil War seemed to put an end to the eruption of anti-Catholic sentiment, allowing Irish and South German Americans to integrate both socially and genetically into the emerging white race. And after World War 2, the outmarriage of Italian, Jewish, and Polish Americans accelerated. In both cases, the experience of being part of a nation at arms, cooperating side by side in a desperate, titanic struggle, probably erased a lot of the suspicions, prejudices, etc. that had persisted before the wars.

Anyway, this alternate theory can potentially explain the correlation between trust and homogeneity - places with institutions that create high trust levels tend to become more homogeneous over time. 


An alternate theory: "War + Proximity = Diversity"

But what about all those wars? Most of the time when there's a really big war, there's at least some modest ethnic difference between the combatants - British vs. French, German vs. Russian, Hutu vs. Tutsi, Japanese vs. Korean. If small differences like those could cause such incredible bloodshed, think about what calamities could be caused by the difference between groups as distinct as Africans and Europeans!

In fact, I think the historical record gives us a clue as to why this idea is wrong. The bloodiest wars in history are mostly either civil wars in China or interstate wars in Europe or East Asia. This was true even when Europe and Japan had global reach: they chose to kill people who looked a lot like them, rather than people who looked very different. Genocides between extremely distinct groups - for example, the Belgian genocide in the Congo - are the exception, not the rule. And plenty of mass killings happen among people who don't recognize any ethnic differences between the sides at all - the Khmer Rouge, Mao Zedong's China, the Spanish Civil War, etc. 

So we have big genetic differences not even being recognized in some parts of the world, and tiny, possibly undetectable genetic differences being the basis for genocide in other parts of the world. I'd say the thesis that "Diversity + Proximity = War" is, at the very least, suspiciously incomplete.

A better general theory, I think, is that most competition happens between groups of people that are pretty similar. Similar people have similar interests and desires, which naturally leads them to compete. But when people fight en masse, they need ways to organize themselves in order to motivate soldiers to kill others who look and act like them. Thus, they exaggerate any small differences they can find. "You're German, superior to those inferior Slavs; exterminate them!" Etc.

Under this theory, the "#whitegenocide" that some alt-right people fear - a term they use for race mixing - is actually the exact opposite of real genocide. Under this theory, race mixing happens when high social trust causes group differences to stop mattering, while genocide happens when low social trust causes previously insignificant group differences to start mattering.

To sum up, instead of "Diversity + Proximity = War", we might theorize that "War + Proximity = Diversity" - wars give people a reason to emphasize and magnify small differences. 

It's why you don't often see humans fighting emus.


A compromise theory

Given the evidence on both sides, and the plausibility of both the pro-homogeneity and the pro-diversity theories, it seems at least somewhat likely to me that the real world features a combination of the two. Here's how the compromise theory goes: At first, when an influx of new people comes in, there's a natural reaction of distrust, and existing communities get fractured. However, as time goes on, the previous inhabitants and the newcomers get used to each other. This process is accelerated by integrating institutions like public schools, colleges, and the military, and is complete once intermarriage is widespread. However, social conflict, especially political conflict, can keep this integration from happening, causing groups not to mix and people to continue to emphasize and maintain their differences. 

So the compromise theory says: In the short run, increased diversity causes decreased trust; in the long run, high trust causes increased homogeneity. 

Or, as I once put it on Twitter: "One different-looking person in your neighborhood is a guest. 100 are an invasion. 1000 are just the neighbors."

Update: I should mention that this compromise theory is basically Robert Putnam's conclusion:
[E]vidence from the US suggests that in ethnically diverse neighbourhoods residents of all races tend to ‘hunker down’. Trust (even of one's own race) is lower, altruism and community cooperation rarer, friends fewer. In the long run, however, successful immigrant societies have overcome such fragmentation by creating new, cross-cutting forms of social solidarity and more encompassing identities. Illustrations of becoming comfortable with diversity are drawn from the US military, religious institutions, and earlier waves of American immigration.
If this theory is right, America's success depends on having institutions strong enough to integrate Asians and Hispanics - the two most recently arrived big groups - with the existing groups of whites and blacks. In other words, this theory says that homogeneity isn't the means, it's the goal. 

Who knows; one day even white and black Americans might consider themselves part of the same ethnic group.


The dream of a white nation

But what about the people who don't want that? What about the alt-right folks and fellow-travelers who have no intention of waiting around for America's various races to all decide they're on the same team? Many want to take the shortcut to a homogeneous society - they want to live in a place where only white people are allowed. They want the dream of a half-remembered, half-imagined 1950s Southern California - the clean streets, the nice lawns, the dependable white neighbors who tip their hat and say hi to you as they stroll down the lane. And dammit, they want it now.

Well, the short answer is: I don't know how they're going to get it. It's not going to be possible for them to reimplement racial segregation, or kick all the Asians and Hispanics out of the country. Any serious, large-scale attempt to do that would mean civil war and the collapse of America, which I guarantee would not lead to a nice pleasant racially homogeneous peaceful life for anyone anytime soon.

And what are the other options for creating Whitopia? Secede? Not gonna work. You can go to small towns and gated communities, but the jobs won't follow you, and by the law of the land, any nonwhite person who wants to can buy the house next to you. So what other options are there? Move to Argentina, I guess. Or maybe New Zealand.

It's this paucity of options, I think, that has so many alt-right people so freaked out. For people who want a white homogeneous society, there's pretty much just nowhere to go. Until recently there was Europe, but with the rise of substantial nonwhite minorities there, and with most European leaders still committed to allowing large-scale nonwhite immigration, that avenue to Whitopia - or Kekistan, as it were - seems closed down. To those who dream of white homogeneity, it must seem like they're being hounded to the ends of the earth, denied any place to call their home, told everywhere by their leaders to integrate with the nonwhite people next door. No wonder they're going crazy on Twitter.

I wish it were different. I wish there were some island nation where alt-right folks could go, and establish their all-white nation-state. It doesn't seem likely to happen, but if it could, I'd say: More power to you.

But the ironic thing is, suppose they did get their Kekistan. Suppose New Zealand decided to become an all-white country (like it did in 1920), and twenty million alt-right types from around the world moved there (giving it about a quarter the population density of Japan). I think it just wouldn't work.

I think people would move there, and find that homogeneity doesn't automatically produce trust and goodwill and social peace. They would find that their population was a highly selected set - it would be made up of people who couldn't get along with the people in their homelands. And they would find that the real thing keeping most of them from getting along with their neighbors wasn't ethnic diversity - it was their own personalities. 

Eventually, social strife would return. Neighbors would feud over land and resources and power and community status. Gunfights would erupt. Killdozers would be unleashed. The government would lurch from crisis to crisis. Protectionist economic policies would be tried and would fail. The economy would languish. Some people would emigrate, back to the hellscapes of diversity. 

And those who remained would cling to the theory that "Diversity + Proximity = War". No one likes to give up their cherished social theories, especially if it's the theory that the country was founded on. Just as with Hutus and Tutsis, the inhabitants of Kekistan would "discover" ethnic differences that had been there all along. Suddenly they wouldn't be just white people anymore, but Russian-Kekistanis, Italian-Kekistanis, Hungarian-Kekistanis. Strife and distrust would return, and the new country would undergo decades, if not centuries, of brutal upheaval, fragmentation, clan warfare, unstable military rule, competing aristocracies, atrocities, and poverty.

I didn't just make that prediction up, by the way. That's pretty much just the history of Japan.

So although there's certainly a case to be made for homogeneity, I'd say the case is a lot weaker and more uncertain than its proponents believe. And more importantly, there's no path for how to get there - at least, not for a country like America. Except for a few small towns scattered throughout the country, the dream of an all-white utopia is likely to remain just that.