You Must Have a Diagram

(Epistemic status: I may have some details or formulations wrong but I'm pretty confident in the overall general thesis.)
(See also: Framing for Light Instead of Heat)
(Writing status: I wrote this originally sometime in 2015, despite only publishing it now; I've put an addendum at the bottom indicating one thing I'd change today.)

I've discussed my problems with the social justice movement in the past but I'm not sure I've ever really explained the core of my disagreement with them. I mean, it's easy to point out how much of the social justice movement is nothing but pure standard ingroup/outgroup dynamics that has lost touch with reality -- but, if that were my whole issue, that would certainly explain why I don't call myself one of them, but not why I say I disagree with them. To disagree with someone, there has to actually be something to disagree with! But the fact is that when you look at examples of social justice that are not that, that are better than that -- for instance, the Black Lives Matter movement, or Ozy Frantz -- I find that I still disagree. Their arguments, simply put, are flawed. Why is that?

I think there's a basic error in thought that the SJers are making, one made even by the better ones. Or maybe not so much one particular error, one particular way of doing things wrong, so much as a general failure to do things right. And it's that way of doing things right that I want to try to describe here. I mean -- lists of fallacies can be helpful sometimes, sure, but they're no substitute for actually learning logic. That said, I'm not sure I totally know how to describe this way of doing things right that I'm talking about here. So what I'm going to do is, I'm going to state some maxims. I have arbitrarily broken this down into 5 "major" maxims, and will occasionally state other "minor" maxims which are in bold. Do be aware that there's any number of other ways one could describe this and that there will be a fair bit of redundancy. In any case, my hope is that, even if I don't quite know how exactly to describe what I'm calling the right way of thinking about things, the maxims, explanations, and examples here will be enough to point to it, so that those who read this can see the rest for themselves.

It's really this error in thought that I want to focus on, rather than object-level disagreements. That said, I will use object-level disagreements as examples of these errors in thought. Note that one example might illustrate multiple errors; or, due to lack of clarity, it may be difficult to pin down exactly which error it is -- in many cases, I find, SJers make statements that are underspecified, with one possible interpretation having one error, another possible interpretation having a different error, and no interpretation being quite correct. So where below I say "SJers frequently say X", X may just be one reading of what they say; in reality they may have intended a different reading with a different error, or just not been clear on the matter. I will try to address these multiple possible interpretations where I can, but can't promise I will catch everything.

(Note that none of this is to let non-SJers off the hook regarding these errors; I see several of these errors fairly commonly from others as well. But SJ arguments seem to fall into them with a distinct constancy.)

Finally, I'd like to make one assumption of mine explicit before we begin: I am, throughout, going to assume moral individualism, by which I mean that all things that are morally fundamental are things that make sense at the level of individuals, not groups. (We can make an exception for the group consisting of literally everyone -- since that's the one other partition invariant under arbitrary permutations -- but this exception won't be relevant here so I'll ignore it.) So for instance, if you were to say that a particular group has become worse off, but every individual in that group has become better off, and nobody has entered or left, that would be in contradiction to this assumption. I realize many SJers don't accept moral individualism, but I'm not here to argue that, so I'm just going to lay this out. (Note that if you are a utilitarian you are necessarily a moral individualist, if you're consistent!) For the most part this won't be relevant anyway.

Anyway, with that out of the way, let's begin.

#1: You must have a diagram

As you may have noticed, I chose to title the whole essay after this one, because without this one, none of the others really make sense. You must have a diagram.

What sort of diagram am I talking about here? I mean a causal diagram, a diagram that describes the causal structure of whatever you're talking about. Now I don't necessarily mean a full-on formal causal graph like you'd find in Judea Pearl. I'm going to be way more informal than that. For instance I'm going to allow loops -- in an actual proper causal graph, if you've got a feedback loop going between A and B, you have to represent that as A at some time influencing B at a later time influencing A at a still later time (a distinct node from the original A). I say, let's be informal and just draw edges going both ways between A and B, you know? I guess basically I'm talking about something like a causal loop diagram, but like I said I want to be fairly informal here. Regardless, since we are talking about causation, edges must be directional. You can have an edge from A to B and an edge from B to A, but the process by which A influences B is not the same as the process by which B influences A, so we'd better have distinct edges for those.
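To make the directedness concrete, here is a minimal sketch (my own illustration, with invented nodes and mechanism labels, not anything from the formal causal-inference literature) of how such an informal diagram might be represented -- with the edge from A to B stored separately from the edge from B to A:

```python
# A minimal sketch of an informal causal diagram: a set of directed,
# labeled edges. Loops are allowed, but each direction is its own edge,
# because the mechanism by which A influences B is not the mechanism
# by which B influences A. (Nodes and labels are invented examples.)

causal_diagram = {
    ("poverty", "poor health"): "lack of money restricts access to care",
    ("poor health", "poverty"): "illness reduces ability to earn",
}

def direct_causes(node, diagram):
    """Return the direct causes of a node, with their mechanisms."""
    return {src: mechanism
            for (src, dst), mechanism in diagram.items()
            if dst == node}

print(direct_causes("poverty", causal_diagram))
# {'poor health': 'illness reduces ability to earn'}
```

The particular representation doesn't matter; what matters is that every edge has a direction and a mechanism attached. A bare undirected web of "associations" couldn't even be written down this way.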

If you do not have a diagram, then you simply do not have a real model of the phenomenon you're talking about. Now I'm not requiring that you draw the diagram explicitly -- just that the things that you're saying, that you're thinking, should be implicitly representable by one, you know? When I read something that makes sense, I can see the implicit diagram, even if it hasn't been explicitly drawn. When I read things by SJers, frequently all I see is a gigantic bidirectional web of association. That's no good. Association is no substitute for causation. And causation is directional.

Let's say a bit more here. It's important that the nodes in your diagram be things that make sense as individual nodes, not complex processes or other things that have relevant internal structure. (I'll expand on this point a fair bit in the next section.) For instance, it's probably a mistake to include in your diagram a node indicating how much "power" a particular person or group has. I mean, what does that even mean? Power isn't just one thing, and different sorts of power let one do different things. Power, simply put, has internal structure, and that internal structure is almost always relevant. Obviously it's OK to include nodes that have internal structure when that internal structure can be safely abstracted away, but I'm not sure I've ever seen a case where that's true of such a thing as "power".

Another example of a bad node is "capitalism" -- here I'm using that word in the leftist sense. Capitalism, as the leftists conceptualize it -- assuming I'm understanding them correctly, anyway -- is a complex process, made of a bunch of reinforcing loops. So, whatever cause is acting on the phenomenon you're discussing, it's not "capitalism", it's some component of such. Certainly the other parts will bear on that component, yes, but you're making a mistake if you try to take such loops and abstract them away into individual nodes. Especially if the thing you're discussing, that you're claiming is "caused by capitalism", is part of those loops! Then really you should say it is "part of capitalism" -- you have mistaken a relation of composition for one of causation (more on that theme, sort of, in the next section). And then the abstraction really makes no sense and you have to open up the box and look inside.

One final note. I'm not saying your diagram has to be neat and sparse or anything. Social phenomena are infamous for having a high causal density, as they say. But the diagram should, at least implicitly, exist; and it should be causal and directional rather than associational and bidirectional. Let's move on.

#2: Your diagram must obey locality

Obviously, by that I don't just mean that you can't have faster-than-light signalling in your diagram; I don't think such a rule would put me in disagreement with anyone. I mean, maybe there's someone out there claiming that structural oppression is transmitted by tachyons, but if so I've been fortunate enough never to encounter them.

Instead I'm talking about a more informal notion of locality. When a thing happens, when a cause produces an effect, it must happen via some mechanism, and that implies a sort of locality. Maybe "locality" is the wrong word here and I should instead say, you must remember that each causal arrow must happen via some mechanism. I don't know. Put it however you like. The point is that when I can extract an implicit diagram from what SJers say, often they include these effects that are, well, nonlocal. One thing happens over here and it affects another thing happening over there with no plausible mechanism as to how causation is transmitted from here to there. And that's a problem. (Note I'm less talking about physical space here than "social space". The idea still applies.)

Let's consider an example. People frequently posit all sorts of things as caused by inequality, and this often makes no sense. The problem is that inequality within a large group, such as a country, is a global measure, a group-based one, not a local one. How can a global measure affect any individual person? A change in the wealth of one person in California, which will have some effect on inequality, doesn't just automatically affect some random person in New York; there has to be some chain of causation by which this occurs. Let's face it -- inequality isn't a causal node, it's a summary statistic. Summaries, including statistics, aren't causal nodes. How does a summary statistic affect the world? I mean, OK, sometimes people might look at it and act on that information, but that's the exception, not the rule.

On the other hand, it's a lot clearer how poverty can have effects on the world, because poverty is localizable. (Some people, imagining a zero-sum world, conflate inequality with poverty. This is a mistake.) Poverty is a property of an individual; it makes sense to ask whether any given individual is poor, but not to ask whether they're unequal (to what?). And it's clear how any one person's poverty can affect that same person. No mysterious action-at-a-distance going on there. But to say a summary statistic affects the world, is basically the ecological fallacy.

And yes, it's even OK to use overall poverty level -- a summary statistic -- as a node in your causal graph! Doesn't that violate what I just said? Well, let's examine this more closely. What's even going on when we make such a graph? Obviously, by drawing such a diagram, or just describing such a process when discussing a social problem, the implication is that the process holds with some level of generality. Obviously one can draw causal diagrams that just apply to particular, non-repeated situations, but that's not the sort of thing we're discussing here. We're talking about diagrams that take a phenomenon that occurs over and over and abstract to just the parts that are common between these occurrences. Well, if we are making such a diagram, that abstracts, that aggregates, in such a way, then certainly the nodes in it will represent both individual instances, and the aggregation of those instances. So yes, overall poverty level is a summary statistic. But it's one that localizes, that's a direct aggregate of something local, something individual. Inequality, by contrast, is necessarily global, and only makes sense when applied to groups, not individuals; it's not an aggregate, it involves subtraction! The use of subtraction should be a big red flag. It doesn't localize; it's only a summary statistic, and doesn't belong in a causal diagram (unless the thing it causes is "people notice the level of inequality and react to such").[1]
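To put the contrast in symbols (my own formulation, using two standard textbook measures): given individual incomes x_1 through x_n, a poverty line z, and mean income x-bar,

```latex
% Poverty headcount ratio: a direct aggregate of individual-level facts;
% each term is about exactly one person.
P = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}[x_i < z]

% Gini coefficient: built from pairwise subtractions across the whole
% group, including pairs of people who never interact.
G = \frac{1}{2 n^2 \bar{x}} \sum_{i=1}^{n} \sum_{j=1}^{n} \lvert x_i - x_j \rvert
```

Each term of P is a fact about a single individual, so P localizes; each term of G is a subtraction between two people, most of whom have nothing to do with each other -- exactly the red flag just described.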

Speaking of summary statistics, another common mistake I see is mistaking logical relations for causal ones. Logical relations are not causal. (Really, this probably should go under #1, but if you read on I think you'll be able to see why I put this here.) Now it's true that often logically-related quantities or objects are related in a way that resembles causation; you shouldn't include such things in your causal graph, because they're not causation, but using the language of causation is probably not too harmful. For instance, if I say that the death rate has increased because the murder rate has increased, strictly speaking, that is a logical relation, not a causal one, but you probably won't get into trouble if you think that way. However, if you find yourself saying that the murder rate has increased because the death rate has increased, something has gone very wrong. The thing is, SJers make statements analogous to the latter all the time!
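Spelled out as arithmetic (my restatement of the example), the relation is one of definitional composition:

```latex
% Deaths partition by cause, so the rates are related by definition:
\text{death rate} = \text{murder rate} + \text{non-murder death rate}
```

Holding the second term fixed, a rise in the murder rate entails a rise in the death rate purely by definition; "inferring" in the other direction, from the aggregate to one particular component, is where things go badly wrong.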

The easiest way to see this is to consider how they talk about, say, racism. Now it's important to understand that the way SJers use the term "racism" is very different from the way most people use that word. Exactly what most people mean by "racism" varies, and I don't want to go too much into it, though Scott Alexander discusses the matter some here, in case you are somehow only familiar with the SJ definition. (I think Scott's piece misses a fair bit, actually, but that's irrelevant to my point here.) In any case, SJers mean something quite different by the word "racism", using it not to refer to something that lives in people's thoughts or feelings or actions, but to refer to a pattern, a result, a result where, on the whole, people of certain races tend to end up better off than people of other races.

But, by this meaning of "racism", racism can never be the cause of anything (as also pointed out by Scott Alexander in the piece just linked), except of course insofar as people observe this overall result and choose to respond to it. But it's not a thing in the world, it's not a causal node; it's a summary. It is, barring the one exception I just noted, only a result. And, of course, it is purely global; it cannot be localized. Simply put, it doesn't belong in a causal graph at all (well, barring the exception I made). And yet SJers will frequently attribute racism as a cause of various things (that do not fall under the exception I stated)! And yet, if we use their definition, such an attribution is exactly analogous to the "an increase in the death rate causes an increase in the murder rate" example I gave above -- firstly, the relation is logical, one of composition, not causation; and secondly, to the extent that the relation resembles causation, the causation is in the wrong direction! (By contrast, racism in the usual sense -- though what that usual sense is probably requires further breaking down, as in truth there may be multiple -- easily works as a causal node, and thus to my mind is a much more useful concept. Admittedly, if you reject moral individualism, as many SJers do, then you might say that "racism" in the SJ sense has direct moral relevance; however this does not work if you assume moral individualism. It is also worth noting that there are many things that are bad and somehow race-related, or bad and somehow gender-related, that are not "racism" or "sexism" in the usual sense, and SJers would point out that an advantage of their terminology is that these are included. I agree that we need a way to talk about these things -- or some of these things, anyway, see below -- but expanding existing terms in a confusing manner is not the way to do it.) And of course all this applies, mutatis mutandis, to other things they attribute as a cause in cases where it makes no sense, including inequality.

That covers the maxims relating to the construction of the diagram itself. But there's still a fair bit to say about how one should apply such things, starting with...

#3: You must intervene specifically on the problems

That sounds vacuous, but I mean something non-vacuous by it. OK, let's make the setup a bit more explicit here. Above we were merely supposing you had some phenomenon under discussion, and you wanted to draw a diagram to understand it. Now there's not merely something going on, but something going on that's a problem, or multiple such somethings, and you want to do something about it. (It's worth discussing what I mean by "problem" here. I don't necessarily mean something that's directly morally bad in and of itself; I'm also including, like, processes going wrong, that is to say, errors. I'm assuming we want to live in a world where processes work correctly, producing the correct output for their input.) OK. Well, to have any effect on any of these problems, obviously you must intervene somewhere upstream of it -- either on it directly, or somewhere prior. Doesn't sound like something that needs saying, does it? And yet I frequently encounter SJers advocating for solutions that violate this basic constraint.

Admittedly, some of these may make sense if we assume that they consider different things to be problems than I do (which does appear to at least somewhat be the case). I'll save a detailed example for later. But for now I think it should suffice to say that I frequently see SJers attempting to fix -- or advocating for fixing -- a problem downstream of where it has occurred. Roughly speaking, advocating for workarounds, and not realizing that these are not, in fact, solutions. Like the cautionary tale of the programmer who finds that on a particular input their program outputs an answer that is 3 too low, and rather than attempting to understand why and fix the problem, modifies their program by having it always add 3 before outputting the result. This sort of thing is, I'm afraid, not problem-solving.
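In code, the cautionary tale looks something like this (a deliberately silly sketch of my own):

```python
# The broken process: it accidentally skips the first item.
def compute_total(items):
    return sum(items[1:])  # bug: should be sum(items)

# The workaround: on the one tested input the answer came out 3 too low,
# so the "fix" patches the output instead of the computation.
def compute_total_patched(items):
    return compute_total(items) + 3

print(compute_total_patched([3, 5, 7]))   # 15 -- right, but only by luck
print(compute_total_patched([10, 5, 7]))  # 15 -- should be 22; still broken
```

The real fix is to repair the computation itself; the patch not only leaves the process broken, it hides the evidence that anything is broken -- which is exactly the indicator-masking problem discussed next.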

A particularly harmful form of this is when one masks indicators. If X is a problem, and X causes Y, then Y may serve as a useful indicator of X. Many of the proposals I see from SJers have the problem of attempting to "fix" Y -- i.e., to intervene on Y so that Y is in a state as though the problem X were not affecting it -- rather than fix X. Like trying to refuel your car by sticking the fuel gauge needle at "full". Not only does this not fix the problem, it reduces your ability to detect it.

Now, I don't want to be that fool who says "You don't treat a disease by treating its symptoms!" In fact in many cases treating the symptoms is important -- if your original root problem causes downstream problems, there are many reasons why one might have to or want to address those separately. (You still have to eventually get to the root cause, of course, but you don't necessarily need to prioritize it over the downstream problems; "eventually" can be a long time.) However one must distinguish between a downstream effect that is itself a problem, and a downstream effect that is not itself a problem, only an indicator. It is a mistake to try to "treat" the latter. Don't mask harmless indicators. (This is tangential, but I also want to caution that some indicators are not perfectly reliable -- they are useful red flags, but on seeing one, one must still investigate further to see whether there is a problem or not and if so what it is.)

But that isn't all. Obviously, to have an effect on the problem, you must intervene on the problem or somewhere upstream of it... but you see, I want to caution against the latter as well. Now, as I have said, typically there are multiple problems at once. And so typically if you have some problem, and you look upstream to see what causes it, you see, hey! These are also problems! And so they need to be addressed. But the thing is that any one thing has many causes, and so it will also happen that many of the causes of any given problem will not themselves be problems. And to intervene on these would generally be a mistake. When looking for the root cause of a problem to intervene on, look for the causes that are also problems themselves; don't intervene on causes that aren't problems. Intervening on these other causes would just screw things up (don't forget that these things probably have all sorts of other effects as well; beware side effects). But, again, I frequently see SJers advocate for solving problems by intervening on causes that aren't themselves problems. (We can make an exception here for when the causation from this upstream node to the problem is not something one can actually intervene on; then, absolutely, it makes sense to look upstream, or even just to consider the two things together as a single node, a single problem. But one should think about whether such a thing is actually the case before going straight to something upstream.)

The reasoning here seems to be, as best I can tell, that these things are associated, and therefore good targets. But that doesn't work. You can't just go by association; you have to go by causation. And, as mentioned, you can't ignore the question of which parts are problems and which parts are not, saying instead that because a thing is associated with a problem that it is itself a problem. Just as it's a workaround to fiddle with the output rather than fix the problem, it's also a workaround to fiddle with the input rather than fix the problem. Don't fiddle with the input, don't fiddle with the output, fix the broken process.

(There's an important exception here -- as has been noted, you can't fix problems downstream, but if someone's been hurt, it's a good thing to compensate them. But the appropriate way of doing this is money, not screwing up other things to favor them.)

One thing I've noticed is that if we roughly divide processes into those that introduce error, those that reduce error, those that amplify error, and those that merely propagate error, SJers will often tag as the problem processes that merely propagate error. But, most of the time you can't really do better than propagating error; correcting error is often impossible and, when possible, seriously difficult. As such I think it makes little sense to blame error-propagating processes for propagating error -- because most of the time, how could they not? -- rather than their earlier inputs for introducing the error. (As for things that amplify error... let's just leave those out for simplicity.) Now if the thing that is introducing the error is impossible to intervene on, then, yes, it may make more sense to focus on correcting error downstream rather than stopping it. The thing is that, like I said, most of the time this is impossible, and attempts to intervene downstream (i.e. work around the problem rather than fix it) will just result in the sort of problems I've described above -- not only have you not fixed the original error, you've now introduced more of your own. Basically, your efforts will not change the basic principle of GIGO (garbage in, garbage out).
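A minimal sketch (mine, with an invented toy example) of the introduce-versus-propagate distinction:

```python
# The upstream process introduces the error; the downstream process is
# perfectly correct and merely propagates it.

def sensor(true_value):
    return true_value + 5             # miscalibrated: introduces error

def report(measurement):
    return f"reading: {measurement}"  # correct; can only propagate error

print(report(sensor(10)))  # "reading: 15" -- garbage in, garbage out
```

Blaming report for the bad output is blaming a process for propagating an error it has no way to detect or correct; the fix belongs in sensor.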

But there's still more that can be said about this:

#4: Problems must be localizable

This is going to be closely related to the previous one; like I said, there's a fair amount of redundancy in the organization I've chosen. Anyway, I should note in advance that I actually mean something rather different by "localizable" in this section than I did in section #2; I hope you will forgive the double-use, but I do not think any of my uses of it here should be ambiguous. In section #2 I was talking about localizability with regard to the space all of this takes place in; physical space, social space. In this section, by contrast, I am talking about localizability within your causal graph.

What do I mean by that? Well, suppose you've identified a particular node in your graph as a problem, and you want to stop it. You don't really see any way to intervene on it directly, so you look at its causes; one particular causal arrow looks like it's a problem itself. Or at least it would be if that arrow was something you couldn't really decompose further. But you can. So you focus on that arrow for further study, you dig in and examine the mechanism by which this causation occurs, and you draw a refined causal graph, one that reflects the more fine-grained causal structure of that mechanism. On doing this, you see that among these parts, these sub-nodes and sub-arrows, not all of them are actually problems -- some are correct processes but with bad inputs, for instance, with the actual problem therefore being upstream -- and the ones that are problems are actually pretty different from one another, and shouldn't really be grouped together as the same problem.

Here's the thing: Having now done this, you can't really talk about the original problem you identified as the problem anymore. Because you've seen that actually, it's not the monolithic thing you thought it was, it's not the problem itself, rather it has components and some of those components are problems, but they're not even really of the same type. In short, having localized the problem, you can't unlocalize it. And you must localize the problem -- must attempt to break it down to find which parts are actually problems and which aren't.

Let's take an example. Let's consider the infamous gender wage gap. To recap: On initial inspection, it appears that there's a large gender wage gap; naïvely one would attribute this to wage discrimination. Once you make an apples-to-apples comparison, though, it largely goes away; not entirely, there could easily be some real wage discrimination there, but mostly what's going on seems to be a difference in terms of what jobs are being worked rather than how those jobs are being paid. For now we'll pretend, in order to make the argument clearer, that it goes away entirely when we look closer, although note as I said that this is not in fact the case and this assumption is purely to reduce irrelevant complications in the example.
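A toy illustration of how that can happen, with numbers invented purely for the example: suppose there are only two jobs, each paying men and women identically, but with different gender compositions.

```python
# Within each job, men and women are paid identically; the aggregate
# gap comes entirely from which jobs each group works.
# (All numbers are invented for illustration.)

jobs = [
    # (wage, men in job, women in job)
    (100, 80, 20),
    (60, 20, 80),
]

def mean_wage(group):  # group index: 1 = men, 2 = women
    total_pay = sum(job[0] * job[group] for job in jobs)
    headcount = sum(job[group] for job in jobs)
    return total_pay / headcount

print(mean_wage(1))  # men:   92.0
print(mean_wage(2))  # women: 68.0
```

In aggregate, women here earn about 26% less, yet a within-job comparison shows no pay difference at all; the entire gap is located in which jobs are worked, not in how any job pays.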

So: Given that pretty much none of this apparent difference exists in an apples-to-apples comparison, that pretty much none of it is due to wage discrimination -- in what sense is it a wage gap? The part where the problem lies, once you've localized things further, does not involve wages. And as I said above -- once you've localized, you can't unlocalize. So, in short, speaking about it as a "wage gap" is misleading. Now, SJers will say, "But you see, the wage gap comes about by means of the hiring gap!" This, to my mind, is a misleading way of talking about the problem. You must use language that reflects the structure of the problem. And, let's say moreover, the things you talk about should be causal processes, not associations. The way SJers talk about such things does not reflect the causal structure of the problem. They are giving a label to something that only initially appeared to be a direct causal link, but in fact turned out to possibly just be a correlation. So firstly, this confuses most people, because they naturally misinterpret the SJers as talking about something else, something which actually is a causal link (because that's what's useful to talk about, causation, not association); and so it misleads them into thinking they should be thinking about wage discrimination, because that's how pretty much any sensible person would interpret the notion of a wage gap that needs fighting. Secondly, it's just a useless way of talking, a useless way of breaking down the problem. Like I said, once you've localized, you can't unlocalize; once you've observed that the overall process has parts that might not be problems -- and it's not entirely clear to what extent the hiring difference is a problem, because it may be due to hiring discrimination (obviously a problem) or it may be due to differences in the populations (which could itself be caused by sexism further upstream, or could be caused by other things; more on this later) -- or consists of problems that are not of a kind, you can't just talk about this process as a singular problem anymore. (Note that, as with the "racism" example above, the SJers' position here makes a bit more sense if we reject moral individualism, as then the "wage gap" in the sense they mean it might be directly morally relevant even if no sexism in the ordinary sense is involved. But if we accept it, then it's not, and it becomes just a useless concept.)

Finally, there's one more maxim I'd like to state:

#5: Don't be needlessly specific

I already said above to use language that reflects the structure of the problem -- and that covers a lot of this. Don't be needlessly specific in your language. However I actually mean something more here -- in addition to avoiding needless specificity in how one talks about the problem, one must also avoid needless specificity in one's modeling of the problem in the first place. (Unsurprisingly, it's frequently unclear which of these two SJers are failing at.)

This is the one that... well, it seems seriously rare that SJers' arguments pass the first four tests -- practically unheard of, really -- but when they do, they fail here. Even when they've got a detailed model of the problem, that they've localized, with plans to intervene on the parts that are actually problems, they're still needlessly specific, or else are talking about the problem in a way that doesn't reflect its structure.

Black Lives Matter is my example here. There's a lot to like about Black Lives Matter; they've pointed out an actual problem and made good demands, but ultimately I disagree with the things they say, and it's because the things they say don't reflect the structure of the problem. Now they're dealing with the problem of the police killing people, which they discuss as a racial problem; their whole thesis, after all, is that this happens because police officers consider black people's lives as worthless (hence the name). But when the police kill someone, that's a problem regardless of the person's race. And the interventions that BLM has called for and in some cases gotten implemented (e.g. body cameras) are, again, race-neutral. So (and I'm pretty sure I'm quoting a Slate Star Codex post in saying this, but I can't find it) if the problem doesn't involve race... and the solution doesn't involve race... then in what sense is this about racism?

Obviously the SJ answer to this is that it's about racism because it happens more often to black people. But once again we have to ask the question, is that connection causal and localized? (Here I mean "localized" in the sense of the previous section, not in the sense of section 2.) If not, it's simply not relevant, it's just an association. Now this is not to say there cannot also be a racism problem occurring! It is very, very believable that police officers are racist (I mean this in the usual sense, not the SJ sense) and are biased in favor of shooting black people. That's a very believable claim! I mean, I sure believe it; I'd say we have ample evidence that it's true. So let's go ahead and assume it. The thing is that then we have to ask the question, which problem are we talking about, which problem are we trying to solve? The summary execution problem, or the bias problem? If it's the latter, of course we have to talk about race. But if it's the former, doing so doesn't make sense; focusing on one race is just being needlessly specific. But that's exactly what BLM does -- focusing on one problem while using rhetoric appropriate only to the other; talking about a problem of "police killing black people" but calling for solutions that make sense more generally, for "police killing people".

There is something I've left out, though; despite what I just said, there actually is a way race could be directly relevant even when dealing with the problem of police killings. Well, OK -- one way it could be directly relevant is if we accept the idea, occasionally espoused by SJers, that police killings are morally worse when they happen to black people than to white people, due to the former constituting a marginalized group. Once again, I can't accept that because it requires us to reject moral individualism. But there is another way it could be relevant. And that's if there were two separate causal pathways. That is to say, if there were one reason why police killed white people, and a different reason why police killed black people -- if there were different causal pathways involved, that had to be addressed separately -- then, absolutely, it would make sense. Of course, as I just said, you'd then end up with a solution composed of two separate interventions; so while this would meet the "problem doesn't involve race" criterion above, it wouldn't meet the "solution doesn't involve race" criterion -- the solution would indeed involve race. So evidently that's not the situation BLM thinks we're in, since its solutions don't.

To put it another way, "don't be needlessly specific" means "don't split categories without reasons". Your categories should reflect the structure of the problem -- that means the causal structure and the moral structure. Unless we believe the hypothesis outlined in the previous paragraph, which BLM doesn't seem to, splitting "police killing people" into "police killing black people" and "police killing white people" is failing to respect the causal structure of the problem (and if we accept moral individualism, it's also failing to respect the moral structure). Don't split a category based on things that are neither causally nor morally relevant; things that could occupy the same spot in a causal diagram, and are bad for the same reason, must be grouped.

Further examples

At this point I think I've said all I intend to in terms of general principles; I am hoping I have managed to convey at least some of what I mean by the right way of doing things as opposed to the way SJers do things. But, I've got two more examples I want to use to illustrate matters (these may also illustrate principles other than the ones above).

The first one is perhaps fairly silly but representative of a general mistake -- one covered by the above, to be clear; this example is going to be fairly similar to that in the previous section -- you can find any number of other issues on which SJers make exactly the same mistake. The example we are going to consider is "manspreading"; I pick this one because it's such a pure example of the error.

Seeing as I just discussed something similar it should probably be fairly obvious what I have to say about this. The problem of people taking up too much room on public transportation by means of spreading their legs into the adjacent seats despite there being little room has nothing to do with the sex or gender of the person doing so; the extent to which it is a problem is completely independent of that. And though it has been held up as an example of sexism, there's really no credible way it can be called that, not even using the expansive SJ definition of that term, unless we are to believe that people do this disproportionately often when the people who are sitting next to them or need to sit down are female (something I've never even seen claimed). (OK, I suppose it would fit within the SJers' notion of sexism if we were to say that women were less comfortable asking someone to stop doing that so they could sit. But A. once again I've never so much as seen this claimed, and B. we shouldn't be using the SJers' notion of sexism anyhow; as I've already explained above, it's of neither moral nor causal relevance.) So why is it talked about in gendered terms? Well, you see, men do it more often, and to the SJers, that association is sufficient reason. Unsurprisingly, I say it's not.

Really this is all already covered under maxim #5. There's no reason you need two separate nodes for when a man does it versus when a woman does it; just have one node, and if men do it more often (which we can suppose is a causal relation -- although probably one with internal structure that needs to be elaborated) then you can draw an additional arrow there, that's all. Of course in drawing that arrow we are saying that there indeed may be a gender-related problem here! It's just that it is not the problem originally discussed; it is a different problem and needs to be distinguished and discussed separately. Once again, see maxim #4, and the discussion of the example under maxim #5. The fact that you have two related problems, of which one is gender-related, does not mean that the other is also gender-related in any way that is relevant to anything. Even then I do not think these two problems are directly related to one another; I think if you look at the internal structure of this second problem, you will find other intermediates, which are closer to the actual problem, and are better things to focus on. As always, getting a clear picture of the world requires noticing what things are not directly connected to one another; conflating distinct problems makes it harder to meaningfully intervene on either. The social justice habit of mashing everything together by association leads nowhere good.

Moving on, the second example is the issue of deliberately gender-balancing selective groups (such as companies), which SJers often advocate. Presumably this is because there is something wrong with an imbalance. But what, exactly? As I see it, we have something of a trilemma here. At an abstract level, the (non-exclusive) possibilities are:

  1. Imbalances are the result of something bad (like sexism/discrimination);
  2. Imbalances are bad in and of themselves; and
  3. Imbalances will cause other bad things (of which further discrimination is one possible example).

Of these, (2) I have to immediately reject as not being morally individualist; mind you, obviously I am rejecting it as a correct reason, not rejecting it as a reason SJers advance -- many of them do indeed advance that reason. But it's not a reason I can possibly accept as correct.

Then we have (1). Now I have to agree that such imbalances are often the result of sexism and discrimination! But, this actually doesn't justify deliberate balancing, for several reasons. Let's suppose, for the duration of the discussion of (1), that (1) is the only problem, and not (2) or (3). Then deliberate balancing -- that is to say, targeting an even ratio -- is not actually fixing the problem. For one thing, you don't actually know that the unbiased ratio would be even (more on why this is in a bit, but I want to be clear that one does not have to believe in any sort of major biologically-based cognitive differences between men and women for this to be the case). But let's suppose you did know that -- there's no guarantee that deliberately targeting that ratio would get you the correct result. Remember, the correct result is the one that gets you the best people! It could be that getting the best people would get you an even ratio, but lots of other results, that don't get you the best people, also get you an even ratio. If you know that the correct result has the property of having an even ratio, then an imbalance tells you something is wrong, but balance doesn't tell you that something is right. (Or, in short, remember Goodhart's law -- this here in particular is the causal form of it.) In other words this is exactly what I warned against above -- masking a neutral indicator. (Remember, we're assuming (2) and (3) are not in play for now.) Don't do that! There could still be all sorts of biases in your process but now you've killed your ability to detect them. Remember, bias isn't about whether the ratio of men to women is too high or too low -- which, really, if we're not assuming (2) or (3) is a meaningless question, the correct ratio is simply whatever getting the correct people gets you, not something you can know in advance -- it's about whether you're evaluating each individual correctly. Targeting a ratio doesn't help you do that better. If you see an imbalance and conclude that it's due to sexism and bias, the thing to do is, as always, fix the problem, not work around it. That means getting rid of steps where bias can be introduced or amplified, finding ways to blind things... hell, if you're scoring things numerically maybe you can even come up with a way to measure the bias and adjust it out (but remember that you can't do this just by looking at the end ratio; measuring bias requires making ceteris paribus comparisons). All these are possible ways to fix the error -- and hopefully there are more such ways -- rather than introducing a second error in order to attain not the correct result but something that happens to share one indicator with the correct result.
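Here's a small simulation sketch of my own illustrating just the indicator-masking point: with open selection, a skewed output ratio flags a biased scorer, but once a quota is imposed, fair and biased scorers produce identical ratios, so the indicator carries no information at all.

```python
import random
random.seed(0)

def select(pool, score, quota=False):
    """Pick 100 candidates: open top-100, or top-50 from each group."""
    if not quota:
        return sorted(pool, key=score, reverse=True)[:100]
    picked = []
    for g in ("A", "B"):
        members = [c for c in pool if c[0] == g]
        picked += sorted(members, key=score, reverse=True)[:50]
    return picked

# 1000 candidates: (group, true ability). An invented toy population.
pool = [(random.choice("AB"), random.gauss(0, 1)) for _ in range(1000)]

fair = lambda c: c[1]                                  # unbiased scorer
biased = lambda c: c[1] - (0.7 if c[0] == "B" else 0)  # penalizes group B

for quota in (False, True):
    for name, score in (("fair", fair), ("biased", biased)):
        chosen = select(pool, score, quota)
        b_share = sum(1 for c in chosen if c[0] == "B")
        print(f"quota={quota}, scorer={name}: group-B share = {b_share}/100")

# Typical result: without the quota, the fair scorer yields a roughly even
# split and the biased scorer a visibly skewed one; with the quota, both
# yield exactly 50/100 -- the ratio can no longer detect the bias.
```

None of which is to say the quota-balanced output is the correct one; it merely shares one indicator with the correct one, which is the point of the paragraph above.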

Of course, there is the matter of "pipeline issues". That is to say, it's possible that the people you're getting, the inputs, are affected by sexism upstream. It seems likely to me that this is a real phenomenon in many cases. But by this point it should be pretty clear why such pipeline issues do not justify balancing: You can't fix a problem by intervening downstream from it. If there is sexism upstream, you must fix the problem upstream, not attempt to compensate for it downstream, which will, as I've already described above, just introduce more errors while masking your indicators. The fact is, if there is sexism upstream, then the correct answer for your problem, the unbiased answer, will have an uneven ratio. Is that to say that in the absence of all sexism anywhere there would be an uneven ratio? No. But you cannot fix the upstream error downstream. If what you can intervene on is this downstream node, then the best you can do is to correctly execute your process in an unbiased fashion so that it is merely propagating error rather than introducing new errors. As I've said above, there is no avoiding GIGO; while exceptions may exist, it is seriously unlikely that you can turn your process into one that corrects errors rather than merely propagating them. Deliberate balancing isn't error-reducing, it's just introducing a different sort of error. Now, if you reject moral individualism and accept (2), so that the imbalance is itself an error, then I suppose balancing could be considered a correction, but as I've said above, I accept moral individualism and don't think that the imbalance should be considered any sort of error. The error is bias that causes you to judge people incorrectly. If there is a problem upstream, you must fix that separately, not attempt to work around it.

(As hinted at above, it is sometimes claimed that this upstream sexism, these pipeline problems, are in fact due to these imbalances. That introduces an element of (3) rather than being purely (1). Nonetheless these two aspects can be analyzed separately; if the aspect that falls under (3) justifies balancing, it does so independently of the aspect that falls under (1), which makes no contribution to such a justification.)

(It's also worth noting that there are other possible problems that the imbalance could be an indicator of, not just discrimination, but I won't go into those here. But I've basically already discussed this above.)

This leaves (3) as the remaining case. Now, if (3) is the case (and not (1) or (2)) then really if possible it would be better to intervene downstream, to intervene directly on the problem caused (see above about avoiding upstream interventions). However that's not necessarily possible; sometimes there's just no way to do that. If (3) is true, and downstream intervention is impossible or prohibitively difficult, then some amount of deliberate balancing could certainly be justified on that basis. And I've certainly seen SJers argue for balancing based on (3). What bothers me, though, is how the bulk of SJ argument for balancing that I see doesn't really distinguish between (1), (2), and (3), and just kind of conflates them all together -- as if they don't even have a diagram and haven't attempted to localize the problem! Such an argument is not one I can take seriously. Moreover, even if we accept (3), that by itself is still not enough to justify targeting an even ratio; there's much more work that needs to be done here. That is to say, even if we accept (3), it remains the case that deliberately targeting a ratio (or range of ratios) is introducing error. As such, whatever benefit we are getting by deliberately targeting that ratio or range, we are paying a cost for it. That means it is vital that we know how much cost we are paying for how much benefit, in order to determine just how much balancing we need to do (do we actually need to go for an even ratio, or would being in a certain range suffice?) and whether it's worth it. The SJ arguments I've seen for (3) don't attempt to do that, they just take it as read that (3) by itself justifies balancing and don't consider the costs, which again, is not an argument I can take seriously.

(Note that while these possibilities are not mutually exclusive, I have been considering them independently because they're basically independent of one another for my purposes here; if multiple are true, these facts don't really interact in any relevant way and can be considered separately.)

In short, it's possible that the SJers are right about this; but I haven't seen them provide sufficient argument for it, and the arguments I have seen seem to indicate various errors in thought. This is a definite problem within SJ.

Conclusion

Ozy Frantz likes to point out that one of the virtues of SJ is that it is consequentialist. This is true. But what I hope I've managed to convey through the above is that it's consequentialist in a really dumb way -- one that ignores the actual causal structure of things, one where direct causal links are considered no more important than other sorts of associations, where the same language is used to talk about both. If you want to actually act on the world and have the results resemble what you intend, such non-causal thinking fundamentally doesn't work. I hope that through the above maxims and examples I've managed to convey what I consider the right way to think about things to be.

Can you fix social justice so that it no longer has these problems? I mean, maybe. The thing is that to my mind, if you did that, you wouldn't have social justice anymore, you'd just have liberalism. Still, if anyone wants to try it, I would be glad to see the result. Because these are in fact errors, and whether you call yourself an SJer or not, fixing them can only be a good thing.

(Corrections, at the end of 2020: The one thing above that I think really needs correcting was that I had assumed that police discrimination against black people was essentially just due to them being biased, and did not realize how often they are in fact active white supremacists. That, I think, qualifies as the "separate causal pathway" I said was needed above to repair the BLM argument. That said, while this is necessary to repair their argument, I do not think it is sufficient; it still remains the case that, even assuming this, their rhetoric does not match the causal structure of the problem.)


[1] (OK, what I said above about inequality not being localizable isn't actually fully justified by the arguments I gave; some more explanation is really required. But this is nitpicking so I'm relegating it to this footnote. So anyway, I have seen phenomena posited as being the result of local inequality, such as differentials in social status between a particular person and others that person may encounter. This is not individual, but it is local -- individual implies local but the reverse doesn't always hold. So it's a perfectly good causal node; locality is required, that everything happens at the individual level is not. (And yes, it has a minus sign in it, but it's comparing two things that directly interact, so there's no problem there.) So doesn't that justify the use of the global inequality measure, because it's actually aggregating this? Not really. Firstly, while it's impossible to say what the correct way of aggregating such a thing would be without a particular application in mind, I suspect that what you would want here would be something like the total variation (only for social space rather than physical space), that will add up the magnitude of local differences, rather than allowing them to cancel out when they're oppositely signed (and that doesn't include irrelevant non-local differences). Secondly and more importantly, if this is what people actually mean all the time, they should say it. Most of the time all sorts of ills are just directly attributed to "inequality", construed in a global fashion, with no such chain of causation posited, and so I find it hard to believe that this, rather than the more nonsensical version, is what's meant.)
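In symbols, one guess (mine, and hypothetical -- not a standard measure) at what such a localizable aggregate might look like: let E be the set of pairs of people who actually interact, and s_i the social status of person i; then

```latex
% A hypothetical "local" aggregate of status differentials: sum the
% magnitudes of differences only over interacting pairs, so oppositely
% signed local differences cannot cancel, and non-interacting pairs
% (the irrelevant non-local differences) contribute nothing.
T = \sum_{(i,j) \in E} \lvert s_i - s_j \rvert
```

Contrast a conventional global inequality measure, which sums over all pairs whether or not they ever interact.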

