Scott Alexander on how often science is wrong
Another great post with loads of interesting comments.
Descartes Cartesian Manichean
Dichotomy
polarising
https://slatestarcodex.com/2014/07/02/how-common-are-science-failures/
And Hegel
Bayes Bayesian reasoning
Statistics
Update your priors
Bayesian statistics (/ˈbeɪziən/ BAY-zee-ən or /ˈbeɪʒən/ BAY-zhən)[1] is a theory in the field of statistics based on the Bayesian interpretation of probability, where probability expresses a degree of belief in an event. The degree of belief may be based on prior knowledge about the event, such as the results of previous experiments, or on personal beliefs about the event. This differs from a number of other interpretations of probability, such as the frequentist interpretation, which views probability as the limit of the relative frequency of an event after many trials.[2] More concretely, analysis in Bayesian methods codifies prior knowledge in the form of a prior distribution.
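The prior-to-posterior update described above can be sketched numerically. A minimal example, with made-up numbers (the 0.3 prior and the likelihoods are pure illustration):

```python
# Bayes' rule with made-up numbers: a prior degree of belief in a
# hypothesis H, updated after observing evidence E.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) via Bayes' rule."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Prior: 30% credence; the evidence is 4x likelier if H is true.
print(posterior(0.3, 0.8, 0.2))  # → ~0.632: "updating your prior"
```

Evidence four times likelier under H moves a 30% prior up to about 63%; weaker evidence would move it less. That asymmetric pull is the whole mechanism.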
Scott:
I’m reading through Marx: A Very Short Introduction, and one of its best features is its focus on Marx’s influence from Hegel. Hegel is really interesting.
I should rephrase that. Hegel is famously boring. His books are boring. His ideas are boring. He was even apparently a boring person – a recent biography of him was criticized on the grounds that “Hegel’s life was really not eventful enough to support a graceful biography of eight hundred pages”. But the phenomenon of Hegel is interesting. I don’t know of any other philosopher with such high variance.
My inability to be tempted by Hegel brings me to another point: what parts of my thought, right now, are Hegelian? Hegel seems like a classic case where we should read history of philosophy backwards – if almost all philosophical thought for fifty to a hundred years was Hegelian, modernity should be absolutely saturated with Hegelian ideas. That means I might get less gain from trying to read Hegel forward (to see if he has startling insights I didn’t know) and more gain from trying to read him backwards (to see if he is the source of things I assumed unquestioningly, and that negating them – as the contingent opinions of some German guy who thought 19th century Prussia was objectively perfect – would produce startling insights).
I don’t know enough Hegel to do a good job of this. One easy target might be the modern belief in human progress or linear history. Fukuyama (“The End of History”) writes:
For better or worse, much of Hegel’s historicism has become part of our contemporary intellectual baggage. The notion that mankind has progressed through a series of primitive stages of consciousness on his path to the present, and that these stages corresponded to concrete forms of social organization, such as tribal, slave-owning, theocratic, and finally democratic-egalitarian societies, has become inseparable from the modern understanding of man. Hegel was the first philosopher to speak the language of modern social science, insofar as man for him was the product of his concrete historical and social environment and not, as earlier natural right theorists would have it, a collection of more or less fixed “natural” attributes. The mastery and transformation of man’s natural environment through the application of science and technology was originally not a Marxist concept, but a Hegelian one. Unlike later historicists whose historical relativism degenerated into relativism tout court, however, Hegel believed that history culminated in an absolute moment — a moment in which a final, rational form of society and state became victorious.
But I find both more unexpected and more plausible David Chapman’s theories that Hegel inspired modern Westernized Buddhism, the hippie movement, and the New Age. He breaks his arguments into a bunch of posts that aren’t really collected in any organized way, but I would recommend An Improbable Re-Animation, Bad Ideas From Dead Germans, and Zen vs. The US Navy. Chapman’s argument isn’t very developed, but just raising the idea is enough to make its evidential support obvious. Hegel’s system was based around the principle that the key principle of the universe was a divine Mind trying to find itself, that everything was interrelated and purposeful, that as this Mind became more self-aware it would be reflected in increasing levels of consciousness among human beings culminating in an ideal utopian social arrangement. This is the daaaaaawning of the Age of Aquarius, the Age of Aquarius…
Now we turn the dial up to Hard Mode. Are there any cases of failure on a similar level within a scientific community in a country not actively being ruled by Stalin?
I can think of two: Freudian psychoanalysis and behaviorist psychology.
Freudian psychoanalysis needs no introduction. It dominated psychiatry – not at all a small field – from about 1930 to 1980. As far as anyone can tell, the entire gigantic edifice has no redeeming qualities. I mean, it correctly describes the existence of a subconscious, and it may have some insightful things to say on childhood trauma, but as far as a decent model of the brain or of psychological treatment goes, it was a giant mistake.
I got a little better idea just how big a mistake doing some research for the Anti-Reactionary FAQ. I wanted to see how homosexuals were viewed back in the 1950s and ran across two New York Times articles about them (1, 2). It’s really creepy to see them explaining how instead of holding on to folk beliefs about how homosexuals are normal people just like you or me, people need to start listening to the psychoanalytic experts, who know the real story behind why some people are homosexual. The interviews with the experts in the article are a little surreal.
Behaviorism in psychology was…well, this part will be controversial. A weak version is “psychologists should not study thoughts or emotions because these are unknowable by scientific methods; instead they should limit themselves to behaviors”. A strong version is “thoughts and emotions don’t exist; they are post hoc explanations invented by people to rationalize their behaviors”. People are going to tell me that real psychologists only believed the weak version, but having read more than a little 1950s psychology, I’m going to tell them they’re wrong. I think a lot of people believed the strong version and that in fact it was the dominant paradigm in the field.
And of course common people said this was stupid, of course we have thoughts and emotions, and the experts just said that kind of drivel was exactly what common people would think. Then came the cognitive revolution and people realized thoughts and emotions were actually kind of easy to study. And then we got MRI machines and are now a good chunk of the way to seeing them.
So this too I will count as a scientific failure.
But – and this seems important – I can’t think of any others.
Suppose there are about fifty scientific fields approximately as important as genetics or psychiatry or psychology.
But if we want to be even fairer, we can admit that there are probably some science failures that haven’t been detected yet. I can think of three that I very strongly suspect are in that category, although I won’t tell you what they are so as to not distract from the meta-level debate.
Douglas Knight says:
July 5, 2014 at 9:54 pm
The interpretation of quantum mechanics is a great example of how history is rewritten. The meaning of “the Copenhagen interpretation” has changed every generation.
In particular, everyone pro or con (except Bohr) who was in Copenhagen in 1925 agrees that Bohr said that consciousness causes collapse. Bohr explicitly denies this position in the 1927 debate with Einstein, long before “idiot popularizers” like von Neumann and Wigner.
I think of these two examples fairly often when I’m thinking about groups of intelligent, educated, well-meaning, experienced people who miss fundamental problems with the way they do things. I realize that psychiatrists and stock traders in general have more to offer than the below examples might suggest.
1. Thomas Szasz criticized psychiatry for inability to make accurate diagnoses, likening the field to alchemy or astrology. David Rosenhan, roughly in the same school of thought, then put it to the test (along with several others) by getting himself admitted to a mental institution as a healthy person, and then had a hell of a time getting out again.
“The second part of his study involved an offended hospital administration challenging Rosenhan to send pseudopatients to its facility, whom its staff would then detect. Rosenhan agreed and in the following weeks out of 193 new patients the staff identified 41 as potential pseudopatients, with 19 of these receiving suspicion from at least 1 psychiatrist and 1 other staff member. In fact Rosenhan had sent no one to the hospital.” http://en.wikipedia.org/wiki/Rosenhan_experiment
2. Daniel Kahneman writes about the time he studied a stock trading firm. He realized that the firm’s most distinguished employees actually did slightly worse than the market, on average! Implying that despite all of their mathematical and economic and financial education, their customers would have been better off putting their money in an index fund. When he delivered this bombshell to the firm, no one at the meeting said anything. At all. They simply carried on. On the way out, one trader walking with him said defensively that he had worked there for many years, and no one could take that away from him. He thought, well, I just did.
***
Mai La Dreapta says:
July 2, 2014 at 11:57 pm
I have a non-political example: Chomskian transformational syntax. TL;DR version: Chomsky had a bright idea about how to describe the syntax of English, and the entire field of linguistics got derailed for about forty years trying to extend and refine his idea, except that now most linguists are coming around to the fact that while the Chomskian approach is appealing, it’s fundamentally flawed and doesn’t actually explain anything. The full explanation will get kind of long.
So in the late 60’s Chomsky (then a very young dude) published a book which showed how you could explain most of the syntactic features of English with a set of “transformational rules” which took an underlying utterance and rearranged or replaced its parts. This was closely tied to the notion of Universal Grammar, the thought being that if you peeled away all of the transformations there was a single, consistent, and simple grammar that was directly encoded in the human brain and shared by all languages, and that the differences between languages were merely differences in lexicon and which transformations the languages applied. And the whole linguistics field was like whoah, and spent a long time cataloguing more transformations, in more languages, and hundreds of papers were written showing how seemingly disparate constructions in different languages could be derived from a common underlying syntax with the judicious application of transformations.
Two problems became apparent: one was that Chomsky’s original catalog of transformations was incomplete, as there were places where the published transformations would predict the wrong form, or prohibited an attested form. But you could always fix that up by adding some more transformations or tweaking the conditions. This, however, pointed to the second problem: Chomsky’s transformational notation was too strong. It could literally turn anything into anything else, limited only by the ingenuity of the linguist writing the paper, which in turn meant that you couldn’t actually infer anything about Universal Grammar from them.
What followed were a series of revisions of the theory meant to constrain the kinds of transformations that were available, or to change the way in which underlying structures and transformations were conceived. I recall learning about at least three of these: Principles and Parameters, the Minimalist Programme, and X-Bar Theory (the last of which is the current reigning model among people who still hold to Chomskian syntax). All of these were ingenious in some ways, but they all had the same two problems. They would over-generate some things, allowing for kinds of languages and syntactic transformations which are never observed, and they would under-generate other things, prohibiting syntactic features which are actually found. And no one ever got any closer to an actual description of Universal Grammar.
Eventually people started looking for other ideas (and the handful of anti-Chomskian holdouts started to get some cred). There are still Chomskians around, so this example might not fulfill your condition #4, but most of the new interesting work on syntax these days does not use the Chomskian model at all. No one believes in Universal Grammar any more. The most interesting new syntactic models are stochastic (meaning they work with a probabilistic model of grammaticality and generativity) and/or don’t have deep structures at all, both of which are anathema to the Chomskian approach.
(This is based on my memories of linguistics undergrad courses which are now a decade in the past, plus my reading of linguistics papers since then, so I could be wrong about all of this, and have probably screwed up some of the details. But I’m pretty confident in my overall conclusion that Chomsky was fundamentally wrong about syntax.)
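The “stochastic” models the commenter mentions treat grammaticality as a matter of probability rather than a binary judgment. A toy illustration (a bigram model on a three-sentence corpus; nothing here is any particular published model):

```python
from collections import Counter

# Toy bigram model: grammaticality as degree (probability), not yes/no.
corpus = "the dog runs . the cat runs . the dog sleeps .".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def score(sentence):
    """Product of conditional bigram probabilities (0 for unseen bigrams)."""
    words = sentence.split()
    p = 1.0
    for a, b in zip(words, words[1:]):
        p *= bigrams[(a, b)] / unigrams[a] if unigrams[a] else 0.0
    return p

print(score("the dog runs") > score("runs dog the"))  # attested order scores higher
```

Note there is no deep structure and no transformation anywhere in this model, which is why such approaches sit so awkwardly next to the Chomskian program.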
David Chapman says:
July 3, 2014 at 12:12 am
Hooray! Really glad you wrote this one up. I was going to give it as an example after Scott clarified point #3, but I think you’ve done a better job.
I was railing against Chomskianism in the 80s, which made me borderline crackpot at the time. Glad to be vindicated by history
Phil Goetz says:
July 3, 2014 at 2:18 am
I’m surprised you didn’t pounce on symbolic AI.
David Chapman says:
July 3, 2014 at 2:24 am
That would have been boasting.
Phil Goetz says:
July 3, 2014 at 2:34 am
I’d also mention Chomsky’s oft-quoted claim that there must be a universal grammar because the sentences a child hears don’t contain enough examples to learn a grammar. This is testable; we can use information theory to measure the information in a grammar and the information in a corpus of text. As far as I know, linguists still cite this claim today, yet a couple of minutes of analysis proves it’s false, and false by orders of magnitude if you adjust for the fact that many different grammars may suffice.
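Goetz’s “couple of minutes of analysis” can be sketched as back-of-envelope arithmetic. Every number below is an illustrative assumption, not a measurement:

```python
import math

# Back-of-envelope poverty-of-stimulus check: bits carried by a child's
# linguistic input vs. bits needed to specify a grammar.
# All numbers are assumptions for illustration.
bits_per_word = 10            # rough order of magnitude for English text
words_heard_per_year = 3e6    # assumed exposure for a young child
years = 5

corpus_bits = bits_per_word * words_heard_per_year * years

# Suppose a grammar is ~1,000 rules, each chosen from ~1M candidate rules:
grammar_bits = 1000 * math.log2(1e6)

print(corpus_bits / grammar_bits)  # corpus carries orders of magnitude more bits
```

Under these (generous-to-Chomsky) assumptions the input carries thousands of times more information than the grammar requires, which is the shape of Goetz’s objection; the real argument would of course need measured values.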
Douglas Knight says:
July 3, 2014 at 12:05 am
I wrote a relevant post. It mentions an example you didn’t give. It’s a great example because it happened in parallel, so you can’t say that people weren’t ready for it.
Douglas Knight says:
July 3, 2014 at 1:10 am
one ancient Greek guy nobody really read
He was one of the four great astronomers of antiquity: Eratosthenes, who measured the size of the Earth (300BC); Aristarchus, who measured the size of the moon and the sun (250); Apollonius, who studied ellipses (200); and Hipparchus, who gathered data (150).
His argument for heliocentrism was not preserved, but everyone knew that it existed. For a century after Copernicus’s death, people claimed to be arguing for and against Aristarchus, not Copernicus. It was only recently that he has been erased from history.
Phil Goetz says:
July 3, 2014 at 2:25 am
What about the germ theory of disease? We have Ignaz Semmelweis, the fellow who reduced the contraction of fever in childbirth by a factor of 9 by making his doctors wash their hands, and also John Snow, the cholera epidemic pump-handle-remover. They were regarded as crackpots for decades, which I think is long enough to qualify.
Daniel Speyer says:
July 3, 2014 at 2:38 am
Not exactly sciences, but…
Economics: minimum wage monotonically causing unemployment
Archaeology: “Native” Americans arriving in one clump 10kya via Bering land bridge and ice-free corridor.
Archaeology: Viking settlements in North America
Medicine: Smoking is good for you (I can’t swear this was for real)
Jacob Steinhardt says:
July 3, 2014 at 2:46 am
I think many of the examples here (including some of your original examples) are missing the point of science. Science at its core is *not* about finding truth; rather, it is about finding *frameworks* that are useful for uncovering truths.
Take the example of behaviorism. The core tenets were bullshit, but it was *way* better than the thing it replaced, which was basically armchair philosophy. Behaviorism was a reaction to a field full of armchair philosophers who speculated about what was in people’s heads without bothering to collect any data. The (admittedly extreme) reaction to this was to ban everything *but* observable data, which was quite effective at getting the overall endeavour back on track but had some side effects down the line.
Or take Freudian psychoanalysis. I’m less sure about this example, but my impression is that before Freud people didn’t really think that *listening to what the patient said* was a particularly useful thing to do. Freud bothered to do that, and yes, also came up with a bunch of other crazy ideas that were not terribly useful, but that fundamental insight was *so* important that even with all the additional baggage it was still better than what it replaced.
Note that in linguistics, behaviorism was later mostly replaced by Chomskian nativism, which again goes to the other extreme and claims that language is almost entirely hard-coded into humans with a small number of parameters to be learned. This seems mostly wrong given current knowledge but was a huge improvement upon behaviorism (why? because there were important identifiable aspects of language that behaviorism was unable to say anything meaningful about, but which Chomsky was). And now we’re emerging into a more nuanced, statistical understanding of language, which is succeeding because *it* can explain properties that *nativism* can’t.
Doug S. says:
July 3, 2014 at 3:24 am
Ludwig Boltzmann had a devil of a time getting physicists to take his work on the kinetic theory of gases seriously; it wasn’t until Einstein’s paper in 1905 on Brownian motion that physicists couldn’t justify disbelieving in atoms any more. (Chemists had been taking atomic theory literally for quite a long time before that.)
Razor108 says:
July 3, 2014 at 3:45 am
A huge failure in the field of physics was the discovery of N-rays, which were analogous to X-rays. 120 scientists, 300 published papers, and almost 30 years of research into something that did not in fact exist.
http://en.wikipedia.org/wiki/N_ray
And again the discovery of polywater: ten years of research into something that should have been debunked in one day, just by boiling or freezing the water.
http://en.wikipedia.org/wiki/Polywater
Rachael says:
July 3, 2014 at 4:14 am
How about:
The belief that smoking was beneficial to health (cigarettes handed out in doctors’ offices).
The belief that a low-fat high-carb diet was the best way to lose weight.
James James says:
July 3, 2014 at 6:00 am
Not sure if these are good enough:
Plate tectonics.
N-rays.
HBD.
Homosexuality as a medical condition (separate from it being a fitness-reducing disease).
The Highlands controversy (which Wikipedia hardly mentions)
Two links to the same article:
http://archive.lewrockwell.com/rozeff/rozeff152.html
http://www.lewrockwell.com/2007/05/michael-s-rozeff/how-the-state-corrupts-science/
The number of human chromosomes has already been mentioned. It’s sort of a real-life example of Asch’s conformity experiment.
Of interest:
https://en.wikipedia.org/wiki/Suppressed_research_in_the_Soviet_Union
peterdjones says:
July 3, 2014 at 6:12 am
Freud has his defenders…
http://www.theguardian.com/society/2003/mar/22/health.healthandwellbeing
Jimmy says:
July 3, 2014 at 1:44 pm
The Oxford Handbook of Hypnosis is a good place to start. You can read bits of the book on the first link and chapter abstracts on the second. http://books.google.com/books/about/The_Oxford_Handbook_of_Hypnosis.html?id=Nz_dnQEACAAJ
http://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780198570097.001.0001/oxfordhb-9780198570097
If you want “lots”, google scholar will generate them for you. I suggest starting with the queries: “hypnosis real simulator”, “hypnosis fMRI”, “hypnosis pain”
Phil Goetz says:
July 3, 2014 at 10:28 am
American Anthropology, 1930-1980. Franz Boas opposed evolution and was convinced that human behavior was entirely a product of culture, and his doctoral student Margaret Mead knew that she’d have to produce results in line with this to get her degree. She did some sloppy fieldwork and published “Coming of Age in Samoa”, which reported as fact things her informants had made up as jokes about Samoan sexuality, and was a primary reference used for decades to argue that sex roles were purely cultural artifacts.
The alternate theory was evolutionary anthropology. At the time, it was interpreted in a very conservative, racist, sexist, and teleological way, so it’s hard to blame Boas, or even to be sure that his mistake wasn’t beneficial to us.
fubarobfusco says:
July 3, 2014 at 10:41 am
Nitpick: Boas rejected orthogenetic evolution, not Darwinian evolution.
Wikipedia:
The notion of evolution that the Boasians ridiculed and rejected was the then dominant belief in orthogenesis—a determinate or teleological process of evolution in which change occurs progressively regardless of natural selection. Boas rejected the prevalent theories of social evolution developed by Edward Burnett Tylor, Lewis Henry Morgan, and Herbert Spencer not because he rejected the notion of “evolution” per se, but because he rejected orthogenetic notions of evolution in favor of Darwinian evolution.
This was during the period that Huxley described as the “eclipse of Darwinism”, which may well be an example of what Scott is looking for here — the period where evolution of species was generally scientifically accepted, but natural selection as a means was not. Biologists (and anthropologists, etc.) looked for other driving forces behind the change of species: theistic evolution, orthogenesis, neo-Lamarckism. This lasted until the merger of genetics with evolutionary theory in the modern synthesis.
fubarobfusco says:
July 3, 2014 at 3:36 pm
We in the Bayesian Conspiracy call that “establishing a base rate”. You might look into it sometime.
Alfanerd says:
July 3, 2014 at 4:05 pm
I can understand using a Bayesian analysis for some questions, but when something is directly knowable it seems ridiculous.
Should we use that in criminal trials too?
“Your honor, 95% of black males prosecuted by this District Attorney have been found guilty as charged. In light of this I propose we dispense with the trial and declare him guilty right now.”
Oligopsony says:
July 3, 2014 at 5:10 pm
Different heuristics are appropriate to different goals. Prior to doing the research your estimate should be the base rate. If you find the topic interesting and worth your time, then dive into the substantive disputes.
e:lol maybe I shouldn’t use “questions” to mean three different things in three sentences
Randy M says:
July 3, 2014 at 2:44 pm
“The issue of whether scientists are wrong, as a group within a field, is a social-science question”
No, the issue of why they are wrong is a social science question. The issue of whether they are wrong is entirely a climatology question, although the Bayesian analysis of how readily we should trust them can and should draw on observations from many fields.
anon says:
July 3, 2014 at 7:13 pm
If 95% of blacks pulled into court were convicted, betting on their conviction for offered odds better than 1 to 20 would be an excellent idea. If you think that they shouldn’t be convicted, you either think that the court is racist or that the costs are so high that 95% certainty isn’t enough. Your analogy proves nothing. Bayes always wins. Come to the darkness.
fubarobfusco says:
July 3, 2014 at 10:36 am
It isn’t clear to me that recovered-memory therapy was ever mainstream enough scientifically to count here; but it was mainstream enough in society to cause quite a bit of upset in the 1980s. The “Satanic ritual abuse” moral panic thrived on the conjecture of repressed memories of abuse that were “recovered” through therapy, and which in retrospect it’s pretty clear were invented through what amounts to guided storytelling.
Madeleine Ball says:
July 3, 2014 at 11:17 am
You may be disappointed, amused, and/or interested to hear that our solution is not about security: Some people want to share their samples and genomes publicly, despite the risks. Work with these people.
(That is to say, our work is in “restructuring how we pursue data/sample sharing and consent”. I don’t have a solution for the sunk costs beyond “stop sinking more costs!”)
You can follow our work at blog.personalgenomes.org and also our new project blog, blog.openhumans.org.
Troy says:
July 4, 2014 at 4:05 pm
There are two kinds of fields: those which are legitimate science and those which aren’t.
I don’t think this is a helpful dichotomy.
Wikipedia has a list of superseded scientific theories. Obviously most of these will not fit into your reference class, and only one of the examples you gave is actually listed there so the overlap seems poor, but it still might be worth skimming through it to see if any fit.
http://en.wikipedia.org/wiki/Superseded_scientific_theories
There are also lists of discredited substances and topics characterized as pseudoscience.
http://en.wikipedia.org/wiki/List_of_discredited_substances
http://en.wikipedia.org/wiki/List_of_topics_characterized_as_pseudoscience
Chris Hallquist says:
July 3, 2014 at 1:09 am
[Freudian psychoanalysis] dominated psychiatry – not at all a small field – from about 1930 to 1980.
Uh… normally I’d take your word on the history of psychiatry, you being a psychiatrist and knowing it better than I do, but based on what little I know of the field, this sure sounds like an exaggeration. Even if you count Alfred Adler and Carl Jung as basically Freudian, you’ve still got alternatives like Carl Rogers and Abraham Maslow (who Wikipedia tells me got their start in the 1940s and 1950s) and drug treatments and electroconvulsive therapy becoming a big part of psychiatry some time in the 60s (maybe even the 50s, going by Wikipedia).
Scott says:
Neither Rogers nor Maslow were psychiatrists, and their work has been almost completely ignored by psychiatry. Psychology ≠ psychiatry.
Early drug and electroconvulsive treatments were not considered a disproof of Freudian therapy. The common belief among psychoanalysts was that drugs were all well and good for symptomatic relief but that only analysis could provide a real cure. Many analysts incorporated medication, Freud himself prescribed various psychoactive medications (including cocaine), and his daughter Anna Freud prescribed several of the early antidepressants. I don’t know to what degree they were viewed as contradictory paradigms. It seems to me that it would be perfectly consistent to understand that the brain had a biological substrate while believing Freud’s theories described its higher-level operations (as indeed many people do today).
Derrida, from summaries I’ve read of his work, is pointing out inconsistencies in epistemology without knowing what to replace them with. Derrida and his opponents each have different pieces of the truth, but can’t produce a synthesis, because all of them try to categorize statements as “true” or “false”, and this is a false conceptualization. Statements in natural languages cannot be true or false, if for no other reason than that the words in them can’t be precisely defined. They should properly be regarded as conveying information. Doing this should dissolve the paradoxes, from Hume to Derrida, one runs into when asking whether something is “true” or “known”, and how a category or claim refers to reality.
” Statements in natural languages cannot be true or false” is in natural language, so it cannot be true or false.
And yet it conveys information.
It is a convenient approximation to say a sentence is true or false, but if you read the literature that I was referring to, it is about difficulties that arise when you believe “true” means something like “certain; true under all possible interpretations; probability 1” and “false” means the opposite. Such as Hume’s famous objection that we can’t say that we know the sun will rise tomorrow.
peterdjones says:
July 5, 2014 at 10:45 am
“Elvis is still alive” conveys information.
Concepts of information that are agnostic about truth are only of use in engineering… they cannot be plugged into a useful epistemology.
Yes, you can’t be certain. That is something all philosophers now agree on. But you should respond to that by abandoning certainty claims, not by abandoning truth claims, because “there is no truth” is self-refuting in a way that “there is probably no certainty” is not.
How often do academic fields, filled with some pretty smart people, go absolutely nuts?
Rousseau's noble-savage stuff in sociology
Blank-slatism
Marxist Economics
Post-Structuralism
Critical Theory
Most strands of contemporary art criticism
Most strands of contemporary literary criticism
Deconstructionism
Hegelian Studies
That lots of people took Jacques Derrida or Judith Butler seriously should terrify us.
The most comprehensible thing I’ve found is Wesley Phoa’s Should Computer Scientists Read Derrida?.
the standard resources:
http://www.iep.utm.edu/derrida/
http://plato.stanford.edu/entries/derrida/
Oh, also, a funny and pretty accurate explanation of deconstruction for computer geeks is How To Deconstruct Almost Anything.
If there were any actual content in “general relativity,” somebody would have distilled it into a clear, comprehensible format. I can’t understand it, and—it happens—I even have an MIT math degree, so obviously it’s complete nonsense.
Albert Einstein’s Theory of Relativity In Words of Four Letters or Less
watermelons coconuts
he will brook no compromise with the warmists who are watermelons and want to kill and destroy.
I shouldn’t ask, I shouldn’t ask, I shouldn’t ask…
What’s a watermelon?
Edit: I have the idea of green on the outside, red on the inside, therefore communists, therefore evil?
Randy M says:
July 4, 2014 at 12:22 pm
Yes, basically people using environmental causes to get greater state power.
Scott Alexander says:
July 2, 2014 at 10:45 pm
I think we don’t hear about the ozone hole anymore because everyone met in Montreal and signed an agreement banning the offending chemicals, after which the ozone hole started to shrink and is now in the process of disappearing entirely.
Do you mean the treaty that was signed in 1987? http://en.wikipedia.org/wiki/Montreal_Protocol
[Edit:] And it looks like some serious amendments came in the early 90’s.
The ozone layer faded from public consciousness.
A great deal of scientific racism falls into this category pretty clearly:
http://en.wikipedia.org/wiki/Scientific_racism
Race = ancestry + social construct. If you’re going to study social constructs, go ahead and say “race”, but otherwise you need to get away from that cruft. See also: http://www.theatlantic.com/national/archive/2013/05/the-social-construction-of-race/275974/
https://slatestarcodex.com/2013/05/02/if-its-worth-doing-its-worth-doing-with-made-up-statistics/
If we can’t come up with more than a dozen examples of science failure, I think it’s fair to say there were enough opportunities for science failure that these are very rare, much rarer than would be necessary for the anti-AGW case to merit serious attention.
…