[COPY] Jan 2025 Sciences Po English course part 2
Living together https://www.theguardian.com/lifeandstyle/2025/mar/08/the-only-way-i-can-survive-co-living-as-a-single-parent
a or an European policy? (It takes "a": "European" begins with a consonant sound, /j/.)
Verbs followed by infinitive/gerund
causative verbs
https://englishwithjennifer.wordpress.com/2010/10/18/student-stumper-24-causative-verbs/
a list of 32 verbs from Fuchs and Bonner:
advise, allow, ask, cause, challenge, choose, convince, enable, encourage, expect, forbid, force, get, help, hire, invite, need, order, pay, permit, persuade, promise, remind, request, require, teach, tell, urge, want, warn, wish, would like
Please use Ctrl+F to find the part you need. Searchable keywords: composition // Week 1/2/3 // TOEFL // Esther Duflo // Cummings kayfabe / Romer / Chappelle / Wolf / Joy / Unabomber / Thiel / Orfalea / Eliot / Lessing / tax / Silver / Biden / Harris / Bayes // Covid / Luria // counterfactuals // Taleb / hedgehog / Tetlock / Bilderberg-Thiel // Marx / Fonda / Frankl / Steve Jobs / drugs / Taibbi / Hamming / Charles C Mann // CIA from Unherd
add your name here
https://docs.google.com/spreadsheets/d/1bntndQtQbz-PSu-7azPW_8qSDT1_S9CQyMBR63-1w4E/edit?userstoinvite=stuart.wiffin%40sciencespo.fr&gid=0#gid=0
Week 12 23.4.25
Trade-offs
Hard choices for Keir Starmer: if he gets rid of this policy, will he still be PM?
As of the latest data available, there were nearly 9.1 million pupils across all school types in the UK during the academic year 2023/24. This figure includes children from early years through to secondary education. Additionally, the population of children in care, known as "looked after children," was approximately 107,000 in 2022/23.
child poverty here
and the concept of relative poverty (see below, after Diamandis)
The Pope
“As I meet, or lend an ear to those who are sick,
to the migrants who face terrible hardships
in search of a brighter future,
to prison inmates who carry a hell of pain inside their hearts,
and to those, many of them young, who cannot find a job,
I often find myself wondering:
"Why them and not me?"
I, myself, was born in a family of migrants;
my father, my grandparents, like many other Italians,
left for Argentina
and met the fate of those who are left with nothing.
I could have very well ended up among today's "discarded" people.
And that's why I always ask myself, deep in my heart:
"Why them and not me?"
***
Happiness can only be discovered
as a gift of harmony between the whole and each single component.
Even science – and you know it better than I do –
points to an understanding of reality
as a place where every element connects and interacts with everything else.
Mother Teresa actually said:
"One cannot love, unless it is at their own expense."
cf. Orwell on coolies and Hitchens on Mother Teresa
poverty https://www.ted.com/talks/peter_diamandis_abundance_is_our_future
Tim Harford https://www.ted.com/talks/tim_harford_trial_error_and_the_god_complex
nationalisation
from the Telegraph
The second home tax is punitive, petty and politically motivated
Joe Wright
Senior Money Writer
Tax rises are never popular, but I think we have a winner. Second home owners feel that they are being unjustly targeted and treated as cash cows. Understandably, they're not happy.
Two thirds of local authorities across England have this month brought in a double council tax charge on second homes, piling misery on 280,000 owners who will see their bills rise 77 per cent to £3,672, on average, according to analysis by The Telegraph. In some areas, council tax bills for second homes will surpass £10,000.
I’ve rarely seen a topic enrage readers like this stealth tax on wealth. The Telegraph’s inbox has been inundated with emails of woe from those who feel that they have done nothing wrong but are being financially penalised. Property owners tell us they feel more like residents in their second home area than their hometown, going to great efforts to get to know their neighbours and pumping money into the economy. Yet councils want them gone, ostensibly to free up homes in holiday hotspots for local buyers and to raise funds to tackle housing shortages.
These justifications have already failed to hold up to scrutiny, with holiday “notspots” (really, Bradford?) also capitalising on the chance to raid wallets, and councils spending just 9p of every £1 raised on affordable housing.
Worst of all? The Government has admitted it has no idea what effect this policy will have on house prices or the number of second homes. This sorry state of affairs appears to be a politically-motivated crackdown on perceived wealth – and it may well backfire.
In 2013, the Government attempted a similar attack on empty homes, giving local authorities powers to increase council taxes by 50 per cent, with quadruple bills for properties unfilled for over a decade. I dug into the numbers and discovered that the situation is worse now than it was before, with the number of empty homes at a 14-year high.
Could a similar fate befall second homes? It certainly appears so. My colleague Pieter Snepvangers went to Wales, where the second home council tax surcharge was introduced a year ago, and found that the country had succumbed to a “lose-lose” situation where house prices have plunged but still remain out of the budget of local buyers.
Questions are also swirling about the legality of the crackdown. Homeowners tell us that they were “blindsided” when the tax bill landed on their doormat. They say the policy amounts to “taxation without representation” as second home owners cannot vote in local elections and are being billed twice for council services that they cannot use.
Some savvy owners are already deploying legally sound tactics to swerve the hefty bills. By listing a second home for sale (and marketing it at a reasonable price) you’ll be awarded a 12-month exemption from the premium by your local council. It’s a neat trick, and there is no obligation to sell.
But this should be unnecessary, as second home owners shouldn’t be penalised for inheriting a family home or investing their hard-earned money in property instead of stocks. That’s why The Telegraph is calling on the Government to abolish this punitive, petty premium.
counterfactuals: Luria, Black Swan, Taleb
Paul Graham: more on Y Combinator; Aaron Swartz on failure at Y Combinator (link?)
Mean people fail https://www.paulgraham.com/mean.html
China US export stats
https://x.com/SpencerHakimian/status/1909059731748167963
https://avc.com/2014/10/paul-graham-dropping-serious-wisdom/
***
personality test: “I ramp up views”
https://www.theguardian.com/wellness/2025/apr/14/why-am-i-like-this-disagreeable
***
plus size models- Ozempic https://www.theguardian.com/fashion/2025/apr/15/ozempic-arrived-and-everything-changed-plus-size-models-on-the-body-positivity-backlash
****
steel
On Saturday, when MPs were supposed to be on their Easter holidays, a rare emergency sitting was called. Jonathan Reynolds, the business secretary, told the House of Commons that they were meeting “in exceptional circumstances to take exceptional action in what are exceptional times”.
MPs passed a bill to save the Scunthorpe steelworks, a vital part of the UK’s critical infrastructure and the last remaining maker of mass-produced virgin steel. The emergency legislation allowed the government to instruct the Chinese owners of the British Steel plant, Jingye, to keep Scunthorpe open or face criminal penalties.
Jasper Jolly is a financial reporter for the Guardian. He tells Helen Pidd that the steelworks are central to life in Scunthorpe and that their loss would be devastating for the town. He explains that the plant has been loss-making for several years and that it is largely the glut of cheap Chinese steel in the global market that has led its owners to consider its closure.
The pair discuss why the government has taken control of Scunthorpe in a way that it did not with the Port Talbot steelworks, the race to keep the blast furnaces hot, and the way that this crisis has led many to question the wisdom of selling critical parts of the UK’s infrastructure to foreign and private companies.
https://www.theguardian.com/news/audio/2025/apr/15/the-scramble-to-save-british-steel-podcast
**
space
*****
“I am broadly of the view that Katy Perry should do whatever she likes, and wear whatever she likes, and if she wants to be shot into space for no obvious purpose looking like one of Charlie’s Angels, then that falls squarely into those categories. So why am I so bothered by Blue Origin’s New Shepard rocket, which has just made a suborbital flight to the edge of space and back? OK, it’s partly the outfits (see Angels, Charlie’s, above), but it’s mainly the flight manifest: Perry joined Lauren Sánchez, Jeff Bezos’s fiancee; Amanda Nguyen, a civil rights activist; CBS Mornings co-host Gayle King; film producer Kerianne Flynn and former Nasa rocket scientist Aisha Bowe. I cannot help but notice that only two of these women had anything to do with astronauting (Nguyen studied astrophysics and interned at Nasa before becoming an activist).
Obviously it’s very staid and 20th century to think that only experts should be allowed in space – yet their absence did suggest the primary purpose of the trip to be tourism rather than research. Which in turn suggests that this was a dumb waste of money. Which itself makes you wonder just how many of the world’s problems would have to have been solved before space tourism would look like a worthwhile enterprise – hard to put a number on it, but significantly more than have been solved today.”
from CNN: the Kármán line is 62 miles (about 100 km) above sea level
Week 11
Aaron: the logic of the situation
Ozempic price? https://www.doktorabc.com/fr/ordonnance/ozempic/ozempic-prix
no easy answers, trade-offs?
997 a month: who pays?
https://ro.co/weight-loss/ozempic-cost-without-insurance/
cancer drug
The cost of cancer drugs has been a significant concern, with some drugs priced at over $100,000 per patient for one year of treatment, and more recently, launch prices exceeding $400,000 for a year of treatment. For instance, ipilimumab (Yervoy), approved for treating metastatic melanoma, costs $120,000 for four doses. Global spending on cancer drugs reached a record $185 billion in 2021 and is expected to reach $307 billion by 2030. Despite the high costs, there is no evidence that the savings from developing precision oncology drugs, which could be $1 billion cheaper to develop than non-precision drugs, will lead to more affordable cancer medicines.
Orwell. International socialism.
“In England there is only one Socialist party that has ever seriously mattered, the Labour Party. It has never been able to achieve any major change, because except in purely domestic matters it has never possessed a genuinely independent policy. It was and is primarily a party of the trade unions, devoted to raising wages and improving working conditions. This meant that all through the critical years it was directly interested in the prosperity of British capitalism. In particular it was interested in the maintenance of the British Empire, for the wealth of England was drawn largely from Asia and Africa. The standard of living of the trade-union workers, whom the Labour Party represented, depended indirectly on the sweating of Indian coolies. At the same time the Labour Party was a Socialist party, using Socialist phraseology, thinking in terms of an old-fashioned anti-imperialism and more or less pledged to make restitution to the coloured races. It had to stand for the ‘independence’ of India, just as it had to stand for disarmament and ‘progress’ generally. Nevertheless everyone was aware that this was nonsense. In the age of the tank and the bombing plane, backward agricultural countries like India and the African colonies can no more be independent than can a cat or a dog. Had any Labour government come into office with a clear majority and then proceeded to grant India anything that could truly be called independence, India would simply have been absorbed by Japan, or divided between Japan and Russia.”
“A Socialist Party which genuinely wished to achieve anything would have started by facing several facts which to this day are considered unmentionable in left-wing circles. It would have recognized that England is more united than most countries, that the British workers have a great deal to lose besides their chains, and that the differences in outlook and habits between class and class are rapidly diminishing. In general, it would have recognized that the old-fashioned ‘proletarian revolution’ is an impossibility.”
nationalize steel and coal?
https://x.com/trevgoes4th/status/1911366527867068714
****
Sandel: killing in the army, Sgt B
16.4 deadline for 3rd composition
either https://www.toeflresources.com/toefl-discussion-board-writing-question-city-spending/
or
https://www.toeflresources.com/toefl-academic-discussion-writing-question-price-controls/
or
How should France deal with the problem of homelessness?
or
Does France need an independent nuclear defence?
or
France should have a universal basic income. Discuss.
or
We should require people of all genders to register for the draft.
Week 10 9.4
The Institut des Compétences et de l'Innovation is pleased to invite us to the event "Understanding our students: expectations and attitudes of Generation Z", which will take place this Thursday 10 April, 10:30-12:00, at the Learning Lab (1 place Saint-Thomas) and on Zoom. You can register via this link: https://www.billetweb.fr/ici-evenements-du-learning-lab-2024-2025 where you will also find a brief description.
Wishing you a pleasant day, kind regards,
Homelessness from Scott
Engage in good faith: “you’re all wrong”
‘the logic of the situation’
Andrew Chambers
Senghor
“And one of the things that we talked about is the three things that I found important in my personal transformation, the first being acknowledgment. I had to acknowledge that I had hurt others. I also had to acknowledge that I had been hurt. The second thing was apologizing. I had to apologize to the people I had hurt. Even though I had no expectations of them accepting it, it was important to do because it was the right thing. But I also had to apologize to myself. The third thing was atoning. For me, atoning meant going back into my community and working with at-risk youth who were on the same path, but also becoming at one with myself.”
“So what I'm asking today is that you envision a world where men and women aren't held hostage to their pasts, where misdeeds and mistakes don't define you for the rest of your life.”
Ray Dalio
hedgers
from LinkedIn
https://www.linkedin.com/pulse/effects-tariffs-how-machine-works-ray-dalio-vg96e/
“Additionally, there is a lot of talk now about whether it is a helpful or harmful thing that 1) the U.S. dollar is the world’s primary reserve currency and 2) the dollar is strong. It is clearly a good thing that the dollar is a reserve currency (because it creates a greater demand for its debt and other capital than would otherwise exist if the U.S. doesn’t have that privilege to abuse via over-borrowing). Though since the markets drive such things, it inevitably contributes to abusing this privilege and over-borrowing and debt problems which has gotten us to where we now are (i.e., needing to deal with the inevitable reducing of goods, services, and capital imbalances, needing to take extraordinary measures to reduce the debt burdens, and reducing foreign dependencies on these things because of geopolitical circumstances.) More specifically, it has been said that China’s RMB should be appreciated which probably could be agreed to between the Americans and Chinese as part of some trade and capital deal, ideally made when Trump and Xi meet. That and/or other non-market, non-economic adjustments would have unique and challenging impacts on the countries they apply to, which would lead to some of the second order consequences I mentioned earlier happening to cushion the effects.”
Dalio TED talk
currency trading
The renminbi (Chinese: 人民币; pinyin: Rénmínbì; lit. 'People's Currency'; Chinese pronunciation: [ʐən˧˥min˧˥pi˥˩]; symbol: ¥; ISO code: CNY; abbreviation: RMB), also known as the Chinese yuan, is the official currency of the People's Republic of China. The renminbi is issued by the People's Bank of China, the monetary authority of China. It is the world's fifth-most-traded currency as of April 2022.
What % of currency trading is speculation?
Galloway TED talk
Scott Galloway https://www.profgalloway.com/earners-vs-owners-2/
Orwell on Gandhi https://stuartwiffin.substack.com/p/orwell-on-gandhi?utm_source=publication-search
Week 9 2.4
Harari chain of events
“There is no such thing as a chain of events” see Taleb below
Yuval Noah Harari
Nassim Nicholas Taleb
Taleb
Charlie Rose
https://x.com/rohanpaul_ai/status/1883601254318039148
from the New Yorker
On a blanket on the lawn under a cedrillatoona tree, in Southern Rhodesia, in 1942, a young woman sat explaining to her two children why she was leaving them. The boy was three years old and the girl not yet two, but she believed that they would understand someday, and thank her. She was going off to forge a better world than the one she herself had grown up in, a world without race hatred or injustice, a world full of marvellous people. She later admitted that she really hadn’t been much interested in politics; what had drawn her to the local Communist Party was the love of literature that she’d found she shared with its members—some twenty mostly well-off, young white people who formed a self-designated Party branch in nearby Salisbury—and the sense of heroic expectation that surrounded their lives. They were eagerly awaiting the revolution that they believed would end the hateful white regime, and, if their activities were principally confined to meetings and disputes and love affairs with one another, their goals were undeniably noble. The newest comrade had not yet begun to consider the disparity between their professed goals and their actions, possibly because she could not afford to consider the disparity between her own.
Doris Lessing’s stingingly self-mocking account of her escape from maternal to global responsibility appeared just a few years ago, in the first volume of her autobiography, “Under My Skin.” Now the second volume, “Walking in the Shade” (HarperCollins; $27.50), carries her story up to its political and emotional climax, twenty years later, with the publication of her most celebrated novel, “The Golden Notebook”—which has been in print since 1962 and become a fixture of the social history it helped to shape. Many of the scenes in these memoirs are already familiar, since Lessing has drawn on the particulars of her experience throughout the broad reaches of her fiction, characteristically joining the daily realities of life on the veldt or in a London flat to the biggest political issues of the age.
“After all, you aren’t someone who writes little novels about the emotions. You write about what’s real,” a Party comrade assures “The Golden Notebook” ’s emphatically autobiographical heroine, Anna Wulf. A fully self-conscious specimen of the “position of women in our time,” Anna is also an individual of high neurotic distinction, the very model of the modern madwoman who has found her way down from the attic to the bedroom only to stand fumbling for decades with the next set of keys. It is Anna—an earnest revolutionary with a weakness for scarred and brooding men—who patiently writes and assembles the many parts of “The Golden Notebook” itself: an old-fashioned baggy monster of a modern novel, at once didactic and feverishly intense, thick with sermons and stories about African racial policy and Soviet Communism and the modern male’s inability to love and the vaginal orgasm and the question of whether women can ever be free. Anna’s goal is to write a book that will change the way people see the world, and many readers of “The Golden Notebook” claim that Lessing herself accomplished something remarkably like that.
“Walking in the Shade,” which begins with Lessing’s arrival in England, in 1949, just after the end of her second marriage, takes a far more caustic view of these goals and possible accomplishments. A blunt and hasty book, it provides a less compelling—even a less convincing—version of Lessing’s life than the complex novel whose themes it shares. Yet the memoir is a valuable guide to the equally complex human being who now questions whether that novel was worth writing, and a portrait of the artist as a woman and a thinker which is as troubling as it was evidently meant to be.
“I was free. I could at last be wholly myself,” the gay divorcée exulted as she caught sight of London from the ship she’d boarded in Cape Town. “A clean slate, a new page—everything still to come.” Readers’ expectations of a triumphant resolution to “A Doll’s House,” however, are quickly smashed. Looking back, Lessing is sardonic, even accusing, as she marvels at what she terms her adolescent sense of self-possession. In the modern self-experiment of a life she had begun, freedom would turn out to be much harder to use than it was to win.
She was nearly thirty and had brought with her the manuscript of her first novel and also a son, aged two and a half. Her second marriage had been no more than a wartime arrangement made to avail a German lover in Rhodesia of her British citizenship. To her surprise, she’d found that she yearned for another baby—a yearning that she now accuses Mother Nature of having aroused in order to make up for the war’s wounded and dead. Against such a force she hadn’t a choice. And so her account of her early London days becomes a catalogue of the hardships of single motherhood, what with getting up at five, working for too little money, feeling ever duty-bound and exhausted. Then, there was the tremendous loneliness and exclusion from adult society. London was still a dark and blinkered city, a barely recovered war casualty itself. She describes walking the empty streets alone at night, and the tantalizing glimpses of fellowship in the lighted pubs she sometimes passed. And one feels that if only these pubs had been a little less beery or more welcoming to a foreign woman Lessing might not have committed “probably the most neurotic act of my life,” about a year after her arrival, by officially joining the Communist Party. She writes:
I did know it was a neurotic decision, for it was characterised by that dragging helpless feeling, as if I had been drugged or hypnotised—like getting married the first time because the war drums were beating, or having babies when I had decided not to—pulled by the nose like a fish on a line.
If this lends some limited clarity to her reasons for joining, a good deal of the book is spent anxiously trying to explain why she stayed on past so many protests of her conscience. Riddled by doubts from the start, she was increasingly dismayed by the petty corruption of the British Communist bureaucracy and by evidence of the deadly Soviet variety, impossible for even the most faithful to ignore after Khrushchev’s 1956 revelations of Stalin’s atrocities and the military suppression of Hungary later that year. In fact, she was sufficiently outraged about Hungary to write a “passionate letter protesting about it” to—astonishingly—the Union of Soviet Writers. Perhaps no one has taken the power of the pen more literally; one longs to have witnessed the bleakly Gogolian farce out of which a “conciliatory letter” was produced for her edification and sent by return post. And still she struggled with turning in her Party card. In her memoirs Lessing often blames the undertow of history itself: if her feelings were neurotic, her thinking belonged to the “Zeitgeist.” What she was always secretly hoping for, she writes, was the emergence of a few pure Russian souls to put the system back on its true path.
The most perplexing and dispiriting of her adventures, however, occurred when she came upon just such a soul, and the message was not what she wanted to hear. In 1952, while she was visiting the Soviet Union and touring a collective farm, a classic Tolstoyan old farmer in a white peasant smock stepped forward to cry out that everything the foreigners were being shown was false, that life under Communism was terrible, and that they must go back to Britain and tell the truth. Lessing now calls this the bravest act she had ever witnessed, since the old man had to know that “he would be arrested and disposed of.” It is unclear how much she herself knew or was willing to know at the time. After a banquet and a viewing of “presents to Stalin from his grateful subjects,” she went to sit outside, alone, and brood. And it was apparently her shame over her aesthetic revulsion at those hideous presents—mostly boxes and rugs featuring Stalin’s carved or woven face—that led her to the macabre conclusion of this episode: her decision to write a story “according to the communist formula.”
Insisting on one’s belief in the face of all evidence; seeing and hearing what you need to be true instead of what, horribly, is; hanging on to what you know you cannot live without. And then letting it go. This memoir—like “The Golden Notebook”—is about the admission of colossal, sickening error and defeat. “All around me,” Lessing recalls, “people’s hearts were breaking, they were having breakdowns, they were suffering religious conversions” over the collapse of Communism as a moral force. She points almost gratefully to one of the century’s awful paradoxes: that, while the politically negligent helped make Hitler possible, it was “the most sensitive, compassionate, socially concerned people” who did the same for Stalin. This is not offered as a defense, exactly, but as evidence of a kind of mass delusion, in which the only imperative was to believe in something larger than the single self.
“Losing faith in communism is exactly paralleled by people in love who cannot let their dream of love go,” Lessing writes. For her, the comparison is not abstract. She suffered two disastrous love affairs during the years covered here, long affairs that were more marriages than her marriages had been, and that left her feeling misused, empty, and nearly out of her mind with misery. In fact, she was misused, as she reports it—both here and, in far more excruciating detail, in “The Golden Notebook.” She was lonely and in need and had read far too much D. H. Lawrence. But she was also a modern woman: she wasn’t going to ask for anything that would make her seem wanting or weak. And the very modern “men-babies” that the age was producing certainly weren’t going to give anything—there were now spoken rules about this—even as they pillaged her emotional store and absorbed all the loving and the cooking and the nursing and the sex that any sensible Victorian woman would have set at a far higher market price. No wonder she felt empty by the time they were sated and moved on.
“Sometimes I think we’re all in a sort of sexual mad house,” one of the shell-shocked women in “The Golden Notebook” remarks. “My dear,” a friend replies, “we’ve chosen to be free women and this is the price we pay.” As it happens, “Free Women” is the title of a book that Anna Wulf begins writing, and it is meant to be ironic. Lessing’s women are not only willfully blind in their romantic delusions but—and this is the price they pay—deprived of any further use of their corrupted and exhausted wills. Like Lessing in her memoir, they have descended to the level where accepting pain is easier than taking responsibility.
What is most ironic of all, perhaps, is that “The Golden Notebook” has entered literary history as, in Lessing’s words, the “Bible of the Women’s Movement”—the novel that introduced the subject of women’s liberation to American society. Lessing herself seems torn between laughter and tears at the thought. It is clearly her searching, shameless honesty that readers responded to so avidly; women had never talked like this in print before. But as a document of liberation her book may be classed with Simone de Beauvoir’s “The Second Sex” (which in 1949 announced the “free woman” as a type just being born) and with Richard Wright’s “Native Son.” All three are works in which the angry subject is such a crippled specimen that hope and progress can be detected primarily in the sheer, howling relief of the author’s declaration: If I am a monster, it is because you have made me one.
Lessing’s heroines tend to break down completely before they are able to put themselves back together—perhaps on the Marxist model that revolution must precede utopia. The author informs us that she herself managed to escape real breakdown through writing about it. Through psychoanalysis, she came to understand that even her support of the Soviet Union was “only a continuation of early childhood feelings,” primarily suffering and identification with pain. In Africa, in a house with mud walls and Liberty curtains, her mother had complained bitterly of the sacrifice of her own talents to the raising of two children. Lessing’s strikingly exact reversal of that sacrifice did not ease the weight she carried. In her earlier memoir she skips ahead to let us know that her firstborn son, when a middle-aged man, told her that he understood why she’d had to leave his father, but that he still resented it. Her need to save the world—so much better an excuse for running off than domestic restlessness or the desire to write stories and novels—is not mentioned. Both this crusade and its failure, however, may help to explain why her stories and novels had to take on that moral burden.
But this isn’t why they are worth reading. For all her theories and her ethics and the range of her literary personae—the African realist, the London scene painter, the anguished psychologist, the science-fiction galaxy roamer turning out volumes on “the fate of the universe”—Lessing’s rarest gift is for getting characters on their feet and setting the wind stirring the curtains with language so apparently simple it betrays no method at all. Many of the short stories, in particular, seem to have written themselves. “Homage for Isaac Babel” renders adolescent tenderness and the psychology of literary style in three perfect pages; “The Day Stalin Died” and “How I Finally Lost My Heart” divide the weighty themes of “The Golden Notebook” and release them like birds into the air. The classical concision of the form seems to induce in Lessing a kind of clear-eyed mental energy, an urge to pick the locks of the elaborate cages she builds in her novels.
It’s possible that what makes Lessing so fascinating a writer is the skewed alignment of her vision and her gifts. Here is a utopian humanist who cannot help seeing into the mixed and muddled cores of actual people; an apocalyptic town crier who is also one of modern fiction’s most precise domestic realists—a Pieter de Hooch who can suddenly flare into Bosch. True, it can sometimes seem that apocalypse is merely her requirement for making a decision. (When a world is ending, it is surely time to pack your bags.) But the woman who now claims in her memoirs to have been borne through much of her life upon the tides of history, helpless and passive, has written a stack of books that entirely contradict this notion—books that are nothing if not the product of a hugely stubborn and struggling soul.
“I have to conclude that fiction is better at ‘the truth’ than a factual record,” Lessing surmised in a new introduction to “The Golden Notebook,” in 1993. As is true of most writers’ memoirs, Lessing’s are secondary events, their unvarnished truths of special interest only because of the beautifully painted half truths that have gained a hold on our imaginations. But with Lessing the usual paradox only begins here, for the fiction that details her personal and political failures also embodies her creative triumphs. “The Golden Notebook” may be a monument to disillusion and despair, but it remains a monument—one that stands for an era of formidable transition. At its end, Anna Wulf pulls herself together, joins the Labour Party, and begins to do volunteer work to ease her need to make all things better than they ever can be. Her steps seem small, sad, and honest. At the same point of emotional recovery in “Walking in the Shade,” Lessing announces a newfound allegiance to Sufism, and the reader almost sighs at the sound of the authorial boots tramping off uphill again, in this spiralling Pilgrim’s Progress of a life.
In a time with so little faith in the achievement of higher destinations, it is surely Lessing’s ability to hold fast to her goal even as she records every stumble and collapse along the way which has made her work of near-inspirational value to so many. Oddly, Lessing the memoirist is unable to acknowledge this dual aspect of her appeal, and ends by belittling her own contribution. In “Walking in the Shade” she pronounces “The Golden Notebook” to be a failure, on the ground that even this most influential of her books hasn’t really changed the way people think. Characteristically, the standards of judgment expressed in her fiction are more reasonable, even more forgiving.
In a slender novel called “The Memoirs of a Survivor,” written in 1974 and called by the author “an attempt at autobiography,” Lessing fused all her visions and gifts into a single ravishing fable. Set on several levels of reality, it tells of a woman who is mysteriously entrusted with a child to care for just as the civilization around her crashes down. Despite its fantastic elements, the tone is natural, the people are true, the language is taut and glowing. Near the end, the woman sits in the ruins of a London-like city and pictures a garden that is now out of reach, a place she visited briefly and believes that people will return to someday. “It was hard to maintain a knowledge of that other world, with its scents and running waters and its many plants, while I sat here in this dull shabby daytime room,” she recalls. But then she goes on to assert, with quiet pride, just what Lessing might rightly claim for herself: “I did hold it. I kept it in my mind. I was able to do this.” ♦
Published in the print edition of the November 17, 1997, issue.
Claudia Roth Pierpont has contributed to The New Yorker since 1990 and became a staff writer in 2004.
****
Jean Gee was there at the start with Ann Cryer. Now 77, the former social worker helped to introduce the MP to the victims’ mothers in 2002. “I used to work with kids who were excluded from school,” she tells me. “And I’d see first-hand how they were picked up by men in their taxis.” At the time, Jean didn’t know that one of the girls would be a close family member.
Amber* was raped just over a decade ago by a man she believed was a friend of her father’s. Unlike a number of the town’s victims, who have left, fearful that their attackers still walk its streets, Amber still lives in Keighley. Every day is a reminder of her trauma. She suffers from a severe eating disorder that has left her unable to have children. Her body is skeletal, her arms tattooed. “It marks a girl for life,” Jean says.
Around the same time as Amber was being groomed, a gang of 12 men were targeting a 13-year-old girl, Autumn*. The torment she would endure — over 13 months between 2011 and 2012 — would become Keighley’s darkest chapter. During one incident, she was gang-raped by five men; during another, she was raped in an underground car park next to a wall brazenly graffitied with the names of some of her attackers.
In 2016, Autumn’s 12 attackers were convicted — and the judge found she’d been failed by police and social workers. After one attack, officers dismissed her as a prostitute; after another, they failed to progress a medical assessment. As for her abusers, the judge concluded that “they saw her as a pathetic figure who… served no purpose than to be an object that they could sexually misuse and cast aside”. In their mugshots, two of her attackers are smiling.
Autumn’s younger brother, Adam*, now in his early 20s, believes the past fortnight’s debate over grooming gangs is fuelled by hypocrisy. “Politicians of all stripes colluded with the police to engage in a cover-up,” he tells me. He blames the Conservative Party for “failing to act on this issue despite so many cases occurring under them”. And he blames Labour, whose leader this week suggested it was a “far-Right” issue, despite the “most impacted areas being run by that party”. Meanwhile, Nigel Farage’s Reform — which registered 10% of the vote in Keighley in last year’s general election, well below its national average — is also trying to make political hay. “At least the SDP here have always prioritised the issue of grooming gangs,” he says. As for Elon Musk, who described Labour MP Jess Phillips as a “rape genocide apologist” and called for Tommy Robinson to be released from prison, Adam views him as “clearly unstable” but welcomes his intervention. “Anything that brings attention to this issue is good,” he says.
Back in town, though, most are oblivious to this week’s political mudslinging, which culminated on Wednesday in a failed Conservative vote to force a new inquiry. Few of those I speak to — Pakistani and white, young and old — are aware of Musk’s recent comments. “If Tommy Robinson came to Keighley, he’d get beaten up,” says one white woman in her early-20s. “When will [Pretoria-born] Musk start tweeting about South Africa’s race problems?” jokes one bemused madrasah teacher.
There’s a similar lack of consensus over calls for a new inquiry into West Yorkshire’s grooming gangs. This is partly because people doubt its sincerity; neither the Conservatives nor Reform mentioned an inquiry in last year’s manifestos. But it’s mostly because few believe another investigation will be acted upon. “What would the value be?” says Gee, who voted Conservative last year. “How likely is it that something will happen? It’s not as if we have the money to change anything. Just look at our housing and social care system.”
Even Adam has his reservations. “What comes after? We need to deal with the source and not just address the past.” Typical was one stallholder in Keighley’s indoor market. “I know I’d feel different if my daughter was one of the victims,” she said, “but I don’t think it’s a priority now.” Keighley, she points out, may be pretty — but it isn’t thriving. In some neighbourhoods, 40% of households are classified as deprived.
Still, there are attempts to learn from the past. After school, youth workers patrol the shopping centre and adjoining bus station where many of the town’s victims were once ensnared. “There are still creeps around,” says one teenage girl. “But they’re not just Pakistani. To be honest, they’re more likely to be a 60-year-old white guy.”
There are those, however, who remain concerned that former groomers have gone unpunished. After all, if Cryer and those seven mothers were correct, and there were at least 35 offenders in the town, not all have been caught. “Membership of these gangs is informal and often it’s hard to pin down members,” says Adam, who still believes the abusers walk the town’s streets. “You also can’t blame girls who haven’t come forward given the police’s previous failures.” Jean agrees, though also believes many “grooming gangs” have been replaced by county lines gangs, whose members are both Pakistani and white. Just this month, more than 50 members of one such gang in Keighley — peddling heroin and crack cocaine — were arrested by police.
But such developments don’t fit into the narrative of Britain’s national debate — a binary war, fought mostly online, between those uncomfortable with highlighting the ethnicity of West Yorkshire’s grooming gangs and those who seek to exploit it. Meanwhile, the affected communities are viewed as collateral. As one fed up local told me: “We’ve got enough wars going on without your Tommy Robinsons starting another.”
In Keighley, few are worried about the return of the far-Right. Their Conservative MP, Robbie Moore, has been outspoken about the need for another inquiry, neutralising movements further to the Right. In 2017, the EDL tried to hold a protest in the town and were outnumbered by police. But there’s still disquiet. In 2022, three members of a neo-Nazi cell in Keighley were jailed after being caught buying a 3D printer to make a gun. The following year, a teenager was jailed after he planned to attack one of the town’s mosques while disguised as an armed police officer.
Nor has faith in its institutions been restored. Last November, the town was left horrified when a local police officer was jailed for having sex with a vulnerable domestic abuse victim whose complaints he had been tasked with investigating. When I asked West Yorkshire Police how it hopes to regain Keighley’s trust after decades of neglect, I was told the officer’s “offending was not connected to grooming gangs” and redirected to an old press release.
But it feels connected, feeding into a pattern of betrayal. Despite the best efforts of a noble MP, Keighley remains a case study in exploitation — first by a terrified establishment who ignored the abuse of the town’s young girls, and then by a far-Right menace who sought to capitalise on their cowardice. And now, as attempts are made to reheat their trauma, Keighley’s residents might be forgiven for their ambivalence. They’ve seen this before, and know how it ends. The fires of our digital ecosystem will consume its subjects. But in Keighley, cold resignation preserves them.
*Names have been changed
Jacob Furedi
***
Dylan
Will Stenberg
Alright, so I just saw the Bob Dylan movie. Some context: Bob Dylan is my favorite artist, in any medium, ever. Period. I'm not saying he's better than your favorite. Just, for me, he's the guy, in a profound way that's hard to articulate.
I'm one of those insufferable Bob Dylan nerds. I come to it naturally: my father is a Dylan bootleg collector who could easily, for instance, make you a playlist of 500 different renditions of "Like a Rolling Stone." As for me I can make the case that his 1983 album "Infidels" would be up there with "Blood on the Tracks" if he'd subbed out some of the official tracklist for outtakes (drop "Neighborhood Bully" and "Union Sundown," add "Blind Willie McTell" and "Lord Protect My Child" and, while you're at it, grab "Angelina" from the "Shot of Love" sessions; now you have a masterpiece). And that's just surface-level stuff.
I'm one of those guys - one of those who cringes when people make fun of his voice because it tends to show that they think his "Blonde on Blonde" vocal style was emblematic of his career when in fact very few popular singers have changed singing styles as often or as brazenly as Dylan; the type of fan who listens to his modern output as much as anything else and has seen him in concert many times and knows his interviews almost as well as his songs.
So, the movie was an experience.
In a way, it's impossible to make a movie about Bob Dylan because it's like taking a still picture of a kaleidoscope: the resulting impression will be inevitably deceptive. Todd Haynes knew this when he made "I'm Not There," but that was the right approach with a very uneven execution, as memorable as Cate Blanchett was in her segment. The perfect Dylan biopic, if such a thing is conceivable, might be some kind of middle ground between "A Complete Unknown" and the Haynes pic.
Anyway. This movie. I don't really care about the inaccuracies. They are abundant; too many to go over. As a screenwriter myself, a lot of them are forgivable. Lives don't play out like movies, with all the right beats falling into a Three Act structure, so if you're making a film from reality you can't escape playing with facts, molding the chaos of actual life into the form of a film.
For example, Johnny Cash was not at the Newport Fest where Dylan "went electric"; he was, however, a really vocal advocate of Dylan's transformation and gave him crucial encouragement to follow his muse. But it played out mostly in letters, which are shown in the movie but aren't very cinematic. So, you put him at Newport - he wasn't there in '65, but he was there in '64. Close enough. It's not really a lie; rather it is taking some truths (Cash played Newport, Cash defended Dylan) and finding a way to combine them that portrays Cash's advocacy dramatically. That's how you write a biopic.
I won't defend all the changes - I think the first meeting with Woody would play a lot more powerfully as it happened, without Pete Seeger present - but I'm just saying that I understand what was going on in principle.
The other thing is that Dylan obfuscates his own life and past to such a degree that any Dylan movie that is too faithful to the truth just wouldn't be Dylanesque. He's still doing this. Casual fans might not be aware, but the Netflix Martin Scorsese documentary on The Rolling Thunder Revue is full of mistruths, put in at Dylan's request, including actors portraying fictional interview subjects. At one point, Sharon Stone says she was with the tour as an underage groupie - she was not - which shows that Dylan will even lie in such a way that it makes him look bad, as long as it throws people off the trail. Those of us who are acolytes admire that about him, perversely or otherwise.
Okay, so, how was the movie? Fairly conventional, as one might expect from Mangold. Full of masterful performances, though they come off more often than not as highly-skilled impersonations rather than embodiments or impressionistic riffs, an approach which might be more suited to the subject, if not the tone, of the film.
Monica Barbaro as Joan Baez is superb. If she fails to fully capture the icy perfection of Joan's singing, that's not a knock - few can. (On the other hand, she nailed a really underrated facet of Joan's artistry - that she's a killer guitar player.) I don't know what Joan Baez was like in private in the 60s, but her performance also showcased a bit of the salty, foul-mouthed, free-spirited Baez we've seen in interviews since the turn of the century, who is a lot more fun-loving and relatable than the sometimes sanctimonious and overly pious folk goddess of the 60s. Barbaro's performance made me believe that Joan has always been both, which is probably true.
Speaking of pious, Norton's Pete Seeger might be the most pitch-perfect impression. I was able to suspend my disbelief most closely with his performance, so attuned was he to that collegiate mid-Atlantic accent, ramrod posture and seemingly unflappable demeanor (though they really kind of had it both ways with the infamous axe incident; I wanted to see Pete Hulk out as per the legend). The Pete Seeger I saw in the movie was exactly like the Pete Seeger of real life, in my humble opinion: artistically boring and ethically unimpeachable.
Elle Fanning was great, but her part was thin and mostly served as a foil to Chalamet. That's one point where a weak script showed through; another was some of the ridiculously expository dialogue (as Bob launches into "Like a Rolling Stone" in the studio, an engineer says something like, "Ooh boy, people aren't going to like this").
As for Timmy, I can't take anything away from him. It struck me as the impression of an excellent technician rather than a deep embodiment, but I don't think the script allowed for more. He did great. Particularly when he dons the dark glasses and high fashion, I caught glimpses of the real Dylan. (The 19-year-old Dylan who first came to New York and made that record was still very baby-faced, so seeing the waifish Chalamet in this period was surreal, like the '65 Bob doing an impression of his own childhood.)
There are some reviews out there complaining that we never get to know who the "real" Bob Dylan is, why he's writing these songs, or what he wants. That's fine with me. We don't know that in real life and any film that tried to make a thesis statement on this subject would be making a crucial mistake. And, yes, Dylan's reputation as "kind of an asshole" is portrayed without a lot of caveats - arguably to Bob's credit, since he reviewed the script - although I will say that this knock on Bob is mostly down to him being rude and kind of careerist whereas there is shit about his contemporaries that is actually nefarious and criminal, so I'm not sure why he gets singled out.
Maybe a nitpick of mine would be how wishy-washy the script is about to what extent Dylan believed in the movements that adopted his songs and offered him a figurehead position that he ultimately rejected. There's literally no point where you see him talk about politics or racial justice, but, like, the guy went to Mississippi at a time when it was possible to be yanked off a bus and found in a ditch with the back of your head blown off for doing so. A lot of people didn't go; Bob did. He "opened" for MLK before the "I Have a Dream" speech. The movie shows this - in a montage, and further removed by taking place on a television screen.
In my view, Dylan rejected the mantle of spokesperson and the rigidity of ideological constraints on his artistic vision - and more power to him - but I don't think his participation was insincere. The film doesn't take a stand on this one way or another, and that felt a little off, like they were afraid of offering a perspective. But it's a hard topic to avoid given the years they decided to cover, and I'd rather they'd disagreed with my take than just refuse to have one.
On this topic, I always like to mention that there's an issue of Esquire Magazine from 1965 that features a facial conglomeration of JFK, Fidel Castro, Malcolm X and ... Bob Dylan, a young guy who writes songs. When I see this, I get a visceral sense of the absolute dread that must have gripped the heart of someone who was essentially a middle-class Midwestern Jewish kid who happened to be possessed by artistic genius. He was being asked to die, by people who didn't even know him, and from this perspective I think his supposedly callous treatment of folks in the movement who refused to let him go feels more like the panic of self-preservation.
Anyway, that's a tangent. How was the movie? Fine. They made one really good decision: they put a ton of focus on the music, which is ultimately what matters. And Timmy did an incredible vocal impersonation of Dylan's singing at this juncture: the knife-like nasality that jumped through the speakers and forced you to listen; the weird flattened Midwest vowels filtered through an Okie impersonation; the deliberate articulation of each syllable like a judge pounding a gavel. He really nailed that.
It was good to begin and end with Woody, too. For all of the young Dylan's rudeness and ruthlessness, those visits showed "Another Side of Bob Dylan," as the record would be called. Not everyone in the folk scene was crossing the river to that grim place to spend time with a very sick man who could give nothing in return. And Bob wasn't inviting press to these things; we only know because he became famous. Those visits showed Bob Dylan's sense of the sacred which is ultimately what he has sought to preserve from anyone who would attempt to control or confine it, throughout all the years, eras and masks.
The young man who saw "a black branch with blood that kept drippin’," "a room full of men with their hammers a-bleedin’," "a white ladder all covered with water," and "ten thousand talkers whose tongues were all broken" isn't someone who can be portrayed onscreen because he exists in the interior, not the exterior, of the self. This is ultimately why biopics about artists are so difficult: what we most admire about them - their art - comes from a place where cameras can't go, and Bob Dylan has spent more time there than most people who have ever lived.

Paul Graham
woke
January 2025
The word "prig" isn't very common now, but if you look up the definition, it will sound familiar. Google's isn't bad:
A self-righteously moralistic person who behaves as if superior to others.
This sense of the word originated in the 18th century, and its age is an important clue: it shows that although wokeness is a comparatively recent phenomenon, it's an instance of a much older one.
There's a certain kind of person who's attracted to a shallow, exacting kind of moral purity, and who demonstrates his purity by attacking anyone who breaks the rules. Every society has these people. All that changes is the rules they enforce. In Victorian England it was Christian virtue. In Stalin's Russia it was orthodox Marxism-Leninism. For the woke, it's social justice.
So if you want to understand wokeness, the question to ask is not why people behave this way. Every society has prigs. The question to ask is why our prigs are priggish about these ideas, at this moment. And to answer that we have to ask when and where wokeness began.
The answer to the first question is the 1980s. Wokeness is a second, more aggressive wave of political correctness, which started in the late 1980s, died down in the late 1990s, and then returned with a vengeance in the early 2010s, finally peaking after the riots of 2020.
(AA Gill on PC versus Wales and Clare Balding)
This was not the original meaning of woke, but it's rarely used in the original sense now. Now the pejorative sense is the dominant one. What does it mean now? I've often been asked to define both wokeness and political correctness by people who think they're meaningless labels, so I will. They both have the same definition:
An aggressively performative focus on social justice.
In other words, it's people being prigs about social justice. And that's the real problem — the performativeness, not the social justice.
Racism, for example, is a genuine problem. Not a problem on the scale that the woke believe it to be, but a genuine one. I don't think any reasonable person would deny that. The problem with political correctness was not that it focused on marginalized groups, but the shallow, aggressive way in which it did so. Instead of going out into the world and quietly helping members of marginalized groups, the politically correct focused on getting people in trouble for using the wrong words to talk about them.
As for where political correctness began, if you think about it, you probably already know the answer. Did it begin outside universities and spread to them from this external source? Obviously not; it has always been most extreme in universities. So where in universities did it begin? Did it begin in math, or the hard sciences, or engineering, and spread from there to the humanities and social sciences? Those are amusing images, but no, obviously it began in the humanities and social sciences.
Why there? And why then? What happened in the humanities and social sciences in the 1980s?
A successful theory of the origin of political correctness has to be able to explain why it didn't happen earlier. Why didn't it happen during the protest movements of the 1960s, for example? They were concerned with much the same issues. [1]
The reason the student protests of the 1960s didn't lead to political correctness was precisely that — they were student movements. They didn't have any real power. The students may have been talking a lot about women's liberation and black power, but it was not what they were being taught in their classes. Not yet.
But in the early 1970s the student protestors of the 1960s began to finish their dissertations and get hired as professors. At first they were neither powerful nor numerous. But as more of their peers joined them and the previous generation of professors started to retire, they gradually became both.
The reason political correctness began in the humanities and social sciences was that these fields offered more scope for the injection of politics. A 1960s radical who got a job as a physics professor could still attend protests, but his political beliefs wouldn't affect his work. Whereas research in sociology and modern literature can be made as political as you like. [2]
I saw political correctness arise. When I started college in 1982 it was not yet a thing. Female students might object if someone said something they considered sexist, but no one was getting reported for it. It was still not a thing when I started grad school in 1986. It was definitely a thing in 1988 though, and by the early 1990s it seemed to pervade campus life.
What happened? How did protest become punishment? Why were the late 1980s the point at which protests against male chauvinism (as it used to be called) morphed into formal complaints to university authorities about sexism? Basically, the 1960s radicals got tenure. They became the Establishment they'd protested against two decades before. Now they were in a position not just to speak out about their ideas, but to enforce them.
A new set of moral rules to enforce was exciting news to a certain kind of student. What made it particularly exciting was that they were allowed to attack professors. I remember noticing that aspect of political correctness at the time. It wasn't simply a grass-roots student movement. It was faculty members encouraging students to attack other faculty members. In that respect it was like the Cultural Revolution. That wasn't a grass-roots movement either; that was Mao unleashing the younger generation on his political opponents. And in fact when Roderick MacFarquhar started teaching a class on the Cultural Revolution at Harvard in the late 1980s, many saw it as a comment on current events. I don't know if it actually was, but people thought it was, and that means the similarities were obvious. [3]
College students larp. It's their nature. It's usually harmless. But larping morality turned out to be a poisonous combination. The result was a kind of moral etiquette, superficial but very complicated. Imagine having to explain to a well-meaning visitor from another planet why using the phrase "people of color" is considered particularly enlightened, but saying "colored people" gets you fired. And why exactly one isn't supposed to use the word "negro" now, even though Martin Luther King used it constantly in his speeches. There are no underlying principles. You'd just have to give him a long list of rules to memorize. [4]
The danger of these rules was not just that they created land mines for the unwary, but that their elaborateness made them an effective substitute for virtue. Whenever a society has a concept of heresy and orthodoxy, orthodoxy becomes a substitute for virtue. You can be the worst person in the world, but as long as you're orthodox you're better than everyone who isn't. This makes orthodoxy very attractive to bad people.
But for it to work as a substitute for virtue, orthodoxy must be difficult. If all you have to do to be orthodox is wear some garment or avoid saying some word, everyone knows to do it, and the only way to seem more virtuous than other people is to actually be virtuous. The shallow, complicated, and frequently changing rules of political correctness made it the perfect substitute for actual virtue. And the result was a world in which good people who weren't up to date on current moral fashions were brought down by people whose characters would make you recoil in horror if you could see them.
One big contributing factor in the rise of political correctness was the lack of other things to be morally pure about. Previous generations of prigs had been prigs mostly about religion and sex. But among the cultural elite these were the deadest of dead letters by the 1980s; if you were religious, or a virgin, this was something you tended to conceal rather than advertise. So the sort of people who enjoy being moral enforcers had become starved of things to enforce. A new set of rules was just what they'd been waiting for.
Curiously enough, the tolerant side of the 1960s left helped create the conditions in which the intolerant side prevailed. The relaxed social rules advocated by the old, easy-going hippy left became the dominant ones, at least among the elite, and this left nothing for the naturally intolerant to be intolerant about.
Another possibly contributing factor was the fall of the Soviet empire. Marxism had been a popular focus of moral purity on the left before political correctness emerged as a competitor, but the pro-democracy movements in Eastern Bloc countries took most of the shine off it. Especially the fall of the Berlin Wall in 1989. You couldn't be on the side of the Stasi. I remember looking at the moribund Soviet Studies section of a used bookshop in Cambridge in the late 1980s and thinking "what will those people go on about now?" As it turned out the answer was right under my nose.
One thing I noticed at the time about the first phase of political correctness was that it was more popular with women than men. As many writers (perhaps most eloquently George Orwell) have observed, women seem more attracted than men to the idea of being moral enforcers. But there was another more specific reason women tended to be the enforcers of political correctness. There was at this time a great backlash against sexual harassment; the mid 1980s were the point when the definition of sexual harassment was expanded from explicit sexual advances to creating a "hostile environment." Within universities the classic form of accusation was for a (female) student to say that a professor made her "feel uncomfortable." But the vagueness of this accusation allowed the radius of forbidden behavior to expand to include talking about heterodox ideas. Those make people uncomfortable too. [5]
Was it sexist to propose that Darwin's greater male variability hypothesis might explain some variation in human performance? Sexist enough to get Larry Summers pushed out as president of Harvard, apparently. One woman who heard the talk in which he mentioned this idea said it made her feel "physically ill" and that she had to leave halfway through. If the test of a hostile environment is how it makes people feel, this certainly sounds like one. And yet it does seem plausible that greater male variability explains some of the variation in human performance. So which should prevail, comfort or truth? Surely if truth should prevail anywhere, it should be in universities; that's supposed to be their specialty; but for decades starting in the late 1980s the politically correct tried to pretend this conflict didn't exist. [6]
Political correctness seemed to burn out in the second half of the 1990s. One reason, perhaps the main reason, was that it literally became a joke. It offered rich material for comedians, who performed their usual disinfectant action upon it. Humor is one of the most powerful weapons against priggishness of any sort, because prigs, being humorless, can't respond in kind. Humor was what defeated Victorian prudishness, and by 2000 it seemed to have done the same thing to political correctness.
Unfortunately this was an illusion. Within universities the embers of political correctness were still glowing brightly. After all, the forces that created it were still there. The professors who started it were now becoming deans and department heads. And in addition to their departments there were now a bunch of new ones explicitly focused on social justice. Students were still hungry for things to be morally pure about. And there had been an explosion in the number of university administrators, many of whose jobs involved enforcing various forms of political correctness.
In the early 2010s the embers of political correctness burst into flame anew. There were several differences between this new phase and the original one. It was more virulent. It spread further into the real world, although it still burned hottest within universities. And it was concerned with a wider variety of sins. In the first phase of political correctness there were really only three things people got accused of: sexism, racism, and homophobia (which at the time was a neologism invented for the purpose). But between then and 2010 a lot of people had spent a lot of time trying to invent new kinds of -isms and -phobias and seeing which could be made to stick.
The second phase was, in multiple senses, political correctness metastasized. Why did it happen when it did? My guess is that it was due to the rise of social media, particularly Tumblr and Twitter, because one of the most distinctive features of the second wave of political correctness was the cancel mob: a mob of angry people uniting on social media to get someone ostracized or fired. Indeed this second wave of political correctness was originally called "cancel culture"; it didn't start to be called "wokeness" till the 2020s.
One aspect of social media that surprised almost everyone at first was the popularity of outrage. Users seemed to like being outraged. We're so used to this idea now that we take it for granted, but really it's pretty strange. Being outraged is not a pleasant feeling. You wouldn't expect people to seek it out. But they do. And above all, they want to share it. I happened to be running a forum from 2007 to 2014, so I can actually quantify how much they want to share it: our users were about three times more likely to upvote something if it outraged them.
This tilt toward outrage wasn't due to wokeness. It's an inherent feature of social media, or at least this generation of it. But it did make social media the perfect mechanism for fanning the flames of wokeness. [7]
It wasn't just public social networks that drove the rise of wokeness though. Group chat apps were also critical, especially in the final step, cancellation. Imagine if a group of employees trying to get someone fired had to do it using only email. It would be hard to organize a mob. But once you have group chat, mobs form naturally.
Another contributing factor in this second wave of political correctness was the dramatic increase in the polarization of the press. In the print era, newspapers were constrained to be, or at least seem, politically neutral. The department stores that ran ads in the New York Times wanted to reach everyone in the region, both liberal and conservative, so the Times had to serve both. But the Times didn't regard this neutrality as something forced upon them. They embraced it as their duty as a paper of record — as one of the big newspapers that aimed to be chronicles of their times, reporting every sufficiently important story from a neutral point of view.
When I grew up the papers of record seemed timeless, almost sacred institutions. Papers like the New York Times and Washington Post had immense prestige, partly because other sources of news were limited, but also because they did make some effort to be neutral.
Unfortunately it turned out that the paper of record was mostly an artifact of the constraints imposed by print. [8] When your market was determined by geography, you had to be neutral. But publishing online enabled — in fact probably forced — newspapers to switch to serving markets defined by ideology instead of geography. Most that remained in business fell in the direction they'd already been leaning: left. On October 11, 2020 the New York Times announced that "The paper is in the midst of an evolution from the stodgy paper of record into a juicy collection of great narratives." [9] Meanwhile journalists, of a sort, had arisen to serve the right as well. And so journalism, which in the previous era had been one of the great centralizing forces, now became one of the great polarizing ones.
The rise of social media and the increasing polarization of journalism reinforced one another. In fact there arose a new variety of journalism involving a loop through social media. Someone would say something controversial on social media. Within hours it would become a news story. Outraged readers would then post links to the story on social media, driving further arguments online. It was the cheapest source of clicks imaginable. You didn't have to maintain overseas news bureaus or pay for month-long investigations. All you had to do was watch Twitter for controversial remarks and repost them on your site, with some additional comments to inflame readers further.
For the press there was money in wokeness. But they weren't the only ones. That was one of the biggest differences between the two waves of political correctness: the first was driven almost entirely by amateurs, but the second was often driven by professionals. For some it was their whole job. By 2010 a new class of administrators had arisen whose job was basically to enforce wokeness. They played a role similar to that of the political commissars who got attached to military and industrial organizations in the USSR: they weren't directly in the flow of the organization's work, but watched from the side to ensure that nothing improper happened in the doing of it. These new administrators could often be recognized by the word "inclusion" in their titles. Within institutions this was the preferred euphemism for wokeness; a new list of banned words, for example, would usually be called an "inclusive language guide." [10]
This new class of bureaucrats pursued a woke agenda as if their jobs depended on it, because they did. If you hire people to keep watch for a particular type of problem, they're going to find it, because otherwise there's no justification for their existence. [11] But these bureaucrats also represented a second and possibly even greater danger. Many were involved in hiring, and when possible they tried to ensure their employers hired only people who shared their political beliefs. The most egregious cases were the new "DEI statements" that some universities started to require from faculty candidates, proving their commitment to wokeness. Some universities used these statements as the initial filter and only even considered candidates who scored high enough on them. You're not hiring Einstein that way; imagine what you get instead.
Another factor in the rise of wokeness was the Black Lives Matter movement, which started in 2013 when a white man was acquitted after killing a black teenager in Florida. But this didn't launch wokeness; it was well underway by 2013.
Similarly for the Me Too Movement, which took off in 2017 after the first news stories about Harvey Weinstein's history of raping women. It accelerated wokeness, but didn't play the same role in launching it that the 80s version did in launching political correctness.
The election of Donald Trump in 2016 also accelerated wokeness, particularly in the press, where outrage now meant traffic. Trump made the New York Times a lot of money: headlines during his first administration mentioned his name at about four times the rate of previous presidents.
In 2020 we saw the biggest accelerant of all, after a white police officer asphyxiated a black suspect on video. At this point the metaphorical fire became a literal one, as violent protests broke out across America. But in retrospect this turned out to be peak woke, or close to it. By every measure I've seen, wokeness peaked in 2020 or 2021.
Wokeness is sometimes described as a mind-virus. What makes it viral is that it defines new types of impropriety. Most people are afraid of impropriety; they're never exactly sure what the social rules are or which ones they might be breaking. Especially if the rules change rapidly. And since most people already worry that they might be breaking rules they don't know about, if you tell them they're breaking a rule, their default reaction is to believe you. Especially if multiple people tell them. Which in turn is a recipe for exponential growth. Zealots invent some new impropriety to avoid. The first people to adopt it are fellow zealots, eager for new ways to signal their virtue. If there are enough of these, the initial group of zealots is followed by a much larger group, motivated by fear. They're not trying to signal virtue; they're just trying to avoid getting in trouble. At this point the new impropriety is now firmly established. Plus its success has increased the rate of change in social rules, which, remember, is one of the reasons people are nervous about which rules they might be breaking. So the cycle accelerates. [12]
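The adoption dynamic described here is easy to make concrete with a toy simulation. What follows is a minimal sketch of my own, not anything from the essay; the population size, number of zealots, contact rate, and fear threshold are all invented parameters chosen purely for illustration:

```python
# Toy model (illustrative only) of the adoption dynamic described above:
# zealots adopt a new rule first; everyone else adopts once enough
# people have told them they're breaking it.
import random

random.seed(0)

POPULATION = 10_000
ZEALOTS = 100           # initial adopters, eager to signal virtue
CONTACTS_PER_ROUND = 8  # how many people each person hears from per round
FEAR_THRESHOLD = 2      # "especially if multiple people tell them"

adopted = [False] * POPULATION
for person in range(ZEALOTS):
    adopted[person] = True

for round_no in range(1, 13):
    snapshot = adopted[:]  # everyone reacts to the same state of the world
    for person in range(POPULATION):
        if snapshot[person]:
            continue
        contacts = random.sample(range(POPULATION), CONTACTS_PER_ROUND)
        # The default reaction to being told you're breaking a rule is
        # to believe it, once enough people say so.
        if sum(snapshot[c] for c in contacts) >= FEAR_THRESHOLD:
            adopted[person] = True
    print(f"round {round_no}: {sum(adopted)} adopters")
```

With these assumed numbers, adoption crawls for the first few rounds, while almost the only people hearing the accusation twice are other zealots, then takes off once adopters are common enough that ordinary people routinely hear it from two or more contacts: the exponential phase, followed by saturation.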
What's true of individuals is even more true of organizations. Especially organizations without a powerful leader. Such organizations do everything based on "best practices." There's no higher authority; if some new "best practice" achieves critical mass, they must adopt it. And in this case the organization can't do what it usually does when it's uncertain: delay. It might be committing improprieties right now! So it's surprisingly easy for a small group of zealots to capture this type of organization by describing new improprieties it might be guilty of. [13]
How does this kind of cycle ever end? Eventually it leads to disaster, and people start to say enough is enough. The excesses of 2020 made a lot of people say that.
Since then wokeness has been in gradual but continual retreat. Corporate CEOs, starting with Brian Armstrong, have openly rejected it. Universities, led by the University of Chicago and MIT, have explicitly confirmed their commitment to free speech. Twitter, which was arguably the hub of wokeness, was bought by Elon Musk in order to neutralize it, and he seems to have succeeded — and not, incidentally, by censoring left-wing users the way Twitter used to censor right-wing ones, but without censoring either. [14] Consumers have emphatically rejected brands that ventured too far into wokeness. The Bud Light brand may have been permanently damaged by it. I'm not going to claim Trump's second victory in 2024 was a referendum on wokeness; I think he won, as presidential candidates always do, because he was more charismatic; but voters' disgust with wokeness must have helped.
So what do we do now? Wokeness is already in retreat. Obviously we should help it along. What's the best way to do that? And more importantly, how do we avoid a third outbreak? After all, it seemed to be dead once, but came back worse than ever.
In fact there's an even more ambitious goal: is there a way to prevent any similar outbreak of aggressively performative moralism in the future — not just a third outbreak of political correctness, but the next thing like it? Because there will be a next thing. Prigs are prigs by nature. They need rules to obey and enforce, and now that Darwin has cut off their traditional supply of rules, they're constantly hungry for new ones. All they need is someone to meet them halfway by defining a new way to be morally pure, and we'll see the same phenomenon again.
Let's start with the easier problem. Is there a simple, principled way to deal with wokeness? I think there is: to use the customs we already have for dealing with religion. Wokeness is effectively a religion, just with God replaced by protected classes. It's not even the first religion of this kind; Marxism had a similar form, with God replaced by the masses. [15] And we already have well-established customs for dealing with religion within organizations. You can express your own religious identity and explain your beliefs, but you can't call your coworkers infidels if they disagree, or try to ban them from saying things that contradict its doctrines, or insist that the organization adopt yours as its official religion.
If we're not sure what to do about any particular manifestation of wokeness, imagine we were dealing with some other religion, like Christianity. Should we have people within organizations whose jobs are to enforce woke orthodoxy? No, because we wouldn't have people whose jobs were to enforce Christian orthodoxy. Should we censor writers or scientists whose work contradicts woke doctrines? No, because we wouldn't do this to people whose work contradicted Christian teachings. Should job candidates be required to write DEI statements? Of course not; imagine an employer requiring proof of one's religious beliefs. Should students and employees have to participate in woke indoctrination sessions in which they're required to answer questions about their beliefs to ensure compliance? No, because we wouldn't dream of catechizing people in this way about their religion. [16]
One shouldn't feel bad about not wanting to watch woke movies any more than one would feel bad about not wanting to listen to Christian rock. In my twenties I drove across America several times, listening to local radio stations. Occasionally I'd turn the dial and hear some new song. But the moment anyone mentioned Jesus I'd turn the dial again. Even the tiniest bit of being preached to was enough to make me lose interest.
But by the same token we should not automatically reject everything the woke believe. I'm not a Christian, but I can see that many Christian principles are good ones. It would be a mistake to discard them all just because one didn't share the religion that espoused them. It would be the sort of thing a religious zealot would do.
If we have genuine pluralism, I think we'll be safe from future outbreaks of woke intolerance. Wokeness itself won't go away. There will for the foreseeable future continue to be pockets of woke zealots inventing new moral fashions. The key is not to let them treat their fashions as normative. They can change what their coreligionists are allowed to say every few months if they like, but they mustn't be allowed to change what we're allowed to say. [17]
The more general problem — how to prevent similar outbreaks of aggressively performative moralism — is of course harder. Here we're up against human nature. There will always be prigs. And in particular there will always be the enforcers among them, the aggressively conventional-minded. These people are born that way. Every society has them. So the best we can do is to keep them bottled up.
The aggressively conventional-minded aren't always on the rampage. Usually they just enforce whatever random rules are nearest to hand. They only become dangerous when some new ideology gets a lot of them pointed in the same direction at once. That's what happened during the Cultural Revolution, and to a lesser extent (thank God) in the two waves of political correctness we've experienced.
We can't get rid of the aggressively conventional-minded. [18] And we couldn't prevent people from creating new ideologies that appealed to them even if we wanted to. So if we want to keep them bottled up, we have to do it one step downstream. Fortunately when the aggressively conventional-minded go on the rampage they always do one thing that gives them away: they define new heresies to punish people for. So the best way to protect ourselves from future outbreaks of things like wokeness is to have powerful antibodies against the concept of heresy.
We should have a conscious bias against defining new forms of heresy. Whenever anyone tries to ban saying something that we'd previously been able to say, our initial assumption should be that they're wrong. Only our initial assumption of course. If they can prove we should stop saying it, then we should. But the burden of proof is on them. In liberal democracies, people trying to prevent something from being said will usually claim they're not merely engaging in censorship, but trying to prevent some form of "harm". And maybe they're right. But once again, the burden of proof is on them. It's not enough to claim harm; they have to prove it.
As long as the aggressively conventional-minded continue to give themselves away by banning heresies, we'll always be able to notice when they become aligned behind some new ideology. And if we always fight back at that point, with any luck we can stop them in their tracks.
The number of true things we can't say should not increase. If it does, something is wrong.
Notes
[1] Why did 1960s radicals focus on the causes they did? One of the people who reviewed drafts of this essay explained this so well that I asked if I could quote him:
The middle-class student protestors of the New Left rejected the socialist/Marxist left as unhip. They were interested in sexier forms of oppression uncovered by cultural analysis (Marcuse) and abstruse "Theory". Labor politics became stodgy and old-fashioned. This took a couple generations to work through. The woke ideology's conspicuous lack of interest in the working class is the tell-tale sign. Such fragments as are, er, left of the old left are anti-woke, and meanwhile the actual working class shifted to the populist right and gave us Trump. Trump and wokeness are cousins.
The middle-class origins of wokeness smoothed its way through the institutions because it had no interest in "seizing the means of production" (how quaint such phrases seem now), which would quickly have run up against hard state and corporate power. The fact that wokeness only expressed interest in other kinds of class (race, sex, etc) signalled compromise with existing power: give us power within your system and we'll bestow the resource we control — moral rectitude — upon you. As an ideological stalking horse for gaining control over discourse and institutions, this succeeded where a more ambitious revolutionary program would not have.
[2] It helped that the humanities and social sciences also included some of the biggest and easiest undergrad majors. If a political movement had to start with physics students, it could never get off the ground; there would be too few of them, and they wouldn't have the time to spare.
At the top universities these majors are not as big as they used to be, though. A 2022 survey found that only 7% of Harvard undergrads plan to major in the humanities, vs nearly 30% during the 1970s. I expect wokeness is at least part of the reason; when undergrads consider majoring in English, it's presumably because they love the written word and not because they want to listen to lectures about racism.
[3] The puppet-master-and-puppet character of political correctness became clearly visible when a bakery near Oberlin College was falsely accused of race discrimination in 2016. In the subsequent civil trial, lawyers for the bakery produced a text message from Oberlin Dean of Students Meredith Raimondo that read "I'd say unleash the students if I wasn't convinced this needs to be put behind us."
[4] The woke sometimes claim that wokeness is simply treating people with respect. But if it were, that would be the only rule you'd have to remember, and this is comically far from being the case. My younger son likes to imitate voices, and at one point when he was about seven I had to explain which accents it was currently safe to imitate publicly and which not. It took about ten minutes, and I still hadn't covered all the cases.
[5] In 1986 the Supreme Court ruled that creating a hostile work environment could constitute sex discrimination, which in turn affected universities via Title IX. The court specified that the test of a hostile environment was whether it would bother a reasonable person, but since for a professor merely being the subject of a sexual harassment complaint would be a disaster whether the complainant was reasonable or not, in practice any joke or remark remotely connected with sex was now effectively forbidden. Which meant we'd now come full circle to Victorian codes of behavior, when there was a large class of things that might not be said "with ladies present."
[6] Much as they tried to pretend there was no conflict between diversity and quality. But you can't simultaneously optimize for two things that aren't identical. What diversity actually means, judging from the way the term is used, is proportional representation, and unless you're selecting a group whose purpose is to be representative, like poll respondents, optimizing for proportional representation has to come at the expense of quality. This is not because of anything about representation; it's the nature of optimization; optimizing for x has to come at the expense of y unless x and y are identical.
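The footnote's optimization claim can be checked numerically. Below is a small sketch of my own, not the author's; the two groups, their sizes, and their score distributions are all invented for illustration. Selecting the top 100 by score alone is, by construction, the highest-mean selection of that size, so any different selection, including a proportionally representative one, must have a mean that is no higher:

```python
# Illustrative only: optimizing for proportional representation vs.
# optimizing for a quality score, with invented groups and scores.
import random

random.seed(1)

# Two equal-sized groups with different (assumed) score distributions.
pool = [("A", random.gauss(60, 10)) for _ in range(500)] + \
       [("B", random.gauss(50, 10)) for _ in range(500)]

K = 100

# Optimize for quality alone: take the top K scores, regardless of group.
quality_only = [s for _, s in sorted(pool, key=lambda gs: gs[1], reverse=True)[:K]]

# Optimize for proportional representation: fill the K slots in
# proportion to group size (50:50 here), taking each group's best.
top_a = sorted((s for g, s in pool if g == "A"), reverse=True)[:K // 2]
top_b = sorted((s for g, s in pool if g == "B"), reverse=True)[:K // 2]

mean = lambda xs: sum(xs) / len(xs)
print("quality-only mean score: ", round(mean(quality_only), 1))
print("proportional mean score: ", round(mean(top_a + top_b), 1))
```

The quality-only mean can never come out lower, and with these assumed distributions it comes out noticeably higher; shrink the gap between the two groups' distributions and the cost of the constraint shrinks with it, vanishing when x and y become the same thing.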
Talha Chowdhury, on Facebook:
This post got me curious. Then I saw Shane Parrish (one of my fav writers) sharing a comment about it.
The comment from Shane was a link to a transcript of Hamming's talk that changed his life. I'm sharing the link to the blog in the comments--but let me tell you, it's a 10/10 read.
I'm halfway through the blog, and here are my notes so far:
- the difference between having a vision and not having a vision is almost everything, and doing excellent work provides a goal which is steady in this world of constant change.
- “Why are you not working on and thinking about the important problems in your area?” If you do not work on important problems, then it is obvious you have little chance of doing important things.
- age is a factor physicists and mathematicians worry about. It is easily observed that the greatest work of a theoretical physicist, mathematician, or astrophysicist is generally done very early. But in literature, music composition, and politics, age seems to be an asset. The best compositions of a composer are usually the late ones, as judged by popular opinion.
- working with one’s door closed lets you get more work done per year than if you had an open door, but I have observed repeatedly that later those with the closed doors, while working just as hard as others, seem to work on slightly the wrong problems, while those who have let their door stay open get less work done but tend to work on the right problems!
- while I could never work as hard as John did, I could do a lot better than I had been doing.
- if you are to be a leader into the future, rather than a follower of others, I am now saying it seems to me to be necessary for you to look at the bigger picture on a regular, frequent basis for many years.
- a problem is important partly because there is a possible attack on it and not just because of its inherent importance. The three problems in physics—anti-gravity, teleportation, and time travel—are seldom worked on because we have so few clues as to how to start.
- “It ain’t what you do, it’s the way that you do it.” Look over what you have done, and recast it in a proper form.
************
Charles C Mann
Jefferson ink freeze
***
Breakfast for Eight Billion
thenewatlantis.com
January 10, 2025
Sometime in the 1980s, an unprecedented change in the human condition occurred. For the first time in known history, the average person on Earth had enough to eat all the time.
Depending on their size, adult humans need to take in about 2,000 to 2,500 calories per day to thrive. For as far back as historians can see, a substantial number of Earth’s inhabitants spent much of their lives below this level. Famine and want were the lot of many — sometimes most — of our species.
Even wealthy places like Europe were not protected from hunger. France today is famed for its great cuisine and splendid restaurants. But its people did not reach the level of 2,000 to 2,500 calories per day until the mid-1800s. And even as the French left famine in the rear-view mirror, starvation was still claiming hundreds of thousands of Irish, Scots, and Belgians. As late as the winter of 1944–45, the Netherlands suffered a crippling famine — the Hongerwinter. More than 20,000 people perished in just a few months. Food shortages plagued rural Spain and Italy until at least the 1950s.
In poorer regions the situation was bleaker still. The United Nations created the Food and Agriculture Organization in 1945. One year later, the FAO issued the World Food Survey, a forty-page report that was the first-ever comprehensive attempt to measure what the world ate. Half the Earth’s inhabitants, it reported, subsisted on fewer than 2,250 calories per day. Globally, just one out of three people had clearly adequate diets.
Worse, many believed, humanity’s rising population meant that the task of feeding everyone would get harder. Between 1950 and 1990, human numbers doubled, from 2.5 billion to 5 billion. (The figure is now more than 8 billion.) So overwhelming seemed the task of providing for all these people that a stream of bestselling books argued it was impossible. The most famous warning — The Population Bomb, published in 1968 by Stanford ecologist Paul Ehrlich — began with a stark statement: “The battle to feed all of humanity is over.” The book promised that “hundreds of millions of people are going to starve to death” in the 1970s because farmers would not be able to grow enough food. No matter what we do, Ehrlich wrote, “nothing can prevent a substantial increase in the world death rate.”
But hundreds of millions did not starve to death. The world death rate did not substantially increase. Instead, harvests rose — and have kept rising. By the 1980s, farmers were producing so much food that the global average food consumption had reached 2,000 to 2,500 calories per day, a landmark in history. Today the average is closer to 3,000 calories per day, and the emerging problem is obesity — too much food, rather than too little.
The picture is not wholly rosy — many people still do not get enough to eat. But hunger today is generally due to low incomes and poor food distribution, rather than to a failure to grow enough food. Farmers produce enough for everyone, but not all get what they need. Still, our daily lives are nothing like those of previous generations.
What happened? Modern agriculture.
Farming is one of our species’ oldest activities, dating back roughly 13,000 years. About six thousand years ago, farmers began using draft animals — horses, oxen, and so on. For millennia after that, what farmers did in their fields changed little. But in the 1960s and 1970s, in what is called the “Green Revolution,” research scientists, government agencies, agricultural businesses, and farmers themselves put together a new, strikingly more productive version of agriculture — Farming 2.0, if you will.
Today, Farming-2.0-style agriculture — which began with innovations in field crops like wheat but spread to other parts of farming, such as cattle ranching and chicken-raising — is by almost any measure the world’s most critical industry. It is directly responsible for our daily bread. But despite its overwhelming importance, Farming 2.0 is in many ways unknown to most of us, because it has been so smoothly successful that we have almost no picture of the underpinnings of the vast system that provides us with breakfast, lunch, and dinner. Too few have any sense of its scope, what brought it into existence, and in what ways it will need to change.
Its scale is staggering. Some 40 percent of our planet’s land is covered by cropland and pasture, the overwhelming majority of which is devoted to this type of agriculture. Increasingly, the farms themselves are huge — one in China covers 22 million acres. And they are supplied and their products processed and shuttled about the world by a web of gigantic multinational companies — Cargill, BASF, Wilmar, Archer Daniels Midland, and others. Supervising this enormous network of production and exchange are agricultural agencies in every world government that set rules for and inspect what farmers produce and sell. The result of this huge, global system is on display in every supermarket. The gleaming arrays of highly networked international goods — fruit from Peru, pasta from Italy, sugar from Brazil, vegetables from the Netherlands, condiments from China, coffee from Ethiopia — that line the aisles would have astonished our grandmothers and great-grandmothers.
All of this was brought into being by the Green Revolution. In its simplest form, the Green Revolution was a mix of two ancient technologies — or, rather, modernized versions of them — and one brand-new science. The old-but-updated technologies were fertilization and irrigation; the new science was genetics. Combining these three into Farming 2.0 was, arguably, the most important event in the twentieth century — it literally reshaped the face of the Earth.
Fertilization is the older of the two old technologies. People have known for at least 8,000 years that adding organic substances — notably, animal feces and urine — to the soil helps crops grow and flourish. But for almost all that time nobody knew why manure is good for crops. Only in the first decades of the nineteenth century, in a scientific breakthrough, did several German researchers discover that plant growth was controlled by the amount of nitrogen they take in through their roots — and that manure and every other type of fertilizer work principally because they add nitrogen to the soil.
Without nitrogen, plants cannot perform photosynthesis, the process by which they capture power from sunlight and carbon dioxide from the air to drive their growth. More technically, plants primarily need nitrogen because it is an essential constituent of rubisco, the molecule at the heart of photosynthesis. Like military recruiters who induct volunteers into the army and then return to their desks, rubisco molecules take carbon dioxide from the air, insert it into the tangle of chemical reactions that makes up photosynthesis, then go back for more. Rubisco’s facilitating actions are the limiting step in photosynthesis, which means that the rate at which rubisco functions determines the rate of the entire process. The more rubisco, the more photosynthesis; the more photosynthesis, the more crop growth; the more crop growth, the greater the harvest. For a plant, putting nitrogen in the soil is a bit like recharging the batteries that its cellular engines draw upon.
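The notion of a limiting step can be reduced to one line of arithmetic: a chain of reactions runs only as fast as its slowest link. The sketch below is a generic illustration of mine, not anything from the article, and the per-step rates are invented:

```python
# A multi-step process runs at the rate of its slowest step.
# Invented maximum rates for three stages (units arbitrary):
steps = {"fix CO2 (rubisco)": 3, "capture light": 40, "assemble sugars": 25}

overall = min(steps.values())           # throughput of the whole chain
bottleneck = min(steps, key=steps.get)  # the stage that sets the pace
print(f"overall rate = {overall}, limited by: {bottleneck}")

# Speeding up a non-bottleneck stage changes nothing; speeding up the
# bottleneck (more rubisco, hence the demand for nitrogen) raises the
# rate of the whole process, as long as it remains the slowest step.
```

This is why adding nitrogen, which lets the plant build more rubisco, translates so directly into growth.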
At first glance, the notion that soil could somehow be short of nitrogen seems odd. Nitrogen makes up more than three-quarters of the atmosphere. How could it be in scant supply? The reason is that more than 99 percent of the Earth’s nitrogen is in the form of nitrogen gas. Nitrogen gas — N2, in chemical notation — consists of two nitrogen atoms bound together so tightly that plants’ cellular machinery cannot split them apart for use. Instead, plants can absorb nitrogen only when it is combined with other elements, such as hydrogen, carbon, and oxygen, into compounds that plants can break up.
Scientists refer to the process of creating usable nitrogen mixtures as “fixing” nitrogen. In the soil, nitrogen is mainly fixed by microorganisms. Some break down organic matter, making its nitrogen available to plant roots in easily digestible forms. Others, such as the symbiotic bacteria that live around the roots of clover, beans, lentils, and other legumes, use clever biochemical tricks to break apart nitrogen gas and change it into compounds that plants can take in. A small amount is fixed by lightning, which zaps apart airborne N2 molecules, after which they combine with oxygen into compounds that dissolve in rainwater.
In the mid–nineteenth century, some of the same scientists who had discovered nitrogen’s essential role in photosynthesis realized that factory-made nitrogen compounds — chemical fertilizers — could substitute for manure. The trick was to make those artificial compounds. And the key to making them, chemists believed, was to learn how to manufacture ammonia — the same substance that is in ordinary household ammonia today — as a precursor, because they knew that ammonia could be easily transformed into the kind of nitrogen-containing substances that plants can absorb.
Early in the twentieth century, a German scientist named Fritz Haber found that he could break up nitrogen gas by heating and compressing it in the presence of special alloys that catalyze — facilitate — the process. The catalyzing alloys were Haber’s special contribution. Building on Haber’s work, a German chemical engineer named Carl Bosch then figured out how to produce ammonia on the industrial scale needed for farmers’ fields. Both men received Nobel Prizes.
The prizes were richly deserved; the Haber–Bosch process, as it is called, was arguably the most important technological development of the twentieth century, and one of the most consequential human inventions of any time. It made it possible to win “bread from air,” as the German physicist Max von Laue wrote in an obituary of Haber.
Because Haber and Bosch completed their work in the run-up to the Second World War, the Haber–Bosch process was not widely deployed until after the fight was over. The first leg of the Green Revolution — the first part of the rise of Farming 2.0 — was for countries across the world to build fertilizer factories.
Today more than 1 percent of the world’s industrial energy is devoted to making ammonia fertilizer. “That 1 percent,” the futurist Ramez Naam says, “roughly doubles the amount of food the world can grow.”
Increasing the food supply fostered an increase in human numbers. The energy scientist Vaclav Smil has calculated that fertilizer from the Haber–Bosch process is responsible for about 40 percent of the world’s dietary protein. Roughly speaking, this is equivalent to feeding 40 percent of the world: about 3.2 billion people. More than three billion men, women, and children — an incomprehensibly vast cloud of dreams, hopes, and fears — owe their existence to two obscure early-twentieth-century German chemists and the fertilizer industry they spawned.
Equally important to Farming 2.0 was the second leg of the Green Revolution, irrigation. Irrigation dates back nearly as far as fertilizer — the oldest known example is from about 6000 B.C., in the Jordan Valley, which lies between Israel and Jordan. But it sprang up independently in many places, from ancient China to ancient Mexico to ancient Zimbabwe.
In every one of these places, the basic concept was simple: divert water from a river or lake above the farm into its fields, watering the crop. But irrigation was difficult to establish in most areas, because those rivers and lakes are typically lower than the farmland around them, and water doesn’t flow uphill. Mechanical pumps to extract water from wells, over riverbanks, and up slopes did not exist until classical Greece — the famous mathematician Archimedes is known for one especially useful type, a screw — and high-capacity pumps were unknown until the Renaissance. But even when better pumps became available, so much energy was required to pump the volume of water necessary to flood fields that more often than not irrigation was impractical.

In most cases, the arrival of fossil fuels — compact, easily transportable energy — answered the question of how to make large-scale irrigation practicable. In the late nineteenth century, farmers on the Great Plains of North America erected thousands of windmills to pump groundwater from wells into their fields. But those windmills could typically draw water from no deeper than 30 to 60 feet, and digging wells was expensive. In the mid–twentieth century, pumps powered by fossil fuels let farmers reach 300 feet below the surface to the great Ogallala Aquifer, an underground region of water the size of Lake Huron that stretches from South Dakota to Texas. Plains agriculture exploded. By the late 1970s, water from the Ogallala was responsible for much of the wheat, corn, alfalfa, and cotton grown in the United States. Forty percent of the nation’s cattle drank it.
When water couldn’t be pulled out of the ground, nations used new engineering techniques to build huge dams and canals to store irrigation water. Construction began in the mid-1800s on the first great modern water storage complex, in the Indus Basin between Pakistan and India. Slowly increasing in scale, and including one of the biggest dams in the world, this massive project grew to transform the Indus into a global agricultural power. Other big projects were built in the Ganges Valley in India, the Soviet Union’s Aral Sea, Egypt’s Nile Valley, and the Colorado, Columbia, and Central Valleys in the United States. Today, a quarter of the world’s cropland is irrigated — but it provides roughly 40 percent of all the food humans consume.
The third part of the Green Revolution was the introduction of genetics — or, more precisely, the use of genetic tools to create new varieties of crops that could take better advantage of the sudden rise in fertilizer and irrigation.
This effort largely traces back to a little-known project in Mexico during and after the Second World War. So many Mexicans then were suffering from hunger and malnutrition that the resultant unrest had led the United States to fear its neighbor would fall to a fascist coup. The U.S. government asked the Rockefeller Foundation, one of the biggest and most influential U.S. research-funding charities, to help Mexico increase its production of corn, the country’s primary staple food.
The corn project met with little success. But the foundation ended up changing the world anyway. As a side project, it brought down a young plant-science researcher, Norman Borlaug, to look at stem rust, an ancient disease that afflicted Mexico’s wheat crop. In conventional terms, hiring Borlaug for this task was an odd, even foolish choice. Borlaug knew little about wheat, had never been to Mexico, and didn’t speak Spanish. But, as it turned out, his intense ambition and endless willingness to work made up for these deficits. In an unprecedented move, he collected hundreds of varieties of wheat and bred them together in every imaginable combination, hoping to create a novel hybrid that could naturally fight off stem rust. The task was breathtakingly arduous — one reason it had never been tried.
Wheat flowers, known as florets, grow in little bunches atop the stalks. Like many other varieties of flower, each wheat floret has both male and female reproductive organs. Rising on thin stalks from the center of the floret are the stamens, the male parts of the plant, which contain the pollen in little pods at the tips. Below these are the delicate filaments of the female parts of the flower, the stigmas, with the ovary below. When the stigmas develop enough to be capable of reproduction, biochemical signals cause the stamen tips to burst, releasing thousands of dust-like grains of pollen. The grains settle on the ends of the stigmas. Sensing their arrival, the stigmas create a small tube that permits the pollen to link to the ovary beneath. Male and female mechanisms join and begin creating a seed, the grain that the farmer will harvest.
Because both pollen and ovary come from the same plant, the new seed has the same genes as its parent — it’s an identical copy. To create new varieties, plant breeders must stop wheat from fertilizing itself. In Borlaug’s case, this boiled down to him and a couple of assistants sitting on little home-made stools in the sun, opening up every floret on every plant of a particular variety, carefully plucking out the stamens (the male parts) with tweezers, hundreds of plants at a time. When they were done, the wheat was entirely female — the plants had been, so to speak, feminized. Borlaug and his assistants then covered all the altered florets with folded paper so that no pollen could get in.
In the next step, they snipped off the stigmas (the female parts) from a second wheat variety, making those plants entirely male. Opening the paper covers on the previously “feminized” florets, Borlaug inserted the stigma-less, male florets, twirled them around to release their pollen, then sealed the paper back up. And then he did this all over again for the next two wheat varieties, and the next two — hundreds of thousands of breedings in all, each and every one performed under the harsh Mexican sun. After a few days he removed the paper from all the hybrids, hand-planted them in fields, let them grow, and exposed them to stem rust. Most would die, but a few would survive, and he would breed the survivors to make new varieties, hoping to create resistant hybrids.
After years of labor, Borlaug bred rust-resistant varieties of wheat. But he did more than that. He bred wheat varieties that were more productive than others, and, realizing that those high-yield ones would bear so much grain that they would become top-heavy and fall over, he bred varieties that had much shorter, stronger stalks — dwarf wheat, as it is called. Even more than that, his varieties were unusually tolerant of local conditions — they could be planted almost anywhere.
Borlaug’s varieties doubled, tripled, even quadrupled yields — but only if the plants were massively fertilized and given plenty of water, which usually required irrigation systems. Within a decade, Mexico changed from a country that had to import much of its wheat to a wheat exporter. Borlaug’s wheat then went to India, Pakistan, Egypt, and other nations, raising their yields, too.
The results convinced the Rockefeller and Ford Foundations to try the same thing with rice. The staple crop of much of Asia, the world’s most populous region, rice is humankind’s single most important food. The foundations established a new laboratory in the Philippines for the rice project — the International Rice Research Institute. In the 1960s and 1970s, IRRI researchers used Borlaug’s mass-breeding methods to develop new, highly productive strains of rice. Like Borlaug’s wheat, the IRRI rice was part of a package that included increased irrigation and fertilizer use. Between 1961 and 2003, Asian irrigation more than doubled, from 182 million acres to 417 million acres, and fertilizer use went up by a factor of more than twenty, from 4 to 87 million tons. Combined with the new rice strains, the consequence was a near-tripling of Asian rice production.
The effects were staggering. In the 1970s, much of South and East Asia was plagued by hunger. By the twenty-first century, Asians had an average of 30 percent more calories in their diet. Millions upon millions of families had more food, and with that came so much else. Seoul and Shanghai, Jaipur and Jakarta; shining skyscrapers, pricey hotels, traffic-choked streets blazing with neon: all are built atop a foundation of laboratory-bred rice.
All this progress came at a cost. Farming 2.0 has transformed human life, but it has also wreaked environmental havoc. Agriculture has always caused erosion, water pollution, biodiversity loss, and other ecological problems. Green Revolution farming, which places more demands on the Earth, has worsened these issues, with irrigation mismanagement and fertilizer overuse being particularly alarming. Poor irrigation practice can poison the soil by filling it with the salts dissolved in water; fertilizer overuse can pollute rivers, lakes, and oceans with the runoff from fields. All these problems must be resolved.
And they are complicated by another. The sheer scale of Farming 2.0, with its giant farms and giant firms, has led food consumers increasingly to mistrust the industry. They don’t believe these enormous, profit-making enterprises have their best interests at heart. The mistrust is aggravated by the very success of modern agriculture, which has made it possible for much of the world’s population to live without having any connection to the farms and farmers who provide their food.
Meanwhile, the world’s population will keep increasing, probably to 10 billion this century. People will likely be more affluent, too, which means they will demand better food. The next task for the next generation of farmers, researchers, and agricultural companies will be to maintain the gains of the past for all these new people while preserving the environment for the future. The task for everyone else will be to learn enough about the food system to help them do it.
*****
part 2
We Live Like Royalty and Don’t Know It
Introducing “How the System Works,” a series on the hidden mechanisms that support modern life
At the rehearsal dinner I began thinking about Thomas Jefferson’s ink. My wife and I were at a fancy destination wedding on a faraway island in the Pacific Northwest. Around us were musicians, catered food, a full bar, and chandeliers, all set against a superb ocean sunset. Not for the first time, I was thinking about how amazing it is that relatively ordinary middle-class Americans could afford such events — on special occasions, at least.
My wife and I were at a tableful of smart, well-educated twenty-somethings — friends of the bride and groom. The wedding, with all its hope and aspiration, had put them in mind of the future. As young people should, they wanted to help make that future bright. There was so much to do! They wanted the hungry to be fed, the thirsty to have water, the poor to have light, the sick to be well.
But when I mentioned how remarkable it was that a hundred-plus people could parachute into a remote, unfamiliar place and eat a gourmet meal untroubled by fears for their health and comfort, they were surprised. The heroic systems required to bring all the elements of their dinner to these tables by the sea were invisible to them. Despite their fine education, they knew little about the mechanisms of today’s food, water, energy, and public-health systems. They wanted a better world, but they didn’t know how this one worked.
This is not a statement about Kids These Days so much as about Most People These Days. Too many of us know next to nothing about the systems that undergird our lives. Which is what put me in mind of Thomas Jefferson and his ink.

Jefferson was one of the richest men in the new United States. He had a 5,000-acre plantation worked by hundreds of slaves, a splendid mansion in Virginia that he had designed himself, one of the biggest wine collections in America, and one of the greatest private libraries in the world — it became the foundation of the Library of Congress. But despite his wealth and status his home was so cold in winter that the ink in his pen sometimes froze, making it difficult for him to write to complain about the chill.
Jefferson was rich and sophisticated, but his life was closer to the lives of people in the Iron Age than it was to ours. This is true literally, in that modern forms of steel and other metal alloys hadn’t been invented. But it is most true in the staggering fact that everyone at the rehearsal dinner was born and raised in luxury unimaginable in Jefferson’s time.
The young people at my table were anxious about money: starter-job salaries, high rents, student loans. But they never worried about freezing in their home. They could go to the sink and get a glass of clean water without fear of getting sick. Most of all, they were alive. In 1800, when Jefferson was elected president, more than one out of four children died before the age of five. Today, it is a shocking tragedy if a child dies. To Jefferson, these circumstances would have represented wealth and power beyond the dreams of avarice. The young people at my table had debts, but they were the debts of kings.
Jefferson lived in a world of horse-drawn carriages, blazing fireplaces, and yellow fever. But what most separates our day from his is not our automobiles, airplanes, and high-rise apartments — it is that today vast systems provide abundant food, water, energy, and health to most people, including everyone at the rehearsal dinner. In Jefferson’s time, not even the president of the United States had what we have. But few of us are aware of that, or of what it means.

The privilege of ignorance was not available to Jefferson. Monticello’s water supply was a well, which frequently ran dry. The ex-president had to solve the problem on his own. Even if he had had a telephone, there was nobody to call — water utilities did not exist. To make his water supply more reliable, he decided to create a backup system: four cisterns, each eight feet long, wide, and deep, that would store rainwater. His original designs leaked and were vulnerable to contamination. Jefferson, aided by hired architects and slave labor, spent a decade working out how to improve them. He was immersed in his own infrastructure.
We, too, do not have the luxury of ignorance. Our systems serve us well for the most part. But they will need to be revamped for and by the next generation — the generation of the young people at the rehearsal dinner — to accommodate our rising population, technological progress, increasing affluence, and climate change.
The great European cathedrals were built over generations by thousands of people and sustained entire communities. Similarly, the electric grid, the public-water supply, the food-distribution network, and the public-health system took the collective labor of thousands of people over many decades. They are the cathedrals of our secular era. They are high among the great accomplishments of our civilization. But they don’t inspire bestselling novels or blockbuster films. No poets celebrate the sewage treatment plants that prevent them from dying of dysentery. Like almost everyone else, they rarely note the existence of the systems around them, let alone understand how they work.

Jefferson believed that an informed citizenry was necessary to democratic self-rule — a mandate that extends all the way out to understanding the systems that envelop us. It’s easy to see why he believed this: Voters who understand how we are entwined with these systems will support maintaining and expanding them for our children and grandchildren. Food, electricity, water, and public-health systems obviously make our individual lives more comfortable. But they are also essential to our collective economic prosperity. Failed infrastructure is one big reason why so many poor countries remain poor. As a citizen and a parent, I don’t want our country to get anywhere near that territory.
There’s another, equally important reason for thinking about the systems around us. Water, food, energy, public health — these embody a gloriously egalitarian and democratic vision of our society. Americans may fight over red and blue, but everyone benefits in the same way from the electric grid. Water troubles and food contamination are afflictions for rich and poor alike. These systems are powerful reminders of our common purpose as a society — a source of inspiration when one seems badly needed.
Every American stands at the end of a continuing, decades-long effort to build and maintain the systems that support our lives. Schools should be, but are not, teaching students why it is imperative to join this effort. Imagine a course devoted to how our country functions at its most basic level. I am a journalist who has been lucky enough to have learned something about the extraordinary mechanisms we have built since Jefferson’s day. In this series of four articles, I want to share some of the highlights of that imaginary course, which I have taken to calling “How the System Works.”
We begin with our species’ greatest need and biggest system — food.
Read the first essay: “Breakfast for Eight Billion”
*******************************************
on fb about Musk, by Philip Low
I have known Elon Musk at a deep level for 14 years, well before he was a household name. We used to text frequently. He would come to my birthday party and invite me to his parties. He would tell me everything about his women problems. As sons of highly accomplished men who married venuses, were violent and lost their fortunes, and who were bullied in high school, we had a number of things in common most people cannot relate to. We would hang out together late in Los Angeles. He would visit my San Diego lab. He invested in my company.
Elon is not a Nazi, per se.
He is something much better, or much worse, depending on how you look at it.
Nazis believed that an entire race was above everyone else.
Elon believes he is above everyone else. He used to think he worked on the most important problems. When I met him, he did not presume to be a technical person — he would be the first to say that he lacked the expertise to understand certain data. That happened later. Now, he acts as if he has all the solutions.
All his talk about getting to Mars to “maintain the light of consciousness” or about “free speech absolutism” is actually BS Elon knowingly feeds people to manipulate them. Everything Elon does is about acquiring and consolidating power. That is why he likes far right parties, because they are easier to control. That is also why he gave himself $56 Billion which could have gone to the people actually doing the work and innovations he is taking credit for at Tesla (the reason his name is on so few patents is because putting a fake inventor on a patent would kill it and moreover it would reveal the superstars behind the work). His lust for power is also why he did xAI and Neuralink, to attempt to compete with OpenAI and NeuroVigil, respectively, despite being affiliated with them. Unlike Tesla and Twitter, he was unable to conquer those companies and tried to create rivals. He announced Neuralink just after I invited his ex-wife, which she and I notified him about, to a fundraising dinner for Hebrew University in London (The fact that she tried to kiss me — I immediately pushed her away — while taking a photo at that event, even if playfully, clearly may have added to the alienation and possible emasculation he may have felt when she spoke to me in a pool at a party when they were together and she was naked. To not be disrespectful to her or to him, I stayed but looked at the sky whilst talking to her). I fired him with cause in December 2021 when he tried to undermine NV. It is ironic that later, he clearly tried to undermine Twitter before buying it, and in my view, blowing it up and using it to manipulate the masses to lean to the far right in country after country, including the USA.
[Here is more detail as some people asked. After he received a press release draft confirming NV never took a penny from the US Government, he asked to be removed from the Business Advisory Board, but then tried to give the stock he bought back, including for no money, which could have completely crashed NV’s stock price. I told him he was fired from the BAB, with cause, as he admitted he had not been participating. That also meant he had no ability to exercise his stock options (years prior, despite not being allowed to discuss his investment because of a solid NDA, he/his people leaked to the press that he had invested twice as much in NV as he actually did, as if the stock options had been counted as stock). This is the email I sent him around that time:
“Elon,
Only one of us apparently knows the difference between Science and PR, and between friendship and phonies, and unfortunately you ain’t it.
Let’s cut ties here.
Your NV stock is not being transferred, and if you try to transfer it without my consent, in contravention of your stock purchase agreement, I will have to shove my boots so deep up your derrière, legally, that your pissing contest with Bezos will seem like it was from another life, one you want to get back to.
Good luck with your implants, all of them, and with building Pottersville on Mars.
Seriously, don’t fuck with me.”
Elon has less than 0.05% of NV and was never a principal or principal investor in NV as was falsely reported by some. I own between 80 and 90%. NV is the most valuable neurotech company in the world and does not regard Neuralink as a competitor because we have an arsenal of patents and introduced our technology to customers in 2009 and furthermore do not view their implantable technology as scalable. Moreover, the company is apparently under investigation regarding statements Elon made to investors and most of Neuralink’s co-founders ditched Elon and the company.]
Elon did two Nazi salutes.
He did them for five main reasons:
1. He was concerned that the “Nazi wing” of the MAGA movement, under the influence of Steve Bannon, would drive him away from Trump, somewhere in the Eisenhower Executive Office Building, rather than in the West Wing which is where he wants to be. He was already feeling raw over the fact that Trump did not follow his recommendation for Treasury Secretary and that the Senate also did not pick his first choice;
2. He was upset that he had had to go to Israel and Auschwitz to make up for agreeing with a Nazi sympathizer online and wanted to reclaim his “power” just like when he told advertisers to “go fuck yourself”. This has nothing to do with Asperger’s;
3. There are some Jews he actually hates: Sam Altman is amongst them;
4. He enjoys a good thrill and knew exactly what he was doing;
5. His narcissistic self was hoping the audience would reflect his abject gesture back to him, thereby showing complete control and dominion over it, and increasing his leverage over Trump. That did not happen.
Bottom line: Elon is not a Nazi but he did give two Nazi Salutes, which is completely unacceptable.
——————————————————————————-
N.B. For the few whining about my post “sans connaissance de cause” (without knowing the facts) and either trembling about my having shattered their illusions about their cult leader or thinking I am defending Elon:
I. My point is that he is transactional rather than ideological;
II. That being said, I am not defending him or his actions, just explaining them and confirming that he did, in fact, do two Nazi Salutes if anyone had doubts or believed the doctored footage of Taylor Swift doing the same thing to normalize what Elon did;
III. At some point, it matters to few people if one is a Nazi or if one acts like one. My father was a Holocaust Survivor. 32 out of 35 of his family members were murdered by Nazis. My mother’s grandparents were murdered in Auschwitz;
IV. After Elon tried to manipulate NV’s stock in 2021, I fired him with cause, and he was unable to exercise his stock options. In the aftermath of the Nazi Salutes, I told both him and his wealth manager to fuck off. Any remaining friendship between us ended with the Nazi Salutes. He is blocked on my end and I am pretty sure I am blocked on his;
V. I did not share what he told me in confidence. I just happen to know him extremely well, the person, the aspirations and the Musk Mask;
VI. I know who I am, have no desire to be famous and give exceedingly few media interviews. I prefer to work in obscurity and let the work speak for itself. I am certainly not envious and would definitely not want Elon’s life, including living in a bubble and having to make one outlandish claim after another and manipulate the public, elections and governments to shore up my stock and prevent the bubble from bursting. Unlike Elon, I am an actual scientist and inventor and I am not pretending to be someone I am not, like a fellow who got his BA in Econ at 26 all of a sudden pretending to be an expert in mechanical engineering, chemistry, rocket science, neuroscience and AI and keeping the people actually doing the work hidden and paying people to play online games in his name to appear smart and feed his so-called “Supergenius” Personality Cult — the “Imperator” has no clothes, and he knows it. I am just very disappointed in what happened to someone I had a lot of deep admiration for and the first person to find out about my concerns about his behavior was always him;
VII. He is the one who betrayed a number of his friends, including Sergey, and, given his actions, many other people who believed him and believed in him. I have no sympathy for this behavior, and at some point, after having repeatedly confronted it in private, I believe the ethical thing to do is to speak out, forcefully and unapologetically, whatever the risks may be, so as to not be part of the timid flock remaining silent while evil is being done, including propping up far right governments around the world in part to deregulate his companies and become the first trillionaire and otherwise to “rule the planet” — he knows Mars won’t be terraformed in his lifetime and he really wants his planet. No joke… Ethics matter. People matter. The truth matters.
I took down Descartes (through the Cambridge Declaration on Consciousness) and I am definitely not afraid of a so-called inventor whose greatest invention is his image.
I will not be silent. You should not be either. I am a sovereign individual, and so are you. I stood up to bullies, and am stepping out of the dark to do it again.
Stop working for him and being exploited by him. Sell your Tesla and dump your Tesla stock. Nikola Tesla was a great, creative and courageous man who led with ethics and by example and he would not have wanted for his good name to have been used by him and would agree with my principled stance. Sign off of “X” which is boosting far right propaganda, and of your Starlink as well. He is a complete c^^ (British slang not meant to be offensive to women) who doesn’t give a shit about you — only about power. Just ask Reid Hoffman. He only wants to control, dominate and use you — don’t let him and cut him and his businesses out of your and your loved ones’ lives entirely. Remember he is a total miserable self-loathing poser, and unless you happen to be one too, he will be much more afraid of you than you should ever be of him.
He will probably come after me, and I am completely fine with that. I am a self-made multibillionaire with an armada of lawyers — literally — and most importantly, I know who I am and who I stand for, the people and their freedoms, whatever happens. He can send his dumb Proud Boys and Oath Keepers after me and they will be butchered on sight. Either way, I would rather die with honor than live as a coward.
“Silence encourages the tormentor, never the tormented.” — Elie Wiesel, Holocaust Survivor and Nobel Peace Prize laureate
PS. Days after this post went viral, Elon got a nomination for the Nobel Peace Prize, after other efforts to restore his made-up image (including an intervention by Bibi Netanyahu who does not want Elon against him when negotiating with Trump) all failed. Unless he is able to successfully manipulate the Nobel Committee, it is highly unlikely that it would award the Prize to anyone interfering with free elections, promoting hate speech, sympathizing with Nazis and doing Nazi Salutes.
A A Gill on Brexit
AA Gill on PC
https://www.spiked-online.com/2016/12/11/aa-gill-and-the-right-to-offend/
https://www.esquire.com/uk/culture/advice/a8146/aa-gill-on-politics/
https://theafterword.co.uk/great-singles-that-definitely-that-dont-seem-to-have-been-forgotten/
******
CIA from Unherd
https://unherd.com/2025/02/the-breakdown-of-the-cia/?tl_inbound=1&tl_groups[0]=18743&tl_period_type=3&utm_source=UnHerd+Today&utm_campaign=09f3f13f7e-EMAIL_CAMPAIGN_2025_02_20_09_33&utm_medium=email&utm_term=0_79fd0df946-09f3f13f7e-35034266
February 20, 2025
John Ratcliffe, Trump’s appointee as CIA director, says that he wants officers who are “willing to go to places no one else can go and do things that no one else can do”. This, one might have thought, is a straightforward enough description of any intelligence operative worth his keep, just as country analysts in Langley must be really fluent in foreign languages to do their jobs effectively. Certainly, Ratcliffe seems keen to employ only the best at the CIA, and has offered eight months of pay and benefits to those who prefer to leave.
Yet barely had Ratcliffe opened his mouth than he faced furious attack. The CIA’s carefully cultivated friends in the press — media relations, Hollywood included, are the agency’s outstanding skill — assailed the director and the White House for a dangerous misstep. “He might be right that a leaner CIA could be meaner,” proclaimed David Ignatius in The Washington Post. “But how can he be sure the buyouts aren’t paring more muscle than fat?” Actually one must hope that many, very many, will take their chance to leave. The sad truth, confirmed by my extended work for one CIA Director, and also work as a contractor for the Agency in the field, is that it lost its way years ago — and now increasingly relies on secrecy to conceal its decay.
The CIA does have plenty of people who serve in “stations” overseas. That’s a dramatic term for humdrum offices in foreign chanceries. That is where CIA officers work when they serve abroad, in full view of their host country’s intelligence services, which can keep them under constant observation if they so wish. That happens in China and Russia, of course, but also in places like Athens. Because Greece is a country where CIA employees have been attacked even after the Cold War, officers stationed there are still monitored for their own good.
It is therefore obvious that officers working out of embassies find it impossible to “do things that no one else can do” — or indeed very much at all. In allied countries, CIA officers need not be detected, let alone followed, because they are “declared” to their host country. Not that this really matters: everyone knows who they are anyway.
The CIA does have another category of officers, one it strives very hard to misrepresent as the real thing, as people willing to do “what no one else can do”. These are the NOCs — the “non-official cover officers” — who do not live in diplomatic housing and do not work in diplomatic offices. Instead they live “on the economy” in regular flats and houses, pretending to be business people, or retirees, or artists, or anything else that sounds sufficiently innocuous.
That begs the question: why is Ratcliffe complaining? The NOCs seem to fit the bill of intrepid field officers, and the CIA certainly does its best to keep their true identity secret. Some years ago, in fact, its officials made a huge fuss when a NOC’s identity was compromised in the course of a political controversy leading up to the Iraq War.
What is missing though, is that crucial line: “going where no one else can go”. The truth is that the most secret of all CIA secrets is that NOCs only serve in very safe countries, most unlikely to arrest (let alone torture) agents if they are detected. Think of France, Italy or Thailand: all places where reporters, tourists and maiden aunts travel safely every day.
One NOC who tripped up while trying to cajole secrets from a trade official — the latter was willing if the NOC slept with him, became indignant when she refused, and reported her to local security — did all she did (and refused to do) in a major European capital. Once the scandal came out, she was flown back to the US without incident. Another NOC officer I knew was competent enough to operate covertly in Warsaw, but only when Poland was no longer a communist country and was trying to join Nato.
There have been a few cases of US citizens recruited to visit dangerous countries, including one case I know of which ended in disappearance and probable death. But that particular individual was not a trained CIA officer, willing to risk all for the country, but rather an older gent hired expressly for the job. Remarkably unqualified, he would not have uncovered any secrets even if he had stayed uncaught.
In other words, then, the CIA does not have true undercover agents, genuinely competent intelligence officers who can enter foreign countries covertly, that is through legal entry points but with a persuasive false identity, or else in clandestine fashion by slipping over the border undetected. Without one or the other, the CIA will always find it impossible to have officers in hostile countries.
Take Iran for example. The CIA considers the Islamic Republic a no-go zone — because, ever since the seizure of its embassy in 1979, the US has had no diplomatic presence there. Therefore Langley has no officers who can enter the Islamic Republic, blend into the population, and begin to conduct operations.
Actually both those things are highly feasible: there is no way that the gendarmerie, the regular army or the Revolutionary Guards could possibly guard Iran’s 3,662 miles of land borders against infiltration. As for blending in, Tehran is full of people who do not speak Persian or only very badly. We do know that the Mossad gets in and out of Iran at will. Smuggling agents in either covertly or clandestinely, the Israelis regularly pull off spectacular coups against their Iranian foe. That includes everything from the theft of truckloads filled with nuclear programme documents, to the killing of heavily guarded nuclear scientists. Mossad even got Ismail Haniyeh, the erstwhile leader of Hamas, by blowing him up while he was staying in a heavily defended Revolutionary Guards VIP guesthouse — supposedly within a “secure” government zone in Tehran.
One could reasonably argue that the US is powerful enough not to need such exploits. Yet the CIA certainly needs to operate in Iran — and in China and Russia — to achieve something much less dramatic than assassinations: verifying “assets”. To take a theoretical example, imagine a medical doctor from Isfahan, recruited by the CIA on a visit to Frankfurt. Before returning home, he agrees to send information he hears from his son: a nuclear engineer, or perhaps an officer in the Revolutionary Guards, in exchange for money deposited to a German bank.
There is no need for James Bond skills to check the source’s credentials. A world-class holiday destination, complete with stunning Safavid architecture around its vast main square, Isfahan will always attract foreign tourists. Nor would an agent need much to verify the authenticity of the new asset. Things would be as simple as visiting the doctor in his office and verifying he exists: tourists get upset stomachs all the time. With a few questions, none of them compromising, the officer could also ensure that the man recruited in Frankfurt really is a doctor, and not a trolling security man or else just a con artist angling for a quick buck.
My Isfahan doctor is hypothetical, but there is clear evidence that verification has been a severe problem in the real world too. For decades, the Agency has struggled to verify its assets: it was only after the fall of the Soviet Union that the CIA realised that most of its “agents in place” were simply KGB officers loyally working for the USSR, while genuine assets were compromised by clumsy attempts at communication. CIA officers in Moscow were supposed to somehow evade detection when they left their diplomatic quarters to meet them, but in spite of much ingenuity it was simply mission impossible.
There have been plain intelligence failures too. Before Putin’s invasion of Ukraine, the Agency wrongly predicted that the Zelensky government would not fight in earnest, so that the Russians would conquer Kyiv within 24 hours. That frightened the White House into evacuating all US diplomats, which in turn caused another 20 countries to do the same. That might even have demoralised Zelensky into surrendering — but for the fact he already knew the CIA was incompetent.
The essential problem is the lack of language skills. Because they could not move about to talk to people, the CIA officers in Kyiv had no “situational awareness” and no understanding of the bitter determination to resist the Russians. Even Obama’s CIA director, famed for his supposed Middle East expertise, apparently struggled with Arabic. Despite studying the language in Cairo, and serving in Saudi Arabia, he asked me to stick to English when we once met. With personnel like that, it obviously becomes much harder to engage with sources abroad, let alone survive for months at a time in hostile territory.
The reason for this inadequacy, it turns out, is not that Americans are notoriously lazy about learning foreign languages. Rather, the wound is self-inflicted by the Agency itself, something I totally failed to understand for many years, even though I worked closely with one CIA director and was a close friend of two more. The situation only became clear when my truly stellar research assistant, who went on to a splendid career elsewhere in government, applied to join the Agency at my suggestion.
Despite knowing two difficult languages really well, my colleague was rejected very early in the process. Why? Because of the CIA’s inflexible method of “vetting” applicants. They were not interviewed by experienced operators, nor by accomplished analysts with a deep understanding of their patch. Instead, would-be agents have to fill out tedious security forms, listing every place where they ever lived, or even just slept in for a single night. They also have to list every person they have ever had dealings with — whether tenants or landlords, lovers or friends, no matter how fleeting the relationship ultimately was.
It goes without saying that the sort of young Americans suited to life as NOCs — those who have studied or lived overseas, and are equally comfortable working or flirting in foreign languages — stand no hope of passing the security screening. Many of the security people I have run into seem to be Mormons, disciplined folk who forgo alcohol and even coffee. Applicants born in Utah, raised in Utah, who studied in Utah and married a spouse from Utah sail through the application process. But when tasked with working an asset overseas, they are destined to fail.
That, of course, leads to one further question: why? Why has the CIA been so obsessed with security that it excludes the people it needs? One explanation is that it is just too big. With over 20,000 staff, it employs far too many people to be vetted by individual experts. Rather, it must rely on very stringent criteria, applied by rather simple people, to exclude all risk — and the most promising candidates. Whatever the cause, anyway, it’s clear that Ratcliffe is right to make room for fresh talent, whether hard-nosed agents in the field or insightful analysts back home.
Professor Edward Luttwak is a strategist and historian known for his works on grand strategy, geoeconomics, military history, and international relations.
************
Taibbi
from https://www.rollingstone.com/politics/politics-news/the-great-american-bubble-machine-195229/
The Great American Bubble Machine
From tech stocks to high gas prices, Goldman Sachs has engineered every major market manipulation since the Great Depression — and they're about to do it again
By Matt Taibbi
April 5, 2010
The first thing you need to know about Goldman Sachs is that it’s everywhere. The world’s most powerful investment bank is a great vampire squid wrapped around the face of humanity, relentlessly jamming its blood funnel into anything that smells like money. In fact, the history of the recent financial crisis, which doubles as a history of the rapid decline and fall of the suddenly swindled-dry American empire, reads like a Who’s Who of Goldman Sachs graduates.
By now, most of us know the major players. As George Bush’s last Treasury secretary, former Goldman CEO Henry Paulson was the architect of the bailout, a suspiciously self-serving plan to funnel trillions of Your Dollars to a handful of his old friends on Wall Street. Robert Rubin, Bill Clinton’s former Treasury secretary, spent 26 years at Goldman before becoming chairman of Citigroup — which in turn got a $300 billion taxpayer bailout from Paulson. There’s John Thain, the asshole chief of Merrill Lynch who bought an $87,000 area rug for his office as his company was imploding; a former Goldman banker, Thain enjoyed a multi-billion-dollar handout from Paulson, who used billions in taxpayer funds to help Bank of America rescue Thain’s sorry company. And Robert Steel, the former Goldmanite head of Wachovia, scored himself and his fellow executives $225 million in golden-parachute payments as his bank was self-destructing. There’s Joshua Bolten, Bush’s chief of staff during the bailout, and Mark Patterson, the current Treasury chief of staff, who was a Goldman lobbyist just a year ago, and Ed Liddy, the former Goldman director whom Paulson put in charge of bailed-out insurance giant AIG, which forked over $13 billion to Goldman after Liddy came on board. The heads of the Canadian and Italian national banks are Goldman alums, as is the head of the World Bank, the head of the New York Stock Exchange, the last two heads of the Federal Reserve Bank of New York — which, incidentally, is now in charge of overseeing Goldman — not to mention …
But then, any attempt to construct a narrative around all the former Goldmanites in influential positions quickly becomes an absurd and pointless exercise, like trying to make a list of everything. What you need to know is the big picture: If America is circling the drain, Goldman Sachs has found a way to be that drain — an extremely unfortunate loophole in the system of Western democratic capitalism, which never foresaw that in a society governed passively by free markets and free elections, organized greed always defeats disorganized democracy.
The bank’s unprecedented reach and power have enabled it to turn all of America into a giant pump-and-dump scam, manipulating whole economic sectors for years at a time, moving the dice game as this or that market collapses, and all the time gorging itself on the unseen costs that are breaking families everywhere — high gas prices, rising consumer credit rates, half-eaten pension funds, mass layoffs, future taxes to pay off bailouts. All that money that you’re losing, it’s going somewhere, and in both a literal and a figurative sense, Goldman Sachs is where it’s going: The bank is a huge, highly sophisticated engine for converting the useful, deployed wealth of society into the least useful, most wasteful and insoluble substance on Earth — pure profit for rich individuals.
They achieve this using the same playbook over and over again. The formula is relatively simple: Goldman positions itself in the middle of a speculative bubble, selling investments they know are crap. Then they hoover up vast sums from the middle and lower floors of society with the aid of a crippled and corrupt state that allows it to rewrite the rules in exchange for the relative pennies the bank throws at political patronage. Finally, when it all goes bust, leaving millions of ordinary citizens broke and starving, they begin the entire process over again, riding in to rescue us all by lending us back our own money at interest, selling themselves as men above greed, just a bunch of really smart guys keeping the wheels greased. They’ve been pulling this same stunt over and over since the 1920s — and now they’re preparing to do it again, creating what may be the biggest and most audacious bubble yet.
If you want to understand how we got into this financial crisis, you have to first understand where all the money went — and in order to understand that, you need to understand what Goldman has already gotten away with. It is a history exactly five bubbles long — including last year’s strange and seemingly inexplicable spike in the price of oil. There were a lot of losers in each of those bubbles, and in the bailout that followed. But Goldman wasn’t one of them.
BUBBLE #1 The Great Depression
Goldman wasn’t always a too-big-to-fail Wall Street behemoth, the ruthless face of kill-or-be-killed capitalism on steroids — just almost always. The bank was actually founded in 1869 by a German immigrant named Marcus Goldman, who built it up with his son-in-law Samuel Sachs. They were pioneers in the use of commercial paper, which is just a fancy way of saying they made money lending out short-term IOUs to small-time vendors in downtown Manhattan.
You can probably guess the basic plotline of Goldman’s first 100 years in business: plucky, immigrant-led investment bank beats the odds, pulls itself up by its bootstraps, makes shitloads of money. In that ancient history there’s really only one episode that bears scrutiny now, in light of more recent events: Goldman’s disastrous foray into the speculative mania of pre-crash Wall Street in the late 1920s.
This great Hindenburg of financial history has a few features that might sound familiar. Back then, the main financial tool used to bilk investors was called an “investment trust.” Similar to modern mutual funds, the trusts took the cash of investors large and small and (theoretically, at least) invested it in a smorgasbord of Wall Street securities, though the securities and amounts were often kept hidden from the public. So a regular guy could invest $10 or $100 in a trust and feel like he was a big player. Much as in the 1990s, when new vehicles like day trading and e-trading attracted reams of new suckers from the sticks who wanted to feel like big shots, investment trusts roped a new generation of regular-guy investors into the speculation game.
Beginning a pattern that would repeat itself over and over again, Goldman got into the investment-trust game late, then jumped in with both feet and went hog-wild. The first effort was the Goldman Sachs Trading Corporation; the bank issued a million shares at $100 apiece, bought all those shares with its own money and then sold 90 percent of them to the hungry public at $104. The trading corporation then relentlessly bought shares in itself, bidding the price up further and further. Eventually it dumped part of its holdings and sponsored a new trust, the Shenandoah Corporation, issuing millions more in shares in that fund — which in turn sponsored yet another trust called the Blue Ridge Corporation. In this way, each investment trust served as a front for an endless investment pyramid: Goldman hiding behind Goldman hiding behind Goldman. Of the 7,250,000 initial shares of Blue Ridge, 6,250,000 were actually owned by Shenandoah — which, of course, was in large part owned by Goldman Trading.
The end result (ask yourself if this sounds familiar) was a daisy chain of borrowed money, one exquisitely vulnerable to a decline in performance anywhere along the line. The basic idea isn’t hard to follow. You take a dollar and borrow nine against it; then you take that $10 fund and borrow $90; then you take your $100 fund and, so long as the public is still lending, borrow and invest $900. If the last fund in the line starts to lose value, you no longer have the money to pay back your investors, and everyone gets massacred.
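To make the arithmetic of that daisy chain concrete, here is a minimal Python sketch — my own illustration, not from the article, though the 9-to-1 borrowing ratio and the $10/$100/$900 steps are Taibbi’s:

def pyramid(equity=1.0, leverage=9, tiers=3):
    # each trust borrows 9x against the full value of the one above it
    funds = []
    for _ in range(tiers):
        fund = equity + equity * leverage   # $1 of equity + $9 borrowed = a $10 fund
        funds.append(fund)
        equity = fund                       # the whole fund becomes the equity for the next tier
    return funds

funds = pyramid()
print(funds)                                # [10.0, 100.0, 1000.0]

# A mere 10% drop in the last fund destroys $100 — a hundred times
# the $1 of real money sitting at the bottom of the chain.
print(f"Loss from a 10% drop: ${funds[-1] * 0.10:,.0f}")

The point of the sketch is that losses at the end of the line are multiplied back down the whole chain, which is why “everyone gets massacred.”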
In a chapter from The Great Crash, 1929 titled “In Goldman Sachs We Trust,” the famed economist John Kenneth Galbraith held up the Blue Ridge and Shenandoah trusts as classic examples of the insanity of leverage-based investment. The trusts, he wrote, were a major cause of the market’s historic crash; in today’s dollars, the losses the bank suffered totaled $475 billion. “It is difficult not to marvel at the imagination which was implicit in this gargantuan insanity,” Galbraith observed, sounding like Keith Olbermann in an ascot. “If there must be madness, something may be said for having it on a heroic scale.”
BUBBLE #2 Tech Stocks
Fast-forward about 65 years. Goldman not only survived the crash that wiped out so many of the investors it duped, it went on to become the chief underwriter to the country’s wealthiest and most powerful corporations. Thanks to Sidney Weinberg, who rose from the rank of janitor’s assistant to head the firm, Goldman became the pioneer of the initial public offering, one of the principal and most lucrative means by which companies raise money. During the 1970s and 1980s, Goldman may not have been the planet-eating Death Star of political influence it is today, but it was a top-drawer firm that had a reputation for attracting the very smartest talent on the Street.
It also, oddly enough, had a reputation for relatively solid ethics and a patient approach to investment that shunned the fast buck; its executives were trained to adopt the firm’s mantra, “long-term greedy.” One former Goldman banker who left the firm in the early Nineties recalls seeing his superiors give up a very profitable deal on the grounds that it was a long-term loser. “We gave back money to ‘grownup’ corporate clients who had made bad deals with us,” he says. “Everything we did was legal and fair — but ‘long-term greedy’ said we didn’t want to make such a profit at the clients’ collective expense that we spoiled the marketplace.”
But then, something happened. It’s hard to say what it was exactly; it might have been the fact that Goldman’s co-chairman in the early Nineties, Robert Rubin, followed Bill Clinton to the White House, where he directed the National Economic Council and eventually became Treasury secretary. While the American media fell in love with the story line of a pair of baby-boomer, Sixties-child, Fleetwood Mac yuppies nesting in the White House, it also nursed an undisguised crush on Rubin, who was hyped as without a doubt the smartest person ever to walk the face of the Earth, with Newton, Einstein, Mozart and Kant running far behind.
Rubin was the prototypical Goldman banker. He was probably born in a $4,000 suit, he had a face that seemed permanently frozen just short of an apology for being so much smarter than you, and he exuded a Spock-like, emotion-neutral exterior; the only human feeling you could imagine him experiencing was a nightmare about being forced to fly coach. It became almost a national cliché that whatever Rubin thought was best for the economy — a phenomenon that reached its apex in 1999, when Rubin appeared on the cover of Time with his Treasury deputy, Larry Summers, and Fed chief Alan Greenspan under the headline The Committee To Save The World. And “what Rubin thought,” mostly, was that the American economy, and in particular the financial markets, were over-regulated and needed to be set free. During his tenure at Treasury, the Clinton White House made a series of moves that would have drastic consequences for the global economy — beginning with Rubin’s complete and total failure to regulate his old firm during its first mad dash for obscene short-term profits.
The basic scam in the Internet Age is pretty easy even for the financially illiterate to grasp. Companies that weren’t much more than pot-fueled ideas scrawled on napkins by up-too-late bong-smokers were taken public via IPOs, hyped in the media and sold to the public for mega-millions. It was as if banks like Goldman were wrapping ribbons around watermelons, tossing them out 50-story windows and opening the phones for bids. In this game you were a winner only if you took your money out before the melon hit the pavement.
It sounds obvious now, but what the average investor didn’t know at the time was that the banks had changed the rules of the game, making the deals look better than they actually were. They did this by setting up what was, in reality, a two-tiered investment system — one for the insiders who knew the real numbers, and another for the lay investor who was invited to chase soaring prices the banks themselves knew were irrational. While Goldman’s later pattern would be to capitalize on changes in the regulatory environment, its key innovation in the Internet years was to abandon its own industry’s standards of quality control.
“Since the Depression, there were strict underwriting guidelines that Wall Street adhered to when taking a company public,” says one prominent hedge-fund manager. “The company had to be in business for a minimum of five years, and it had to show profitability for three consecutive years. But Wall Street took these guidelines and threw them in the trash.” Goldman completed the snow job by pumping up the sham stocks: “Their analysts were out there saying Bullshit.com is worth $100 a share.”
The problem was, nobody told investors that the rules had changed. “Everyone on the inside knew,” the manager says. “Bob Rubin sure as hell knew what the underwriting standards were. They’d been intact since the 1930s.”
Jay Ritter, a professor of finance at the University of Florida who specializes in IPOs, says banks like Goldman knew full well that many of the public offerings they were touting would never make a dime. “In the early Eighties, the major underwriters insisted on three years of profitability. Then it was one year, then it was a quarter. By the time of the Internet bubble, they were not even requiring profitability in the foreseeable future.”
Goldman has denied that it changed its underwriting standards during the Internet years, but its own statistics belie the claim. Just as it did with the investment trust in the 1920s, Goldman started slow and finished crazy in the Internet years. After it took a little-known company with weak financials called Yahoo! public in 1996, once the tech boom had already begun, Goldman quickly became the IPO king of the Internet era. Of the 24 companies it took public in 1997, a third were losing money at the time of the IPO. In 1999, at the height of the boom, it took 47 companies public, including stillborns like Webvan and eToys, investment offerings that were in many ways the modern equivalents of Blue Ridge and Shenandoah. The following year, it underwrote 18 companies in the first four months, 14 of which were money losers at the time. As a leading underwriter of Internet stocks during the boom, Goldman provided profits far more volatile than those of its competitors: In 1999, the average Goldman IPO leapt 281 percent above its offering price, compared to the Wall Street average of 181 percent.
How did Goldman achieve such extraordinary results? One answer is that they used a practice called “laddering,” which is just a fancy way of saying they manipulated the share price of new offerings. Here’s how it works: Say you’re Goldman Sachs, and Bullshit.com comes to you and asks you to take their company public. You agree on the usual terms: You’ll price the stock, determine how many shares should be released and take the Bullshit.com CEO on a “road show” to schmooze investors, all in exchange for a substantial fee (typically six to seven percent of the amount raised). You then promise your best clients the right to buy big chunks of the IPO at the low offering price — let’s say Bullshit.com’s starting share price is $15 — in exchange for a promise that they will buy more shares later on the open market. That seemingly simple demand gives you inside knowledge of the IPO’s future, knowledge that wasn’t disclosed to the day trader schmucks who only had the prospectus to go by: You know that certain of your clients who bought X amount of shares at $15 are also going to buy Y more shares at $20 or $25, virtually guaranteeing that the price is going to go to $25 and beyond. In this way, Goldman could artificially jack up the new company’s price, which of course was to the bank’s benefit — a six percent fee of a $500 million IPO is serious money.
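A minimal Python sketch of that arrangement — the $15 offering price, the $20 and $25 rungs, the roughly 6 percent fee and the $500 million IPO size are the article’s figures; the share counts are my hypotheticals:

offering_price = 15.0
ladder = [(20.0, 500_000), (25.0, 500_000)]   # promised open-market buys: (price, shares)

# The underwriter's cut of a $500 million IPO at a 6 percent fee:
print(f"Underwriting fee: ${500_000_000 * 0.06:,.0f}")   # $30,000,000

# A favored client buying at the offering knows demand is pre-committed up the ladder,
# so riding the stock to the top rung is close to a sure thing:
top_rung = max(price for price, _ in ladder)
shares_at_ipo = 100_000
print(f"Insider gain: ${shares_at_ipo * (top_rung - offering_price):,.0f}")   # $1,000,000

The asymmetry is the whole trick: the committed purchases are known to the bank and its favored clients, and invisible to the day traders who only have the prospectus.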
Goldman was repeatedly sued by shareholders for engaging in laddering in a variety of Internet IPOs, including Webvan and NetZero. The deceptive practices also caught the attention of Nicholas Maier, the syndicate manager of Cramer & Co., the hedge fund run at the time by the now-famous chattering television asshole Jim Cramer, himself a Goldman alum. Maier told the SEC that while working for Cramer between 1996 and 1998, he was repeatedly forced to engage in laddering practices during IPO deals with Goldman.
“Goldman, from what I witnessed, they were the worst perpetrator,” Maier said. “They totally fueled the bubble. And it’s specifically that kind of behavior that has caused the market crash. They built these stocks upon an illegal foundation — manipulated up — and ultimately, it really was the small person who ended up buying in.” In 2005, Goldman agreed to pay $40 million for its laddering violations — a puny penalty relative to the enormous profits it made. (Goldman, which has denied wrongdoing in all of the cases it has settled, refused to respond to questions for this story.)
Another practice Goldman engaged in during the Internet boom was “spinning,” better known as bribery. Here the investment bank would offer the executives of the newly public company shares at extra-low prices, in exchange for future underwriting business. Banks that engaged in spinning would then undervalue the initial offering price — ensuring that those “hot” opening-price shares it had handed out to insiders would be more likely to rise quickly, supplying bigger first-day rewards for the chosen few. So instead of Bullshit.com opening at $20, the bank would approach the Bullshit.com CEO and offer him a million shares of his own company at $18 in exchange for future business — effectively robbing all of Bullshit’s new shareholders by diverting cash that should have gone to the company’s bottom line into the private bank account of the company’s CEO.
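The arithmetic of that transfer fits in a few lines of Python, using the article’s example figures (a million shares handed out at $18 instead of a fair $20):

shares, fair_price, spun_price = 1_000_000, 20.0, 18.0
# value diverted from the company's offering proceeds into the CEO's pocket:
print(f"${shares * (fair_price - spun_price):,.0f}")   # $2,000,000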
In one case, Goldman allegedly gave a multimillion-dollar special offering to eBay CEO Meg Whitman, who later joined Goldman’s board, in exchange for future i-banking business. According to a report by the House Financial Services Committee in 2002, Goldman gave special stock offerings to executives in 21 companies that it took public, including Yahoo! cofounder Jerry Yang and two of the great slithering villains of the financial-scandal age — Tyco’s Dennis Kozlowski and Enron’s Ken Lay. Goldman angrily denounced the report as “an egregious distortion of the facts” — shortly before paying $110 million to settle an investigation into spinning and other manipulations launched by New York state regulators. “The spinning of hot IPO shares was not a harmless corporate perk,” then-attorney general Eliot Spitzer said at the time. “Instead, it was an integral part of a fraudulent scheme to win new investment-banking business.”
Such practices conspired to turn the Internet bubble into one of the greatest financial disasters in world history: Some $5 trillion of wealth was wiped out on the NASDAQ alone. But the real problem wasn’t the money that was lost by shareholders, it was the money gained by investment bankers, who received hefty bonuses for tampering with the market. Instead of teaching Wall Street a lesson that bubbles always deflate, the Internet years demonstrated to bankers that in the age of freely flowing capital and publicly owned financial companies, bubbles are incredibly easy to inflate, and individual bonuses are actually bigger when the mania and the irrationality are greater.
Nowhere was this truer than at Goldman. Between 1999 and 2002, the firm paid out $28.5 billion in compensation and benefits — an average of roughly $350,000 a year per employee. Those numbers are important because the key legacy of the Internet boom is that the economy is now driven in large part by the pursuit of the enormous salaries and bonuses that such bubbles make possible. Goldman’s mantra of “long-term greedy” vanished into thin air as the game became about getting your check before the melon hit the pavement.
The market was no longer a rationally managed place to grow real, profitable businesses: It was a huge ocean of Someone Else’s Money where bankers hauled in vast sums through whatever means necessary and tried to convert that money into bonuses and payouts as quickly as possible. If you laddered and spun 50 Internet IPOs that went bust within a year, so what? By the time the Securities and Exchange Commission got around to fining your firm $110 million, the yacht you bought with your IPO bonuses was already six years old. Besides, you were probably out of Goldman by then, running the U.S. Treasury or maybe the state of New Jersey. (One of the truly comic moments in the history of America’s recent financial collapse came when Gov. Jon Corzine of New Jersey, who ran Goldman from 1994 to 1999 and left with $320 million in IPO-fattened stock, insisted in 2002 that “I’ve never even heard the term ‘laddering’ before.”)
For a bank that paid out $7 billion a year in salaries, $110 million fines issued half a decade late were something far less than a deterrent —they were a joke. Once the Internet bubble burst, Goldman had no incentive to reassess its new, profit-driven strategy; it just searched around for another bubble to inflate. As it turns out, it had one ready, thanks in large part to Rubin.
BUBBLE #3 The Housing Craze
Goldman’s role in the sweeping global disaster that was the housing bubble is not hard to trace. Here again, the basic trick was a decline in underwriting standards, although in this case the standards weren’t in IPOs but in mortgages. By now almost everyone knows that for decades mortgage dealers insisted that home buyers be able to produce a down payment of 10 percent or more, show a steady income and good credit rating, and possess a real first and last name. Then, at the dawn of the new millennium, they suddenly threw all that shit out the window and started writing mortgages on the backs of napkins to cocktail waitresses and ex-cons carrying five bucks and a Snickers bar.
None of that would have been possible without investment bankers like Goldman, who created vehicles to package those shitty mortgages and sell them en masse to unsuspecting insurance companies and pension funds. This created a mass market for toxic debt that would never have existed before; in the old days, no bank would have wanted to keep some addict ex-con’s mortgage on its books, knowing how likely it was to fail. You can’t write these mortgages, in other words, unless you can sell them to someone who doesn’t know what they are.
Goldman used two methods to hide the mess they were selling. First, they bundled hundreds of different mortgages into instruments called Collateralized Debt Obligations. Then they sold investors on the idea that, because a bunch of those mortgages would turn out to be OK, there was no reason to worry so much about the shitty ones: The CDO, as a whole, was sound. Thus, junk-rated mortgages were turned into AAA-rated investments. Second, to hedge its own bets, Goldman got companies like AIG to provide insurance — known as credit default swaps — on the CDOs. The swaps were essentially a racetrack bet between AIG and Goldman: Goldman is betting the ex-cons will default, AIG is betting they won’t.
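A minimal Python sketch of the two moves just described; the default and recovery numbers here are invented for illustration and have nothing to do with Goldman’s actual deal terms:

import random
random.seed(1)

# 1) The pooling pitch: blend sound loans with doomed ones and quote one number.
loans_pay = [random.random() > 0.3 for _ in range(500)]   # assume ~30% of the loans are junk
print(f"Share of the pool that pays: {sum(loans_pay) / len(loans_pay):.0%}")   # looks reassuring

# 2) The credit default swap: a side bet on the pool, settled by the insurer.
def cds_payout(notional, pool_defaulted):
    # the protection buyer collects the notional if the pool goes bad
    return notional if pool_defaulted else 0

print(f"AIG owes: ${cds_payout(10_000_000, pool_defaulted=True):,}")

The blended number hides the junk, and the swap lets the seller profit even when the pool fails — the “racetrack bet” Taibbi describes.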
There was only one problem with the deals: All of the wheeling and dealing represented exactly the kind of dangerous speculation that federal regulators are supposed to rein in. Derivatives like CDOs and credit swaps had already caused a series of serious financial calamities: Procter & Gamble and Gibson Greetings both lost fortunes, and Orange County, California, was forced to default in 1994. A report that year by the Government Accountability Office recommended that such financial instruments be tightly regulated — and in 1998, the head of the Commodity Futures Trading Commission, a woman named Brooksley Born, agreed. That May, she circulated a letter to business leaders and the Clinton administration suggesting that banks be required to provide greater disclosure in derivatives trades, and maintain reserves to cushion against losses.
More regulation wasn’t exactly what Goldman had in mind. “The banks go crazy — they want it stopped,” says Michael Greenberger, who worked for Born as director of trading and markets at the CFTC and is now a law professor at the University of Maryland. “Greenspan, Summers, Rubin and [SEC chief Arthur] Levitt want it stopped.”
Clinton’s reigning economic foursome — “especially Rubin,” according to Greenberger — called Born in for a meeting and pleaded their case. She refused to back down, however, and continued to push for more regulation of the derivatives. Then, in June 1998, Rubin went public to denounce her move, eventually recommending that Congress strip the CFTC of its regulatory authority. In 2000, on its last day in session, Congress passed the now-notorious Commodity Futures Modernization Act, which had been inserted into an 11,000-page spending bill at the last minute, with almost no debate on the floor of the Senate. Banks were now free to trade default swaps with impunity.
But the story didn’t end there. AIG, a major purveyor of default swaps, approached the New York State Insurance Department in 2000 and asked whether default swaps would be regulated as insurance. At the time, the office was run by one Neil Levin, a former Goldman vice president, who decided against regulating the swaps. Now freed to underwrite as many housing-based securities and buy as much credit-default protection as it wanted, Goldman went berserk with lending lust. By the peak of the housing boom in 2006, Goldman was underwriting $76.5 billion worth of mortgage-backed securities — a third of which were sub-prime — much of it to institutional investors like pensions and insurance companies. And in these massive issues of real estate were vast swamps of crap.
Take one $494 million issue that year, GSAMP Trust 2006-S3. Many of the mortgages belonged to second-mortgage borrowers, and the average equity they had in their homes was 0.71 percent. Moreover, 58 percent of the loans included little or no documentation — no names of the borrowers, no addresses of the homes, just zip codes. Yet both of the major ratings agencies, Moody’s and Standard & Poor’s, rated 93 percent of the issue as investment grade. Moody’s projected that less than 10 percent of the loans would default. In reality, 18 percent of the mortgages were in default within 18 months.
Not that Goldman was personally at any risk. The bank might be taking all these hideous, completely irresponsible mortgages from beneath-gangster-status firms like Countrywide and selling them off to municipalities and pensioners — old people, for God’s sake — pretending the whole time that it wasn’t grade D horseshit. But even as it was doing so, it was taking short positions in the same market, in essence betting against the same crap it was selling. Even worse, Goldman bragged about it in public. “The mortgage sector continues to be challenged,” David Viniar, the bank’s chief financial officer, boasted in 2007. “As a result, we took significant markdowns on our long inventory positions … However, our risk bias in that market was to be short, and that net short position was profitable.” In other words, the mortgages it was selling were for chumps. The real money was in betting against those same mortgages.
“That’s how audacious these assholes are,” says one hedge fund manager. “At least with other banks, you could say that they were just dumb — they believed what they were selling, and it blew them up. Goldman knew what it was doing.”
I ask the manager how it could be that selling something to customers that you’re actually betting against — particularly when you know more about the weaknesses of those products than the customer — doesn’t amount to securities fraud.
“It’s exactly securities fraud,” he says. “It’s the heart of securities fraud.”
Eventually, lots of aggrieved investors agreed. In a virtual repeat of the Internet IPO craze, Goldman was hit with a wave of lawsuits after the collapse of the housing bubble, many of which accused the bank of withholding pertinent information about the quality of the mortgages it issued. New York state regulators are suing Goldman and 25 other underwriters for selling bundles of crappy Countrywide mortgages to city and state pension funds, which lost as much as $100 million in the investments. Massachusetts also investigated Goldman for similar misdeeds, acting on behalf of 714 mortgage holders who got stuck holding predatory loans. But once again, Goldman got off virtually scot-free, staving off prosecution by agreeing to pay a paltry $60 million — about what the bank’s CDO division made in a day and a half during the real estate boom.
The effects of the housing bubble are well known — it led more or less directly to the collapse of Bear Stearns, Lehman Brothers and AIG, whose toxic portfolio of credit swaps was in significant part composed of the insurance that banks like Goldman bought against their own housing portfolios. In fact, at least $13 billion of the taxpayer money given to AIG in the bailout ultimately went to Goldman, meaning that the bank made out on the housing bubble twice: It fucked the investors who bought their horseshit CDOs by betting against its own crappy product, then it turned around and fucked the taxpayer by making him pay off those same bets.
And once again, while the world was crashing down all around the bank, Goldman made sure it was doing just fine in the compensation department. In 2006, the firm’s payroll jumped to $16.5 billion — an average of $622,000 per employee. As a Goldman spokesman explained, “We work very hard here.”
But the best was yet to come. While the collapse of the housing bubble sent most of the financial world fleeing for the exits, or to jail, Goldman boldly doubled down — and almost single-handedly created yet another bubble, one the world still barely knows the firm had anything to do with.
BUBBLE #4 $4 a Gallon
By the beginning of 2008, the financial world was in turmoil. Wall Street had spent the past two and a half decades producing one scandal after another, which didn’t leave much to sell that wasn’t tainted. The terms junk bond, IPO, sub-prime mortgage and other once-hot financial fare were now firmly associated in the public’s mind with scams; the terms credit swaps and CDOs were about to join them. The credit markets were in crisis, and the mantra that had sustained the fantasy economy throughout the Bush years — the notion that housing prices never go down — was now a fully exploded myth, leaving the Street clamoring for a new bullshit paradigm to sling.
Where to go? With the public reluctant to put money in anything that felt like a paper investment, the Street quietly moved the casino to the physical-commodities market — stuff you could touch: corn, coffee, cocoa, wheat and, above all, energy commodities, especially oil. In conjunction with a decline in the dollar, the credit crunch and the housing crash caused a “flight to commodities.” Oil futures in particular skyrocketed, as the price of a single barrel went from around $60 in the middle of 2007 to a high of $147 in the summer of 2008.
That summer, as the presidential campaign heated up, the accepted explanation for why gasoline had hit $4.11 a gallon was that there was a problem with the world oil supply. In a classic example of how Republicans and Democrats respond to crises by engaging in fierce exchanges of moronic irrelevancies, John McCain insisted that ending the moratorium on offshore drilling would be “very helpful in the short term,” while Barack Obama in typical liberal-arts yuppie style argued that federal investment in hybrid cars was the way out.
But it was all a lie. While the global supply of oil will eventually dry up, the short-term flow has actually been increasing. In the six months before prices spiked, according to the U.S. Energy Information Administration, the world oil supply rose from 85.24 million barrels a day to 85.72 million. Over the same period, world oil demand dropped from 86.82 million barrels a day to 86.07 million. Not only was the short-term supply of oil rising, the demand for it was falling — which, in classic economic terms, should have brought prices at the pump down.
So what caused the huge spike in oil prices? Take a wild guess. Obviously Goldman had help — there were other players in the physical commodities market — but the root cause had almost everything to do with the behavior of a few powerful actors determined to turn the once-solid market into a speculative casino. Goldman did it by persuading pension funds and other large institutional investors to invest in oil futures — agreeing to buy oil at a certain price on a fixed date. The push transformed oil from a physical commodity, rigidly subject to supply and demand, into something to bet on, like a stock. Between 2003 and 2008, the amount of speculative money in commodities grew from $13 billion to $317 billion, an increase of 2,300 percent. By 2008, a barrel of oil was traded 27 times, on average, before it was actually delivered and consumed.
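The growth figure is easy to verify. A minimal sketch of the arithmetic, using only the dollar amounts quoted in the paragraph above:

```python
# Check the growth figure quoted above: speculative money in commodities
# went from $13 billion (2003) to $317 billion (2008), per the article.
start, end = 13e9, 317e9
pct_increase = (end - start) / start * 100
print(f"Increase: {pct_increase:,.0f}%")  # ~2,338%, i.e. the "2,300 percent"
```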
As is so often the case, there had been a Depression-era law in place designed specifically to prevent this sort of thing. The commodities market was designed in large part to help farmers: A grower concerned about future price drops could enter into a contract to sell his corn at a certain price for delivery later on, which made him worry less about building up stores of his crop. When no one was buying corn, the farmer could sell to a middleman known as a “traditional speculator,” who would store the grain and sell it later, when demand returned. That way, someone was always there to buy from the farmer, even when the market temporarily had no need for his crops.
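For readers unfamiliar with the mechanics, here is a minimal sketch of the hedge that paragraph describes; the prices are invented for illustration, not taken from the article:

```python
# The hedge the paragraph describes: a farmer sells a futures contract at
# a fixed price today, so his total revenue no longer depends on where
# the spot price ends up at delivery.
contract_price = 4.00                      # $/bushel, illustrative
for spot_at_delivery in (3.00, 4.00, 5.00):
    futures_pnl = contract_price - spot_at_delivery   # gain/loss on the short
    revenue = spot_at_delivery + futures_pnl          # sell crop at spot
    print(f"spot ${spot_at_delivery:.2f} -> locked-in ${revenue:.2f}/bushel")
# Every line prints $4.00: the price risk has moved to the speculator on
# the other side of the trade, which is what the market was built for.
```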
In 1936, however, Congress recognized that there should never be more speculators in the market than real producers and consumers. If that happened, prices would be affected by something other than supply and demand, and price manipulations would ensue. A new law empowered the Commodity Futures Trading Commission — the very same body that would later try and fail to regulate credit swaps — to place limits on speculative trades in commodities. As a result of the CFTC’s oversight, peace and harmony reigned in the commodities markets for more than 50 years.
All that changed in 1991 when, unbeknownst to almost everyone in the world, a Goldman-owned commodities-trading subsidiary called J. Aron wrote to the CFTC and made an unusual argument. Farmers with big stores of corn, Goldman argued, weren’t the only ones who needed to hedge their risk against future price drops — Wall Street dealers who made big bets on oil prices also needed to hedge their risk, because, well, they stood to lose a lot too.
This was complete and utter crap — the 1936 law, remember, was specifically designed to maintain distinctions between people who were buying and selling real tangible stuff and people who were trading in paper alone. But the CFTC, amazingly, bought Goldman’s argument. It issued the bank a free pass, called the “Bona Fide Hedging” exemption, allowing Goldman’s subsidiary to call itself a physical hedger and escape virtually all limits placed on speculators. In the years that followed, the commission would quietly issue 14 similar exemptions to other companies.
Now Goldman and other banks were free to drive more investors into the commodities markets, enabling speculators to place increasingly big bets. That 1991 letter from Goldman more or less directly led to the oil bubble in 2008, when the number of speculators in the market — driven there by fear of the falling dollar and the housing crash — finally overwhelmed the real physical suppliers and consumers. By 2008, at least three quarters of the activity on the commodity exchanges was speculative, according to a congressional staffer who studied the numbers — and that’s likely a conservative estimate. By the middle of last summer, despite rising supply and a drop in demand, we were paying $4 a gallon every time we pulled up to the pump.
What is even more amazing is that the letter to Goldman, along with most of the other trading exemptions, was handed out more or less in secret. “I was the head of the division of trading and markets, and Brooksley Born was the chair of the CFTC,” says Greenberger, “and neither of us knew this letter was out there.” In fact, the letters only came to light by accident. Last year, a staffer for the House Energy and Commerce Committee just happened to be at a briefing when officials from the CFTC made an offhand reference to the exemptions.
“I had been invited to a briefing the commission was holding on energy,” the staffer recounts. “And suddenly in the middle of it, they start saying, ‘Yeah, we’ve been issuing these letters for years now.’ I raised my hand and said, ‘Really? You issued a letter? Can I see it?’ And they were like, ‘Duh, duh.’ So we went back and forth, and finally they said, ‘We have to clear it with Goldman Sachs.’ I’m like, ‘What do you mean, you have to clear it with Goldman Sachs?'”
The CFTC cited a rule that prohibited it from releasing any information about a company’s current position in the market. But the staffer’s request was about a letter that had been issued 17 years earlier. It no longer had anything to do with Goldman’s current position. What’s more, Section 7 of the 1936 commodities law gives Congress the right to any information it wants from the commission. Still, in a classic example of how complete Goldman’s capture of government is, the CFTC waited until it got clearance from the bank before it turned the letter over.
Armed with the semi-secret government exemption, Goldman had become the chief designer of a giant commodities betting parlor. Its Goldman Sachs Commodities Index — which tracks the prices of 24 major commodities but is overwhelmingly weighted toward oil — became the place where pension funds and insurance companies and other institutional investors could make massive long-term bets on commodity prices. Which was all well and good, except for a couple of things. One was that index speculators are mostly “long only” bettors, who seldom if ever take short positions — meaning they only bet on prices to rise. While this kind of behavior is good for a stock market, it’s terrible for commodities, because it continually forces prices upward. “If index speculators took short positions as well as long ones, you’d see them pushing prices both up and down,” says Michael Masters, a hedge fund manager who has helped expose the role of investment banks in the manipulation of oil prices. “But they only push prices in one direction: up.”
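Masters’ point can be illustrated with a deliberately crude toy model. The linear price impact and every number below are assumptions made for the sketch, not figures from the article:

```python
# Toy model of Masters' point: two-way speculative flow nets out to about
# zero, while long-only index flow is a constant one-way buy. Given any
# positive price impact, the one-way flow adds a steady upward drift.
price = 60.0                # starting $/barrel, illustrative
impact_per_unit = 0.5       # assumed price move per unit of net flow
two_way_net = 0.0           # traditional longs and shorts roughly cancel
long_only_buying = 3.0      # index money only ever buys
for month in range(12):
    price += impact_per_unit * (two_way_net + long_only_buying)
print(f"After a year of one-way flow: ${price:.2f}")   # $78.00
```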
Complicating matters even further was the fact that Goldman itself was cheerleading with all its might for an increase in oil prices. In the beginning of 2008, Arjun Murti, a Goldman analyst, hailed as an “oracle of oil” by The New York Times, predicted a “super spike” in oil prices, forecasting a rise to $200 a barrel. At the time Goldman was heavily invested in oil through its commodities trading subsidiary, J. Aron; it also owned a stake in a major oil refinery in Kansas, where it warehoused the crude it bought and sold. Even though the supply of oil was keeping pace with demand, Murti continually warned of disruptions to the world oil supply, going so far as to broadcast the fact that he owned two hybrid cars. High prices, the bank insisted, were somehow the fault of the piggish American consumer; in 2005, Goldman analysts insisted that we wouldn’t know when oil prices would fall until we knew “when American consumers will stop buying gas-guzzling sport utility vehicles and instead seek fuel-efficient alternatives.”
But it wasn’t the consumption of real oil that was driving up prices — it was the trade in paper oil. By the summer of 2008, in fact, commodities speculators had bought and stockpiled enough oil futures to fill 1.1 billion barrels of crude, which meant that speculators owned more future oil on paper than there was real, physical oil stored in all of the country’s commercial storage tanks and the Strategic Petroleum Reserve combined. It was a repeat of both the Internet craze and the housing bubble, when Wall Street jacked up present-day profits by selling suckers shares of a fictional fantasy future of endlessly rising prices.
In what was by now a painfully familiar pattern, the oil-commodities melon hit the pavement hard in the summer of 2008, causing a massive loss of wealth; crude prices plunged from $147 to $33. Once again the big losers were ordinary people. The pensioners whose funds invested in this crap got massacred: CalPERS, the California Public Employees’ Retirement System, had $1.1 billion in commodities when the crash came. And the damage didn’t just come from oil. Soaring food prices driven by the commodities bubble led to catastrophes across the planet, forcing an estimated 100 million people into hunger and sparking food riots throughout the Third World.
Now oil prices are rising again: They shot up 20 percent in the month of May and have nearly doubled so far this year. Once again, the problem is not supply or demand. “The highest supply of oil in the last 20 years is now,” says Rep. Bart Stupak, a Democrat from Michigan who serves on the House energy committee. “Demand is at a 10-year low. And yet prices are up.”
Asked why politicians continue to harp on things like drilling or hybrid cars, when supply and demand have nothing to do with the high prices, Stupak shakes his head. “I think they just don’t understand the problem very well,” he says. “You can’t explain it in 30 seconds, so politicians ignore it.”
BUBBLE #5 Rigging the Bailout
After the oil bubble collapsed last fall, there was no new bubble to keep things humming — this time, the money seems to be really gone, like worldwide-depression gone. So the financial safari has moved elsewhere, and the big game in the hunt has become the only remaining pool of dumb, unguarded capital left to feed upon: taxpayer money. Here, in the biggest bailout in history, is where Goldman Sachs really started to flex its muscle.
It began in September of last year, when then-Treasury secretary Paulson made a momentous series of decisions. Although he had already engineered a rescue of Bear Stearns a few months before and helped bail out quasi-private lenders Fannie Mae and Freddie Mac, Paulson elected to let Lehman Brothers — one of Goldman’s last real competitors — collapse without intervention. (“Goldman’s superhero status was left intact,” says market analyst Eric Salzman, “and an investment banking competitor, Lehman, goes away.”) The very next day, Paulson green-lighted a massive, $85 billion bailout of AIG, which promptly turned around and repaid $13 billion it owed to Goldman. Thanks to the rescue effort, the bank ended up getting paid in full for its bad bets: By contrast, retired auto workers awaiting the Chrysler bailout will be lucky to receive 50 cents for every dollar they are owed.
Immediately after the AIG bailout, Paulson announced his federal bailout for the financial industry, a $700 billion plan called the Troubled Asset Relief Program, and put a heretofore unknown 35-year-old Goldman banker named Neel Kashkari in charge of administering the funds. In order to qualify for bailout monies, Goldman announced that it would convert from an investment bank to a bank holding company, a move that allows it access not only to $10 billion in TARP funds, but to a whole galaxy of less conspicuous, publicly backed funding — most notably, lending from the discount window of the Federal Reserve. By the end of March, the Fed will have lent or guaranteed at least $8.7 trillion under a series of new bailout programs — and thanks to an obscure law allowing the Fed to block most congressional audits, both the amounts and the recipients of the monies remain almost entirely secret.
Converting to a bank-holding company has other benefits as well: Goldman’s primary supervisor is now the New York Fed, whose chairman at the time of its announcement was Stephen Friedman, a former co-chairman of Goldman Sachs. Friedman was technically in violation of Federal Reserve policy by remaining on the board of Goldman even as he was supposedly regulating the bank; in order to rectify the problem, he applied for, and got, a conflict of interest waiver from the government. Friedman was also supposed to divest himself of his Goldman stock after Goldman became a bank holding company, but thanks to the waiver, he was allowed to go out and buy 52,000 additional shares in his old bank, leaving him $3 million richer. Friedman stepped down in May, but the man now in charge of supervising Goldman — New York Fed president William Dudley — is yet another former Goldmanite.
The collective message of all this — the AIG bailout, the swift approval for its bank holding conversion, the TARP funds — is that when it comes to Goldman Sachs, there isn’t a free market at all. The government might let other players on the market die, but it simply will not allow Goldman to fail under any circumstances. Its edge in the market has suddenly become an open declaration of supreme privilege. “In the past it was an implicit advantage,” says Simon Johnson, an economics professor at MIT and former official at the International Monetary Fund, who compares the bailout to the crony capitalism he has seen in Third World countries. “Now it’s more of an explicit advantage.”
Once the bailouts were in place, Goldman went right back to business as usual, dreaming up impossibly convoluted schemes to pick the American carcass clean of its loose capital. One of its first moves in the post-bailout era was to quietly push forward the calendar it uses to report its earnings, essentially wiping December 2008 — with its $1.3 billion in pretax losses — off the books. At the same time, the bank announced a highly suspicious $1.8 billion profit for the first quarter of 2009 — which apparently included a large chunk of money funneled to it by taxpayers via the AIG bailout. “They cooked those first quarter results six ways from Sunday,” says one hedge fund manager. “They hid the losses in the orphan month and called the bailout money profit.”
Two more numbers stand out from that stunning first-quarter turnaround. The bank paid out an astonishing $4.7 billion in bonuses and compensation in the first three months of this year, an 18 percent increase over the first quarter of 2008. It also raised $5 billion by issuing new shares almost immediately after releasing its first quarter results. Taken together, the numbers show that Goldman essentially borrowed a $5 billion salary payout for its executives in the middle of the global economic crisis it helped cause, using half-baked accounting to reel in investors, just months after receiving billions in a taxpayer bailout.
Even more amazing, Goldman did it all right before the government announced the results of its new “stress test” for banks seeking to repay TARP money — suggesting that Goldman knew exactly what was coming. The government was trying to carefully orchestrate the repayments in an effort to prevent further trouble at banks that couldn’t pay back the money right away. But Goldman blew off those concerns, brazenly flaunting its insider status. “They seemed to know everything that they needed to do before the stress test came out, unlike everyone else, who had to wait until after,” says Michael Hecht, a managing director of JMP Securities. “The government came out and said, ‘To pay back TARP, you have to issue debt of at least five years that is not insured by FDIC’ — which Goldman Sachs had already done, a week or two before.”
And here’s the real punch line. After playing an intimate role in four historic bubble catastrophes, after helping $5 trillion in wealth disappear from the NASDAQ, after pawning off thousands of toxic mortgages on pensioners and cities, after helping to drive the price of gas up to $4 a gallon and to push 100 million people around the world into hunger, after securing tens of billions of taxpayer dollars through a series of bailouts overseen by its former CEO, what did Goldman Sachs give back to the people of the United States in 2008?
Fourteen million dollars.
That is what the firm paid in taxes in 2008, an effective tax rate of exactly one, read it, one percent. The bank paid out $10 billion in compensation and benefits that same year and made a profit of more than $2 billion — yet it paid the Treasury less than a third of what it forked over to CEO Lloyd Blankfein, who made $42.9 million last year.
How is this possible? According to Goldman’s annual report, the low taxes are due in large part to changes in the bank’s “geographic earnings mix.” In other words, the bank moved its money around so that most of its earnings took place in foreign countries with low tax rates. Thanks to our completely fucked corporate tax system, companies like Goldman can ship their revenues offshore and defer taxes on those revenues indefinitely, even while they claim deductions upfront on that same untaxed income. This is why any corporation with an at least occasionally sober accountant can usually find a way to zero out its taxes. A GAO report, in fact, found that between 1998 and 2005, roughly two-thirds of all corporations operating in the U.S. paid no taxes at all.
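The “geographic earnings mix” effect is just a weighted average, which a minimal sketch makes plain; the rates and shares below are illustrative assumptions, not Goldman’s actual figures:

```python
# Sketch of the "geographic earnings mix" effect described above: the
# blended tax rate is a weighted average, so booking most earnings in a
# low-tax jurisdiction (and deferring US tax on them indefinitely)
# collapses the overall rate. All numbers are illustrative.
us_rate, offshore_rate = 0.35, 0.00
for offshore_share in (0.0, 0.5, 0.98):
    blended = (1 - offshore_share) * us_rate + offshore_share * offshore_rate
    print(f"{offshore_share:.0%} booked offshore -> blended rate {blended:.1%}")
# 0% -> 35.0%, 50% -> 17.5%, 98% -> 0.7%: the order of the ~1% rate above.
```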
This should be a pitchfork-level outrage — but somehow, when Goldman released its post-bailout tax profile, hardly anyone said a word. One of the few to remark on the obscenity was Rep. Lloyd Doggett, a Democrat from Texas who serves on the House Ways and Means Committee. “With the right hand out begging for bailout money,” he said, “the left is hiding it offshore.”
BUBBLE #6 Global Warming
Fast-forward to today. It’s early June in Washington, D.C. Barack Obama, a popular young politician whose leading private campaign donor was an investment bank called Goldman Sachs — its employees paid some $981,000 to his campaign — sits in the White House. Having seamlessly navigated the political minefield of the bailout era, Goldman is once again back to its old business, scouting out loopholes in a new government-created market with the aid of a new set of alumni occupying key government jobs.
Gone are Hank Paulson and Neel Kashkari; in their place are Treasury chief of staff Mark Patterson and CFTC chief Gary Gensler, both former Goldmanites. (Gensler was the firm’s co-head of finance.) And instead of credit derivatives or oil futures or mortgage-backed CDOs, the new game in town, the next bubble, is in carbon credits — a booming trillion-dollar market that barely even exists yet, but will if the Democratic Party that it gave $4,452,585 to in the last election manages to push into existence a groundbreaking new commodities bubble, disguised as an “environmental plan,” called cap-and-trade.
The new carbon credit market is a virtual repeat of the commodities-market casino that’s been kind to Goldman, except it has one delicious new wrinkle: If the plan goes forward as expected, the rise in prices will be government-mandated. Goldman won’t even have to rig the game. It will be rigged in advance.
Here’s how it works: If the bill passes, there will be limits for coal plants, utilities, natural-gas distributors and numerous other industries on the amount of carbon emissions (a.k.a. greenhouse gases) they can produce per year. If the companies go over their allotment, they will be able to buy “allocations” or credits from other companies that have managed to produce fewer emissions. President Obama conservatively estimates that about $646 billion worth of carbon credits will be auctioned in the first seven years; one of his top economic aides speculates that the real number might be twice or even three times that amount.
The feature of this plan that has special appeal to speculators is that the “cap” on carbon will be continually lowered by the government, which means that carbon credits will become more and more scarce with each passing year. Which means that this is a brand new commodities market where the main commodity to be traded is guaranteed to rise in price over time. The volume of this new market will be upwards of a trillion dollars annually; for comparison’s sake, the annual combined revenues of all electricity suppliers in the U.S. total $320 billion.
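A minimal sketch of why a shrinking cap guarantees rising prices, under the simplest possible assumptions (a fixed demand curve for emissions and a cap that falls a few percent a year — every number here is illustrative):

```python
# Why a shrinking cap means rising prices: with a fixed, downward-sloping
# demand curve for emissions, the market-clearing credit price each year
# is read off the demand curve at that year's cap.
a, b = 100.0, 0.05          # assumed inverse demand: price = a - b * quantity
cap = 1_000.0               # credits issued in year 1 (say, millions of tons)
for year in range(1, 8):
    price = a - b * cap     # market-clearing credit price at this cap
    print(f"year {year}: cap {cap:,.0f} -> price ${price:.2f}")
    cap *= 0.96             # cap ratcheted down 4% a year (assumed)
# The price climbs every single year, by construction -- exactly the
# "guaranteed to rise" property that makes the market attractive to bet on.
```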
Goldman wants this bill. The plan is (1) to get in on the ground floor of paradigm-shifting legislation, (2) make sure that they’re the profit-making slice of that paradigm and (3) make sure the slice is a big slice. Goldman started pushing hard for cap-and-trade long ago, but things really ramped up last year when the firm spent $3.5 million to lobby climate issues. (One of their lobbyists at the time was none other than Patterson, now Treasury chief of staff.) Back in 2005, when Hank Paulson was chief of Goldman, he personally helped author the bank’s environmental policy, a document that contains some surprising elements for a firm that in all other areas has been consistently opposed to any sort of government regulation. Paulson’s report argued that “voluntary action alone cannot solve the climate change problem.” A few years later, the bank’s carbon chief, Ken Newcombe, insisted that cap-and-trade alone won’t be enough to fix the climate problem and called for further public investments in research and development. Which is convenient, considering that Goldman made early investments in wind power (it bought a subsidiary called Horizon Wind Energy), renewable diesel (it is an investor in a firm called Changing World Technologies) and solar power (it partnered with BP Solar), exactly the kind of deals that will prosper if the government forces energy producers to use cleaner energy. As Paulson said at the time, “We’re not making those investments to lose money.”
The bank owns a 10 percent stake in the Chicago Climate Exchange, where the carbon credits will be traded. Moreover, Goldman owns a minority stake in Blue Source LLC, a Utah-based firm that sells carbon credits of the type that will be in great demand if the bill passes. Nobel Prize winner Al Gore, who is intimately involved with the planning of cap-and-trade, started up a company called Generation Investment Management with three former bigwigs from Goldman Sachs Asset Management, David Blood, Mark Ferguson and Peter Harris. Their business? Investing in carbon offsets. There’s also a $500 million Green Growth Fund set up by a Goldmanite to invest in green-tech … the list goes on and on. Goldman is ahead of the headlines again, just waiting for someone to make it rain in the right spot. Will this market be bigger than the energy futures market?
“Oh, it’ll dwarf it,” says a former staffer on the House energy committee.
Well, you might say, who cares? If cap-and-trade succeeds, won’t we all be saved from the catastrophe of global warming? Maybe — but cap-and-trade, as envisioned by Goldman, is really just a carbon tax structured so that private interests collect the revenues. Instead of simply imposing a fixed government levy on carbon pollution and forcing unclean energy producers to pay for the mess they make, cap-and-trade will allow a small tribe of greedy-as-hell Wall Street swine to turn yet another commodities market into a private tax collection scheme. This is worse than the bailout: It allows the bank to seize taxpayer money before it’s even collected.
“If it’s going to be a tax, I would prefer that Washington set the tax and collect it,” says Michael Masters, the hedge fund director who spoke out against oil futures speculation. “But we’re saying that Wall Street can set the tax, and Wall Street can collect the tax. That’s the last thing in the world I want. It’s just asinine.”
Cap-and-trade is going to happen. Or, if it doesn’t, something like it will. The moral is the same as for all the other bubbles that Goldman helped create, from 1929 to 2009. In almost every case, the very same bank that behaved recklessly for years, weighing down the system with toxic loans and predatory debt, and accomplishing nothing but massive bonuses for a few bosses, has been rewarded with mountains of virtually free money and government guarantees — while the actual victims in this mess, ordinary taxpayers, are the ones paying for it.
It’s not always easy to accept the reality of what we now routinely allow these people to get away with; there’s a kind of collective denial that kicks in when a country goes through what America has gone through lately, when a people lose as much prestige and status as we have in the past few years. You can’t really register the fact that you’re no longer a citizen of a thriving first-world democracy, that you’re no longer above getting robbed in broad daylight, because like an amputee, you can still sort of feel things that are no longer there.
But this is it. This is the world we live in now. And in this world, some of us have to play by the rules, while others get a note from the principal excusing them from homework till the end of time, plus 10 billion free dollars in a paper bag to buy lunch. It’s a gangster state, running on gangster economics, and even prices can’t be trusted anymore; there are hidden taxes in every buck you pay. And maybe we can’t stop it, but we should at least know where it’s all going.
This article originally appeared in the July 9-23, 2009 issue of Rolling Stone.
https://www.rollingstone.com/politics/politics-news/the-great-american-bubble-machine-195229/