Anti-Catholicism and the English Language

Hating the Catholic Church is as English as beef and mutton. The roots go deep. In 1570, Pope Pius V issued what amounted to a fatwa against the Queen, excommunicating that wretched Protestant Elizabeth Tudor and releasing her subjects from their allegiance; a decade later a papal secretary assured would-be assassins that killing her would be a godly act. From then on, Englishness and Catholicism were defined as being at odds: Guy Fawkes and the Gunpowder Plot, the two revolutions (one in which a Catholic-sympathizing king was beheaded, one in which a Catholic king was sent into exile), the many wars against continental Catholic regimes, and the Act of Settlement of 1701, still in force, which bars any Catholic, or anyone who marries one, from the throne.

It’s all ancient history, you could say, but history has a way of lingering on like an old injury, an occasional spasm of pain that shoots up long after the initial cause has been forgotten. To this day, Guy Fawkes is burned in effigy.

There is a curious linguistic legacy as well. In English, many words with a Catholic connotation carry a bad smell with them, a suggestion of being alien and authoritarian. When we accuse someone of pontificating, we’re not saying that they speak as authoritatively as the Pope; rather, we’re saying that they are talking pompously, with an exaggerated sense of the value of their words. Similarly, it’s not good to be doctrinaire or dogmatic, although for Catholics doctrine and dogma are the essence of the faith. The Jesuits are learned men who teach a form of moral reasoning called casuistry. Yet to be Jesuitical is to be cunning, dissembling, and equivocating; casuistry in English means clever and false reasoning, the very type of manipulative trickery supposedly practiced by the Jesuits. And propaganda: that comes from the Congregatio de Propaganda Fide (“congregation for propagating the faith”), a committee of cardinals set up in 1622 to promote foreign missions. For us, propaganda inevitably also means manipulative arguments designed to stir up false feelings. (Propaganda didn’t take on its pejorative sense in English until the early 20th century, but surely its distant Catholic origins explain why it sounds so bad to our ears.)

In using these ordinary English words – pontificating, doctrinaire, dogmatic, Jesuitical, casuistry, propaganda – we’re echoing far-off religious wars. Implicit in the English language is a form of unthinking anti-Catholicism. Daniel Defoe had something like this in mind when he wrote in 1726 that London has “ten thousand stout fellows that would spend the last drop of their blood against Popery that do not know whether it be a man or a horse.” We don’t know if the pontiff is a man or a horse, but we know we don’t want to be pontificating.

Something similar is happening now in the current wars that have a religious undercurrent. The Islamic words that get anglicized (jihad, fatwa, mullah, and ayatollah) are the ones that suggest conflict and alien authority. Linguistically, our wars will be with us for a long time yet.

Science Fiction: The Other God That Failed

Science fiction, it is often plausibly argued, is a literature about technology and what it does to humans. But what if this view of the genre is wrong? What if science fiction (SF) is not really about technology at all but about something else? What if SF is at its core a religious genre, a literature about the search for transcendent meaning in a post-Christian world?

The story of L. Ron Hubbard is well known: he started off as a successful pulp writer of science fiction (and other popular genres) in the 1930s and 1940s. By the tail end of the 1940s, he claimed to have discovered a new science of the mind, Dianetics. This purported discovery eventually morphed into the religious movement called Scientology, which now has many thousands of adherents worldwide, including celebrities like Tom Cruise and John Travolta.

Hubbard’s roots in science fiction were hardly an accident. The birth of Dianetics was organically tied to the evolution of American science fiction. In the 1930s and 1940s, SF was very much a messianic, utopian genre. Many writers were Marxists or quasi-Marxists (including Isaac Asimov, Frederik Pohl, and Judith Merril) or adhered to other plans to radically remake society (Robert Heinlein seems to have been some sort of Social Credit acolyte, a fact later suppressed). These utopian hopes were often invested in the genre itself. Both fans and professional writers argued that SF had a real-world mission: by grappling with new ideas, SF could help humanity come to terms with technology and solve the problems of economic distress and warfare that plagued the 20th century. Science fiction could, it was earnestly argued, save the world.

By the late 1940s, these social hopes for amelioration increasingly moved away from communal projects towards the dream of personal development and self-improvement. Perhaps through evolution or mutation or the cultivation of untapped mental powers, a new type of humanity could emerge to save the world: the superman as messiah. Socially marginal and even alienated, working for tawdry magazines that paid them a penny a word at best, totally despised by the intellectual and cultural elite, SF writers of the 1940s maintained grandiose visions of what they could accomplish through their writings.

When L. Ron Hubbard came up with Dianetics, he found a ready and expectant audience in the science fiction world. The first announcement of this new science was in Astounding Science Fiction in 1950, where it appeared as a special “fact” article. Under the stewardship of John W. Campbell, Astounding was the leading magazine of the genre, renowned for publishing Isaac Asimov’s Foundation series and Robert Heinlein’s “future history” stories. Astounding prided itself on being the home of “hard science fiction”, SF that adhered as closely as possible to the real laws of physics and extrapolated future developments in technology with rigor. Yet for all his pretences of being a hard-headed, just-the-facts engineer, Campbell had a mystical streak to him, which Hubbard cunningly tapped. For at least a while Campbell was one of Dianetics’ loudest advocates. Even after he gave up on Dianetics, Campbell remained a perpetual sucker for all sorts of pseudo-sciences. His magazine became a haven for those who believed in extra-sensory perception (or psionics) and the Dean Drive (a reactionless propulsion device that required an unfortunate suspension of Newton’s third law).

Aside from Campbell, many members of the SF community got caught up in the Dianetics craze: Katherine MacLean, James Blish, A.E. van Vogt, and Forrest J Ackerman. More importantly, the underlying promise of Dianetics, the hope for a new science of mind that would unleash hidden mental powers, became a central theme in the genre. Telepathy and psionics became staple concerns in SF magazines, as common as guns in detective novels. Throughout the 1950s and early 1960s, writer after writer dealt with this messianic hope of unleashing the hidden potential of the human mind. The theme shows up in the most famous and widely read books in the genre, running from Alfred Bester’s The Demolished Man (1953) to Theodore Sturgeon’s More Than Human (1953) to Robert Heinlein’s Stranger in a Strange Land (1961). All these books are charged with a strong transcendentalist yearning, and the Heinlein novel is very explicitly about the birth of a new religion, created by a messianic Martian. By the late 1960s, some hippies had taken the Heinlein book as a new gospel and begun to enact communal ritual ceremonies based on Heinlein’s fictional religion.

It’s hard not to find religion in almost all science fiction, a current always running a few feet underground. Think of the major movies in the genre: 2001: A Space Odyssey ends on an appropriately mystical note. What is “the Force” in Star Wars but a pop version of Zen? In Blade Runner the replicants search for their creator, hoping he can offer them immortality.

The true history of science fiction has yet to be written. In most accounts of the genre, Hubbard is treated as an embarrassing digression. He was much more than that: through chicanery he uncovered the true meaning of science fiction. Science fiction is the only literary genre that has led to the creation of a new religion. Why? Because science fiction at its core is a religious genre.  

In 1974, Philip K. Dick, the greatest science fiction writer since H.G. Wells, had a series of bizarre visions and auditions. He heard and saw things that weren’t there. If he had wanted to, Dick could have become the second L. Ron Hubbard. Science fiction fans who heard him speak about his visions were prepared to make him a guru and follow his prophetic teachings. It is part of Dick’s heroism, the real bravery of a flawed but honest man, that he chose not to become a god, preferring instead to work his visions into his writing and remain a writer of science fiction. Science fiction may be a religious genre, but there is no need to make a religion out of every science fiction vision. As Dick proved, the demarcation between literature and religion can be maintained even in the face of the temptation to be worshipped.

Becoming an Ink Stud

Here’s a link to a radio interview in which Chris Ware and I talk to the hosts of the “Ink Studs” program about Sundays With Walt and Skeezix, a giant-sized reprint volume of old Gasoline Alley comic strip pages. Chris, of course, designed the book and I wrote the introduction. Find out why Joe Matt’s favorite activities are 1) self-pleasuring and 2) reading Gasoline Alley.

Affirmative Action, Meritocracy, Nepotism and the Podhoretz Clan

Over the last four decades, there has been no more vocal and insistent enemy of affirmative action than Norman Podhoretz. From his influential perch at Commentary magazine (which he edited from 1960 to 1995), Podhoretz launched attack after attack on affirmative action as an affront to the sacred principle of meritocracy.

In a speech given at the National Review Institute in early 1993, Podhoretz made the argument with typical forcefulness: “Affirmative action and quotas represent the most radical assault yet on the traditional American ethos. It is an assault on the idea that was the revolutionary principle of the American Revolution, and still is a revolutionary principle: the idea that it doesn’t matter who your father was. The counter-revolutionary principle to which we are now succumbing is that all individuals are to be judged by their ancestry, by the group to which they belong – racial, ethnic, religious, sexual.”

Consider the implications of what Podhoretz is saying here: that apart from affirmative action, America is a meritocracy where ancestry is irrelevant. Or at the very least, the ideal America is one where “it doesn’t matter who your father was.”

Does this really describe the America produced by the revolution of 1776, where the daughter of a slave was still a slave?

Is this really an accurate description of the America governed by George W. Bush, a man whose entire career from prep school to presidency has been aided by his family connections?

Does the ideal of meritocracy, an indifference to parental power, even describe Podhoretz’s own career? As editor of Commentary the elder Podhoretz very much governed as a benevolent paterfamilias, publishing countless essays and even short stories by his wife Midge Decter, his son John Podhoretz, his step-daughter Naomi Munson (née Decter), his other step-daughter Rachel Abrams (née Decter), his son-in-law Steven C. Munson, his other son-in-law Elliott Abrams, and his grandson Sam Munson. (This is a very incomplete list: it would take an army of genealogists to do justice to Podhoretz’s editorship by bloodline. Nor does the list include all the other neoconservative families that clutter the Commentary table of contents: the Kristols, the Himmelfarbs, the Pipes, the Kagans, etc. No less than dry-cleaning, a wag observed, neoconservatism is a family business.)

Some of these branches of the family tree made their appearance after Podhoretz gave up the grubby job of editing the magazine and was elevated to the title of editor-at-large. Podhoretz’s replacement as editor was Neal Kozodoy, a capable and loyal underling but not a fit genetic heir. Now Kozodoy in turn is ready to step aside. His designated replacement? John Podhoretz, Norman’s only full-blooded son.

Thank God America is a meritocracy and “it doesn’t matter who your father was.” Now if we can only get rid of affirmative action to make sure that jobs only go to those who have proven their worth in the marketplace.

Fish need bicycles (and planes and trains)

I have a number of reservations with the ‘eat local’ movement – and I don’t mean in restaurants:

1) De gustibus non disputandum est. Local food does not necessarily mean lower carbon emissions. If we are after lower carbon (and we should be), let’s pursue that goal directly. Let’s be transparent in tracking and pricing carbon across all sectors instead of acting on the misplaced and dangerous faith that “local” can serve as a proxy for “lower”. Let’s pay the carbon premium on, say, imported pineapples so as to encourage less carbon-intensive transportation routes, and enjoy our pineapples guilt-free.

2) Caveat emptor. Local food can of course be bad for people and the environment: local farming practice may do more damage to the environment, in the form of pesticides or water use, than alternative farming practices farther afield. Just as the lifecycle environmental impact of one ethanol source can be quite different from another’s, the same is true for food production. Local food in parts of China might be full of toxins, no matter how proximate. Pilot whales might very well be ‘local’ to the Japanese whaling town of Taiji, against their better judgement, but Taiji school kids still shouldn’t eat those dolphins for lunch, because their meat has dangerously high concentrations of mercury (among other arguments for not eating dolphins).

3) Cui bono? How convenient for protectionists, and how damaging to development! Japan has a 600% tariff on rice imports: its rice farmers don’t need any more help discouraging imports of rice from China, Thailand and Vietnam – even though the removal of formal and informal barriers to rice imports in Japan would greatly enrich farmers in those countries. The same logic applies elsewhere: in many parts of the world, notably Africa, agricultural exports represent the fastest path to development – but African farmers run up against import barriers in the EU and elsewhere. So what would the ‘buy local’ idealists envision for farming-based communities in Africa?

4) Reductio ad absurdum. Why restrict the ‘buy local’ injunction to food? Why not also encourage people to buy local electronic products, automotive parts, clothing, housing materials and vacations? It makes no sense to single out food while ignoring all other forms of consumption that entail emissions through transport. If the point is indeed to reduce carbon, a zero footprint commitment makes much more sense than a fetish for geographic distance, because substantial emissions can be generated without ever leaving home. If the point is to reduce consumption and waste altogether, the buy local movement is a mere poseur when compared to freegans who sustain themselves by dumpster diving. Sure, it might appear at first blush that there is no contradiction between buy local activists and freegans, but they are in fact motivated by inconsistent impulses: the former want to help local farmers grow their business within the marketplace, while the latter aim at nothing less than a wholesale rejection of modern consumer life. And of course freegans won’t throw away vegetables from the local dumpster just because they were imported before they were discarded.

One man, two votes

Ontario’s recent general election offered voters an interesting experience, namely the chance to vote for two things at once.

The first vote was on a referendum question: “Which electoral system should Ontario use to elect members to the provincial legislature?” The two choices — and for many people, it should be noted, having only two choices felt unnecessarily limiting — were (a) the existing first-past-the-post system, or (b) the proposed Mixed Member Proportional (MMP) system.

Now, from what I could tell, MMP had a substantial amount of money and airtime backing it. I can’t count the number of times radio ads bombarded me with the cheerful news that I’d be relieved of the terrible dilemma of voting for a good party with a bad local candidate, or for a good local candidate working for a bad party. You can have it all, the ads implied. By contrast, I can’t remember seeing or hearing a single ad supporting the existing system. Imperfect old first-past-the-post was apparently heading for the knacker’s yard.

And yet the existing system won, by 63% to MMP’s 37%. This, for me, was a happy result, unexpected though it was. Although the critique most commonly aimed at systems of proportional representation is that they generally result in minority governments and the associated need for coalition building, this doesn’t drive much of my thinking. I’m not terribly worried about minority governments, although I’ll admit the politicking that comes with coalition building does have a certain anti-democratic flavour to it.

Rather, my concern is more of a bottom-up and philosophical one. Our democracy evolved as a system in which each locality sends its representative to a central parliament. This is the original tie that connects a people to its central government, and along which legitimacy is transmitted. Parties are an ideological and functional overlay on this basic system. Their existence makes it easier to raise funds, coordinate campaigns, and maintain a solid base of parliamentary support for the term of a single government. They are a useful and permanent part of the machinery. But they are not fundamental to the system in the way that the principle of local representation is.

So proportional representation (mixed or unmixed) has always seemed to me to be a system that seeks to fix a set of modest problems — the fact that seats do not get allocated according to the parties’ proportion of the aggregate vote, the fact that a citizen cannot vote for parties and local candidates separately (and I’m not even sure that this is a “problem” more than a perceived inconvenience in an age of consumer choice) — at the cost of seriously weakening the foundation of our democratic system itself. Representative democracy was not invented to be easy, convenient, or even particularly “fair”. It was invented (though that’s too cut-and-dried a word) to give the people a voice in the running of the government through the specific representatives that each locality sends on its own behalf. And it’s this very specificity, this tangible connection between place and parliamentary seat, that keeps our democratic government rooted, legitimate, and accountable.

Having now displayed my Burkean conservative side, in my next post I’ll tell you why I voted Green.

To deliver an opinion, is the right of all men; that of constituents is a weighty and respectable opinion, which a representative ought always to rejoice to hear; and which he ought always most seriously to consider. But authoritative instructions; mandates issued, which the member is bound blindly and implicitly to obey, to vote, and to argue for, though contrary to the clearest conviction of his judgment and conscience,–these are things utterly unknown to the laws of this land, and which arise from a fundamental mistake of the whole order and tenor of our constitution.

Parliament is not a congress of ambassadors from different and hostile interests; which interests each must maintain, as an agent and advocate, against other agents and advocates; but parliament is a deliberative assembly of one nation, with one interest, that of the whole; where, not local purposes, not local prejudices, ought to guide, but the general good, resulting from the general reason of the whole. You choose a member indeed; but when you have chosen him, he is not member of Bristol, but he is a member of parliament.

– Edmund Burke, Speech to the Electors of Bristol, 3 Nov. 1774

No Exit From No Exit

As every freelance writer knows, most magazine articles come and go without a trace. Only a small handful trigger any reaction when they appear. But how many continue to be denounced and debated over a decade after their publication? I know of only one: “No Exit” by Elizabeth McCaughey, which was The New Republic’s cover story for February 7, 1994.

McCaughey’s article was an attack on Bill Clinton’s plan to extend health insurance to all U.S. citizens. McCaughey’s analysis was seriously misleading, for reasons I tried to explain in a 2004 article for The Believer called “Reckless Falsehoods” (linked below). Now American journalist Ezra Klein has revisited McCaughey’s essay, in a blog post that characterizes it as a “dishonest, fearmongering article.”

Klein is right to continue to make an issue of McCaughey’s story and the pernicious role it played in defeating the Clinton plan. I also second his recommendation of the work of James Fallows, who gives a lucid summary of the McCaughey affair in his 1996 book Breaking the News: How the Media Undermine American Democracy. What is especially noteworthy about Klein’s post, however, is that it has generated a response from Andrew Sullivan, who was editor of The New Republic when McCaughey’s essay appeared.

Sullivan’s reply runs together the question of whether the 1994 plan was good legislation with whether or not McCaughey and TNR conducted themselves in a defensible manner. In regard to the second issue, Sullivan writes the following:

I don’t think it’s fair to expose the internal editing of a piece but there was a struggle and it’s fair to say I didn’t win every skirmish. I was aware of the piece’s flaws but nonetheless was comfortable running it as a provocation to debate. It sure was. The magazine fully aired subsequent criticism of the piece. And if the readers of TNR are incapable of making their own minds up, then we might as well give up on the notion of intelligent readers. The piece also won a National Magazine Award.

Like Klein, I want Sullivan to be “more honest” about the McCaughey episode. To that end, I feel compelled to note that Sullivan’s reply is seriously misleading.

After McCaughey won the National Magazine Award, TNR columnist Mickey Kaus wrote a column pointing out how inaccurate her essay was (“No Exegesis,” May 8, 1995). Shortly after Kaus’s column appeared, TNR received the following letter:

April 27, 1995

To the editors:

I was on the panel of judges for the National Magazine Awards and cast my personal vote in the public interest category for the entry from the New Republic, “No Exit” by Elizabeth McCaughey. I did so because I thought it was the magazine article that had the greatest effect on public policy in 1994. I first read “No Exit” and McCaughey’s subsequent reply to administration critics of her article (the reply was also part of the entry) when they appeared in the New Republic. They were convincing to me during the judging of the awards. Perhaps I was right to be convinced, perhaps not. But I now know something for certain: I was wrong to believe the New Republic.

Your magazine endorsed Bill Clinton. The health care plan was a central, if not the central, piece of legislation of Clinton’s presidency. You put a devastating story about the health plan on the cover and then, a few issues later, heralded McCaughey’s reply to her critics with the cover line “Elizabeth McCaughey: White House Lies.” Lies! How could a magazine endorse a story and its author more strongly? As a reader I assume that such endorsement means, at the very least, that the basic facts in the article will be correct. Now I read Mickey Kaus saying in the New Republic that, among other important errors, McCaughey was wrong when she said that the Clinton plan would not allow a patient to pay his doctor directly for medical care but must allow the doctor to be paid by the government plan. Her errors, Kaus writes, “completely distorted the debate on the biggest public policy issue of 1994.” But where was Kaus when the story came in? Didn’t anyone there bother to check McCaughey’s citations to see if she was accurately reading and quoting the plan? It couldn’t have been that hard. If it turned out that you slipped up and McCaughey’s story was wrong, you should have said so yourselves back then rather than waiting for Kaus to shoulder the load at this late date. Then again, how does a reader know that Kaus is right? Did anyone there bother to check his story when it came in?

I am not talking about the difference of opinion between McCaughey and Kaus. A magazine is a chorus of many voices. There is lots of room for disagreement. But that’s not the problem here. Clinton’s plan says what it says. Any article on that plan must be based on accurate statements about what the plan says. Making sure that an article is accurate is one of the things an editor does. If you are not going to do that for a cover story on a central piece of legislation by a president that you endorsed, if you are not going to do that for a follow-up in which you call the administration liars, when are you going to do it? If Kaus was wrong and McCaughey is right after all, then how could you have published Kaus’s column? I can imagine a good magazine publishing neither McCaughey’s story nor Kaus’s story. But I cannot imagine a magazine with respect for its readers publishing both.


Gregory Curtis

Curtis’s letter puts paid to the idea that McCaughey’s article was simply a “provocation to debate.” Publishing both Kaus and McCaughey raises fundamental questions about accuracy that Sullivan has never adequately answered. The fact that Curtis and the other judges did not have all the relevant information when they chose McCaughey’s piece calls into question the legitimacy of the National Magazine Award. Finally, the fact that Sullivan declined to publish Curtis’s letter makes a mockery of his claim to have “fully aired” criticisms of her piece.

I admire Sullivan for admitting he made an error of judgment in supporting the Iraq War. He needs to do the same thing here, and admit he made a serious mistake in the way he handled McCaughey. Only then will he have lived up to Orwell’s dictum that “to see what is in front of one’s nose is a constant struggle.”

I argue that McCaughey was a worse journalist than Stephen Glass in reckless-falsehoods.pdf.

Barry Malzberg is alive and well…

“Barry Malzberg? He can’t possibly have a new book out. He died in the early 1980s.” That’s what a clerk at Bakka, a Toronto book store specializing in science fiction, confidently informed me when I asked for a copy of Malzberg’s latest essay collection Breakfast in the Ruins: Science Fiction in the Last Millennium (Baen Books, 2007). Her mistake was, of course, entirely understandable and even predictable.

Three decades ago Malzberg was a force to be reckoned with, a large presence not just in science fiction but in the broader realm of paperbackdom. In his first decade as a writer (1967-1976) he published at least 30 books under his own name, and many more books under a wide variety of pseudonyms. As “Mike Barry”, Malzberg wrote, in the space of 2 years, 14 men’s adventure novels in the “Lone Wolf” series (a knock-off of Don Pendleton’s mafia-killing vigilante, the Executioner).

Not even Malzberg remembers all the books he wrote during this feverish period. A reasonable guess would put the total at about 80 books, probably more. As one fan recalled of this period, “back during the 1970s, it seemed as though there was a new Malzberg SF novel almost every week.” In fact the pace was rather slower: a book every two months. Most of these were without doubt hack-work, especially the porn novels he wrote under the name Mel Johnson (titles like I, Lesbian and Nympho Nurse).

But Malzberg took his craft seriously: he was deeply versed in the history of science fiction, and many of the stories and novels he wrote in that mode were excellent. Like his contemporaries Robert Silverberg and Harlan Ellison, he brought mainstream literary values to a genre that still had its roots in the purple prose of the pulp magazines. His 1972 novel Beyond Apollo tells the story of a failed expedition to Venus as Samuel Beckett might have: a harrowing, involuted and internalized account of how the vast alienness of space could drive an astronaut mad. (The recent scandal involving astronaut Lisa Nowak seemed like a Malzberg novel come to life.)

But by the end of the 1970s, while still shy of his 40th birthday, Malzberg was already a burnt-out case. He had written too much, too quickly, too sloppily, with too much repetition and almost no rewriting. Always fluid and rhetorically resourceful, he must have grown sick of his own ever-glib voice. His dark vision of the future, plausible enough during the Vietnam War and Watergate, became increasingly unpopular as fans of science fiction sought relief in Star Wars-style space operas. Malzberg’s central theme is that technological progress comes with a heavy price in psychic pain: an acceptable enough idea in the early 1970s, but one dismissed (unfairly) as Luddite and reactionary in subsequent decades.

He repeatedly announced his intention to give up writing, resolutions that had the same half-life as a smoker’s vow to quit. By any reasonable standard he remained prolific enough, writing 150 stories over the last three decades; but compared to his earlier torrential overflow of words, he had gone dry. Moreover, these newer stories, although written with a new level of care, were scattered through countless anthologies and magazines. Only a few have been gathered together in book form, in omnibus volumes put out by small literary presses. Once a dominant writer in science fiction, Malzberg had become nearly invisible.

One outgrowth of this quiescent period was a collection of essays, The Engines of the Night (1982), a reflection on the genre of science fiction using his own career as a test case. Bitter and bracingly funny, Engines of the Night is perhaps the best book ever written on what it is like to be a commercial writer of fiction, someone who churns out novels by the dozen for a few thousand dollars a pop.

Here’s Malzberg’s account of how, over the course of 4 days, he turned the science fiction short story “Closed Sicilian” (about a chess game to decide the fate of the world) into the novel Tactics of Conquest: “Now you may think that you would have trouble expanding a twenty-six-hundred-word story into a fifty-five-thousand-word novel. You would be right. My oh my did I pad and overload. Sentences became pages, paragraphs became chapters. Megalomania became grandiosity with lots of examples. Whole flashback chapters were devoted to his life as a chess champion: scenes in Berne and Moscow and Philadelphia, the traveling life of the chess master. Also some sex scenes, but within good taste because this is the science fiction market. It turns out that the narrator has really had a secret homosexual relationship with his opponent for years but it is said in a subtle way.”

This honesty about the grubby facts of life on Grub Street makes Engines of the Night the best book ever written on science fiction, better than comparable critical studies by Brian Aldiss, Damon Knight, Algis Budrys, James Blish, and Thomas Disch (all working science fiction writers who have written with liveliness about their craft). Science fiction, Malzberg makes clear, is both a marketing category and a literary form, and these two aspects are so closely intertwined as to be inseparable. It’s the commercial framework of the genre that makes it possible for the genre to exist and to find its audience. But this commercial framework also sets the limits for literary achievement in the field: move too far away from the expectations of the audience and you risk alienating that audience. That was the lesson Malzberg learned in the 1970s (his peers Ellison and Silverberg were similarly chastened by that disillusioning decade).

Long out of print, Engines of the Night is now available again in an expanded form as Breakfast in the Ruins. Augmented with many fresh essays on writers like Isaac Asimov and J.G. Ballard, the book proves that, bookstore clerks to the contrary, Barry Malzberg is alive and well. That’s good news. Even better, Malzberg’s distinctive voice (mordant, morose, morbid, hyperbolic, florid, dirgic and dire) retains its hypnotic flow. Listen to this sentence: “I abandoned critical essays and reviewing not because I felt I had nothing to say – I had plenty to say, at least to myself, and there is no silencing that raving, chattering internal voice, that thread of consciousness and disputation which rambles on and on and turns some writers into alcoholics and almost all of them into obsessives of one sort or the other – but because I felt that I had said enough and the integrity of Engines of the Night seemed to hinge upon reasonable silence.” I love that sentence-within-a-sentence about “that raving, chattering internal voice”. Has there ever been a better account of the psychic cost wrought by the writing life? And since he’s a science fiction writer, Malzberg knows that the writing life is itself an outgrowth of technology.

Forbidden Lies

Norma Khouri, author of Forbidden Love.

I saw a good documentary last night called Forbidden Lies. It tells the story of Norma Khouri, who became famous in 2003 as the author of the international bestseller Forbidden Love. Presented as non-fiction, the book was set in Jordan and addressed the issue of honour killing. It told the story of a woman named Dalia, described as the author’s best friend, who was killed by her Muslim father for dating a Christian man.

In 2004 an Australian journalist exposed Khouri as a fraud, and the documentary recounts her rise and fall. It turns out Khouri is a con woman and a pathological liar. Over the course of the film her falsehoods become so blatant and so desperate that my wife and I squirmed in our seats with pure embarrassment. The highlight of the film is a trip Khouri takes to Jordan with the documentary-makers, to prove to them that the events of Forbidden Love really took place. She winds up destroying her credibility so completely that you almost begin to feel sorry for her.

Norma Khouri’s book Forbidden Love being removed from sale at an Australian bookstore in 2004. (Photo: The Guardian)

Khouri’s book was translated into 16 languages, and in hindsight, part of its success may be due to the fact that it had cross-political appeal. The issue of honour killings is one that resonates with many left-wingers on feminist grounds. The image of Muslim Jordanians as barbarian wife killers, by contrast, reinforced conservative prejudices and fit with the American mood when the book was published (between 9/11 and the Iraq invasion). The film points out that honour killings do happen in Jordan, but the true number is closer to 12 to 17 per year, not the thousands Khouri suggests. That is obviously 12 to 17 too many, but as several Jordanian human rights activists in the film argue, it is not a problem that will be solved by spreading an image of Jordanian Muslims that reinforces the worst instincts of the Bush administration.

On a side note, the film has many scenes in which the director is shown talking directly to Khouri. I always dislike it when documentary makers do that, as it seems both vain and a distraction from the main subject. I prefer the approach of documentarians such as Joe Berlinger and Bruce Sinofsky (Paradise Lost, Some Kind of Monster) who usually try to keep themselves out of the story as much as possible.

Philologists: a species report

“Pardon me,” a newcomer asked Rubin, “what is your name?”

“Lev Grigorich.”

“Are you an engineer too?”

“No, I’m not an engineer, I’m a philologist.”

“Philologist? They even keep philologists here?”

– Aleksandr Solzhenitsyn, The First Circle

Ah, when an average day’s roundup of intelligentsia would almost always yield a philologist or two amongst the catch! Not so today. Philologists are as rare as hen’s teeth; if one does turn up in the system, the government’s policy is one of immediate release. It’s a question of maintaining the breeding population, you see.

But seriously, where have all the philologists gone? I ask not because I grew up in a world filled with philologists, and now notice their absence. I’ve never met a philologist, not once in my life; I haven’t spotted one across a campus; haven’t heard one on the radio nor seen one on TV. From the day I was born, I’ve lived in an effectively philologist-free environment.

Over the years, however, I’ve stumbled across evidence that such people did exist, and that, in fact, their numbers were not insignificant compared to other academic professions. For one thing, they frequently turned up as characters in literary novels, usually written by (or about) continental Europeans. For another, famous people whom one had been brought up to think of as philosophers turned out on later inspection to be philologists. Nietzsche a philosopher? Wrong.

In fact, as I recently discovered from a review in the Times Literary Supplement (of Joep Leerssen’s National Thought in Europe: A Cultural History), philologists were once considered rather hip:

In 1848, the year of revolutions, a “National Assembly” was convened at Frankfurt, to discuss unification of the German lands, civil rights and a constitution for a future Reich. The strangest thing about the assembly was its seating plan. Delegates were placed in a semi-circle facing the Speaker, but there was one seat in the centre of the semi-circle, directly opposite the Speaker, set apart from all the others. It was reserved for Jacob Grimm. Can one imagine a British durbar to decide the future of the Empire, deliberately and symbolically centred on a professor of linguistics, also known as a collector of fairy tales? But Grimm was not a mere linguist, he was a Philolog, and by 1848, as Joep Leerssen points out in his exceptionally wide-ranging study, philology was a combination of linguistics, literary history and cultural anthropology with the prestige of a hard science and the popular appeal of The Lord of the Rings. Grimm was there to speak, not for the nation, for there was no German nation, but for an imaginary Deutschland which he had very largely created in an unmatched though repeatedly imitated feat of “cultural consciousness-raising”.

More prosaically — when they were not creating nations out of thin air — what philologists generally did was to study historical texts and, by painstaking linguistic and contextual analysis, to recover the authoritative original text (or Ur-text), cleaned of centuries of copyist errors, translator distortions, and the fabrications of forgers. This, of course, was a tremendously useful service for historians, who no longer had to accept or reject wholesale the documentary evidence they required to write history. In The Historian’s Craft, medieval historian Marc Bloch described the importance of the change:

True progress began on the day when, as Volney put it, doubt became an “examiner”; or in other words, when there had gradually been worked out objective rules which permitted the separation of truth from falsehood. The Jesuit Papebroeck, in whom the reading of The Lives of the Saints had instilled a profound mistrust of the entire heritage of the early Middle Ages, considered all Merovingian charters which had been preserved in the monasteries to be forgeries. No, replied Mabillon. There are unquestionably some charters which have been retouched, some which have been interpolated, and some which have been forged in their entirety. There are also some which are authentic, and this is how it is possible to distinguish the bad from the good. That year, 1681, the year of the publication of the De Re Diplomatica, was truly a great one in the history of the human mind, for the criticism of the documents of archives was definitely established.

Given how important this function is to the writing of accurate history, one should not be surprised to learn that the profession has not, in fact, vanished. But — and this quite apart from philologists’ newfound and almost Hobbit-like ability to vanish into the background — it has begun to change. Rather than searching for an individual and authoritative Ur-text, some philologists have recently argued for an acceptance of the essential mobility of certain texts (particularly medieval ones), insofar as such texts have been continually and intentionally re-written, their meanings linked not to any original intent but rather to the moment of their performance. This approach is sometimes referred to as the New Philology, and the key book here is Bernard Cerquiglini’s Éloge de la variante: Histoire critique de la philologie (1989), published in English by Johns Hopkins in 1999 as In Praise of the Variant: A Critical History of Philology.

There’s no cause for worry, then. Philologists do still exist, and the species seems to be evolving at a healthy pace. But good luck catching one.