THE BABEL TRILOGY

What really happened at the Tower of Babel? 

What happened is that the Architects came down to us. They were the source for all our myths and religions. They gave us the strange and powerful virus we call 'civilization.'

 

In a sense, they gave us our humanity. 
 
They just lied about why. 

BOOK TWO: GHOSTS IN THE MACHINE

Morag Chen doesn’t believe in the supernatural. Or not until a thousand gods show up in front of her, dripping like oil from a clear blue sky.

 

The Architects are terrifying, hypnotically attractive, and real—but what are they? What do they want? And why have they stolen the mind of her brother, Daniel?

 

Ancient gods? Invading aliens?

 

Everyone has a theory, but no one has guessed the truth. 

A secret lab. The house of a dying billionaire. The hidden home of a strange and forgotten people.

 

In each of these places, Morag and Daniel will come a step closer to answers, hope, and a way of fighting back.

“Outstanding.”

GHOSTS IN THE MACHINE:
NOTES ON FACT AND FICTION

 

(A more comprehensive version of the notes that appear in the book)

Fang Lizhi

I’m guessing most readers of this book won’t know of Fang Lizhi, a Chinese scientist and activist of great courage who died in 2012. An astrophysicist by trade, he spoke and wrote eloquently on the connections between openness, equality, democracy, and science. “Science begins with doubt” was the first of his “five axioms,” which attempt to sum up the kind of intellectual environment—respectful of all evidence, skeptical of all authority—that science needs in order to operate effectively. His message, stated briefly, is this: we don’t yet know everything there is to know about the world, so science is needed; but this is also true of the human (social, economic, and political) world; therefore, science itself shows us why it’s evil for governments to control what their citizens may think and say.

The Chinese Communist Party rewarded Fang Lizhi for this insight in a way that would have been instantly familiar to the guardians of absolute truth (and absolute power over what is to be counted as the truth) in the medieval Catholic Church: prison, “re-education,” and exile.

Of course, the five axioms are about how science aspires to work. Fang Lizhi knew very well that it doesn’t always live up to its own ideals. Scientists are almost as prone as authoritarian bureaucrats to thinking they know more than they do; see especially the note below on the very word “unscientific.”

The great institutional difference between science on the one hand, and both late-medieval Catholicism and China’s peculiar brand of pseudo-communism on the other, is that science—usually, eventually—rewards skepticism.

You can find out more about Fang Lizhi in an excellent series of articles by China scholar Perry Link; search “Lizhi Link New York Review.”
 

 “Become what you are”

The German version, “Werde, der du bist,” was a favorite saying of nineteenth century philosopher Friedrich Nietzsche, who learned it from the Greek poet Pindar. Nietzsche and Pindar are both talking about discovering your real, inner nature, and setting that nature free from the social and psychological constraints into which it was born. Both men were highly skeptical of an afterlife, so they’d have been surprised and troubled by the spin being given to the idea here by the leader of the Seraphim: his view is that our true nature will be revealed to us only in an afterlife.
 

 “Grabs your attention even more when you’re an atheist”

The philosopher Bertrand Russell nearly died in 1921 after he contracted pneumonia during a visit to Peking (Beijing). The experience led to one of the funniest lines in his Autobiography: “I was told that the Chinese said they would bury me by the Western Lake and build a shrine to my memory. I have some slight regret that this did not happen, as I might have become a god, which would have been very chic for an atheist.”
 

 Bill Calder, the supernatural, and Zeus having a snit

In response to Bill Calder, you could argue that the Greek idea about Zeus and lightning was a perfectly sensible proto-scientific theory, until we came along with a better theory that explains what static electricity does inside clouds. In other words, the Zeus theory, which we think of as “supernatural,” was the only intelligible “natural” option at the time, and shows that the Greeks didn’t think of Zeus as “supernatural” in our sense—they thought of the gods as a part of the world, and interacting with the world. That’s probably right, but it doesn’t undermine Bill’s argument against supernatural explanation.

Let’s suppose there are unexplained bumps-in-the-night, and you tell me it’s a poltergeist, which you say is “an immaterial or supernatural spirit that can’t be explained scientifically.” The right response is surely this: either we can make sense of these bumps by doing more scientific or common sense investigating, or we can’t. If we can (Aha, it was the plumbing all along), then the evidence that there’s a poltergeist vanishes. But if we can’t, to say “See, told you, it was a poltergeist!” is just to dishonestly admit-but-not-admit that we still have no idea (repeat: no idea) what the cause really is. Evidence for a “poltergeist” would count as evidence only if we could make sense of that term in a way that links it up with the rest of our understanding of the world. (‘Tell me more about these polter-thingys. Are they an electromagnetic phenomenon, or not? Do they have mass, or not? Are they ever visible, or not? How do they work? And how do you know any of this?’) Without good answers to these kinds of questions, the concept is empty, since you’ve given me no reason not to be equally impressed (or unimpressed) by infinitely many alternative theories, like the Well-Hidden Domestic Dragon theory, the Clumsy Dude from Another Dimension theory, and the creepier Undead Wall Insulation theory—to invent and name just three. So instead of saying “See, told you, it was a poltergeist,” you might as well say: “See, told you, it was, um, Something We Don’t Know About Yet.” And the only response to that is: “Precisely. Let’s keep investigating.”

Notice that some modern believers think God is, as it were, above and beyond the physical—an immaterial creator-spirit who doesn’t interact with the world. Others, on the contrary, think that, like Zeus, He makes decisions and then acts on those decisions (by answering your prayer for an easy chem test, drowning Pharaoh’s army, etc.). That raises interesting questions about what you commit yourself to when you say that God (or anything, for that matter) is “supernatural.” According to Bill’s argument, the former doesn’t even make sense, because it sounds superficially like a claim about what God’s like but really it’s a disguised admission that we cannot know anything about what He’s like. On the other hand, the latter seems to have the consequence—weird to most people today, but a commonplace in the eighteenth century—that God’s nature is a possible object (even the object) of scientific knowledge.
 
 
Einstein in delighted free fall

One of the key insights leading Einstein to the general theory of relativity was the equivalence principle, which says that being in a gravitational field is physically indistinguishable from being accelerated at an equivalent rate. A special case of this is that falling freely in a gravitational field is indistinguishable from floating, unaccelerated, in no gravitational field at all. That’s free fall, and it’s why astronauts say that the transition from the high-g launch phase to the zero-g of orbit is like falling off a cliff.
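To see why nothing in an orbiting spacecraft has any apparent weight, plain Newtonian bookkeeping is enough (a back-of-the-envelope sketch, nothing specific to the story): relative to a capsule that is itself falling with the local gravitational acceleration $g$, a dropped object accelerates at

\[ a_{\text{relative}} = g - g = 0, \]

so it simply hovers, exactly as it would in a capsule coasting far from any gravity.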
 

 Khor Virap

Worth looking up (or visiting) for the spectacular location, it’s built on the site where Saint Gregory the Illuminator was imprisoned in a pit for thirteen years for trying to convert the Armenians to Christianity. Eventually he did convert the king, who made Armenia the first Christian kingdom in the world, in the year 301. (The monastery was founded in 642; the main church was completed in the 17th century.)
 

 “Limbo … a traffic jam in the afterlife”

Catholic theologians struggled for centuries with the question of what happened to children who died unbaptized. Heaven or hell? Neither seemed to be the right answer, and Limbo, which literally means “border,” was conceived of as a place between the two, a sort of celestial no-man’s-land where such souls would at least temporarily reside. Vatican theologians more or less abandoned the idea early this century. However, why they now think unbaptized souls don’t go to Limbo seems to me every bit as puzzling as why they previously thought they did. (See the note on futurists, theology, and unicorns.)

 
 “Freshly-dead saints in corny baroque paintings”

The florid Baroque style in European painting runs from about 1600 to 1725. Morag might be thinking of Sebastiano Ricci’s Apotheosis of Saint Sebastian, or any of dozens more in the genre. Even that sublime genius Gian Lorenzo Bernini goes for it in his famous statue Ecstasy of Saint Teresa (1652). An unexpected “saint” gets a similar treatment more than a century later—though the expression is more constipated than amazed—in John James Barralet’s epically unfortunate The Apotheosis of Washington.
 

 “Macedonian badass Cleopatra”

Cleopatra VII and her family became perhaps the most famous Egyptians, but they weren’t really Egyptian. Like Alexander the Great, they came from Macedonia, on the northern border of Greece—though by Cleopatra’s time they’d ruled Egypt for almost three hundred years. The dynasty was started by Ptolemy I, who had been a general in Alexander’s army. In a sense, he and his descendants were even more spectacularly successful than the great conqueror: by taking control of Egypt, they were able to become gods.
 

 “The beast with two backs”

Shakespeare uses this euphemism for sex in Othello, but it was invented at least a century earlier. I’m not sure about a gold bed, but it’s no fiction that Jules and Cleo were having a very cozy time together; she gave birth to “Caesarion”—little Caesar—in the summer of the year following his visit.
 

Caesar and the library

He probably was responsible for a fire at the Library of Alexandria in 48 BCE, but it wasn’t devastating: in reality, the institution survived for centuries after that. Alexandria remained a polytheistic city, with many ethnicities and languages and a rich intellectual life, until the middle of the fourth century. In 313, the emperor Constantine may have converted to Christianity; in any case, over the next two decades, until his baptism and death in 337, he made Christianity more and more the semi-official religion of the Roman Empire, with an atmosphere increasingly hostile to the old pagan religions. There was a brief respite for non-Christians after his death, but in 380 the emperor Theodosius I made Christianity the state religion, began to ban pagan rites throughout the empire, passed laws that made it economically difficult and even dangerous to be a non-Christian, and encouraged the destruction of pagan temples. Alexandria’s newly monotheist rulers drove out Jews and other non-Christian groups, and—in a startling echo of current policies by radical Sunni Muslims—took it upon themselves to destroy everything pre-Christian in the city, including books, monuments, and even the Serapeum, Alexandria’s most magnificent Greek temple. Hatred of the past, and the firm conviction that you’re right about everything and that only the future of your own faith matters, are not a new invention. (See the note ‘“A recovering fundamentalist”—and what Adam could have learned from Socrates.’)

When the great library was finally destroyed or abandoned is unclear, but its contents were probably lost because of piecemeal destruction followed by long neglect, rather than a single great fire. Whatever the exact cause of the loss, during this period most of ancient culture disappeared. You could fill a big lecture hall with the major ancient figures in geography, medicine, history, mathematics, science, drama, poetry, and philosophy from whose writings we have either fragments or nothing. A few examples: Leucippus and Democritus, who invented atomic theory; the mathematician Pythagoras; the philosophers Cleanthes of Assos, Chrysippus, and Zeno of Elea; the great polymath Posidonius of Rhodes, who features in The Fire Seekers; the poet Anacreon; and last but not least, the most famous female intellectual of the entire ancient world, the poet Sappho.

The situation in drama sums it up pretty well. Aeschylus, Sophocles, and Menander are famous on the basis of fifteen surviving plays, plus some fragments. But we know from other evidence that between them they wrote over three hundred plays. All the rest have vanished. It’s like knowing the Harry Potter books from one damaged photocopy of the bits about Hagrid.
 

Futurists

Morag’s dig about futurists and fortune-tellers is probably well-deserved, but I’ve always thought the term has more in common with “theologian”—and “unicorn expert.”

If I claim to know a lot about unicorns, you might reasonably assume this means that I can tell you what shape their horns are supposed to be, which cultures refer to them in their folklore, what magical powers they’re alleged to have, and so on. This is perfectly reasonable—and is consistent with the idea that, in another sense, I can’t possibly know anything about unicorns, because they’re not a possible object of knowledge: they don’t exist.

The very idea that there’s a legitimate subject called theology could be said to trade on a related conflation (or confusion) of two different things the term could mean. The etymology (theos = god + logos = thought/study/reasoning) seems clear enough, but it raises the question: Does doing theology result in knowledge about God—for example: “Ah: we find, after careful investigation, that He’s male, bearded, and eternal, wears an old bed-sheet, and kicked Lucifer out of heaven”? Or does it result only in historical knowledge about what other people have thought they knew about God—for example: “Martin Luther set off the Protestant Reformation in 1517 by disagreeing with the Catholic Church about their alleged power to influence what He does to souls in purgatory.” The second kind of knowledge is unproblematic, or as unproblematic as any kind of historical knowledge can be. But no amount of it shows that the first kind isn’t an illusion. And we do at least have reason to worry that the first kind is an illusion, because it’s unclear (relative to the ordinary standards we insist on, in any other kind of inquiry) what the evidence for that sort of knowledge could possibly be. (See the note about Limbo.)

Similarly, we can ask whether a “futurist” is (a) someone who charges large sums of money to intellectually naïve corporate executives for spouting opinions about the future of human technology and society (including of course opinions about other futurists’ opinions about that future), or (b) someone who actually knows something the rest of us don’t know about that future. As with the other two examples, one might worry that (b) is implausible even in principle. (A good starting point for a discussion of this would be the observation that, as a potential object of knowledge, the future shares an important property with unicorns: it doesn’t exist.)

In all three cases, if knowledge of type (b) really is illusory, then knowledge of type (a) seems a lot less worth paying for.
 

“Not even the extent of your own ignorance”

At his trial for impiety in 399 BCE, Socrates shocked the Athenians by claiming, with apparent arrogance, that he was the wisest man in Athens. It must be true, he insisted: no less an authority than the great Oracle at Delphi had said so to his friend Chaerephon! He was puzzled by the oracle’s judgment too, he said, so he went about questioning many people who claimed to have some special expertise or knowledge (such as Euthyphro: see the note ‘“A recovering fundamentalist”—and what Adam could have learned from Socrates’). At last Socrates grasped that the oracle’s meaning was simply this: everyone else believed they understood matters that in fact they didn’t understand, whereas he, Socrates, knew how poor and limited his knowledge really was. (See also the note above on Fang Lizhi, who might equally have said “Science begins with philosophy, and philosophy begins with doubt.”)

But surely, you might say, in most fields there are reliable experts? Yes, Socrates agrees: if you want a box, go to a carpenter; if you want to get across the sea, trust a ship’s captain. But we love to think we know more than we do. And, even when we do know a subject well, expertise is paradoxical. In studies Socrates would have loved, Canadian psychologist Philip Tetlock and others have shown that in some areas so-called experts are often systematically worse at judging the truth than non-experts. How is that possible? One reason is “overconfidence bias”: amateurs tend to notice when they’re wrong, and accept that they’re wrong, whereas experts have a vested interest in (and are good at) explaining away their past mistakes—and thus persuading even themselves that they were “not really” mistakes.

In his essay “Notes on Nationalism,” George Orwell identifies a closely-related problem, well worth remembering next time you watch television news:

"Political or military commentators, like astrologers, can survive almost any mistake, because their most devoted followers do not look to them for an appraisal of the facts but for the stimulation of nationalistic loyalties."

In short, there are many circumstances in which both “experts” and those who look to them for “enlightenment” can be poor judges of whether what they say is believable.
 

The Schrödinger equation

Classical mechanics offers equations to predict how a wave (in water, air, a piece of string, whatever) will evolve over time. The Schrödinger Wave Equation is a “quantum” version of this: the fundamental equation for predicting how the wave function of an individual sub-atomic particle such as an electron evolves over time, and hence the probabilities of finding it at a given position or with a given momentum.
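For the curious, here it is in its most common one-particle form (a standard textbook statement, nothing special to this story):

\[ i\hbar\,\frac{\partial}{\partial t}\Psi(\mathbf{r},t) = \left[-\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r},t)\right]\Psi(\mathbf{r},t) \]

Here $\Psi$ is the particle’s wave function, $\hbar$ is the reduced Planck constant, $m$ is the particle’s mass, and $V$ is the potential energy of its surroundings.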
 

 A billion gigatons of water

The Earth’s oceans weigh about 1.4 × 10^18 metric tons (1.4 billion billion tons). That’s over 96% of all the Earth’s surface H2O. Groundwater and ice add less than another 2% each; everything else (lakes, rivers, soil moisture, atmospheric moisture) is measured in hundredths or thousandths of one per cent. The scientific consensus has shifted from “it all arrived by comet, or possibly in a plastic jug the size of Texas” to “most of it has been here since the Earth formed.”
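That also squares with the phrase in the heading (my arithmetic, nothing more): a gigaton is $10^{9}$ tons, so

\[ 1.4 \times 10^{18}\ \text{tons} = 1.4 \times 10^{9}\ \text{gigatons}, \]

which is to say roughly a billion gigatons.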

There’s a nice illustration at water.usgs.gov: the blue sphere of water perched over central North America may seem implausibly small—but then the world’s oceans are only a couple of miles deep, on average, and that sphere is 860 miles in diameter.
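And the size of that sphere is easy to check with nothing more than the volume formula (a back-of-the-envelope calculation, so treat the numbers as rough): 860 miles is about 1,380 km, so

\[ V = \tfrac{4}{3}\pi r^{3} \approx \tfrac{4}{3}\pi\,(690\ \text{km})^{3} \approx 1.4 \times 10^{9}\ \text{km}^{3}, \]

and since a cubic kilometer of water weighs about a billion tons, that is the same 1.4 × 10^18 tons as above.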
 

The Slipher Space Telescope

As a big fat hint to NASA, I’ve launched this multi-billion dollar fictional planet-hunter in honor of Vesto Slipher, one of the greatest and most inadequately recognized American astronomers. Along with many other achievements, in 1912 he established for the first time the very high relative velocity of the Andromeda Galaxy (then known as the Andromeda “nebula”), and thus, along with Henrietta Swan Leavitt and others, paved the way for Edwin Hubble’s momentous discovery that the universe is expanding. Hubble was a great man, but he doesn’t deserve to be incorrectly credited with both achievements.

(A note for the nerdy: Slipher showed that Andromeda is moving at about 300 km/s towards us. The enormously high velocity was puzzling, and encouraged a general survey of Doppler shift in “nebular” light. Only after much more data had been collected did it become clear that Andromeda is a special case of gravitational attraction within the Local Group, and that in general the galaxies are flying apart.)
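(The measurement itself comes from the Doppler shift of spectral lines. At these speeds the non-relativistic approximation is fine:

\[ \frac{\Delta\lambda}{\lambda} \approx \frac{v}{c} \approx \frac{300\ \text{km/s}}{300{,}000\ \text{km/s}} = 0.001, \]

so Andromeda’s spectral lines arrive shifted toward the blue by about a tenth of a percent.)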
 

Zeta Langley S-8A, and Goldilocks

For how to name an exoplanet—I know you’ve been dying to find out—see the website of the International Astronomical Union. The conventions are on the messy side, but Zeta Langley S-8A can be taken to mean “Slipher discovery 8A, orbiting Zeta Langley,” where “Zeta Langley” means the sixth brightest star, as seen from Earth, in the Langley star cluster.

Like the planet, the Langley star cluster is fictional. Sci-fi nuts may detect here a whisper of a reference to HAL's instructor, as mentioned in the film version of 2001: A Space Odyssey. (“My instructor was Mr. Langley, and he taught me to sing a song. If you’d like to hear it, I can sing it for you.” Oooh, oooh, I love that scene.)

The “Goldilocks Zone” (not too hot, not too cold, just right) is the orbital region around a given star in which life as we know it is possible—roughly, the zone within which liquid surface water is possible. Or that’s the short version. If you look up “circumstellar habitable zone,” you’ll find all sorts of stuff explaining why it’s far more complicated than that—and then you’ll be able to amaze your friends by going on at length about topics like tidal heating, nomad planets, and carbon chauvinism.
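If you just want the crudest possible estimate, and are happy to ignore albedo, greenhouse effects, and everything else that makes the real literature complicated, the zone simply scales with the square root of the star’s brightness, because the starlight falling on a planet thins out with the square of its distance:

\[ d_{\text{habitable}} \approx \sqrt{\frac{L_{*}}{L_{\odot}}}\ \text{AU}, \]

where $L_{*}$ is the star’s luminosity and $L_{\odot}$ is the Sun’s. A star four times as bright as the Sun would have its “just right” orbit at roughly twice the Earth–Sun distance.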
 

“Oxygen-rich atmospheres don’t come from nowhere”

The oxygen you’re breathing as you read this sentence is essentially an accumulation of bacteria farts. Until about 3.5 billion years ago, when early bacteria invented photosynthesis, the Earth had very little atmospheric oxygen. By 2.8 billion years ago, cyanobacteria were letting rip with huge quantities of it (in a process they would later spread far and wide, when they learned to live symbiotically inside green plants). Oxygen is so fantastically corrosive that by 2.5 billion years ago there’s rust in rocks.
 

Kelvin’s basement

William Thomson, 1st Baron Kelvin, got “degrees Kelvin” named after him not because he thought of the idea of absolute zero but because he was the first to accurately calculate its value. But Morag is wrong on the detail: apparently, if you really want to try cryogenic self-storage, the optimal condition for your experiment in time-travel is a significantly warmer nitrogen slush. An alternative method, which gets around the need to power a freezer reliably for the next century, involves removing all your brain’s fluids and replacing them with a chemical fixative; after that, you can store yourself much more cheaply, at room temperature, in a glass jar in your friend’s attic. There are two things to note about this. First, if you think this has any hope of working, I have a nice pre-owned bridge to sell you. Second, at the risk of getting technical, euuuw.
 

Horyu-ji temple

This is one of the oldest and most famous of all Buddhist places of worship, and allegedly the oldest wooden building in the world. It dates from the 6th-7th centuries, so is about 1,300 years old—though it has been disassembled and reconstructed more than once during that time.
 

Cinnamon rolls

I’m thinking of Schnecken (‘Snails’), which are one of my favorite things to make for a special breakfast. I use a version of a recipe by Seattle chef Tom Douglas, in Tom Douglas’s Seattle Kitchen. They’re bad for you, a bit of a pain to put together, and utterly wonderful. Fortunately or unfortunately, one taste will destroy forever your ability to enjoy the frosted insulating foam they’ve been selling you as “cinnamon buns” at your local supermarket.
 

“Same myths in different forms, over and over”

In The Fire Seekers, Bill Calder is struck by the way similar myths emerge in cultures that have had no contact with one another, and in the notes there I mention some interesting cases of other Babel-like myths, or combinations of an “Eden/Tree of Knowledge” myth with a “Babel” myth. While writing Ghosts in the Machine, I read Sabine Kuegler’s memoir about growing up among the Fayu, a tribe in Indonesian West Papua, during the 1980s. Before the Kueglers showed up, the Fayu had had no contact with Western influences such as Christianity, and yet part of their creation myth was the story of Bisa and Beisa. As Kuegler’s Fayu friend Kloru relates it:
     
There once was a large village with many people who all spoke the same language. These people lived in peace. But one day, a great fire came from the sky, and suddenly there were many languages. Each language was only spoken by one man and one woman, who could communicate only with one another and not with anyone else. So they were spread out over the earth. Among them were a man and a woman named Bisa and Beisa. They spoke in the Fayu language. For days they traveled, trying to find a new home. One day they arrived at the edge of the jungle, and it began to rain. The rain wouldn’t stop. Days and weeks it rained and the water kept rising.
      Bisa and Beisa built themselves a canoe and collected many animals that were trying to escape from the water. As they paddled, they kept repeating, “Rain, stop! Thunder, stop! We are scared.”
      But the rain wouldn’t stop. The water rose until it covered all the trees. Everything died in the flood. Everything except for Bisa, Beisa, and the animals in their canoe.
      They had given up all hope when, days later, they suddenly came upon land. Bisa, Beisa and the animals got out of the boat and found themselves on top of a small hill. Before them they saw a cave leading into the earth. They crawled inside its cover, feeling great relief.
     Soon afterward, it stopped raining, and the water disappeared. The animals swarmed into the jungle, but Bisa and Beisa stayed in the cave. They built themselves a home and had children, who themselves had children until they became a great tribe known as the Fayu.


And then there’s the Meakambut, another Papuan tribe. I’d already had the idea for the I’iwa when I came across this Meakambut myth, as summarized by Mark Jenkins in National Geographic:

In the beginning, Api, the Earth spirit, came to this place and found the rivers full of fish and the bush full of pigs, and many tall sago trees, but there were no people. Api thought: This would be a good place for people, so he cracked the cave open. The first people to pull themselves out were the Awim, and then the Imboin and other groups, and finally the Meakambut. They were all naked and could barely squeeze out into the light. Other people were inside, but after the Meakambut came out, Api closed the crack, and the others had to stay behind in darkness.
 
 
Tok Pisin … creole

A pidgin is a shared vocabulary that helps users of different languages communicate. That’s how Tok Pisin (‘talk pidgin’) began in the nineteenth century. But Tok Pisin evolved from a salad of English, German, Dutch, and Malay words, with bits of Malay grammar, into a full-blown language of its own, capable of a full range of expression and with a grammar distinct from any of the parent languages. That’s a creole.

A striking feature of Tok Pisin is that it has a very small underlying vocabulary, and makes up for this with long descriptive expressions. So for instance corridor is ples wokabaut insait long haus (literally: place to walk inside a building), and embassy is haus luluai bilong longwe ples (literally: house of a chief from a distant place).

In a curious irony, the Tok Pisin for Bible is Baibel. But the two English words bible and Babel have nothing in common historically; in a further irony, it’s the second, not the first, that’s connected at its root with religion. Bible comes from the Greek byblos (scroll). The place-name Babel is from the Akkadian bab-ilu (gate of god).

Word nerds who find the simplicity of Tok Pisin intriguing may enjoy looking up (or even learning) Toki Pona, a language invented by Sonja Lang in 2001. In Toki Pona, toki means speech, word, language, talk, conversation (etc. etc.); pona means positive, friendly, good, right (etc., etc.); the entire vocabulary of the language is constructed from just 120 such root words.
 

Josef Kurtz

Some readers will suspect, correctly, that I stole the name from Joseph Conrad’s Heart of Darkness. Given the novel’s central theme—who are the “savages,” really?—it seemed appropriate.
 

“Upper Paleolithic”

These terms identify (in, unfortunately, a pretty inconsistent and confusing way) different periods of human and pre-human tool use. “Paleolithic” means “old stone age”—anything from the very beginnings to about 10,000 years ago. Within that range, “Upper Paleolithic” is most recent—from about 40,000 to 10,000 years ago. Stone tools showing more recent technology than that are either Mesolithic (from about 20,000 to 5,000 years ago) or Neolithic (10,000 to 2,000 years ago). The overlaps are partly due to inconsistency and partly because the relevant technologies developed at different rates in different regions.

Speaking of “the very beginning”: until recently, we thought the earliest stone tools were made about 2.6 million years ago in Tanzania, by Homo habilis. But in 2011-2014 a team at Lake Turkana in Kenya discovered stone tools from around 3.3 million years ago, which is before the entire Homo genus evolved. The difference between 3.3 million and 2.6 million doesn’t sound like much—until you realize it’s three and a half times as long as the entire 0.2 million-year history of Homo sapiens.

Note that chimps and other apes, even if they use stones as tools, seem unable to make sharp implements by modifying stone, which suggests that our ancestors surpassed them in this respect several million years ago. Compare the note “Language: a crazy thing that shouldn’t exist.”
 

Messier 33

French astronomer Charles Messier was a comet-hunter. In the 1750s he began to make a list of annoying objects that were not comets but could easily be mistaken for them; his catalog of “nebulae” ended up listing more than a hundred of the most beautiful objects in the sky. The majority are star clusters (for instance M13), or galaxies (M31, which is Andromeda, and M33). The Crab Nebula, M1, is a supernova remnant. The Orion Nebula, M42, visible with the naked eye in Orion’s sword, is a cloud of gas and dust in which stars are forming.
 

The Plague of Justinian

This was the earliest-recorded instance of (almost certainly) bubonic plague, and one of the worst. It seems to have originated in Egypt in the year 540. In 541-2, it ravaged Constantinople (modern Istanbul), and then much of the rest of Europe. In some areas, half the population died. There was a second wave of plague in 558. Some historians think the Plague of Justinian was critical to the decline of the Byzantine Empire, the rise of Islam, and the onset of the European Dark Ages.
 

Socrates in Eden


In Milton’s Paradise Lost, Adam asks the archangel Raphael some probing questions about the way God has constructed the universe. He casts the questions as inquiries into astronomy: Does the Earth move or stand still? Why are there so many stars, if all they do is decorate the Earth’s sky? Why do six of them (all the known planets, in Milton’s time) wander back and forth among the fixed stars? But astronomy is really a place-holder for other things; it’s Milton’s way of expressing, obliquely, the fact that there are deeper questions begging to be asked, none of which Adam quite dares to voice. How does this whole Creation thing work? Who is the mysterious “God” person, really? Where is Heaven anyway? (As Raphael revealingly admits, God has placed Heaven an immense distance from the Earth partly to ensure His divine privacy.) And you can easily imagine that Adam is itching to ask one more really big one: Run this by me again, Raph. Can I call you Raph? Great. So take it slow, and tell me again: Why is it that I must obey this guy “God”?

(Note, by the by, that a strange object looms suggestively in the background of this conversation. In fact, it has just starred in Eve’s description to Adam of the world’s first recorded nightmare: the enticing, alluring, and puzzlingly forbidden Tree of Knowledge.)

Raphael’s response to Adam’s questions seems indulgent, at first; or, given what’s coming, we might say that his tone is greasily flattering. Naturally you are inquisitive, he says, for your Divine origin means you’ve been touched with the intellectual gifts of God Himself! But Raphael quickly turns waspish, and our “first father” ends up getting a sharp slap on the wrist for asking the wrong questions:

Sollicit not thy thoughts with matters hid.
Leave them to God above, Him serve and feare …
… Heav’n is for thee too high
To know what passes there; be lowlie wise.

“Be lowlie wise”: ouch. It carries both the condescending, almost contemptuous meaning “Stay focused on the low, ordinary things that suit your low, ordinary nature” and also a more threatening one: “If you know what’s good for you, stop asking questions about what goes on in the executive suite.”

Unfortunately, Milton’s Adam is all too willing to play his lowlie part: after hearing God’s messenger put on a display of spectacularly bad reasoning about why Adam should be “lowlie wise,” he goes all weak at the knees, says he no longer wants to know a thing, and claims to be miraculously “cleerd of doubt.” He’s grateful, even: total obedience will mean not having “perplexing thoughts” that might “interrupt the sweet of life.” He even says, in a toe-curling display of meekness and surrender, “How fully hast thou satisfied me.”

It’s an embarrassing moment for the human race, and you might wonder how the exchange would have gone if, instead of Adam, Raphael had confronted someone with a better brain and a stiffer spine.

Socrates, for instance?

Wonder no more! Plato, in his dialogue Euthyphro, imagines Socrates having just this sort of discussion—though the pompous character with a thing about sticking to the rules is the eponymous Athenian passerby, not an archangel.

As Socrates points out to Euthyphro, during a discussion about justice, many people think they should do X and not Y just because God approves of X and disapproves of Y. In other words, to know right and wrong, all we need to know is what God commands. That’s the position Raphael recommends to Adam.

There’s a large problem with this, which Adam really could have raised. Wait: aren’t we missing a step? Why should I be confident that my understanding of what God approves is what He in fact approves? But let’s leave that aside for a minute. In what has become known as Euthyphro’s Dilemma, Socrates argues that there’s a deeper problem lurking here, even after we allow ourselves the staggeringly arrogant (and, alas, routine) assumption that we know what God wants. For, Socrates says, to say something is good just because God approves of it, and for no other reason, is to say that divine morality is arbitrary.

“So what?”, you might reply: “God is God! He can be as arbitrary as He likes! He made the universe. So He gets to make up the rules!”

But, Socrates says, that can’t be what you really think. If it were, it would imply that whenever you say “God is good,” or “God’s judgments are good,” or “God is the ultimate good” (which, it seems, everyone does want to do), those judgments must be mistaken. Think about it again: if God’s judgments are arbitrary, then He just is what He is, and to insist in addition that the way God is “is good,” is to say “We judge/believe/accept that God is good.” But that implies what we just denied, which is that we can appeal to a standard for what’s good that’s independent of what God says about it.

Euthyphro’s Dilemma leads Socrates to a startling conclusion: even if people think they think “X is good just because God approves it,” what they must actually think is something radically different, namely, “If God approves of X, He does so because He judges that X is good.” But to say this is to say that God, just like us, appeals to moral reasoning about what’s good. And that means goodness is something that must exist independently of both our judgment and His.

You can see versions of this argument in many skeptical philosophers, such as John Stuart Mill, but it’s interesting to hear a Christian theologian, Don Cupitt, also grasp the Socratic nettle with both hands. In his book Taking Leave of God, which defends what he calls “non-realist Christianity” or “Christian Buddhism,” he says:

Moral principles prescribed to me by a very powerful and authoritative being who backs them up with threat and promise are not truly moral, for my moral principles—if they are to be truly moral—must be freely acknowledged by me as intrinsically authoritative and freely adopted as my own.

With this re-thinking of moral justification, Socrates opened the door to a powerfully subversive chain of ideas. Part of my exercise of free will is the freedom to base my actions on my own reasoning, including reasoning about what’s right and wrong. But that’s meaningless unless I can decide whether someone else’s alleged justification for controlling or guiding my actions is persuasive or not. And how can I possibly decide whether I should find God’s reasoning persuasive (for example, about staying away from the irresistibly yummy-looking fruit on that Tree of Knowledge) if Wing-Boy is cracking his knuckles and telling me it’s naughty and rude and inappropriate to even ask what God’s reasons are?

This is important stuff, because arguably our failure to understand Socrates’s argument—and our willingness to be bullied by Raphael’s—has shaped our entire civilization. The second-century Christian writer Tertullian was trying to mimic the “good,” meekly obedient Adam when he wrote that the Gospels contained all truth and that therefore, for the faithful, “curiosity is no longer necessary.” This infamous quotation is from The Prescription of Heretics, chapter 7. Some Christian commentators say it’s misunderstood, so in fairness a fuller version is worth giving:

Away with those who put forward a Stoic or Platonic or dialectic Christianity. For us, curiosity is no longer necessary after we have known Christ Jesus; nor of search for the Truth after we have known the Gospel. [Nobis curiositate opus non est post Christum Iesum nec inquisitione post euangelium.] When we become believers, we have no desire to believe anything else. The first article of our belief is that there is nothing else we ought to believe.

The first sentence might suggest that you can defend Tertullian by arguing that he’s not so much saying “Thinking is no longer necessary” as “There’s no point going back to Greek authors, specifically, and trying to interpret them, because everything that matters in them is already incorporated into the Gospels.” But I don’t think this is a plausible way to defend Tertullian, for two reasons.

First: if that is what he’s saying, it’s hopelessly wrong. The idea that all Greek ethical thought of any value is incorporated into the Gospels may be traditional, and Christians may have been taught for centuries that they ought to believe it, but nobody ought to believe it, because (a) nothing about being a good Christian depends on believing it, and (b) it’s unmitigated hogwash.

Second: for reasons that the rest of the passage suggests, it really can’t be all Tertullian is saying. He’s very clear here that it’s not just the wisdom of particular pagan Greeks that we no longer need, but rather the very type of inquiry (call it science, or philosophy, or critical thinking) that they invented.


Why don’t we need critical thinking, according to Tertullian? Because the Gospels contain a complete and perfect source of moral truth. And it follows (!?!) that skeptical questions about the origin and veracity of that truth undermine the ability of the faithful to believe it. And therefore (!?!) skeptical questions are dangerous, and should be condemned as heretical.

This kind of reasoning (a form of which, alas, Saint Augustine shared: see his Confessions, chapter 35) is one of history’s great intellectual and moral catastrophes. It infected early Christianity, quite unnecessarily, with the guiding principles common to all fundamentalism. Because of Christianity’s subsequent success, that fundamentalism went on to shape the viciously anti-pagan, anti-pluralist, anti-intellectual attitudes that dominated so much of the late-Roman and post-Roman world. Its results are illustrated in the fate that, over the next fifteen centuries or so, befell the Library of Alexandria, the entire literary civilization of the Maya (ten thousand codices were destroyed in the 1560s by a single individual, the Spanish bishop Diego de Landa, who thought they were the work of the devil—four of them survive), and a thousand pyres on which it was not mere words that were set alight.

Which brings us back to today’s headlines, and to that first large problem, set aside a few paragraphs ago. Fundamentalists think it’s arrogant and dangerous to question the will of God. But they are confused. It’s arrogant and dangerous to believe that you already know the will of God—and no one ever accuses someone of committing the first error without having already committed the second.
 

“Better rockets?”

Konstantin Tsiolkovsky, born in 1857, was two or three generations ahead of his time: around 1900, he published a large number of papers covering such arcane matters as the minimum velocity needed to reach Earth orbit, how to design multi-stage rockets and space stations, the use of solid and liquid fuels, and what would be needed for planetary exploration. Unfortunately—so much for all the propaganda we keep hearing about how fast our technology is advancing!—not much has changed in the field of rocketry since Tsiolkovsky invented it. And it’s a humbling problem: relative to the scale of interstellar space, never mind intergalactic space, rockets are many thousands of times too expensive, inefficient, and slow to be much use. Exploring the stars is going to require a technology as different from rockets as rockets are from feet.
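His most famous result, the one that still frames the whole problem, is the rocket equation (given here in its standard textbook form):

\[ \Delta v = v_{e}\,\ln\frac{m_{0}}{m_{1}}, \]

where $v_{e}$ is the exhaust velocity and $m_{0}/m_{1}$ is the ratio of the rocket’s fueled mass to its empty mass. Because the payoff grows only logarithmically with the fuel you carry, chemical rockets with exhaust velocities of a few kilometers per second have no realistic path to the thousands of kilometers per second that interstellar distances demand.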
 

The Bretz Erratic

The Bretz Erratic doesn’t exist, but it seemed like a nice gesture to invent it. Harlen Bretz taught high school biology in Seattle, and later worked at the University of Washington and the University of Chicago. He was the brilliant, visionary, stubborn geologist who endured decades of ridicule from his peers for insisting that the amazing geology and topography of Eastern Washington’s “channeled scablands” could be explained only by cataclysmic flooding. In an earlier era, no doubt he would have been praised for finding evidence of Noah’s flood; instead the experts said his ideas were preposterous—where could all that water have come from?

The answer wasn’t the wrath of God, but the two-thousand-foot-deep glacial Lake Missoula. Formed repeatedly by giant ice-dams during a period roughly fifteen thousand years ago, it emptied every time the ice-dams failed. These “Missoula Floods” happened about twenty times, at intervals of about forty years, sending ice-jammed floodwaters, hundreds of feet deep, racing west and south towards the Columbia River gorge. Boulders embedded in remnants of the ice dams were carried hundreds of miles from the other side of the Bitterroot Range in present-day Idaho/Montana.

These big glacial “erratics” are dramatic exclamation points in an otherwise empty Eastern Washington landscape. But the largest one in Washington State, and possibly the world, is the Lake Stevens Erratic, which was discovered (or recognized for what it is) only recently, hiding in a scrap of suburban woodland half an hour north of my home in Seattle.
 

Brunhilde

Partridge has named his VW Kombi after Brynhildr, the Valkyrie or warrior goddess of Icelandic legend. She features in various adventures, most famously the Völsunga Saga, in which she angers the god Odin. Asked to decide a contest between two kings, she picks the “wrong” man; Odin punishes her by excluding her from Valhalla and making her live as a mortal.
 

Psychiatry and “’Unscientific’ is a bully word … evidence-free drivel”

My character Professor Partridge could be thinking of the behaviorist John B. Watson. His immensely influential writings, from 1913 on, persuaded many psychologists and self-styled child development “experts” to be concerned about the alleged danger of too much parental affection. This must have seemed like an interesting hunch, but after so many decades the shocking truth is still worth emphasizing. First, Watson and his school—while hypocritically vocal about the need for psychology to be rigorously scientific and therefore evidence-driven—had no evidence whatever for a causal connection between affectionate parenting and any particular psychological harm. Second, and more significantly, they seem to have been incapable of even entertaining the intrinsically more plausible “mirror” hypothesis: that if parents were to take such ideas seriously, and change their parenting style as a result of such advice, this itself might cause children terrible psychological harm.

Tragically, Watson produced his own body of evidence, treating his own children appallingly, by any normal humane standard. One committed suicide, one repeatedly tried to, and the other two seem to have been consistently unhappy.

Sigmund Freud’s follower and rival, Carl Jung, managed to arrive at a similar and similarly baseless and dangerous “scientific theory of parenting” from a different direction. He encouraged parents to worry that close affection would create what Freud had called an “Oedipal attachment” of child to mother. It has been suggested that Jung’s advice was partly responsible for the terrible upbringing of Michael Ventris, the ultimate decipherer of the Linear B script, since both his parents were “psychoanalyzed” by Jung and seem to have become even colder and more distant from their son in response to their Swiss guru’s “expert” advice.

There are at least three distinct problems with that advice. First: many people have concluded that there’s simply “no there there”; on this view, “Oedipal attachment” is like the “black bile” referred to in medieval medical texts, in that it simply doesn’t exist. Second: even if it does exist, the people who believe in it have been unable to agree on whether it’s a natural and inevitable stage of childhood development, or a dangerous perversion of that development. Third: even if it exists, and is a dangerous perversion of normal development, there is (at the risk of sounding repetitive) no evidence of any specific causal connections that would justify any advice aimed at improving the situation through a change in parenting style.

If you’re in the mood for a big dose of irony, at this point it’s worth looking up “refrigerator mother theory,” a campaign started in the late 1940s by Leo Kanner and championed endlessly by Bruno Bettelheim, in which mothers of autistic children were assured that their children’s problems had all been caused by their parenting not being warm enough. This turned out to be another case of bad science—lots of “expert” pronouncement, little or no underlying evidence, a complete unwillingness to take alternative hypotheses seriously, decades of largely unquestioned influence, and a vast sea of unnecessary suffering.

For just one more example of psychotherapeutic overreach— allegedly expert, allegedly scientific, and with devastating effects on real families—see The Myth of Repressed Memory by Elizabeth Loftus, or The Memory Wars by Frederick Crews. The “memory wars debate” of the 1990s illustrated a lamentably common theme in the history of psychiatry: abject failure to distinguish between potentially illuminating conjectures (ideas that we have essentially no evidence for, yet, but that it might one day be possible to confirm or refute), and well-established theories (general explanations that we have reason to believe are probably true, because they’ve survived rigorous testing against all plausible rivals in a context of related theories and bodies of evidence).

The problem with failing to make this distinction is profound. Suppose you inject your patients with a drug, after representing it to them as an established method of treatment when in reality it’s a dangerous experiment. This is about the grossest possible violation of medical ethics, short of setting out to murder people. In effect, though, this is what Watson, Jung, Bettelheim, and their many followers were doing to their thousands of victims, all under the phony guise of “my ideas have a scientific basis and yours don’t.”
 

Supernova

The ultimate stellar show ought to occur within our galaxy once every few decades, but not one has been observed since Tycho’s Star in 1572 and Kepler’s Star in 1604; both of these just barely predate the invention of the telescope. Still, if Antares blows, you won’t need a telescope: for a few days it will outshine the rest of the Milky Way, and will be visible as a bright dot even during daylight.
 

“Bullshit … a philosopher who wrote a whole book about it”

It’s true. Harry Frankfurt’s On Bullshit is a fascinating analysis of what makes liars different from bullshitters. In brief: liars care about steering people away from the truth; bullshitters don’t care one way or the other about truth, but only about using cheap rhetoric to sell either themselves or their stuff. So bullshit isn’t the opposite of the truth, but a kind of gilded truth that’s not honest.

Nearly the entire vocabulary of marketing and advertising consists of bullshit in this sense—think of expressions like all-new, all-natural, farm-fresh, hand-crafted, revolutionary, exclusive, executive, select, luxury, gourmet, and artisanal. Only the most gullible consumer literally believes what these words offer to imply, but we’re all happy to engage in a sort of conspiracy of pretending to believe what they imply, because we feel better about spending the money if we’re being bullshitted. You could even say that being bullshitted is the service we’re paying for. Do you really want them to tell you that your “revolutionary” new phone is—as, I’m sorry to say, it certainly is—pretty much the same as the last model? Or that your “rustic Italian loaf” was baked—as it probably was—from Canadian ingredients in batches of a hundred thousand by Korean robots in New Jersey? Of course not. You’d rather pay for the bullshit. That’s why there’s so much of it.
 

Right hemisphere, left hemisphere, and “internal struggles”

The best book on the split brain is Iain McGilchrist’s spookily, evocatively titled The Master and His Emissary. Saint Augustine captures the oddness of our inner division in a famous line from the Confessions: “When the mind commands the body, it is at once obeyed, but when the mind commands itself, it is at once resisted.”
 

Teosinte

Modern corn (maize) shows up as a complete surprise in the archaeological record about nine thousand years ago, as if thrown out of the car window by passing aliens. Where did this bizarre-looking plant come from? In the 1930s, working at Cornell University, George Beadle worked out that it was a domesticated version of teosinte, a grass from the Balsas River in southern Mexico—and it shows up as a surprise because the work of domestication took almost no time at all. Look up a picture of teosinte, and be suitably amazed that its genome is almost identical to that of the fat, juicy, bright yellow botanical freak you just covered in salt and butter.

As the chimp never said to the human, “Isn’t it amazing what a big difference small genetic changes can make?”
 

Breath, nostrils, and the creation of Adam

“And the Lord God formed man of the dust of the ground, and breathed into his nostrils the breath of life; and man became a living soul” (Genesis 2:7).
 

God’s monster and Mary Shelley

Such a story—if I’d made it up, you wouldn’t believe it. Hang on to your hat.

In the summer of 1816, the rock-star-famous poet Lord Byron was living with his servants and personal physician at Villa Diodati, a grand rented house on Lake Geneva in Switzerland. Six months earlier, his wife Annabella had given birth to a daughter, and then scandalized England by separating from her husband amid accusations of physical and mental abuse, homosexuality, and incest. (It was probably all true. The last bit was almost certainly true: Byron seems to have been having an affair with his half-sister Augusta Leigh, and may have been the father of one of her children.) The publicity was too much even for the flamboyant Lord B, who fled the country in April and never saw mother or baby (or England) again.

Mary Shelley was still Mary Godwin, and still just eighteen years old, when she too fled abroad with her lover, the poet Percy Shelley, who had abandoned his wife Harriet and their two children. (He already had two children with Mary. Meanwhile Harriet, back in England, was pregnant with their third, and his—probably, but see below—fifth.)

Mary and Percy went to stay in a house near Byron’s. Just to keep things nice and complicated, they were traveling with Mary’s stepsister, Claire Clairmont, who had been another of Byron’s lovers in England—and who, as it turned out, was already pregnant with another of his children. She insisted on going to Switzerland with Godwin and Shelley because she wanted to resume her relationship with Byron; he (initially—maybe for about half an hour) insisted he didn’t want anything more to do with her.

Lady Caroline Lamb, yet another Byron lover from a few years before, had famously described the poet as “mad, bad, and dangerous to know”; this (and his absolute cynicism) comes out especially in the relationship with Clairmont. Of being with her again in Geneva he later wrote:

I never loved her nor pretended to love her, but a man is a man, and if a girl of eighteen comes prancing to you at all hours of the night there is but one way. The suite of all this is that she was with child, and returned to England to assist in peopling that desolate island ....

Claire’s daughter Allegra was indeed born in England, but bizarrely enough she was taken by the Shelleys back to Byron, who was by now in Italy. He quickly and rather predictably lost interest in her, and placed her in a convent school, where she died of typhus in 1822. Claire, not unreasonably, more or less accused Byron of murdering her daughter. In a recently-discovered memoir, written when she was an old woman, she describes both Byron and Shelley (with whom—take a deep breath—she may also have had a child) as “monsters.”

But back to 1816. The weather that summer was freakishly cold and gloomy, for reasons the party could not have known—see below. They retreated inside to the fireplace, where they read German ghost stories and Byron suggested that they amuse themselves by writing some of their own. Mary’s story became one of the most influential books of the century and perhaps of all time: Frankenstein; or The Modern Prometheus.

Shelley got her subtitle (and the idea of animating dead tissue with electricity) from the philosopher Immanuel Kant’s apt description of American genius Benjamin Franklin, in the wake of his experiments with lightning. Both Kant and Shelley were referring to the myth about the Greek Titan who, taking pity on cold and shivering mankind, incurs the wrath of Zeus by bringing celestial fire down to Earth. But she was equally aware of the parallels between her story and the Christian “divine breath” story, as told in Genesis. Victor Frankenstein’s “creature” in the story actually finds and reads a copy of Milton’s Paradise Lost (lamenting that his fate is even worse than Satan’s), and for an epigraph Shelley chose these heart-breaking, plaintive, faintly accusing lines, addressed by Milton’s Adam to his Creator:

Did I request thee, Maker, from my clay
To mould me Man? Did I solicit thee
From darkness to promote me?

This is the question every child asks, or thinks of asking, when a parent resorts to that phony line “You owe us everything!” In modern English: “I didn’t ask to be born. So whose interests were you really serving? Mine, or your own? And if your own, why do I owe you anything?”

For more on Milton’s Adam, and the questions he raises (and then meekly drops) about what we should believe, see the note on Socrates and religious fundamentalism. But let’s stick with that gloomy summer weather—and how’s this for a weird and wonderful connection? The atmospheric conditions that prompted the “ghost story party,” and thus Frankenstein itself, were caused by the April 1815 eruption of Mount Tambora in the Dutch East Indies (now Indonesia). It was by far the largest eruption in modern history, leaving a crater four miles wide and causing years of global climate disruption, crop failure, and famine. 1816 became known to New Englanders as “the year without a summer”: that June, there were blizzards in upstate New York. (See “Some Dates” for more detail.)

As Kit’s remarks suggest, the science fiction riffs on the Frankenstein idea are innumerable. Two of the best are Arthur C. Clarke and Stanley Kubrick’s 2001: A Space Odyssey and Ridley Scott’s Blade Runner (based on Philip K. Dick’s Do Androids Dream of Electric Sheep?). Nearly all the writers who have followed Shelley hint at a question she might have expressed this way: “Does the creature move and speak only, or does it have a soul?” In modern terms: “Does it just behave like us? Imitate us? Or is it truly conscious?” (For why that distinction is a very big deal, see also the note on Turing. More on this in The Babel Trilogy, Book Three: Infinity’s Illusion.)

By the way, Frankenstein’s young author was the daughter of radical philosophers William Godwin and Mary Wollstonecraft. After her marriage, she always styled herself Mary Wollstonecraft Shelley, in honor of her remarkable mother, whose own epoch-making book was A Vindication of the Rights of Woman (1792).

While we’re on the subject of women a century or more ahead of their time, note that the daughter Byron had left behind with his wife in England grew up to be the brilliant mathematician and the “world’s first computer programmer,” Ada King, Countess of Lovelace (or more popularly, Ada Lovelace). Ada supposedly read her father’s work and wasn’t impressed, vowing that she would become a better mathematician than he had been a poet. Her achievements were great indeed, but they might easily have disappeared from view if they hadn’t become an inspiration to another early programmer, who appreciated their depth and originality—Alan Turing.

Finally, no account of Frankenstein’s origins in that amazing summer would be complete without mentioning the fact that the same group, in the same writing session, also invented the modern “vampire”—and (oh, it’s almost too good to be true!) this first vampire was an angry caricature of Lord Byron himself. Byron’s “friend” and personal physician at Lake Geneva, John Polidori, had come to hate his employer’s success. His own contribution to their “ghost story” exercise was The Vampyre. The main character, “Lord Ruthven,” is a pale, mysterious London aristocrat with an irresistibly seductive voice; he’s bad news, especially for women, and is clearly meant to be Byron.

In a further twist, which horrified and enraged Polidori, a publisher got hold of the manuscript of The Vampyre and published it as a new work by Byron.

Mary and Percy were married back in England in December. Harriet, his first wife, had killed herself weeks earlier.

“The gray outline of the Institute ...”

Readers familiar with the University of Washington campus will infer that I had to tear down both Cunningham Hall and Parrington Hall before I could build ISOC. Sorry.
 
Geist, atman

Geist is German for spirit—it’s cognate with our ghost, from Old English gast, spirit or breath. The Sanskrit words maha (great) and atman (soul, spirit, or consciousness) are where Mohandas “Mahatma” Gandhi got his nickname.
 

Darwin’s Origin and changing the question

1859 was the year Darwin published On the Origin of Species. (There were five more editions in his lifetime, and for the last of them, in 1872, he dropped the first word of the title. Stylistically, it’s a smart move: research shows that books do exhibit an extremely strong tendency to be on what they’re on, and this makes the On redundant.) Here’s how Julian Jaynes glosses that epoch-making event in his own book about origins, The Origin of Consciousness in the Breakdown of the Bicameral Mind:

Now originally, this search into the nature of consciousness was known as the mind-body problem, heavy with its ponderous philosophical solutions. But since the theory of evolution, it has bared itself into a more scientific question. It has become the problem of the origin of mind, or, more specifically, the origin of consciousness in evolution. Where can this subjective experience which we introspect upon, this constant companion of hosts of associations, hopes, fears, affections, knowledges, colors, smells, thrills, tickles, pleasures, distresses, and desires—where and how in evolution could all this wonderful tapestry of inner experience have evolved? How can we derive this inwardness out of mere matter? And if so, when?

As the third book of this trilogy will indicate, I think this is partly right and partly wrong. On the one hand, Darwin and evolution make it much harder to see consciousness as all-or-nothing, and we no longer do: we take it for granted that many other organisms are conscious in some sense, even if not quite ours. (Dogs and chimps experience hunger and pain, probably also loneliness and anxiety, possibly also joy and grief; they probably don’t worry that their children are wasting time, or that God disapproves of them, or that others may say mean things about them after they’re dead.) On the other hand, it’s misleading to imply that the origin of consciousness is now a purely scientific question. Whether that will turn out to be true depends on what the answer turns out to be, and some philosophers still argue that there’s a problem with the very idea that consciousness could be explained by any new scientific finding. When I eat a potato chip, taste-receptors on my tongue detect sodium ions, and send signals to the brain via specialized neurons, et cetera et cetera. But the philosopher Gottfried Leibniz had the measure of this three hundred years ago: you can elaborate the physical story as much as you like, get as fine-grained as you like with your description of the mechanism, and still not have an answer to the most basic question: Where’s the saltiness?
 

Turing among the machines

Alan Turing was born in England in 1912. In the late 1930s, while spending two years in the United States at Princeton, he started to produce original work on logic, the nature of computation, and the concept of an algorithm. He famously spent the Second World War helping to crack German military communications by applying mathematical logic (and some innovative mechanical tinkering) to cryptography in Hut 8, the nerve center of the Government Code and Cypher School at Bletchley Park. Perhaps his most influential work was the paper “Computing Machinery and Intelligence,” published in 1950, which essentially created the field of Artificial Intelligence. It famously begins, “I propose to consider the question, ‘Can machines think?’”

In 1952, Turing was arrested for homosexual acts, a criminal offense in the UK at the time. Having been stripped of his security clearance, he was forced to choose between prison and an estrogen treatment “cure”—chemical castration, essentially. He chose to take the drugs, because it would have been impossible to continue his work in prison.

Two years later, he died somewhat mysteriously of cyanide poisoning. Many think this was suicide, brought on by depression over the hormone treatment, but this seems unlikely. He had already completed the treatment some time before his death, was no longer taking estrogen, and was actively engaged in computational work (and experiments involving chemicals that included cyanide).

Since Turing has become something of a cultural icon, it’s perhaps unfashionable to say that he has been over-sold as the “lonely genius of Bletchley.” But many brilliant people worked there—and, contrary to the “cold autistic savant” myth so heavily underlined by the 2014 film The Imitation Game, he seems to have been an eccentric but willing (and warmly humorous) collaborator in a giant team effort.

It’s even more unfashionable to say that his published ideas on computing and intelligence are anything less than brilliant, but there it is: “Computing Machinery and Intelligence” is a clunky piece of work, and surprisingly vague (one is tempted to say confused) on what the “Imitation Game” or “Turing test” is, how it should be conducted, or what it might be taken to show. (You can get something of the flavor by comparing that famous first sentence, “I propose to consider the question ‘Can machines think?’,” with a less famous one that follows shortly after: “The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion.”)

Fiction tends to make the confusion worse: writers and film-makers have often been more thrilled by the sound of Turing’s idea than by stopping to work out what it really is. So characters in movies and novels tend to throw around the term “Turing test” as if it’s a special way of proving that something is conscious—or (a different muddle) as if it’s a special way of deciding whether we ought to treat something as if it’s conscious.

To see what the Turing test is really about, and what its limitations are, it’s useful to start by clarifying which of two scenarios we’re talking about. Are we (as Turing imagined) communicating by text with something that might be a human, but might be a mere machine that’s imitating a human? Or (as per so much science fiction) are we sitting on the couch with a “person” who might be a human, but might be a robot/replicant/cyborg/android that’s imitating a human? Turing himself raises the “android” version of the story, only to dismiss it as a distraction (“We do not wish to penalize the machine for its inability to shine in beauty competitions”). However, given that the intervening decades have given us so much practice in at least imagining the “android” scenario, I’ll assume that’s what we’re talking about. In the end, it doesn’t really matter: as Turing recognized, the point is that these two versions both describe a conversation, and they both describe a veil (either the wall or the flesh-that-might-be-silicon) that stands between me and knowing what’s really going on.

With that in mind, let me introduce a more important distinction. What exactly is the question that the Turing test poses? In one version, as already suggested, it’s “Is this a real human being, or not?” Call this version of the test T1. A different version, which we’ll call T2 in honor of a famous cyborg, is designed around a much broader question: “Does this entity have a mind, or not?”

Turing’s paper seems to show pretty clearly that he failed to make this distinction. And the distinction matters critically, because there may be entities that would fail T1 (showing their non-humanity all too obviously) but still turn out to have, on any plausible interpretation, a mind. What if your new friend, who seems ordinary and likeable, suddenly glows purple all over, says “Five out of six of me had a really crummy morning,” and then removes the top of her own skull to massage her glowing, six-lobed brain? At that point, she fails T1—not because she’s a machine, but because she’s Zxborp Vood, the eminently sentient ambassador from Sirius Gamma.

What this shows is that “Imitation Game” is a misleading label for what really interests us. So in what follows I’m going to assume we’re talking about T2: not whether the humanoid entity is convincingly human, but whether he/she/it is convincingly some kind of genuinely conscious intelligence.

Now, here’s the kicker. To say that an entity fails T2 is to say that we know it’s a mere machine—a simulation of a conscious being rather than the real thing. But then, by a simple point of logic that often gets missed: passing T2 means only that we still don’t know, one way or the other.

That last bit is vital, and people routinely get it wrong, so read it again. OK, don’t, but allow me to repeat it in a different way. Failing T2 establishes the absence of consciousness. (“Trickery detected: it’s merely a device designed to fool us, and we’re not fooled!”) But it doesn’t follow that passing T2 establishes consciousness, or even gives us evidence for its probable presence (“trickery ruled out”). Passing T2 only establishes that the question remains open. In the formal language of logic: “A entails B, and A is true” entails B. But “A entails B, and A is false” entails exactly squat about B.
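If it helps to see that point of logic spelled out mechanically, here is a minimal sketch in Python (my own illustration, nothing from the book) that simply brute-forces the two cases, treating “entails” as the material conditional for the purposes of the truth table:

```python
from itertools import product

def entails(a, b):
    # Material implication: "A entails B" is false only when A is true and B is false.
    return (not a) or b

# Case 1: "A entails B" holds and A is true. What values can B take?
print({b for a, b in product([False, True], repeat=2) if entails(a, b) and a})
# {True}  -- B is forced: the "failing T2" direction.

# Case 2: "A entails B" holds and A is false. What values can B take?
print({b for a, b in product([False, True], repeat=2) if entails(a, b) and not a})
# {False, True}  -- B is left wide open: the "passing T2" direction.
```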

With that in mind, suppose it’s the year 2101, and the latest DomestiBots are so convincingly “human” that your grandchildren really have started to think of their new 9000-series DomestiDave as kind and caring. Or happy. Or depressed. Or tormented by a persistent pain in his left shoulder blade.

(As an aside, I’m skeptical of the common assumption that even this will happen. Our computers can already be programmed to do things that everyone in Turing’s day would have counted as impossible for a mere machine—which is to say: our computers might well have passed their T1. Yet we, having built and spent time with such clever machines, and indeed carried them around in our pockets, aren’t even slightly tempted to think of them as conscious. Whence the assumption—present in Turing’s paper, and now virtually universal in fiction, in AI, and in popular culture generally—that our grandchildren will be more gullible about computers than we are?)

But OK, just suppose our grandchildren really do find themselves ascribing emotions or intentions to their machines, and meaning it; suppose the fact that the machines all look young and sexy, and smile when they offer backrubs, really does cross some psychological threshold, so that people can’t help themselves—they ascribe emotions to the Bots, and even entire personalities. Remember, remember, remember: that will be a psychological report about our grandchildren, not about their machines.

The 2015 film Ex Machina makes explicit the point I’ve hinted at here: in the end, Turing’s “veil” (wall, disguise) is irrelevant in either form. Ava is a robot who’s perfectly capable of passing T2. But her smug inventor Nathan already knows that. He wants to find out, instead, whether his rather feeble-minded employee Caleb will fall for her flirty shtick even when he’s allowed to see from the start that she’s not a beautiful woman but “just” a machine. “The challenge,” Nathan says, “is to show you that she’s a robot—and then see if you still feel she has consciousness.”

In a way the filmmakers perhaps didn’t intend, this awkward line of dialogue exposes the problem at the heart of Turing’s idea, and any version of his test. For it’s an interesting technological question whether a “Nathan” will ever be capable of building an “Ava.” And, if he does, it’ll be an important psychological question whether the world’s “Calebs” will “feel she has” (and feel compelled to treat her as if she has) emotions and intentions. But the far deeper and more troubling question is an ethical one, and (ironically, given the film’s relentless nerd-boy sexism) it’s a question about Ava, not Caleb. Never mind what the rather clueless Caleb is emotionally inclined to “feel” about her! Leaving that aside, what does it make sense for us, all things considered, to believe she is? On that distinction just about everything hangs—and that’s why Turing’s attitude in his paper, which could be summed up in the phrase “as good as real should be treated as real,” is a fascinating, plausible and fruitful idea about computational intelligence, but a wholly and disastrously wrong idea when the issue comes to be, say, whether that pain in the left shoulder blade actually hurts.

More on this in The Babel Trilogy, Book Three: Infinity’s Illusion. As my story will ultimately suggest, I believe that in time we will come to think of Turing’s ideas about artificial “thinking machines” and mechanical intelligence as a long blind alley in our understanding of the mind.
 

Cesium-fountain atomic clock

Cesium-fountain clocks can be accurate to within a second over a few hundred million years, and they’re the basis for current national and international time standards. But new designs, using a lattice of ultra-cold strontium atoms, will be accurate to within a second over the whole of the Earth's 4.5-billion-year history. If you put one of these strontium clocks on your floor, and one on your roof, they'll get measurably out of phase. Being nearer the center of the Earth, the first one is in a stronger gravitational field—which is to say, it's in a place where time itself runs more slowly. Your feet are younger than your head.
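For readers who like numbers, here is a rough back-of-the-envelope sketch in Python; the three-meter floor-to-roof height is an assumption I have picked purely for illustration.

```python
# Weak-field gravitational time dilation: two clocks separated by height h
# tick at rates that differ by a fraction of roughly g*h/c^2.
g = 9.81                # m/s^2, surface gravity
h = 3.0                 # m, assumed floor-to-roof height (illustrative)
c = 299_792_458         # m/s, speed of light

fraction = g * h / c**2
year = 365.25 * 24 * 3600
print(f"fractional rate difference: {fraction:.1e}")            # roughly 3e-16
print(f"drift after one year: {fraction * year:.1e} seconds")   # roughly 1e-8 s

# A clock good to one second over 4.5 billion years resolves fractional
# differences of about 7e-18, so a 3e-16 offset is comfortably measurable.
print(f"strontium-clock resolution: {1 / (4.5e9 * year):.0e}")
```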
 

Epigenetics, Hominin, etc.

Genetics is the study of what changes when the genome changes. Epigenetics is the study of inherited changes in the way genes work (or are “expressed”) that don’t depend on changes in the genome. In plain English: you’re probably tall, dark, and irresistibly attractive mainly because of the genes you inherited from your parents. The question is: could you be that way partly because your parents ate well, got a lot of fresh air, and exercised a lot? This idea used to be heresy; now it’s a major area of study. We know that all organisms have genes that switch on and off at different times, and it’s clear that genetically identical twins raised in different environments (think: severe malnutrition) can end up living with a different set of active genes. One process causing this is methylation, in which a methyl molecule attaches itself to a gene and “silences” it, making it unavailable for expression. See my note on Jean-Baptiste Lamarck in The Fire Seekers; for the full fascinating story, check out Matt Ridley’s Nature via Nurture or Nessa Carey’s The Epigenetics Revolution.

If you’re confused by “hominid” and “hominin,” welcome to the club. Hominidae (with a “d”) is the name for the broader family that includes humans, gorillas, chimps, and also the more distantly-related orangutans—the “great apes.” Homininae (with an “n”) is the sub-family that includes just the more closely-related humans, gorillas, and chimps. That might lead you to think “hominin” refers to any member of this second, smaller family. But it doesn’t, because there’s also the even smaller group, the hominini, which is just the humans (members of the 2.8 million-year-old genus Homo)—us, our extinct human ancestors such as Homo heidelbergensis, and our extinct human “cousins,” such as the Neanderthals and Denisovans.

The simple version is this: the Great Apes (including us) are hominids, and anything in the genus Homo (living or extinct, and including us) is a hominin.

Since paleontology is the study of fossils, and anthropology is the study of human activity, paleoanthropology is the study of human (hominin!) fossils. Paleolinguists are, in real life, a rather controversial group trying to use linguistic evidence for “long-range” speculation about the human past; however, I have co-opted the term to describe Natazscha’s rather different interest in the origin of language itself.
 

The FOXP2 “language gene”

FOX (= “forkhead box”) proteins give their name to the genes that code for them, and FOXP2 is a real protein manufactured by what has been described, misleadingly, as “the language gene.” A form of it exists in all vertebrates; it has been around for 300 million years. But things have changed even in the five to eight million years since we split from the chimps: there have been two new mutations since then, and of the 715 amino acids in the chimps’ version, we now share 713.

People are fond of the idea that there’s a gene for blue eyes, for anemia, for Tay-Sachs disease, et cetera, as if we’re made from a neat stack of children’s blocks. In some cases it’s like that. But a condition like having bad impulse control, or good eyesight, involves many different genes. And what really makes it complicated is that (see note on epigenetics) we all carry genes that may or may not get switched on. Even environmental factors, like nutrition and radiation, can switch a gene on or off. And that’s what FOXP2 does: shaped like a box with a pair of antlers, it’s a transcription factor, affecting whether other genes work or not.

The much-studied “KE” family in England is real. About half of them have difficulty understanding sentences where word order is crucial, and show the same tendency to leave off certain initial sounds, for example saying “able” for “table.” A paper published in 2001 identified a “single nucleotide polymorphism” (i.e. a mutation) in FOXP2 as the culprit.

One argument that the Neanderthals had language is based on evidence that they had the same version of FOXP2 as us. I’ve ignored that evidence and gone for something even more exciting.
 

FOXQ3 and a bizarre coincidence

FOXQ3 is pure fiction—I insist! Some knowledgeable readers might not believe me. I had Natazscha discover it in a draft of this chapter that I wrote in early 2015, after reading some of the literature on FOXP2. But it turns out there is a real gene called FOXO3… which I found out about only by chance in early 2016. It gets better: FOXO3 is associated with human longevity, and has been of great interest to the real-world people I’ve poked some fun at as the “Extenders.”
 

Babblers and FOXQ3

As far as I know, there’s no evidence for a genetic mutation to explain giftedness in languages, and most stories about such giftedness are exaggerated. On the one hand, there are cultures where most people can get by in several languages, and being able to get by in four or five is quite common; on the other hand, there are few people anywhere who maintain full mastery of more than about five languages at any one time. Some famous “hyperpolyglots,” such as the Italian Cardinal Giuseppe Mezzofanti, German scholar Emil Krebs, or British adventurer Sir Richard Burton, had a working knowledge of several dozen languages. But even they seem to have managed it mainly by being smart, having excellent memories, and being fanatically driven to succeed at language learning; in other words, no magic bullet. There’s a fascinating tour through the world of the hyperpolyglots (actual ones, not Babblers) in Michael Erard’s Babel No More.
 

Scanner … “this is low resolution, compared with what we can do”

For a sense of how far away this still is, you might take a look at the short video Neuroscience: Crammed with Connections, at https://youtu.be/8YM7-Od9Wr8. My own suspicion is that we’re way, way, way farther from “complete brain emulation” than even this suggests. (See the note on the Bekenstein bound.)
 

Language: “a crazy thing that shouldn’t exist”

Anyone who knows the literature about “Wallace’s Problem,” as it’s sometimes called, will detect here the influence of linguist Derek Bickerton. See Adam’s Tongue, in which he argues that, despite misleading similarities, phenomena such as animal warning cries, songs, and gestures have essentially nothing to do with the abstract features underlying human language.

Paradoxically, humans are good at under-rating the intellectual, social, and emotional sophistication of other animals—especially when doing so makes it easier to eat them, or mistreat them—while being real suckers for the romantic idea that we might one day learn to “talk” to them. Chances are, we never will talk to them, because they’re just too cognitively distant from us.

One aspect of that distance is telling. Much has been made of the fact that elephants and some other species pass the “mirror recognition test.” But nearly all animals, even the most intelligent, fail another superficially easy test. Think how routine it is for humans, even young children, to follow another’s pointing hand, and thus demonstrate their ability to make the inference “Ah, she’s paying attention to something that she wants me to pay attention to.” Human infants start to “get” this when they are as little as nine to fourteen months old. Michael Tomasello, of the Max Planck Institute for Evolutionary Anthropology in Leipzig, has pointed out how striking it is that our closest genetic cousins, the chimpanzees, absolutely never get it. They have many cognitive abilities we once thought they lacked, yet even adult chimps definitively lack this mark of “shared intentionality.” That may explain a further critical difference: aside from the occasional cooperative monkey hunt, non-human primates generally lack the human ability to form groups dedicated to cooperating in pursuit of a common goal.

In The Origin of Consciousness in the Breakdown of the Bicameral Mind, Julian Jaynes makes a broader point that may be rooted in this cognitive difference. “The emotional lives of men and of other animals are indeed marvelously similar. But ... the intellectual life of man, his culture and history and religion and science, is different from anything else we know of in the universe. That is fact. It is as if all life evolved to a certain point, and then in ourselves turned at a right angle and simply exploded in a different direction.”

The big question is why. For at least a partial answer, check out the TED talk “Why Humans Run the World,” (and the book Sapiens) by Yuval Noah Harari. For something more technical, specifically on “Wallace’s Problem” (‘How could language ever have evolved?’), see Derek Bickerton’s More Than Nature Needs.

Oh, but wait: here’s a key point on the other side of the cognitive debate. There is at least one species with highly sophisticated “shared intentionality” that routinely does “get” the pointing gesture, perhaps because of its inherently social nature or perhaps because it has spent thousands of years (possibly tens of thousands of years) co-evolving with us: Canis lupus familiaris, otherwise known as the dog.

For one fascinating possible consequence of that co-evolution, see the note “Neanderthals … went extinct not much later.”
 

 

“The Neanderthals had bigger brains than we do”

It’s true, just. The later Neanderthals—and their Homo sapiens contemporaries, around fifty thousand years ago—were equipped with about 1500 cc. of neuronal oatmeal, on average, whereas we get by on about 1350 to 1400 cc. Again, this is on average: the “normal” ranges for the two species are surprisingly large, and overlap—and arguably the differences vanish completely when you take into account body size and other factors.

The Austrian zoologist and ethologist Konrad Lorenz was very taken with the difference, though, and he speculated that “wild” populations like the Neanderthals and our Late Pleistocene ancestors needed bigger brains to survive the hunter-gatherer life, and that the ten thousand year process of turning from hunter-gatherers to farmers to city dwellers had in effect domesticated us, outsourcing our cognitive demands to a larger society—an interesting echo of some themes in this book. Matt Ridley, in Nature via Nurture, says Lorenz’s ideas about domestication fed into his secret support for Nazi junk-science about racial “degeneration,” and argues virtually the opposite case: civilization relaxes selection pressure, allowing more genetic diversity to survive.

Despite the ghastly politics, Lorenz’s work on animals is original and interesting. I remember fondly how my own familiarity with it began. I was sixteen, and browsing innocently in our school library, when a friend crept up behind me and smacked me over the head with a copy of his book On Aggression.
 

“We have complete genomes for …”

Not yet. We have essentially complete genomes for some Paleolithic Homo sapiens, at least one late Neanderthal (a female who died in Croatia approximately 40,000 years ago), and one Denisovan—even though all we have of the entire Denisovan species is a finger bone and a few teeth. (All people of European descent have some Neanderthal DNA; some people of Melanesian, Polynesian, and Australian Aboriginal descent have some Denisovan DNA.) Intriguingly, the Denisovan genome suggests they interbred with H. sapiens, H. neanderthalensis, and yet another, unidentified human species.

We have nothing yet for the Red Deer Cave people, and don’t even know for sure that they’re a separate species. Some experts have suggested that they were the result of interbreeding between Denisovans and H. sapiens, but a recently re-discovered thigh bone, dated to fourteen thousand years old, suggests that the Red Deer Cave people are, like H. floresiensis perhaps, a long-surviving remnant of a more primitive population, probably H. erectus.

The bit about FOXQ3 is pure invention—see the note “FOXQ3 and a bizarre coincidence” above.
 

The Great Leap Forward

The carving of a woman known as the Venus of Hohle Fels (Venus of the Hollow Rock, discovered in 2008) comes from this period. At forty thousand years old, it’s the most ancient representation of a human being currently known.

The name Great Leap Forward is an ironic borrowing from the Chinese Communist Party’s campaign of industrialization and collectivization under Mao Zedong, from 1958 to 1961. That “Great Leap,” which was supposed to make Communist China industrially competitive with the United States, is a plausible candidate for the single greatest exercise in human folly, the single greatest resulting economic disaster, and one of the greatest crimes against humanity, in all history. Forced collectivization, and an absurdly counter-productive attempt to focus on ramping up steel production using backyard furnaces, coincided with several exceptionally poor growing seasons; agriculture was essentially wiped out across large parts of China, and the subsequent famines killed 20 to 45 million people. By some estimates, 1 to 2 million of those deaths were deliberate executions.
 

The Bekenstein bound

Is nature ultimately grainy or smooth? Is it made of indivisible units, or is it smooth and infinitely divisible? This may be the most profound question in science, and it’s been debated since at least the fifth century BCE, when the philosopher Leucippus, and his student Democritus of Abdera, invented the idea that everything was constructed from elementary particles that were atomos—indivisible.

The modern “atom,” first conceived of by John Dalton around 1803, was supposed to be atomos, and then it turned out not to be. But quantum mechanics is a return to Democritus in that it too claims there’s a very, very tiny “smallest possible thing”—an indivisible ultimate unit of space itself, the Planck length.

If that size really is an absolute minimum, then there’s a huge but finite number of ways to arrange the contents of space. Think of a cube-shaped “toy universe” consisting of eight dice, seven of which are red and one of which is yellow; there are only eight possible ways for this universe to be. If quantum mechanics is right about graininess, then any volume of space—including the whole universe, the visible universe, and the much smaller bit of the universe that has the special honor of being the inside of your head—is subject to the same principle.
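If you want to check the dice arithmetic for yourself, here is a tiny sketch in Python (my own illustration): put one yellow die and seven red dice into eight positions in every possible order, and count the distinct arrangements.

```python
from itertools import permutations

# One yellow die and seven identical red dice in eight fixed positions:
# the distinct arrangements are just the possible slots for the yellow die.
arrangements = set(permutations(["yellow"] + ["red"] * 7))
print(len(arrangements))  # 8
```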

Physicist Jacob Bekenstein’s interest was in entropy and black holes. But his quantum-based idea—that any region of space contains a finite amount of information—seems to have implications for the debate over consciousness and the physical basis of the mind. If I can create an exact copy of my brain down to the last Planck unit, then everything about that copy will be identical too: that brain (that I?) will also dislike loud noise, love the taste of figs, remember falling out of a tree on a summer afternoon in England decades ago, and wish it were smart enough to understand quantum mechanics; it will be me, in fact—or, at least, it will be wholly convinced that it’s me.

Notice the very big “if,” way back there at the beginning of that last over-packed sentence. And, if you don’t have anything more pressing to do right now, look up “Boltzmann brain.”

By some estimates, you need to acquire 10^73 (1 followed by 73 zeroes) Planck units before you have enough parking space for a single hydrogen atom. (A Planck unit is to the atom roughly as the atom is to the Solar System.) You’d need 10^101 Planck units for your brain, and 10^185 for the observable universe. Which is weirder—that your brain is only 10^28 times bigger than a hydrogen atom, or that the observable universe is only 10^84 times bigger than your brain?
 

“Proper names … Rosetta Stone”

Jean-Francois Champollion and Thomas Young were the two linguists mainly responsible for using the Rosetta Stone to translate Egyptian hieroglyphics. It was a feat of intellectual detective work that took twenty-three years even after Napoleon’s troops had discovered the stone in the Nile delta in 1799. One of Champollion’s key clues was the discovery by other scholars that so-called cartouches, Egyptian words with a kind of bubble drawn around them, were the names of rulers.
 

Linear B

The scripts known as Linear A and Linear B were discovered on Crete at the beginning of the twentieth century, shortly before the Phaistos Disk was found. They’re closely-related syllabaries—which is to say, they’re physically similar, and, as in much of Egyptian hieroglyphic script, each symbol represents one syllable. But think of English and Finnish, two unrelated languages written in the same Latin alphabet: despite the visual similarity, which makes it clear the scripts are related, Linear A and Linear B encode two entirely unrelated languages.

Linear A is believed to be the written form of a pre-Greek indigenous Cretan language, but any other knowledge of it is lost. It was used only for a relatively short time, between about 1750 BCE and 1450 BCE, and apparently only for routine bureaucratic purposes, which suggests that we see in it the first emergence of a writing system—a way of keeping lists, for instance—in an otherwise pre-literate culture.

Linear B was used for several centuries, beginning just before the end of the Linear A period. Finally cracked in 1952, it should really be called “Mycenaean Linear B,” because what it encodes is not a Cretan language but the earliest written form of Greek.

The Mycenaean Greeks (from the Peloponnese, the big peninsula in southern Greece) brought their language across the Mediterranean to Crete when they invaded the island around 1550 BCE, in the wake of the Thera eruption. Presumably they had no writing of their own at that time, and adapted the recently-invented local system, Linear A, as a vehicle for their own language.

Centuries later, after the fall of Mycenaean influence during the Bronze Age Collapse, Greeks adopted from the Phoenicians the completely different alphabetical writing system (alpha, beta …) we’re familiar with.

The whole story of how the truth about Linear B was recovered, mainly by Alice Kober and Michael Ventris, is told brilliantly by Margalit Fox in The Riddle of the Labyrinth.
 

“Kraist … someone he’s never even met”

I cribbed the Tainu’s response to Kurtz from a real tribe in the Brazilian Amazon, the Pirahã (pronounced PIR-aha). I admit that, being inclined to atheism, I find it an initially plausible and tempting response—but on second thoughts it’s far from persuasive.

Linguist and former missionary Dan Everett has lived among the Pirahã for decades. He says that he lost his Christian faith partly because he found their skepticism about his own beliefs compelling. (“Wait—you’ve been going on and on about Jesus, and you want us to believe all this stuff about him, and yet now you admit that you never even met the guy?!”)

For Everett’s own account, search “Daniel Everett losing religion” on YouTube, or read the last sections of his fascinating and moving book about his fieldwork, Don’t Sleep, There Are Snakes. But, before you walk away with the idea that the Pirahã have a knock-down argument for the silliness of religious belief, consider an obvious response. Wouldn’t their reasons for being skeptical about Everett’s Jesus also force them to be skeptical about my claiming to know that Henry VIII had six wives, or that Abraham Lincoln was once President of the United States? And yet, on any sane view of what knowledge is, I do know these things. As Everett himself attests, the Pirahã have no records of their own past, and thus little sense of history. So maybe they were right not to believe what Everett said about Jesus, or maybe they weren’t, but the mere fact that Everett hadn’t walked with him by the Sea of Galilee seems to be a poor basis for that skepticism.
 

Smoked ancestors

This is (or was) true of the Angu or Anga people, who live in the Morobe Highlands of Papua New Guinea. The practice was frowned upon by missionaries, who attempted to ban it. For some vitriolic commentary on the damage that banning traditional burial rites does to indigenous people, see Norman Lewis, The Missionaries. By the way, Lewis’s revulsion at the influence missionaries were having on indigenous people led to the creation of the organization Survival International.
 

Ghostly ancestors and first contact

The last truly uncontacted New Guineans were Highlanders who encountered early Australian prospectors in the 1930s. They had been cut off from the outside world for millennia, and had invented farming at roughly the same time as it arose in Mesopotamia.

There are remarkable photos and film clips online of what happened when these dark-skinned Melanesians first encountered bizarre white-skinned beings with knee socks and rifles—particularly from Dan Leahy’s first expeditions. The appearance of the Caucasians was terrifying, partly because pale skin fit right into their existing stories: many of them believed that the dead retained their bodies but that their skin turned white. One historic (and disturbing) clip shows Leahy shooting a tethered pig to convince the villagers of his power; see http://aso.gov.au/titles/documentaries/first-contact. In her book about New Guinea, Four Corners, travel writer Kira Salak describes seeing this clip:

It is like the fall of Eden in that moment, recorded for posterity on grainy black and white. When I first saw it, I was riveted. It is actually possible to sit down and watch on a television screen an abbreviated version of foreign encroachment and destruction, a chilling glimpse of what has happened to nearly every native group “discovered” in the world. It is almost as if I were watching the arrival of Judgement Day. Thirty years later in the western half of New Guinea, the Indonesians would already have their foothold and begin the massive deforestation and genocide of the tribes. Thirty years from beginning to the arrival of the end.

For more on Indonesia and “the arrival of the end,” see the note “Giant mines (and a short polemic on the relationship between wealth, government, colonialism, racism, and terrorism).”
 

“Paint their bodies with clay … a tribe near Goroka does that”

Morag’s referring to the “Asaro mudmen.” It’s the masks that are really wild—look them up!
 

“Homer was a blind storyteller …”

There are many legends about Homer, but we know virtually nothing about him for sure. He may or may not have been blind, and may or may not have been illiterate; he probably lived on the coast of Asia Minor (modern Turkey) around 750 BCE, or up to a couple of centuries earlier. But it’s also possible that he’s just a legend, and that the great epics under his name were the work of many people.
 

“Infinitely many consistent theories”

Here’s a naïve view of how science works: we collect piles of factual evidence, without reference to any theory, and at some critical point the pile of facts is big enough to confirm that one theory is true. 

No modern scientist believes this.

Here’s another naïve view: we make up as many theories as possible, essentially out of nothing, and then reject as many as possible by looking for single falsifying pieces of evidence; we then tentatively accept whatever hasn’t been falsified yet.

Quite a lot of actual scientists seem to believe something like this.

In general, neither view is a good model for what really goes on historically, even in the best science. All possible evidence “underdetermines” (leaves open many possibilities about) what it’s rational to believe. So, as philosophers such as W.V.O. Quine have argued, what we really do is juggle all the available facts and all the available theories, all at once, trying to create the most mutually consistent overall “web of belief.” Some theories are at the edge of the web, and can be proved or disproved easily without changing much else; others are near the center, and can’t be changed without major conceptual re-decorating. For example, suppose you believe both of the following theories: (1) I let the cat back in just before I left the house this morning; (2) Hydrogen atoms have only one proton. What’s required for you to conclude that (1) or (2) is mistaken? You can safely drop (1) as soon as you get home and find the cat wailing outside the front door; you can’t drop (2) without rebuilding all of science from the ground up.
 

“Five miles high on peyote”

Peyote, a cactus native to Mexico, contains powerful psychoactive alkaloids including mescaline. The spiritual significance of their effects is nicely captured in the descriptive noun entheogen, which was invented for these substances in the 1970s to replace the earlier hallucinogen and psychedelic. Entheogenic literally means “producing a sense of the divine.”
 

Japanese, Korean, and the ISOC linguists

What’s the hardest major language for a native English speaker to learn? Some popular bets are: Albanian, Amharic, Arabic, Basque, Cantonese, Estonian, Georgian, Hungarian, Icelandic, Japanese, Korean, Navajo, and Tagalog. But there are literally thousands of other languages at least as distant from English as any of these—see the note on Nuxalk.
 

Nuxalk, and “What Raven Did”

The Pacific Northwest is one of the world’s five top hotspots for language extinction, with over two hundred critically endangered languages, including Nuxalk, Kutenai, Klallam, Yakima, Snohomish, Spokane, Quileute, Siletz Dee-ni, and Straits Salish. (The other four major hotspots are Central and South America; Northern Australia, especially the Cape York Peninsula; the American Southwest; and East Siberia.) Nuxalk is spoken only in one village at the mouth of the Bella Coola river in British Columbia. Its strange phonemes, and its distaste for vowels, make it especially difficult for anyone who starts out with a European language. You can find recordings of it being spoken online.

I have altered slightly the version of the Raven story I found at firstvoices.com—a great site for learning about Native American language and culture.
 

Physicists … three big theories … junk

The attempt to reconcile general relativity and quantum mechanics into a coherent theory of “quantum gravity” has already sucked up many whole careers, with no end in sight; string theory, which many thought would elegantly solve the impasse, can’t offer any testable predictions at all, according to its critics, and seems to come in up to 10^500 equally plausible versions, which is about (10^500 − 1) too many. See for example Lee Smolin’s The Trouble with Physics, or look up “Smolin-Susskind debate.”
 

“A thousand years of human habitation”

Scientists disagree about when Polynesians from the southwest first arrived in the Hawaiian Islands; around 1000 CE is probable.
 

Gilgamesh and the lion

To see the original, search “Louvre Gilgamesh.”
 

​“A trapdoor function … even our best computers will gag on it”

Modern cryptography (and therefore the entire Internet, the world banking system, and a whole lot else) depends on trapdoor functions, mathematical operations that are intrinsically harder to compute in one direction than another. Even simple addition is a sort of trapdoor function: you can solve “123 + 789 = x” quicker than “123 + x = 912.” But some functions are (or they become, when the numbers are large) much much much harder one way than the other.

The key example is prime factors. “What’s 13 x 17?” is easy: 221. But “Here’s a number, 221; what are its two prime factors?” is significantly harder. What if, instead of two-digit primes, I start with a pair of two-hundred-digit primes? Your computer can still multiply them together in a flash. But the reverse process, finding the primes with nothing but the result to go on, isn’t merely more difficult: it’s practically impossible, even with all the computing power in the world.
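Here is a small sketch in Python of just how lopsided that trapdoor already is at toy scale; the two primes are modest ones I have chosen for illustration, and with the two-hundred-digit primes used in real cryptography the second step goes from slow to hopeless.

```python
import time

# Two modest primes, chosen purely for illustration; real cryptographic
# primes are hundreds of digits long.
p, q = 15_485_863, 32_452_843

start = time.perf_counter()
n = p * q   # the easy direction: effectively instantaneous
print(n, f"(multiplied in {time.perf_counter() - start:.6f} s)")

def smallest_prime_factor(n):
    # The hard direction, done the naive way: trial division.
    f = 2
    while n % f:
        f += 1
    return f

start = time.perf_counter()
f = smallest_prime_factor(n)
print(f, n // f, f"(factored in {time.perf_counter() - start:.1f} s)")
```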

This fact makes it possible to use systems in which you publish a “public key” that anyone can use to encode a message to you; only you own, and never need to transmit, the “private key” capable of decoding those messages.
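For the flavor of how a public key and a private key hang together, here is a toy RSA-style sketch in Python, with comically small primes of my own choosing (an illustration of the general idea, not of any particular real-world protocol):

```python
# Toy RSA-style key pair. Everything needed to encrypt, (n, e), can be
# published; decrypting requires d, and computing d requires knowing the
# prime factors of n. That is exactly the trapdoor described above.
p, q = 61, 53
n = p * q                    # 3233, part of the public key
phi = (p - 1) * (q - 1)      # 3120, secret: you need p and q to get this
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent (needs Python 3.8+ for this)

message = 65                           # any number smaller than n
ciphertext = pow(message, e, n)        # anyone can do this with (n, e)
recovered = pow(ciphertext, d, n)      # only the holder of d can undo it
print(ciphertext, recovered)           # recovered == 65
```

With primes this small, of course, anyone could recover p and q in a microsecond; the scheme only becomes secure when the factoring step above becomes infeasible.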

The idea of public keys based on trapdoor functions was the biggest advance in cryptography since hiding things under rocks, and goes back to publications by Whitfield Diffie and Martin Hellman in 1976. (A mathematician at Britain’s GCHQ had worked it out slightly earlier, but his work was classified and its implications ignored.)

A rumor going the rounds now (2016) is that the National Security Agency has in effect hacked this kind of “unhackable” Diffie-Hellman encryption—and not by being super-smart, or having super-incredible technology, but by having pretty-serious technology and one modest insight into human frailty and laziness. In brief, Diffie-Hellman probably is beyond even the NSA’s reach if it’s properly implemented. But there appears to be a flaw in the way most such encryption was set up: because it’s easier, nearly every major system uses the same few large primes as the public parameters of the exchange. The NSA realized this, and—so the rumors claim—built the giant specialized computing resources needed to do a one-time precomputation that breaks any exchange based on just those few numbers. It could be true.

In ways I only vaguely understand, the ultimate security of public-key systems is related to the “P versus NP” problem, often described as the most fundamental problem in computer science. Look it up and enjoy the headache.
 

Archimedes

See my note on him in The Fire Seekers. The newest research makes a good case that the Antikythera Mechanism was manufactured in about 205 BCE, which makes it slightly too recent for Archimedes—he had his terminal encounter with a Roman soldier during the sacking of Syracuse in 212 BCE—but it could have been based on his design. (See Christián C. Carman and James Evans, “On the epoch of the Antikythera mechanism and its eclipse predictor,” Archive for History of Exact Sciences, Nov 2014.)
 

Socrates and knowing how ignorant you are

The point was also made by the great Chinese sage Lao-tsu, or Laozi. Speaking of ignorance, I’ve always thought it fascinating that Socrates and Lao had so many things in common, and were near-contemporaries, but had no idea that each other’s entire civilizations existed.
 

The Voynich Manuscript.

The Voynich is easily the oddest and most beautiful of all the great “mystery” texts, in my opinion—take a look at the many pages reproduced online. It is, contrary to my fictional history, still in the Beinecke Library at Yale. “Solutions” to the mystery are legion; good luck.
 

“Let there be light”: what’s involved in creating a universe?

If you don’t know it, listen to the first part of Joseph Haydn’s oratorio Die Schöpfung (The Creation), with the volume turned way, way up. The C-major chord on the word “light” is a real spine-tingler. In the English version, the crucial words are “And God said, ‘Let there be light,’ and there was light”—which suggests to me a chain of three events: He decided that light would be a good idea, He reached for the switch, and the light came on as a result of His action. The German text seems to do a better job of hinting at a less ordinary, more appropriate idea. “Und Gott sprach: ‘Es werde Licht!’ Und es ward Licht”: literally, “And God said it would be light—and that was the light.” On this view, more broadly, God doesn’t cause the universe to exist, because His idea of the universe is the universe. So we exist in the mind of God. (I’m grateful to Henry Newell for introducing me to both the oratorio and this insight about the text, many years ago.)

See the note on Hegel—and, if you’re interested in a grand intellectual detour, look up “the Simulation Argument,” a more recent and perhaps rather creepier take on the idea that we, and the universe, might be nothing more than someone else’s idea.
 

Religion, immortality, and equating consciousness with the soul that survives death

Balakrishnan is giving a very simplified view here, and one that sounds much more like Christianity (or perhaps Islam) than religion in general. The Greeks seem to have had a very ambivalent view of whether or in what sense the dead survive; so does Judaism; Buddhism and Hinduism perhaps even more so, since they think of survival after death in terms of reincarnation—and they think of reincarnation as something to be escaped.
 

Einstein … “the theory has to be right”

Though Einstein’s two great theories were published in 1905 and 1915, it was in 1919 that he became world-famous. That was when Arthur Eddington used a total solar eclipse to show that the sun’s gravity “bends” starlight just as general relativity predicts. Einstein joked that it would have been a pity if the experiment had gone the other way—not because that would have disproved his theory, but because it was “correct anyway.” That’s not arrogance; it expresses the perfectly sound idea that even a new theory needs more than one contrary “fact” to defeat it, especially if it’s profoundly convincing in other ways. Science works through a balance of evidence about what to believe overall, and theories guide what it makes sense to believe just as much as facts do. (See also the note on “Infinitely many consistent theories.”) Daniel Kennefick has a good article on the Eddington experiment—serious and detailed, but very readable—at http://w.astro.berkeley.edu/~kalas/labs/documents/kennefick_phystoday_09.pdf
 

Descartes’s ghost

The great French mathematician and philosopher René Descartes outlined his version of mind-body dualism in The Passions of the Soul, 1649. According to his view (or, arguably, an unfair simplification of it), the body is analogous to a mechanical device—a robot, we might say—controlled or piloted by an immaterial and immortal substance that resides within it. Descartes believed the world was made up of two fundamentally different kinds of stuff, matter and thought. (Or three, really: matter, thought, and God.) Matter is res extensa—‘extended stuff,’ or, literally, stuff that takes up space. Thinking stuff, res cogitans, has no extension in space. But we human beings are uniquely dual: physical objects that think.

The uneasy idea of these two things cohabiting was a popular theme in the seventeenth century. It was put into clever, comical form in Andrew Marvell’s poem A Dialogue Between the Soul and the Body, in which each complains that it is imprisoned by the other. The soul begins:

O who shall, from this dungeon, raise
A soul enslav’d so many ways?
With bolts of bones, that fetter’d stands
In feet, and manacled in hands;
Here blinded with an eye, and there
Deaf with the drumming of an ear;
A soul hung up, as ‘twere, in chains
Of nerves, and arteries, and veins;
Tortur’d, besides each other part,
In a vain head, and double heart.

The big problem for dualism—closely related to Bill Calder’s skepticism about the very idea of “the supernatural”—seems to be this: How can we make sense of the idea that the mind/soul and matter interact? There has never been a good answer to that question, and in The Concept of Mind (published exactly three centuries after The Passions of the Soul, in 1949), English philosopher Gilbert Ryle dismissed “Cartesian dualism” as the “ghost in the machine” theory.

Ryle’s work ushered in an era in which few philosophers took dualism seriously; instead, various forms of physicalism or naturalism or materialism, which might collectively be called “all machine, no ghost” theories, reigned supreme. They still do, despite the fact that purely materialist theories are also beset by deep problems.

One problem is causation: some philosophers have argued that the idea of causation, as in “the hammer caused the damage,” is every bit as mysterious and difficult to make sense of in materialist theories as in dualist ones. Evidence can only ever describe events (like hammers moving and damage appearing), not the spooky (immaterial?) causing that supposedly sits between them. If that’s right, “How can thoughts cause actions, or events cause thoughts?” may be a genuine problem for dualists—but isn’t a special reason for preferring materialism.

But a second problem for materialist theories is more important, at least in the context of this trilogy. If matter is all there is, as Bill Calder and Mayo both think, how can we make sense of the idea that mind or soul or consciousness (the reality of which we experience directly every moment of our waking lives) even exists? The question has driven some philosophers, such as Daniel Dennett, to propose in all seriousness the apparently self-contradictory “eliminative materialist” doctrine that consciousness itself is an illusion. By way of illustrating how deep the trouble is, another philosopher, Galen Strawson, has memorably described this as “the silliest view that anyone has held in the whole history of humanity.”

See also the note on Turing, and on “Our biology is a barrier to our nature, because matter is evolving into mind.”

Of course, there may be another option. Patience, grasshopper.

Haole … buried him at sea

The word haole implies pale-skinned, and it’s now used in the sense of a white person from the mainland, but that’s not what it originally meant. Probably (no one seems sure) it meant “without breath,” and this had to do with Europeans not following Hawaiian customs involving breathing. One theory is that Hawaiians traditionally greeted one another by touching noses and in effect intermingling their breaths, and Europeans failed to do that.
 

Middle-Eastern street … hijabs … abayas

Those of us who aren’t Muslim should at least get the terminology right—and I certainly couldn’t have done until I looked it up. A burqa is the fullest garment: it covers the whole body, with a mesh screen for the eyes. Chadors and abayas are full-body hooded cloaks, worn over other clothes; the chador is often worn open, whereas the abaya is usually pinned closed across the front. A niqab is a veil for the face only, usually with a slit for the eyes. A hijab is basically a scarf that covers the hair but not the face, and is often worn with what is otherwise Western-style clothing. In some countries—as imagined here by Morag—they are made from colorful and elaborately decorated cloth. Actually Morag is guilty of cliché, as she suspects: Jordan is one of the most liberal Islamic countries, and many women there, especially in the cities, wear either the hijab or no head covering at all.
 

Astronomer-friendly street lights

Light pollution is a big and interesting issue: tragically, our cities produce so much “stray” light that most people alive today have essentially never seen the sky that thrilled and awed our ancestors. (A lone light bulb, ten miles away, is as bright as the brightest star.) Because of the Mauna Kea telescopes, Hawaii has always been at the cutting edge of the issue; low-pressure sodium lights were mandated in Kona back in the 1980s, and have since been replaced by directional LEDs. Still, a night at a remote site with excellent “seeing” should be on everyone’s bucket list. I camped high on Mauna Kea one spring, and the southern Milky Way was so dense with stars that the meaning of the name was immediately obvious: it looked as if someone had spilled milk on the sky.
 

Giant mines (and a short polemic on the relationship between wealth, government, colonialism, racism, and terrorism)

The island of New Guinea is rich in minerals; the open-pit mines at Ok Tedi and Porgera in Papua New Guinea, and the Freeport mine at Grasberg on the Indonesian side of the border, are among the largest man-made holes on Earth. (Use Google Earth to look up Puncak Jaya, the tallest peak in New Guinea. The Grasberg mine is the set of concentric rings clearly visible just to the west of it.) It’s a classic example of what economists call the “resource curse,” in which poor, vulnerable people have their lives made immeasurably worse, rather than better, by the discovery of mineral wealth on their own land. Almost none of the vast profits from these mines have gone to indigenous groups, mine tailings have poisoned once-pristine major rivers like the Strickland and Fly, and violence has blossomed in the jungles like a big new crop.

Unfortunately for the people of Papua New Guinea and West Papua, the biggest problem isn’t even foreign mining (and logging) corporations, but corrupt governments that find those corporations to be an irresistible source of cash. As Nobel Prize-winning economist Angus Deaton has pointed out, one of the biggest factors separating the most fortunate people in the world from the least fortunate is the matter of living under relatively stable, transparent, competent, corruption-free governments—and those of Indonesia and Papua New Guinea rank a miserable 107th and 145th out of 175 on Transparency International’s global corruption index.

Around the world, governments at this level are a lot like the bully in the school corridor: reliably stupid, but also reliably strong, selfish, pitiless, and violent. For all its failings, at least the government of Papua New Guinea is indigenous. The rule imposed on West Papua (formerly Irian Jaya), by contrast, is one of the most brutal, most ruthless, most overtly racist exercises in colonial domination in history, comparable to the worst excesses of the European powers in Africa in the nineteenth century.

The former Dutch territory was forcibly annexed by Indonesia, in the midst of its own drive for independence, between 1961 and 1969. As zoologist Tim Flannery drily remarked (writing in 1998, shortly after the area had been renamed “Irian Jaya” by the Indonesian government):

Surely it is a perverse twist of fate that has put a nation of mostly Muslim, mostly Javanese, people in control of a place like Irian Jaya. You could not imagine, even if you tried, two more antipathetic cultures. Muslims abhor pigs, while to a Highland Irianese they are the most highly esteemed of possessions. Javanese have a highly developed sense of modesty … for most Irianese, near-nudity is the universally respectable state … Javanese fear the forest and are happiest in towns … Irianese treat the forest as their home …

This is a short quote from a long list of opposites, and bad things could have been predicted to flow from them. But alas, “bad” is far too weak a word, as it is not really even the Indonesian government that rules West Papua, but the Indonesian military—which, after enjoying decades of generous support, supply, training, and diplomatic shielding by the United States, Australia, and even the United Nations, has one of the worst human rights records of any entity on Earth. In the fifty years since the brutally violent annexation, tens of thousands of West Papuans at the very least (over five hundred thousand, according to the group International Parliamentarians for West Papua) have been murdered by Indonesian forces, with thousands more imprisoned, raped, tortured, and “disappeared,” in a campaign of terror that arguably amounts to genocide.

About that word “terror.” We’re encouraged to think that terrorism is a very big deal, but what is it? The US State Department recognizes more than sixty terrorist organizations around the world. It would appear that what unites the listed organizations is their willingness to pursue political goals by using violence to intimidate and kill innocent people—and they are doing so at an unprecedented rate, mainly in Afghanistan, Iraq, Nigeria, Pakistan, and Syria. Currently the world’s worst example is Nigeria’s Boko Haram, an especially bloodthirsty Islamic extremist group affiliated with ISIS/ISIL. It’s on the list and in the news for murdering 6,664 people in 2014, more even than the rest of ISIS/ISIL worldwide. (Figures are from the Institute for Economics and Peace’s 2015 Global Terrorism Index.) But since 1963 the Indonesian military has murdered up to seventy times that many people in Irian Jaya/West Papua alone, out of a population of just 4.5 million. It continues to imprison, torture, and murder West Papuans today, making something of a specialty of pro-democracy activists, otherwise known as “rebels” (look up, for example, the cases of Yawan Wayeni and Danny Kogoya), and of farmers and children (see freewestpapua.blogspot.com, but be warned that it contains very disturbing images). Yet Indonesia’s army doesn’t even make it onto the State Department’s terrorist list.

Possibly this is evidence that the United States government is a fan not only of our “ally,” Indonesia, but also of Lewis Carroll. For, as Humpty Dumpty famously tells Alice, “When I use a word … it means just what I choose it to mean—neither more nor less.”

You can find out more about what’s going on in West Papua in this article from The Diplomat, http://thediplomat.com/2014/01/the-human-tragedy-of-west-papua, and at the websites of Human Rights Watch, International Parliamentarians for West Papua, Cultural Survival, Survival International, and Amnesty International.
 

839 languages

The figure for PNG (from the website ethnologue.com) includes not just the eastern half of the main island of New Guinea but also the languages of New Britain, New Ireland, and hundreds of smaller islands, so the figure for the main island may be somewhat lower. But New Internationalist magazine lists 253 tribal languages for the Indonesian province of West Papua, and states that the island as a whole accounts for 15 per cent of all known languages (Ethnologue counts roughly seven thousand living languages worldwide)—so that’s a total of about eight hundred to a thousand. Zoologist Tim Flannery gives a similar number; in Light at the Edge of the World anthropologist Wade Davis claims “more than 2,000 languages” for the entire island.

The exact number is beside the point. Let’s say eight hundred. By way of comparison, Ethnologue lists 166 for the whole of Europe, Scandinavia included.
 

Australia … “not a single active volcano”

The last volcanic eruption in Australia was probably at Mount Gambier, 5,000 to 6,000 years ago—not long before Thera. But Oz does have many extinct volcanoes, plus some dormant ones in Victoria and two active ones on remote islands. Recently (2015) it was also discovered to have the longest chain of (admittedly very extinct) volcanoes anywhere in the world. The “Cosgrove chain” was created by the motion of the landmass across a hot mantle plume between 33 and 9 million years ago.
 

Singing dog

The New Guinea Singing Dog (Canis lupus hallstromi) is actually a shy, rare and genuinely wild relative of the Australian dingo; hunting dogs in the Highlands are descended (at any rate partly) from them.
 

Giant rat

He doesn’t just mean it’s a big one. The Bosavi Woolly Rat, a species discovered in 2009, is one of many “giant rat” species of the family Muridae in New Guinea, and it may be the largest of all: it’s almost three feet long and weighs about three and a half pounds. Rats like this are a common food source for many tribes in New Guinea. The Bosavi Woolly Rat, by the way, was found living in the crater of an extinct volcano.
 

Lost tribes … “the Hagahai, the Fayu, the Liawep”

These are just three real New Guinea tribes that as recently as the 1980s or 1990s had had little or no contact with Westerners, and relatively little contact even with other local tribes. There’s good information on the Hagahai at culturalsurvival.org; for the Fayu, see Sabine Kuegler’s memoir Child of the Jungle, and Jared Diamond’s Guns, Germs, and Steel; for the Liawep, see Edward Marriott’s The Lost Tribe. Several other tribes with very little outside contact are described in Tim Flannery’s memoir Throwim Way Leg—including both the Miyanmin and the Atbalmin, whose territory is approximately where I’ve set the Chens’ encounter with the Tainu. Survival International claims there are forty such groups in West Papua alone.

Many tribes or groups referred to as “uncontacted,” in places like New Guinea and the Peruvian and Brazilian Amazon, are better characterized as having responded to contact with neighboring populations and the “outside world” by making it clear that they wish to minimize or avoid any more of it. Others may have had outside contact in the relatively distant past, and then retreated from it: according to one source, the Mashco Piro of Peru disappeared into the jungle when the rubber industry came to the Amazon more than a century ago.

Tribal people often have excellent reasons for keeping the outside world at spear’s length. But contact isn’t always a story of outside interference: the Mashco Piro recently began to initiate what we might call “re-contact,” of their own accord. And a somewhat similar case involves the “Pintupi Nine,” a family from a traditionally nomadic tribe living near Lake Mackay in Australia’s Gibson Desert. Most Pintupi were moved, or “encouraged” to move, from their exceptionally remote ancestral lands by Australian governments in the 1950s and 1960s. But one family continued to live in the desert as their ancestors had for thousands or tens of thousands of years, and had apparently never seen any non-nomads, certainly not whites, until forced to make contact by exceptional drought conditions in 1984. They were naked except for some items woven from human hair; doctors who examined them reported that they were exceptionally fit and healthy. Two of the nine, Yukultji Napangati and Warlimpirrnga Tjapaltjarri, have become internationally acclaimed artists.
 

Humans arriving in Australia/Melanesia “fifty thousand years ago”

Jimmy could be wrong. The best current evidence strongly suggests fifty thousand at least, and sixty thousand or more is quite possible. (The sea level was so much lower back then that Australia, Tasmania, and New Guinea were one land mass, known to geographers as Sahul, and the coastal areas where you’d expect to find the earliest evidence of settlement now lie under the Timor and Arafura Seas.)

Although ten times further back in history than the Egyptian Pharaohs, those first Austro-Melanesian migrants were apparently already skilled boat-builders: even in that era of lower sea levels, there was never less than fifty miles of open sea to cross from Asia. That deep water channel is the origin of “Wallace’s Line,” separating two geographically close but utterly distinct biological worlds, with possums, kangaroos, parrots, birds of paradise, and eucalyptus trees on the eastern (Sulawesi/Lombok/New Guinea) side, and orangutans, flying squirrels, leopards, and giant Dipterocarp trees just to the west in Bali, Java, and Borneo. This was one of the key pieces of biogeography that led Alfred Russel Wallace, like Darwin, to the idea of natural selection.
 
“Neanderthals … went extinct not much later”

At one point the Neanderthals ranged from east of the Caspian Sea to southern England and southern Spain, and recent evidence puts them at least occasionally as far east as the Altai Mountains in Siberia. By about fifty thousand years ago, their range was shrinking, and, while there’s been some apparent evidence of a remnant population hanging on in southern Europe until twenty four thousand years ago, recent research is pushing the date of final extinction back in the direction of forty thousand years.

(Shameless speculation alert. This is only a debate about the correct dating of existing finds. It doesn’t necessarily prove that no Neanderthals survived longer: who’s to say what surprises will be washed to the surface by the historic floods of 2052?)

After showing up from Africa, Homo sapiens may have lived alongside Neanderthals for tens of thousands of years, but some argue that there was little or no interaction until about forty two thousand years ago. If that’s right, and the Neanderthals really did die out forty to thirty eight thousand years ago, the overlap is suspiciously narrow. Did we bring disease to them? Slaughter them? Slaughter them and eat them? A new theory, outlined in Pat Shipman’s book The Invaders, suggests rather that we out-competed them for food resources by showing up ready-armed with a lethal new hunting technology: semi-domesticated dog-wolves. Some clever statistical research on skulls from ancient and modern Canidae (wolves and dogs) suggests that our ancestors first domesticated wolves into dogs not seven to ten thousand years ago—or fifteen thousand, which until recently was an “extreme” date—but as long as thirty five to forty thousand years ago. And dogs make hunting big game much, much easier. (See also the note “Language: a crazy thing that shouldn’t exist.”)
 
Homo floresiensis, and the legend of the ebu gogo

Remains of the dwarf hominin species Homo floresiensis, immediately nicknamed “the hobbit,” were discovered in 2003 in a cave on the Indonesian island of Flores, about 1,300 miles west of New Guinea. H. floresiensis reached the Flores area hundreds of thousands of years before modern humans did, and thrived, partly on a diet of now-extinct pygmy elephants, long after the Neanderthals went extinct in Eurasia. (An even more recent discovery, in 2016—stone tools well over a hundred thousand years old on the island of Sulawesi—indicates that yet other groups of early hominins also beat Homo sapiens to the area.)

Modern people on Flores tell of the “ebu gogo,” small hairy people who live in caves in the forest and come out to steal pigs, and even children. (The name means something like “greedy granny” in the local language.) It’s a similar story to the Orang Pendek (‘short person’) legend on Sumatra. The most recent alleged sightings of ebu gogo are from the nineteenth century; still, if H. floresiensis was the cause of those reports, then the species hung on at least ten thousand years longer than current fossils indicate. And maybe it did—jungles are notoriously bad environments for fossil preservation. Anyway, that’s what gave me the idea for the I’iwa.

Controversy over whether the Flores “Hobbit” bones are really from a new species of hominin, or are merely (say) those of a child with a genetic abnormality, continues to rage. There’s a short, accessible summary by Karen L. Baab in Nature. See also Linda Goldenberg’s Little People and a Lost World, or for a more detailed account Dean Falk’s The Fossil Chronicles.
 
“Our biology is a barrier to our nature, because matter is evolving into mind”

That we might evolve from pure matter to pure mind—that this, in fact, is the trajectory of the whole universe—is the philosopher Hegel in a nutshell. It’s a fascinating idea; too bad it’s buried in The Phenomenology of Spirit, one of the most unreadable books ever written.

(Imagine several hundred pages of this: “The psycho-organic being has at the same time the necessary aspect of a stable subsistent existence. The former must retire, qua extreme of self-existence, and have this latter as the other extreme over against it, an extreme which is then the object on which the former acts as a cause.” The philosopher Schopenhauer was left speechless with loathing by Hegel’s writing and had to fall back on quoting a phrase from Shakespeare: “Such stuff as madmen tongue and brain not.”)

But you can say this much for Hegel: at least he took the reality of ideas seriously, and accepted that the relationship between things and thoughts was a genuine and deep mystery. In the past century, most cognitive scientists and psychologists, and many philosophers, have managed to persuade themselves that it isn’t a deep mystery. Which is tragic, really: being mistaken, they’ve condemned themselves to playing in the shallows. (See also the note on “Let there be light” and the Simulation Argument.)
 

Paleolithic evolution and “idiots” not cooking their food

A trifle harsh, but it’s easy to see where Morag’s impatience is coming from. A lot of pop science gives the impression that we evolved into modern human beings one Wednesday afternoon on the African savannah a hundred thousand years ago—as if some magical change made us anatomically and biologically what we are, now, right then. But evolution is continuous, and while some people claim we’ve evolved little since the mid-Paleolithic, the most recent evidence suggests the opposite: epochal revolutions like the domestication of cattle, the invention of agriculture, mass migration, and industrialization have probably accelerated the pace of human genetic change.

One line of recent research indicates huge genetic shifts in the European population just in the eye-blink since farming began, 8,500 years ago; among other things, those changes account for pale skin, the ability to digest milk, and the ability to survive on high-wheat diets, which are poor in the amino acid ergothioneine. Another line of research indicates that factors like antibiotics, public sanitation, and a massive increase in our consumption of simple carbohydrates have radically altered our gut biome in just the past century. (Even if, as some claim, hunter-gatherer diets were healthier than modern ones, our ancestors were all hunter-gatherers 9,000 years ago, and most of them were still foraging much more recently than that.) So there’s little reason to think that what was normal or natural for our ancestors a hundred thousand years ago is an especially good guide to what’s best for us now.

As for cooking dinner: our bodies became adapted to this vastly more efficient way of getting calories at least three hundred and fifty thousand years ago, and possibly two million years ago. In either case, that’s long before Homo sapiens even evolved.
 

“I make of you nice fat kebab”

That might sound odd, coming from a Russian, but kebabs, or shashlik, have been a popular fast food in Russia ever since they were introduced from Central Asia over a century ago.
 

Time for Gödel

Gödel was one of the most important mathematicians of the twentieth century, and one of the most important logicians since Aristotle. Like Iona, he was a mathematical Platonist: he believed mathematical objects existed in an independent reality outside the mind, and had to be discovered; they were not mere inventions. His own work contributed strong new reasons for believing so.

Gödel also argued, partly on the basis of his own strange rotating-universe solutions to Einstein’s field equations, that time as we ordinarily understand it is not real. Modern physics seems to agree with him on the unreality of time. Einstein called it “a stubbornly persistent illusion.” And when John Wheeler and Bryce DeWitt tried to combine general relativity with quantum mechanics by giving a quantum-mechanical description of the universe as a whole, they found that “time” dropped out of the picture. Look up “Wheeler-DeWitt equation” for more on this.
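
(For readers who like to see a little of the machinery, here is a deliberately simplified sketch of what “time dropped out” means. In ordinary quantum mechanics, the state $\Psi$ of a system evolves according to the time-dependent Schrödinger equation; the Wheeler-DeWitt equation for the “wave function of the universe” is instead a constraint with no time variable in it at all. Schematically:

$$i\hbar\,\frac{\partial \Psi}{\partial t} = \hat{H}\Psi \quad \text{(ordinary quantum mechanics)} \qquad\qquad \hat{H}\Psi = 0 \quad \text{(Wheeler-DeWitt)}$$

What exactly $\hat{H}$ and $\Psi$ stand for in the cosmological case involves deep technicalities that are glossed over here; the comparison is only meant to show where the time variable disappears.)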

Iona’s “crazy as a loon” comment is a reference to the fact that Gödel was paranoid about being poisoned and would only eat food that had been prepared for him by his wife. When she became too ill to do this, he stopped eating and essentially starved himself to death. He died in 1978.
 

The underground city at Derinkuyu

In case you think the I’iwa’s home too fanciful, look this up. Derinkuyu once held thousands of people and their cattle, and it’s just one of several dozen underground settlements in Cappadocia, carved out by the Hittites and/or Phrygians. I visited the area many years ago; I was already writing about the I’iwa before I realized it was those interiors populating my mind’s eye. The earliest written reference to the Cappadocian rock settlements is already familiar to readers of The Fire Seekers: Xenophon visited them during his campaigns with Cyrus the Younger, and mentions them in Anabasis.
 

Lascaux, Altamira, and Chauvet

These are the three best-known sites for early cave art. At Altamira, in Spain, the paintings discovered in 1879 were originally dated at twelve to fifteen thousand years old, which seemed astonishing enough, but Lascaux in France was discovered in 1940, with apparently older images, and some of those at Chauvet in France (discovered 1994) were clearly older still. Subsequent research has shown that most of these sites were occupied in waves, over tens of thousands of years; at Chauvet, and also at another Spanish cave, El Castillo, the oldest images are now believed to be as much as 37,000 to 40,000 years old. That puts them near the time when H. sapiens arrived in Europe and the Neanderthals vanished.

The most amazing aspect of the cave paintings isn’t their age but their staggering skill, beauty, and power. When Altamira was discovered, the images were dismissed as fakes on the grounds that “primitive” people could not possibly have produced such things. Funny, but understandable—some of the animals, in particular, take your breath away. For a private tour of Chauvet, see Werner Herzog’s documentary Cave of Forgotten Dreams.

 
 
 

Copyright © Richard Farr 2020