animal
The top-level category animal first appears in English in 1398, a borrowing from Old French animal. That descends unchanged from Classical Latin animal, meaning “animal”. It’s the noun form of animālis, meaning “animate” or “living”. Its root anima meant “air”, “life”, or “soul”, ultimately descending from reconstructed PIE *h₂enh₁-, meaning “breathe”, likely onomatopoeic. It’s interesting that the characteristic the word picks out as most salient about animals is that they’re animate. They move, unlike plants, which are planted in place.
What did English-speakers say instead of animal before 1398? It’s one of the ten hundred words people use the most often! The word it replaced in common use was beast, which turns out to be another borrowing from Norman French, from around 1200! Okay, well, what did English-speakers say instead of beast? The Germanic English word for animal turns out to be deer. You can compare the words for “animal” in other Germanic languages, like German Tier, Swedish djur, or Dutch dier. But what did Old English speakers say when they specifically meant a deer? Probably hart, stag, or hind. You can feel how important deer were in premodern England by the variety and terseness of the words describing them, and that’s even before considering specific terms like buck, doe, or fawn.
I’m fascinated by generic terms like deer that eventually acquired a specific meaning that invalidated their genericness. Another English example is the word corn, which used to be the generic word for “grain” but now specifically means maize. The original meaning is preserved in phrases like peppercorn, meaning granules of pepper, and corned beef, referring to grains of salt. Similarly, apple used to be the generic word for “fruit” but now specifically means apples. Pineapple used to be the word for pine cones, the “fruits” of pine trees, before pineapples were known to English-speakers. The Edenic forbidden fruit being described as an apple originates from a similar genericization in French. Non-European traditions commonly depict the forbidden fruit as a fig or as grapes.
Microorganisms were originally called animalcules, a term coined by Leeuwenhoek while experimenting with microscopes in 1677. It’s borrowed from Latin, meaning “little animal”, and is still sometimes used in a technical sense to refer specifically to marine microbes. It was perhaps chosen in analogy to molecule, which was coined in French from the Latin for “little mass” in 1641, but not given its modern meaning until Avogadro proposed it in 1811.
Diagram of a contemporary tree of life based on cladistics, separating life into three top-level domains and twenty-five kingdoms.
By 300 BCE, Greek philosophers were clearly dividing life between animals, studied in Aristotle’s Τῶν περὶ τὰ ζῷα ἱστοριῶν / Historia Animalium; and plants, studied in his student Theophrastus’s Περὶ φυτῶν ἱστορία / Historia Plantarum. Linnaeus originally proposed dividing all life into three top-level kingdoms in his 1735 Systema Naturae, which translate into the now-familiar “Animal, vegetable, or mineral?” While Twenty Questions was a well-known English parlor game by 1790, its iconic first question wasn’t typically included until the 1840s. The phrase was immortalized in Gilbert and Sullivan’s 1879 opera The Pirates of Penzance by its inclusion in “I Am the Very Model of a Modern Major-General”.
But by then, it was clear there was more to life than those three categories. A proposal to add protists (“primitive” microorganisms) as a top-level division was put forward in 1860, and in 1866 another proposal removed minerals from the scheme. In 1938, the newly-discovered distinction between prokaryotes and eukaryotes was reflected in four top-level divisions: animal, plant, protist, and monera (prokaryotes). The next eighty years of rapid discovery added more complexity than anyone anticipated. Today, we loosely recognize three top-level domains with seven second-level “kingdom” divisions: the domain and kingdom of bacteria; the domain and kingdom of archaea; and under the third domain of eukarya we classify the kingdoms of animals, plants, protozoa, fungi, and chromista.
In-N-Out Burger first added animal style to their secret menu in 1961, named after rowdy drive-thru customers the staff privately called “animals”.
A brief history of time units: Minutes
This is part two of a multi-part series on the origins of the time divisions of a day. Part one can be found here.
Minutes
While there was great demand for clocks accurate enough to measure individual minutes, they did not become a reality until 1656! Even the most accurate clocks in 1400 were expected to drift as much as 15 minutes per day. As such, the word minute does not appear in English until the mid-1400s. Instead, people subdivided hours into halves and quarters when they needed to be more precise. You can still see this preserved in phrases like “half past eight” and “quarter of nine”.
Minute comes from classical Latin minūta, meaning “very small”, and arrives in English through both Arabic and French. Specifically, the Latin phrase pars minūta prīma, meaning “first very small part”, was used in geometry to describe the subdivisions of a degree. The ancient Sumerians used a sexagesimal (base-60) numeral system. Nearly all the attributes of the minute as a time unit stem from Sumerian astronomy dividing circles into 360 degrees, and then each degree into 60 subparts. This convention was passed down for 5,000 years through Babylonian, Greek, Roman, Arabic, and French astronomers before arriving in England.
The invention that finally enabled clocks accurate to the minute was the pendulum. Galileo’s studies of pendulum mechanics, begun around 1602, would go on to inspire Dutch scientist Christiaan Huygens to construct the first pendulum clock in 1656. By 1690, after decades of iterative improvement, clockmakers had refined the design into what are recognizable as grandfather clocks today. Boasting just seconds of drift per day, these clocks were the first to commonly include minute hands.
Another key component of this hundredfold increase in precision was the addition of a control system. Even pendulums are subject to the laws of physical reality, and exhibit tiny variations in speed due to wear, temperature, and dozens of other variables that stubbornly resist exact calculation. The solution clockmakers eventually devised was a balance spring: a mechanism that effectively measures the speed of the oscillation, nudging it faster when it runs slow and slower when it runs fast.
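You can see the shape of that feedback idea in a toy simulation (purely illustrative; the target period, gain, noise level, and the simple proportional correction are my own assumptions, and real clockwork is far subtler): an oscillator whose period wanders randomly accumulates drift, while one that gets nudged back toward its target period barely drifts at all.

```python
import random

# Toy illustration of negative feedback, not horology: the constants below
# are arbitrary assumed values, chosen only to make the contrast visible.
TARGET_PERIOD = 1.0   # desired seconds per swing
GAIN = 0.5            # how strongly each swing is nudged back toward the target

def accumulated_drift(swings: int, feedback: bool) -> float:
    """Total clock error (in seconds) after the given number of swings."""
    period = TARGET_PERIOD
    drift = 0.0
    for _ in range(swings):
        period += random.gauss(0, 1e-5)        # wear, temperature, etc.
        drift += period - TARGET_PERIOD        # each off-period swing adds error
        if feedback:
            period -= GAIN * (period - TARGET_PERIOD)  # corrective nudge
    return drift

print(f"uncorrected drift over a day: {accumulated_drift(86_400, False):+.1f} s")
print(f"regulated drift over a day:   {accumulated_drift(86_400, True):+.1f} s")
```

Without the correction the errors compound into minutes per day; with it, they stay down in the seconds.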
Scan of a printed page from an 1857 American guide to train travel. Most of the page features a table of US cities and their respective local times when it is noon in Washington, DC. The rest of the page describes how to use the table.
An additional wrinkle is that sunrise and sunset times also change based on your longitude (east-west location). In 1700, the solution to this problem was for every town to keep its own clock synchronized to its own solar noon. This solution started creating new problems when unimaginably fast rail travel became common in the mid-1800s. In 1799, it was impossible to send information, let alone people or cargo, any faster than a person on a horse. You might imagine sailing ships going faster than that, but even a skilled crew sailing with favorable winds might only average 10 km/h (6 mph).
European countries solved this new problem by coordinating on a single standard time per country. For example, the UK began tracking “railway time” in 1840 based on the time observed in Greenwich, London. This solution did not work for the US, which spanned so much longitude that when it was 12:12 in New York (and 12:24 in Boston, and 11:18 in Chicago), it was also 9:02 in Sacramento. The eventual solution of resetting every single city’s time to standard hour-wide time zones was only proposed in 1870 and enacted in 1883, on an occasion called “The Day of Two Noons”.
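To put numbers on that: the Earth turns through 360 degrees in 24 hours, so local mean solar time shifts by 4 minutes per degree of longitude. A small sketch recovers the city times above from that rule (the longitudes are approximate values I’ve supplied for illustration):

```python
from datetime import datetime, timedelta

MINUTES_PER_DEGREE = 24 * 60 / 360   # the Earth turns 1 degree every 4 minutes

cities = {            # approximate longitudes, degrees east (negative = west)
    "New York": -74.0,
    "Boston": -71.1,
    "Chicago": -87.6,
    "Sacramento": -121.5,
}

# When it is 12:12 local mean solar time in New York...
reference = datetime(1870, 6, 1, 12, 12)

for city, longitude in cities.items():
    offset = round((longitude - cities["New York"]) * MINUTES_PER_DEGREE)
    print(f"{city:<11} {reference + timedelta(minutes=offset):%H:%M}")
```

This prints 12:12, 12:24, 11:18, and 9:02, matching the spread above to within a minute of rounding.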
That's right, it goes in the religion hole
Humans seem to be born with a hole in our brains where a language should go. Babies are programmed to take to language acquisition like ducks to water. I suspect that humans are also born with a hole in our brains where a religion should go. For most of recorded history, a religion, a pantheon, or perhaps earlier, a belief in something like animism, would arrive and take root there to explain the burning questions that humans perennially demand answers to. What happens after we die? Why do bad things happen to good people? How will the world end?
But in our enlightened century, many people are born and raised without the influence of a religion, a pantheon, or even an animist belief system. I argue this leaves a religion-shaped hole unfilled, a naturally abhorrent vacuum. It drives people to try to fill it with whatever entirely inappropriate belief system is at hand. We all know people who’ve tried in vain to fill the religion hole with science, which is clearly not religion-shaped! Science tries to explain how and what, but not why. Why is there something instead of nothing? Why are we here? Science remains conspicuously silent.
Other people will search for something more religion-shaped to believe in. Maybe birds aren’t real. Or maybe you find your faith in QAnon, or PETA, or AI safety. Movements that are not conspicuously silent about why we were put here, what our reason for living is, and when it will all end. I’ve seen enough lost-but-seeking people find meaning in these new belief systems to bet there’s some strong force that naturally attracts people to them.
A consulting tip I’ve taken to heart is that when you’re trying not to do something, it’s a lot easier to find something else that’s incompatible with it and do that instead. It takes a lot of effort and willpower to continually not do something. It’s much easier to start doing the incompatible thing once and expend that effort on continuing to do it. For example, if you’re trying not to smoke this afternoon, it’s a lot easier to chew gum all afternoon than it is to not smoke all afternoon.
So a few years ago I searched my beliefs and constructed something religion-shaped from them. And I intentionally tried to fill my religion hole with my construct. And it worked! It worked better than I could have imagined. I was born and raised without the influence of a religion. I’ve never had faith in anything before. But I have faith now, in things I know I believe in, and it’s an incredible source of strength to be able to just believe in them, without the burden of proof. I feel like I now have a solid foundation that will never give way, and it grants me a feeling of stability that I never knew I was missing and always wanted.
I offer my home-cooked faith as an illustrative example. This creed definitely won’t resonate with you in the same way it does with me. I don’t know that there’s a way to shortcut the introspection and experimentation it took me to get there. Still, you can hopefully see how arbitrarily choosing to define core axioms like these as true could robustly support a belief system.
“People are amazing. Every person has inherent worth and deserves a chance to be happy. When people put their minds to it, they can do anything.”
I think I will cause discord on purpose
I have a secret technique that will cause any group of engaged smart people to start squabbling. I haven’t seen it fail to work yet, with the caveat that I’ve gotten bored with seeing the same patterns of squabbling recur, so I haven’t deployed it much recently.
Stop me if you’ve heard this one before: is a hot dog a sandwich?
No wait, come back. It’s a meme question because it works. Well. It doesn’t really work any more because now everyone knows everyone knows it’s a meme question. But asking questions of that nature does work. Consider a fictional example:
- Mallory: Does a tuna salad count as a salad?
- Alice: Hmm. I don’t think so, since the tuna’s cooked and salads only contain raw things.
- Mallory: Interesting. So is a beet salad a salad?
- Alice: Only if the beets are raw? That suddenly doesn’t feel right to me.
- Bob: Wait, no. Those are both obviously salads. They have salad in the name.
- Mallory: Huh. Then would you say a fruit salad is a salad?
- Carol: That can’t be a salad! It’s sweet and salads can’t be sweet.
- Alice: What? You’re crazy. That’s definitely a salad.
You can see how it would work. If everyone leans toward accepting, come up with less and less salad-like examples until you uncover an argument. And do the opposite if everyone instead leans toward rejecting. Is a pile of croutons a salad? Is a chicken fajita a salad? Is a salad wrap a salad?
Why does this work so well? I think it has to do with how people tend to deal with the fuzzy boundaries around categories. My current mental model of how most people think about categories is: ask system 1 what feels right, then get system 2 to come up with a post facto justification. This results in inconsistencies and contradictions if you drill down into a specific category with just one person, and results in discord in a group setting.
If you see something like this happening and want to stop it, the most common defense is to categorically reject this kind of categorization question. While this does successfully protect you from my fellow agents of chaos, it does not protect you from FOMO. And I think we can do better: categories are useful tools for making sense of the world, and we don’t want to dismiss interrogating their inner workings out of hand.
My current mental model of how to think about categories in a healthy way is informed by semantics, a subfield of linguistics. The theory goes, each category I have easy mental access to is represented by a prototypical element in my brain. So when I think of “sandwich”, I think of a prototypical ham and swiss on sliced white bread with lettuce, tomato, and mustard. When I want to see if something else fits the category of sandwich, I mentally compare it to the prototype and produce a binary yes or no based on its similarity. The individual variation we observe comes from the fact that each person’s internal learned encoding of “similarity” can differ wildly from everyone else’s and still yield the same answer for nearly all relevant questions, nearly all of the time.
The way I cut this particular Gordian knot is that I reject your sandwich binary. I instead think of that ham and swiss as 1.0 a sandwich, a BEC bagel as .9 a sandwich, a taco as .6 a sandwich, a salad as .2 a sandwich, and so on. It’s our old friend cosine similarity, modeled in our own organic brains. Then you can set your cutoff for whether something is a sandwich based on what the question is asking about. You can even re-weight each property of sandwichness depending on what the question is. Sometimes you care more about whether the maybe-sandwich is edible and sometimes you care more about whether the maybe-sandwich will come apart if you throw it. I will not be taking further questions at this time.
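For the curious, here’s what that might look like as a toy model (the feature names, weights, and resulting scores are made up purely for illustration, and won’t line up exactly with my off-the-cuff numbers above): each food is a vector of properties, “sandwichness” is its cosine similarity to the prototype, and different questions re-weight the properties.

```python
import math

FEATURES = ["bread_outside", "handheld", "savory", "layered", "throwable"]

# The prototypical ham and swiss, as a vector of made-up property scores.
PROTOTYPE = {"bread_outside": 1.0, "handheld": 1.0, "savory": 1.0, "layered": 1.0, "throwable": 0.8}

FOODS = {
    "ham and swiss": {"bread_outside": 1.0, "handheld": 1.0, "savory": 1.0, "layered": 1.0, "throwable": 0.8},
    "taco":          {"bread_outside": 0.2, "handheld": 1.0, "savory": 1.0, "layered": 0.3, "throwable": 0.2},
    "fruit salad":   {"bread_outside": 0.0, "handheld": 0.0, "savory": 0.0, "layered": 0.2, "throwable": 0.0},
}

def sandwichness(food, weights=None):
    """Weighted cosine similarity between a food and the prototype sandwich."""
    weights = weights or {f: 1.0 for f in FEATURES}
    a = [food[f] * weights[f] for f in FEATURES]
    b = [PROTOTYPE[f] * weights[f] for f in FEATURES]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

for name, properties in FOODS.items():
    print(f"{name:<14} {sandwichness(properties):.2f}")

# Re-weight for a different question: "will it come apart if I throw it?"
throwing = {"bread_outside": 0.5, "handheld": 1.0, "savory": 0.1, "layered": 0.5, "throwable": 3.0}
print("taco, throw-weighted:", round(sandwichness(FOODS["taco"], throwing), 2))
```

The exact scores depend entirely on the invented property values; the point is the shape of the machinery, graded membership plus question-dependent weighting, not the numbers.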
And now you know the secret. Please don’t actually cause discord on purpose, unless you really want to, or it would be really funny.
Why histories of words?
ID: Colorful diagram of each letter of the alphabet and its ancestors through Latin, Greek, Phoenician, and Proto-Sinaitic.
So, why histories of words? What unusual circumstances conspired to drive me to select this one extremely specific niche as my creative outlet?
Between histories and words, words are more straightforward to explain. I love words. I read myself into myopia by the age of six. I’d read anything and everything, which my parents gladly encouraged, as their (totally fine) spoken English was far better than their written English. I relayed the contents of parenting advice pamphlets to my parents when I was ten. They’d drop me off at the library on weekends. The library is not childcare! But I appreciated the childcare it provided me. Growing up in the 1900s, I stumbled upon Jed Hartman’s delightful language blog, sparking a lifelong love of wordplay. Rare words get stuck in my head the way people get songs stuck in their heads, a mental quirk I find just as annoying as I find it useful.
My interest in histories began less directly. As a child, I read widely enough to absorb the advice that childlike wonder was precious and worth preserving. So I dutifully tried to preserve it. I’m happy to report that I still regularly experience childlike wonder, however diminished, in the form of an ambient impulse of “Huh, why is that there? What is its purpose? Who keeps it working? When did we start doing things like that?” The impulse still evokes surprise and awe at all the stuff we managed to do across hundreds of lifetimes. Decades of this eventually got me asking second-degree questions to sustain the same joy of discovery, then third-degree questions. And I came to appreciate that not only does everything have a story, the components of its story also each have their own stories, and the components of those stories can all somehow eventually be determined to be Plato’s fault.
I credit David Crystal’s The Story of English in 100 Words as the inspiration for my particular format. Because my burning question is “why this thing”, I favor detailed histories of specific things over general overviews. Crystal’s book showed me that you can use those specific histories to tell a larger, compelling story that lets you take in the overall arc of history through osmosis, even as it focuses on the details. I aspire to someday publish my own general history told through the medium of histories of words, specifically focused on the history of technology.
If my work has a core thesis, it’s that we made it all up. Dollars to donuts, crimes and punishments, weekends and holidays, we made it all up! That’s awe-inspiring and wonderful, and it also means we can change these things if we can get enough people to agree. I don’t want to be obtusely postmodern here; there’s clearly a base reality underlying it all that we did not make up. But 99% of the things we see, interact with, and think about on a daily basis are human-made, artificial, and of our own laborious construction. And that means we can change them, and make more things like them.