A 100-year-old website

Fifteen years ago, a conference talk planted an idea in my brain that I haven’t been able to uproot since. What things would we do differently if one of the success criteria of a web project was that it should be readable 100 years from now? It’s strange that this question feels as radical as it does. We can easily read 100-year-old books, watch 100-year-old movies, and listen to 100-year-old songs. Isn’t it interesting that the idea of people browsing 100-year-old websites in 2126 seems so wild in comparison?

The web is already designed to be radically backwards compatible. You can visit a website that was last edited in the 1900s and interact with it basically the same way you would have back then. Even wilder, you can start up a computer that was built in the 1900s, insert a Netscape Navigator disk image of appropriate vintage, point it at a contemporary website, and watch it mostly kind of work. Countless engineer-hours have been dedicated to maintaining these unsung miracles. So what’s the problem? Can’t we just trust our collectively galaxy-brained engineers and archivists to find a way to preserve as much of this moment in amber as we can, so that future generations of historians can have all the background context they need to debate the literary merits of Homestuck?

When put in those terms, my problem sounds like more of a personal one. I bet the likes of Substack and YouTube will be fully archived, indexed, and preserved in their original formats for historians willing to brave the brainrot. If I only cared about being legible to gen-delta historians, I’d just stick to the major platforms and stop expending mental energy on this ridiculous problem.

But I also bet we can do better. I’ve spent an entire career understanding how computers work, from resistors to CSS. Shouldn’t I be able to use this hard-won knowledge to figure out how to construct something all my own that might stand the test of time? Furthermore, shouldn’t I share the results of my quixotic quest so that people have a chance to collectively improve on my half-baked ideas?

So here they are. By far my biggest self-imposed restriction is that all of my artistic work to date is entirely client-side. That means there is no database, and therefore no persistent storage across computers. This sounds like a huge limitation, and it is! But you can do a lot more than you might expect without an internet database. For example, you can offer everything a computer could do in 1995, seamlessly accessible from any device anywhere, as long as it has a web browser. You can use local storage to let people save their place and preferences, despite the lack of a central server.
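Here’s a minimal sketch of what that looks like. The key name and the shape of the saved state are made up for illustration; the localStorage calls themselves are standard in every browser:

    // Save the reader's place and preferences in the browser itself,
    // so no server-side database is needed. Key name and state shape
    // are hypothetical.
    const STORAGE_KEY = "my-project-state";

    function saveState(state) {
      localStorage.setItem(STORAGE_KEY, JSON.stringify(state));
    }

    function loadState() {
      const raw = localStorage.getItem(STORAGE_KEY);
      return raw ? JSON.parse(raw) : { chapter: 1, theme: "light" };
    }

    // Restore on page load; save again whenever something changes.
    let state = loadState();
    saveState({ ...state, chapter: state.chapter + 1 });

The obvious caveat is that this storage is per-browser: clear your site data or switch devices, and your place is gone. That’s the price of having no server.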

If your work is entirely client-side, you can distribute it as bare files. You don’t have to bundle an executable, restart a server when it hangs, or figure out how to get that server to page you when it does. You can just spin up an FTP server, or a modern equivalent with a nicer UI — I currently use GitHub Pages — put the files there, and point a URL at that bare directory. For local testing, you can spin up a minimal web server pointed at the directory with python3 -m http.server.
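Concretely, a whole project can be nothing more than a handful of files in a directory. The names here are just an example:

    site/
      index.html   <- entry point; everything loads from here
      main.js      <- all the logic, running client-side
      style.css

    # serve it locally for testing, then open http://localhost:8000
    cd site
    python3 -m http.server 8000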

I believe the tech stack most likely to work in 100 years is the smallest possible tech stack. For simple projects, I tend to stick with vanilla JavaScript or TypeScript, which are far nicer to work in than they were just five years ago. Here’s an example. For more complex projects, I like the balance of simplicity, convenience, and portability that a Svelte app compiled down to a static site offers. Here’s an example of that.

One outcome of this process I can proudly point to now is this art project. It has now been publicly available and functional for eight years. In that entire time, I have spent zero (0) hours on maintenance. I’ve spent enough time around tech art spaces and their graveyards of dead project links to appreciate just how rare that actually is.

I do have to admit that, even with all of that accounted for, 100 years is probably not realistic. Hosting providers will go out of business, domain renewals cost money, and there’s a new norm around automatically deactivating your accounts when you die. So I mentally target 50 years: far enough in the future that I expect to be dead, but near enough that my infrastructure providers might not also have died. And I willingly choose to relinquish my agency, and trust the archivists to handle the 50 years after that.

To whom it may concern in 2076, I hope this email blog post finds you well…

Why are we so bad at reasoning about randomness?

I claim that patternless randomness, the kind generated by coins, dice, cards, and cryptographically secure pseudorandom number generators, has no precedent in the natural world. In this essay I will

Whoops, I didn’t mean to start a new paragraph there, that’s weird. In this essay I will argue that the very mechanisms that made humans unreasonably effective at shaping the world to our liking also make it really counterintuitive to work with the kind of patternless randomness we often encounter today. You can clearly develop intuitions around randomness, but it’s a skill we have to deliberately practice; untrained people will confidently make really bad predictions.

My core argument is that in nature, for every random-seeming event, there is always a pattern behind it that you can identify and exploit. Were you just attacked by a bear? That’s a random freak occurrence! But over generations of bear attacks, people can work out that bears are more likely to live in and therefore be encountered in particular environments, tend to be more aggressive when you exhibit certain behaviors and odors, can be warned of in advance by droppings and tree markings, don’t show up at all in the winter for some reason, and so on.

In other words, every event that seems totally random is actually part of a pattern that you can learn. In a state of nature, you should always assume that every event has legible, understandable causes. Lightning is more likely to strike tall trees standing alone. The air feels different before it rains. Cooking food over a fire makes it taste better most of the time, and here’s what to do with the exceptions. Learning the patterns behind these events rewards you with a better chance of survival. And if the tribe you find yourself in learns to trust the expertise of people like you, it has a correspondingly better chance of flourishing. For millennia, survival of the fittest has selected for minds and societies that see patterns in everything.

What biases might you expect to see in organisms that have evolved to recognize patterns? If you look at misconceptions most people have about randomness, a common theme is that they expect random samples to not be independent. The gambler’s fallacy is expecting an outcome, like red, that hasn’t come up recently to be more likely because it’s “due”. The opposite expectation can be seen in sports: the belief that if a player’s made a few shots in a row, they’re on a hot streak and more likely to make the next one. It feels like knowing past results should somehow influence your predictions of future results.

Sometimes people get mad about video games. One thing they might get mad about is if something that has a 50% chance of success fails five times in a row. This is clearly unfair and rigged. Sitting back here in our armchairs, we can dispassionately calculate the odds of this at 1/32, or about 3% for every distinct set of five flips, and see that we should expect outcomes like this pretty often when flipping that coin 500 times. So one thing games often implement to bring randomness more in alignment with expectations is a streakbreaker that nudges the odds in the player’s favor on repeated failures. A streakbreaker for successes is never considered, because knowing of its mere existence would cause players to claim the RNG is clearly unfair and rigged, and people don’t tend to notice anything strange when they succeed at five 50% flips in a row.
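Here’s a minimal sketch of the kind of streakbreaker I mean, in JavaScript. The 10%-per-failure bonus and the 90% cap are made-up tuning values, not any particular game’s:

    // Nudge the true odds upward after each consecutive failure.
    let failStreak = 0;

    function roll(baseChance) {
      const boosted = Math.min(baseChance + failStreak * 0.10, 0.90);
      const success = Math.random() < boosted;
      failStreak = success ? 0 : failStreak + 1;
      return success;
    }

With these numbers, a nominal 50% roll that has just failed three times in a row actually succeeds 80% of the time on the next attempt, quietly making five-failure streaks much rarer than the 1/32 a fair coin produces.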

In conclusion, I think there’s a huge untapped market opportunity in certifiably organic random number generators. Just imagine, an octopus that predicts

How was the game world made?

Today I want to write about a thing people who play lots of computer games notice right away. To make it more fun, I am using Simple Writer, so I can use only the ten hundred words people use the most often.

One thing you learn to look for right away if you play a lot of games is how the game world was made. Was it made by people or was it made by machines? It used to be that all game worlds were made by people. In the last thirty years, worlds have been made by machines more and more often.

A thing you notice about worlds made by people for other people to enjoy is that where the nice surprises are put has meaning. Usually when machines make worlds for people to enjoy, the nice surprises are still very nice, but are put here and there without thinking about where they will be the most exciting. So if you can tell you are in a world made by people, you try hardest to look in places you think are exciting, and trust there might be a nice surprise waiting for you. But if you think you are in a world made by machines, you instead try to look in as many places as possible, so you have more chances to find nice surprises.

In this way, the real world acts like a world made by machines. People have not made the real world for other people to enjoy finding nice surprises, but for other very good reasons. Sometimes kind people put nice surprises in strange places people might look, but those nice surprises usually are not as good as the money or power you get in computer games. They are still nice!

The parts of the real world that people did not make were not made by machines either. They were there before people, and people can find lots of nice surprises in that older world too, if they learn a lot about how it works and what can be found in it.

Worlds made by people for other people to enjoy can be really fun, because the people who made the world can guess what you will do and when, and really try to make those things great for the people enjoying them. But worlds made by machines can be really fun too, because you know no one has put things there ahead of time, so the chance surprises you see and find are surprises just for you! No other person enjoying the world will see them in just the same way. They are different kinds of really fun.

In this way, enjoying a world made by people can be more like a good book or movie. Enjoying a world made by machines can be more like watching a team play a game against another team, or like people making up a funny act. It is exciting to know that no one has planned what will happen before it really happens!

Here again, the real world is more like a world made by machines than a world made by people. We do not know what is about to happen and what is supposed to happen, and that is really exciting. And, well, sometimes that is also really worrying.

animal

The top-level category animal first appears in English in 1398, a borrowing from Old French animal. That descends unchanged from Classical Latin animal, meaning “animal”. It’s the noun form of animālis, meaning “animate” or “living”. Its root anima meant “air”, “life”, or “soul”, ultimately descending from reconstructed PIE *h₂enh₁-, meaning “breathe”, likely onomatopoeic. It’s interesting to note the most salient characteristic of the word for animals is that they’re animate. They move, unlike plants, which are planted in place.

What did English-speakers say instead of animal before 1398? After all, it’s one of the ten hundred words people use the most often! The word it replaced in common use was beast, which turns out to also be a borrowing from Norman French from around 1200! Okay, well, what did English-speakers say instead of beast? The Germanic English word for animal turns out to be deer. You can compare the words for “animal” in other Germanic languages, like German Tier, Swedish djur, or Dutch dier. But what did Old English speakers say when they specifically meant a deer? Probably hart, stag, or hind. You can feel how important deer were in premodern England by the variety and terseness of the words describing them, and that’s even before considering specific terms like buck, doe, or fawn.

I’m fascinated by generic terms like deer that eventually acquired a specific meaning that invalidated their genericness. Another English example is the word corn, which used to be the generic word for “grain” but now specifically means maize. The original meaning is preserved in phrases like peppercorn, meaning granules of pepper, and corned beef, referring to grains of salt. Similarly, apple used to be the generic word for “fruit” but now specifically means apples. Pineapple used to be the word for pine cones, the “fruits” of pine trees, before pineapples were known to English-speakers. The Edenic forbidden fruit being described as an apple originates from a similar genericization in French. Non-European traditions commonly depict the forbidden fruit as a fig or as grapes.

Microorganisms were originally called animalcules, coined by Leeuwenhoek while experimenting with microscopes in 1677. It’s borrowed from Latin, meaning “little animal”. The term is still sometimes used in a technical sense to refer specifically to marine microbes. It was perhaps chosen in analogy to molecule, which was coined in French from the Latin for “little mass” in 1641, but not given its modern meaning until Avogadro proposed it in 1811.

[Image: Diagram of a contemporary tree of life based on cladistics, separating life into three top-level domains and twenty-five kingdoms.]

By 300 BCE, Greek philosophers were clearly dividing life between animals, studied in Aristotle’s Τῶν περὶ τὰ ζῷα ἱστοριῶν / Historia Animalium; and plants, studied in his student Theophrastus’s Περὶ φυτῶν ἱστορία / Historia Plantarum. Linnaeus originally proposed dividing the natural world into three top-level kingdoms in his 1735 Systema Naturae, which translate to the now-familiar “Animal, vegetable, or mineral?” While Twenty Questions was a well-known English parlor game by 1790, its iconic first question wasn’t typically included until the 1840s. The phrase was immortalized in Gilbert and Sullivan’s 1879 opera The Pirates of Penzance by its inclusion in “I Am the Very Model of a Modern Major-General”.

But by then, it was clear there was more to life than those three categories. A proposal to add protists (“primitive” microorganisms) as a top-level division was put forward in 1860, and in 1866 another proposal to remove minerals was submitted and accepted. In 1938, the newly-discovered distinction between prokaryotes and eukaryotes was reflected in four top-level divisions: animal, plant, protist, and monera (prokaryotes). The next eighty years of rapid discovery added more complexity than anyone anticipated. Today, we largely discard the idea of artificial top-level distinctions and let genetic similarity determine how things are categorized. If we had to pick seven second-level divisions roughly akin to those historical top-level divisions, we might propose animals, plants, protozoa, bacteria, archaea, fungi, and chromista. But as you can see in the phylogenetic tree above, those are kind of arbitrary choices.

In-N-Out Burger first added animal style to their secret menu in 1961, named after rowdy drive-thru customers the staff privately called “animals”.

A brief history of time units: Minutes

This is part two of a multi-part series on the origins of the time divisions of a day. Part one can be found here.

Minutes

While there was great demand for clocks accurate enough to measure individual minutes, they did not become a reality until 1656! Even the most accurate clocks in 1400 were expected to drift as much as 15 minutes per day. Accordingly, the word minute does not appear in English until the mid-1400s. Instead, people subdivided hours into halves and quarters when they needed to be more precise. This is preserved in phrases like “half past eight” and “quarter of nine”.

Minute comes from classical Latin minūta, meaning “very small”, and arrived in English through both Arabic and French. Specifically, the Latin phrase pars minūta prīma, meaning “first very small part”, was used in geometry to describe subdivisions of degrees. The ancient Sumerians used a sexagesimal (base-60) numeral system, and nearly all the attributes of the minute stem from Sumerian astronomy dividing circles into 360 degrees, and then each degree into 60 subparts. This convention was passed down for 5,000 years through Babylonian, Greek, Roman, Arabic, and French astronomers before arriving in England.
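Worked out, the borrowed subdivision looks like this:

    1 degree    = 60 “first small parts”  (partes minūtae prīmae)  = 60 arcminutes
    1 arcminute = 60 “second small parts” (partes minūtae secundae) = 60 arcseconds

    The same scheme, applied to the hour:
    1 hour = 60 minutes, and 1 minute = 60 seconds,
    so a second is 1/3600 of an hour, exactly as an arcsecond is 1/3600 of a degree.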

The invention that finally enabled clocks accurate to the minute was the pendulum. Galileo’s studies on the mechanics of pendulums, begun around 1602, would go on to inspire Dutch scientist Christiaan Huygens to construct the first pendulum clock in 1656. By 1690, after decades of iterative improvement, clockmakers had refined the design into what are recognizable as grandfather clocks today. Boasting just seconds of drift per day, these clocks were the first to commonly include minute hands.

Another key component of this hundredfold increase in precision was the addition of a control system. Even pendulums are subject to the laws of physical reality, and exhibit tiny variations in speed due to wear, temperature, and dozens of other variables that stubbornly resist exact calculation. The solution clockmakers ended up devising was the anchor escapement: a mechanism that releases the clock’s gears exactly one tooth per swing of the pendulum, and pays back a small push to the pendulum each time, so the clock can neither race ahead of its oscillator nor let it die out.
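For readers who want the control-system idea in modern terms, here’s a toy feedback loop in JavaScript. It’s purely an analogy, not a model of any clock mechanism: an oscillator running fast or slow gets nudged back toward its target rate each cycle:

    // Toy proportional feedback: correct part of the rate error each cycle.
    let rate = 1.02;      // ticks per second; starts 2% fast
    const TARGET = 1.0;   // the rate we want
    const GAIN = 0.5;     // fraction of the error corrected per cycle

    for (let cycle = 0; cycle < 10; cycle++) {
      rate -= GAIN * (rate - TARGET); // speed up if slow, slow down if fast
      console.log(rate.toFixed(5));   // the error halves every cycle
    }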

[Image: Scan of a printed page from an 1857 American guide to train travel. Most of the page features a table of US cities and their respective local times when it is noon in Washington, DC; the rest describes how to use the table.]

An additional wrinkle is that sunrise and sunset times also change based on your longitude (east-west location). In 1700, the solution to this problem was for every town to keep its own clock synchronized to its own solar noon. In 1799, it was impossible to send information, let alone people or cargo, any faster than a person on a horse. You might imagine sailing ships going faster than that, but a skilled crew sailing with favorable winds might only reach an average speed of 10 km/h (6 mph). This solution only started creating new problems when unimaginably fast rail travel became common in the mid-1800s.

European countries solved this new problem by coordinating on a single standard time per country. For example, the UK began tracking “railway time” in 1840 based on the time observed in Greenwich, London. This solution did not work for the US, which spanned so much longitude that when it was 12:12 in New York (and 12:24 in Boston, and 11:18 in Chicago), it was also 9:02 in Sacramento. The eventual solution of resetting every single city’s time to standard hour-wide time zones was only proposed in 1870 and enacted in 1883, on an occasion called “The Day of Two Noons”.
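The arithmetic behind those odd offsets is just the Earth’s rotation:

    360° of longitude / 24 hours = 15° per hour = 1° every 4 minutes

    New York (about 74°W) to Chicago (about 87.6°W) spans about 13.6° of longitude:
    13.6° × 4 minutes per degree ≈ 54 minutes, exactly the 12:12 vs 11:18 gap above.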