tycoon
The business magnate tycoon first appears in English in 1857 as a transliteration of the Japanese honorific 大君 (taikun). It was a term of respect for the shogun of Japan, indicating that while he was neither the emperor nor of imperial lineage, he was nonetheless the head of state. In 1861, US Cabinet members started jokingly referring to President Lincoln as the Tycoon. Tycoon’s meaning generalized to any important person in business during the late 1800s.
大君 has an exceptionally long history. It was the Japanese shogun’s diplomatic title for over 250 years, until the shogunate’s dissolution in 1868. Tokugawa Hidetada, the shogun from 1605-1623, first chose the title as a rebuke to Chinese imperialism. For over a thousand years, East Asian rulers had deferred to China under the custom of 外王内帝, “Emperor at home, king abroad”. Under this system, domestically rulers could style themselves 帝 (dì), meaning “emperor”, but internationally they had to be called 王 (wáng), meaning “king”, in deference to the Emperor of China (皇帝) ruling above them all. Tokugawa chose to be called “not emperor” to thumb his nose at the practice, while not actually asking to be crushed by the Ming military.
Japanese Empress Kogyoku (ruling 642-661) was the first recorded monarch to use the title 大君 over the customary 天皇 (tennō), signifying that she was not a direct descendant of the legendary emperors. This echoes the title’s c. 800 BCE usage in the 易經 (I Ching; Yijing in modern transliteration), also denoting a ruler without any imperial lineage.
1863 photo of Abraham Lincoln. The iconic black-and-white photograph features Lincoln staring directly at the camera. The texture of his skin looks strangely overemphasized, typical of many photos from the 1860s.
The names describing Gilded Age robber barons like Morgan, Carnegie, Rockefeller, and Charles Schwab[1] have a pleasing diversity of origin. Tycoon is from Japanese. Magnate is from Latin magnātēs, meaning “great men”. Baron is from Norman French barun, meaning “baron”. Mogul is from Iranian Persian مغول (moġul), meaning “Mongol”: the people of Mongolia, and specifically the rulers of the Mughal Empire, who were rich and powerful enough to construct the Taj Mahal.
The video game genre of tycoon games, where you manage a business, is named after Railroad Tycoon (1990). The genre itself is much older, with notable early examples like The Sumerian Game (1964) and M.U.L.E. (1983), but it didn’t get its current name until the 1990s. The most notable game series with tycoon in its name is RollerCoaster Tycoon (1999).
There’s also an unrelated card game called “Tycoon”, although I learned it as the drinking game “Asshole” in the 1990s. Other English variants include “President”, “Tichu”, and “The Great Dalmuti”. They all seem to be ultimately based on 争上游 (Zhēng Shàngyóu), meaning “Struggling Upstream”, a card game invented in Shanghai in the 1950s. A more recent popularity bump can be attributed to the Japanese variant 大富豪 (Daifugō), meaning “Grand Millionaire”, appearing in the 2004 translation of the manga フルーツバスケット (Fruits Basket).
[1] Apparently the most effective way to obliterate a famous billionaire from history is the emergence of an unrelated famous billionaire who happens to have the same name.
Thirty posts per month (ppm)
What doesn’t kill you makes you stranger. So I return from having written thirty posts in thirty days, totaling twenty-one thousand words. A novel is around 50,000 words, so if I had signed up for NaNoWriMo, I would have failed. Fortunately, I had actually signed up for Inkhaven.
I set a few extra restrictions for myself to breed creativity and promote quality. I’m glad I was able to maintain them all month:
- No writing about Inkhaven.
- No writing about writing.
- All posts must be visible to everyone.
Most other residents reflected on the month’s trials on the last day of the retreat, November 30. In keeping with my restrictions, I decided to rewrite a post I’d written before instead. So you’re only getting this reflection now. It was nice to model an abundance mindset around post topics by revisiting an old one. It was also a stark reminder of how much improvement you can get from a month of deliberate practice.
Lighthaven was a lovely venue. You could tell its caretakers had been optimizing it to host semi-professional conferences for years. It was stuffed with nooks for conversation, books for reading, and carefully maintained natural beauty. The environments were designed for variety. Furniture I’ll remember includes a kotatsu, some very nice weighted blankets, a 1750s globe, and a programmable zen garden affectionately named Sisyphus. I’d recommend attending one of the many conferences held there if you get the chance, just to appreciate the interior design.
Everyone came in with their own misconceptions of what being a writer in an environment focused entirely on writing would be like. For some reason, I envisioned myself reading more books. Instead, I found myself reading other residents’ posts. Three hundred of them, apparently. Ten posts a day. I learned a lot about what works and what doesn’t through repetition.
Reading ten posts a day without really trying contrasted nicely with the difficulty and time investment required to write just one post a day. Here are the results of all that investment:
Posts I regret but will fix up later
Posts I feel okay about
- Infinite Lives, 1990: Super Mario World
- The wordification pipeline
- Infinite Lives, 2000: Diablo II
- Why histories of words?
Posts I was anxious about writing that would have been really hard to make outside of this context
- Experiments in self-medication
- 2048
- Misaligned, uncontained slop
- A brief history of time units: Seconds
- Left behind
- A 100-year-old website
- Why are we so bad at reasoning about randomness?
- That’s right, it goes in the religion hole
- On truth
Posts I feel good about
- Like eyeglasses for the mind
- schmancy
- Effective pre-journaling
- Clubs, really? That’s what you call those?
- katsu
- Infinite Lives, 2010: Super Meat Boy
- A brief history of time units: Minutes
- I think I will cause discord on purpose
- neapolitan
Posts I’m proud of
- Mickey Mouse stole our cultural legacy
- Infinite Lives, 2024: Caves of Qud
- Clean Air and Clean Water
- Engineering words for everyone
- Solar Revolution
- How was the game world made?
- animal
Photo of handmade pottery on an outdoor table. Over two dozen slightly different pots are arranged in rows.
That’s a really good ratio! The biggest impact is the list of posts I would never have considered writing publicly before this. One book I did read that month, Art & Fear, provided a sentence that will stay with me. “Making art provides uncomfortably accurate feedback about the gap that inevitably exists between what you intended to do, and what you did.” While the gap feels smaller now, more important was learning not to flinch from that feedback. Internalizing that your work is worth sharing with the world, even if it falls short of what you envisioned.
I was also glad the forcing function forced me to work on two series of posts I’d been having trouble writing. I’ve wanted to write Infinite Lives for literally ten years, and now four of its 50 parts are really real. A brief history of time units has been eating at me for less time, but still long enough to be frustrating.
I also came away with a new appreciation for editing. I spent over half an hour editing every post. The process reminds me of refactoring code to be cleaner and more cohesive. But people are more forgiving readers than machines, so editing feels significantly easier than refactoring. Trusting that the edited result will be better than the first draft has even helped me grapple with the gap between vision and reality.
Socially, I acted on advice to try riskier things. A community I cared about where any awkwardness would be temporary made for a good environment to try showing more sides of myself I reflexively hide. Rose’s post on intentions gave me a well-timed framework to experiment with.
What’s next? I explicitly decided to focus on writing over making improvements to the blog, so this week I’ll work on the accumulated tweaks. Then I’d like to return to my earlier baseline of one or two posts a week, with closer to 50% etymology than last month’s 12%. See you on the internet!
Like eyeglasses for the mind
36 years ago, Steve Jobs famously described computers as “like a bicycle for the mind”. By analogy, I want to signal boost a tool for thought I think of like eyeglasses for the mind: Jesse Schell’s lenses, as described in his 2008 book The Art of Game Design.
Schell literally wrote the book on game design. 18 years later, his work is considered a default textbook, and priced accordingly. At the time, its novel theorizing, addressing inane-sounding but actually important questions like “What is a game?”, was a revelation.
The world is more complicated than you think, even if you already think the world is very complicated. And the wicked examples given in that article aren’t even alive, much less the combinatorial explosion involved in consciousness. To make sense of all this rich wildness, we must reduce it to tractable ideas using simplifying abstractions. Using world models. All models are wrong, but some models are useful. No model truly captures reality — the map is not the territory — but intentionally swapping between models can give you a fuller picture of it.
Schell’s book models this modeling with the theory of lenses. Game design is an art, a craft, and a science. A good game designer has to inspect and analyze their creation from very different angles. Schell uses lenses as a metaphor to explicitly decide what angle to look from this time. You can imagine switching between lenses like visors in Metroid Prime. Maybe today it’s useful to detect motion, or heat, or sound.
1895 illustration of lens shapes, as seen from the side. Six different cross-sections, shaded with lines, are labeled with numbers from 1-6. 1 has two convex sides and 4 has two concave sides.
Some of the lenses I find consistently useful for understanding the world are:
Money. Who’s paying for this? Who are they paying? How does your job deliver value? Why do they pay you? What does the person who decides whether to pay for things want? What does the internal financial auditor want? Who paid to put this here? Who pays to maintain it? Who thinks this person’s job is valuable enough to pay for it?
Incentives. How does this person’s manager know how good a job they’re doing? Metrics, vibes, peer review? Does what you want from them align with any of those legible things, or are you stuck appealing to your mutual relationship or their kindness as a person? How does the company track how good a job this team is doing?
Status. How do this person’s actions make them look good to their peers? Their reports? Their superiors? What are this person’s clothes telling the world about them? How important do they think that is? How can I interpret this action as an attempt to gain status, or prevent losing status? Or were they trying to make this other person lose face? What values did they appeal to?
Problems. This object, this process, this clause, this structure, this checklist. Every single thing you can see. It’s not here arbitrarily. Someone decided to put it here because they had a problem and tried to solve it. What was the problem? Is that still a problem today? How would you know? How can you test that?
Data. This person or organization makes decisions based on what they know about the world. How do they know what they know about the world? What’s easy to track? Clickthrough rate, unique visitors, average call time, unit sales? What does it cost to track those things? What would they be tracking instead if it were easier?
Needs. What does this person need from the world, or from me, that they’re not getting? Quickly check baselines: food, shelter, sleep. Do they feel unsafe? Do they feel valued? Do they have autonomy, mastery, and purpose? Belonging, improvement, choice, equality, predictability, and significance?
Those six lenses are just the ones I personally find most useful. If you subscribe to lens theory, introspection will lead you to other lenses that are useful to you, and practice will tease out their relative importance.
While lenses are useful and powerful, like eyeglasses, they are a tool for noticing. Lenses won’t help you decide what actions to take once you’ve noticed something. A different metaphor would better serve you there. If you have a good one, I’d love to hear about it. Go forth and see the world more clearly!
A brief history of time units: Tenths
This is part four of a multi-part series on the origins of the time divisions of a day.
[Part 1: Hours] [Part 2: Minutes] [Part 3: Seconds]
Tenths
If we extend the pattern we see with hours, minutes, and seconds, then seconds should be subdivided into 60 thirds. Instead, we subdivide them into 1000 milliseconds. Imagine what a world where we used thirds would be like. Movies would run at 0.4 fpt (frames per third), fluorescent lights would flicker at 1.67 or 2 fpt, and games would strive for 1 fpt.
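The frame rates above are just per-second rates divided by 60. A minimal sketch, assuming a hypothetical “third” of 1/60 second (the function name is mine):

```python
THIRDS_PER_SECOND = 60  # a "third" would be 1/60 of a second

def per_second_to_per_third(rate_hz: float) -> float:
    """Convert a per-second rate (fps or Hz) to a per-third rate."""
    return rate_hz / THIRDS_PER_SECOND

print(per_second_to_per_third(24))   # film at 24 fps: 0.4 fpt
print(per_second_to_per_third(100))  # flicker on 50 Hz mains: ~1.67 fpt
print(per_second_to_per_third(120))  # flicker on 60 Hz mains: 2.0 fpt
print(per_second_to_per_third(60))   # a 60 fps game: 1.0 fpt
```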
In the world we live in, decimalization and metrication became symbols of reason and revolution in 1790s France and America. Decimal time was officially enacted in France in 1793. Each day would be divided into 10 hours. Each hour would be subdivided into 100 minutes, and each minute would be subdivided into 100 seconds. The resulting 100,000 seconds per day is close enough to our 86,400 seconds per day that decimal seconds feel surprisingly reasonable. Decimal hours and minutes did not. The revolutionary regime gave up trying to enforce the unpopular change in 1795. (Swatch would resurrect decimal time 200 years later as Swatch Internet Time. This was also not widely adopted.)
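To see how closely a decimal second tracks an ordinary one, here’s a sketch converting a standard clock reading into the 1793 system; the unit counts come from the paragraph above, and the function name is my own:

```python
SECONDS_PER_DAY = 24 * 60 * 60            # 86,400 standard seconds
DECIMAL_SECONDS_PER_DAY = 10 * 100 * 100  # 100,000 decimal seconds

def to_decimal_time(hours: int, minutes: int, seconds: int) -> tuple:
    """Convert a standard clock reading to French Revolutionary decimal time."""
    fraction_of_day = (hours * 3600 + minutes * 60 + seconds) / SECONDS_PER_DAY
    total = int(fraction_of_day * DECIMAL_SECONDS_PER_DAY)
    decimal_hours, remainder = divmod(total, 100 * 100)
    decimal_minutes, decimal_seconds = divmod(remainder, 100)
    return decimal_hours, decimal_minutes, decimal_seconds

# Noon is halfway through the day, so it falls at exactly 5 decimal hours.
print(to_decimal_time(12, 0, 0))  # (5, 0, 0)
```

A decimal second works out to 86,400 / 100,000 = 0.864 ordinary seconds, which is why it feels so reasonable; a decimal hour, at 2.4 ordinary hours, does not.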
Unlike decimal seconds, decimal meters and decimal grams (and decimal cents!) did successfully spread. By the mid-1800s, there was a clear consensus that any subdivision of seconds should be decimal, like the other metric units. Tenth was a long-established English word for the fraction. English ten (and its derived tenth) shares an origin with German zehn and Dutch tien, rather than French dix, Spanish diez, or Latin decem. Preserving the original term was not a foregone conclusion. For example, English generally prefers the Latin word percent (short for per centum, meaning “by the hundred”) and the derived terms percentage and percentile over its native hundredth (German hundert, Dutch honderd).
Painting of a horse race titled “The 1821 Derby at Epsom”, by Théodore Géricault. Four male jockeys ride racing horses across the English countryside against a backdrop of dark clouds. The horses have all of their legs outstretched off the ground in the “flying gallop” pose.
California tycoon Leland Stanford (who founded Stanford in 1885) commissioned an 1878 photo series called The Horse in Motion. Automatic electrophotographs captured how horses actually moved, revealing errors in human observation. The commonly depicted “flying gallop” pose, airborne with all four legs outstretched, was never actually a part of a running horse’s gait. The indifferent machine proved that what we thought we saw with our eyes was wrong. The photos were an international sensation, warranting a cover story in Scientific American.
In 1823, astronomer Friedrich Bessel observed consistent errors while recording very precise event timings. He created the concept of the “personal equation” to explain the errors: different people had different, but predictable, reaction times. There’s a lot of individual variation, but it takes most people 0.2 to 0.3 seconds to notice a stimulus and act in response. This raised a philosophical dilemma. What was the point of being human if impartial machines were better at seeing than we were?
The shift to impressionist and pointillist styles of painting illustrated a similar reaction to the spread of photography. If a camera could capture the reality of a landscape or stern expression more perfectly than any human could hope to, why compete directly? Why not do something else that machines couldn’t do instead?
Official times at the 1896 Olympics were kept by individual referees with stopwatches. This was barely enough precision to keep records to the tenth of a second. People tried many different approaches in the following decades to achieve Olympic records accurate to the hundredth of a second, but kept running up against the limits of human reaction time. Any precision beyond tenths would require that humans be removed from the loop entirely.
Infinite Lives, 1990: Super Mario World
Super Mario World had impossibly big shoes to fill. Not just the direct sequel to megahit Super Mario Bros. 3 (1988), not just one of two launch titles justifying the purchase of a new ¥25,000 (roughly $470 equivalent) console, but also the first in a new era of video games where anything was possible and technical limitations were a thing of the past. Not only did Super Mario World fulfill those expectations, it’s still commonly used as a creative canvas 35 years later.
This is the fourth in a series of posts exploring video game history by focusing on one game I loved from each year, 1978–2027.
[2000: Diablo II] [2010: Super Meat Boy] [2024: Caves of Qud]
Screenshot from Super Mario World shown at double its native 256x224 resolution. Wearing a yellow cape, Mario is riding Yoshi in a dense forest, flanked by a hovering Goomba, a shell-less Koopa, and an obscured Wiggler.
Super Mario Bros. 3 was already a nearly perfect 2D platformer. What could Super Mario World add to make people sit up and take notice? Levels could have secret exits, rewarding exploration with surprise and delight. You could replay levels, helping the world feel more like a world and allowing you to search for secrets at your leisure. Mario could already fly, but his new yellow cape boasted dynamic speed and altitude control. Most visibly, the Mario team had always wanted him to ride a dinosaur, but couldn’t make that work within hardware limitations until the Super Nintendo.
Super Mario World’s influence extends far beyond its initial release. It had a central role in the development of romhacks, where video game designers would replace the game’s data with their own levels. One Japanese romhack series, Kaizo Mario World (2007), inspired the category of extremely difficult “kaizo hacks”, and eventually the entire precision platformer genre. A tool-assisted speedrun demonstrated a crowd-pleasing arbitrary code execution exploit in 2014. Wilder still, a human-viable arbitrary code execution skipping to the credits was discovered in 2016.
Like many children’s properties, Super Mario World enjoyed a multimedia merchandising blitz that included comic books and a TV cartoon. The Mario Cinematic Universe became a childhood obsession for me. McDonald’s tie-in? Weird cereal? Serial manga? Off-model trading cards? I was there for it, designing levels, drawing comics, and imagining extensions to Super Mario World and its tie-ins like Super Mario Kart (1992), Yoshi’s Cookie (1992), and Mario Is Missing! (1993). I dressed as Mario for Halloween in 1993. It’s hard to overstate its impact on my childhood.
There’s a tough tradeoff to make when categorizing games before 1995. Do you count the year a game was first released in any market, or just the US market? In this series, it makes more sense to count the first release in any market. This approach better showcases the gradual lifting of the era’s technological limitations. One caveat is that Super Mario World’s US influence is inextricably tied to the media landscape it released into in late 1991, not 1990. Another caveat is this series is a personal history, and my own stories about the game center around 1992, not 1990.
In the US, Super Mario World launched alongside Sonic the Hedgehog (1991). From our future perspective, we can assess that Mario’s fundamentals and continuous reinvention handily outpaced Sonic’s attitude and speed. Both hardware and game design needed to advance before Sonic’s core promise of full speed could be realized. That didn’t line up until as recently as Sonic Mania (2017). In contrast, Super Mario World is still the exemplar of a 2D platformer. Super Mario Bros. Wonder (2023) might do more things, do wilder things, and be more kid-friendly, but it isn’t fundamentally better at doing the thing that Super Mario World aimed to do. Anyway, this was all unclear at the time. The two mascots began a heated rivalry that only started dying down a generation later, with Mario & Sonic at the Olympic Games (2007).