Kiyoshi Kurosawa

I laughed at this review of Kiyoshi Kurosawa's movie Creepy on Letterboxd:

I miss the old Kiyoshi. The can't-be-sold Kiyoshi. Back in the fold Kiyoshi. That was the bold Kiyoshi. Remember Cure Kiyoshi? That wasn't your Kiyoshi? 'Cause that was my Kiyoshi. Damn, that was fly, Kiyoshi! You came with Kairo, Kiyoshi. That shit was fire, Kiyoshi! Even Bright Future Kiyoshi, that ill repute Kiyoshi. I dug it all, Kiyoshi. So why'd you stall, Kiyoshi? And then that Journey to the Shore? You got some gall, Kiyoshi!
 
I miss the real Kiyoshi. I miss the real Kiyoshi. The danger you can't see but you can feel Kiyoshi. Stain on the wall Kiyoshi. Man, that was all Kiyoshi. Remember jellyfishin' mesmerism baller Kiyoshi?
 

Click through to read the rest. Letterboxd doesn't seem to have a large audience, and the site is still sluggish, but it's gathered what feels like the last of the cinephiles in a cozy little commune. Whereas many still turn to Rotten Tomatoes or Metacritic to make filmgoing decisions, I'm far more reliant on reviews from folks I follow on Letterboxd.

Having watched over 3,000 movies in my life, I now crave both narrative and formal novelty much more than I used to, and the average mainstream film has lost a lot of its appeal. I still love to sit in a darkened theater, alone with just the image on the screen. True, there are only so many plots, but how movies choose to shoot them can always do with renewal. A quick scan of ratings and reviews on Letterboxd is a far more reliable guide to the type of movie I'm likely to enjoy at this stage in my life. In the area of aesthetic evaluation, score a small victory for niche-community-based reviews over the professional critic community (what little of it remains) and over algorithms like Netflix recommendations.

I'm a Kiyoshi Kurosawa fan, but I've yet to see Creepy. I'll always have a sentimental attachment to Kurosawa because Cure was the first film I remember seeing at a film festival, at the Seattle International Film Festival way back in 2001 (?). It is creepy and sublime and a great introduction to his techniques for building dread. When I think of his movies, I think of suspense built out in a single shot, usually a long or medium shot, with no cuts, as if the other, more famous Kurosawa had decided to venture into horror and suspense. Kiyoshi Kurosawa would make an interesting VR director.

You see a Kurosawa scene playing out almost as if shot with a camcorder pointed at a mundane scene from everyday life, and then, bit by bit, you spot it. Evil. Uncoiling almost casually, camouflaged because it moves at the same pace as everyday life. It's what I think of as his signature style, a way to locate the horror hiding in plain sight amidst the seeming order of everyday Japanese life. I've only seen Cure that one time at SIFF, and yet I can still picture some of its scenes, so striking was the mise-en-scène.

If you're a fan of Se7en or No Country For Old Men, give Cure a spin. I deliberately chose two very famous movies, though they are formally very distinct from Kurosawa's style, because they rhyme thematically. They inspired, in me, a claustrophobic sense of dread. The most terrifying evil is the one that can't be explained, can't be understood. In confronting it you look into the abyss.

Facts are still high cost, low virality

My friend Aaron wrote me in response to my piece Tower of Babel. His note was thoughtful, and he gave me permission to copy it here.

Regarding filter bubbles: you addressed information consumption but not information production.
 
Have you ever been to an out-of-control town-hall meeting? You know: one of those nightmare evenings on a hot-button issue that's poorly moderated, overly emotional, under-factual, with low information quality, self-promotional speakers, hardened political positions and an absolute din of side conversations? Such things happen in the real world, too, and that's what the current online state of affairs reminds me of.
 
Part of the reason for this is that Facebook, Twitter and online-comment forums have driven barriers to entry to near zero for OPINION production and broadcasting. And in the attention economy, a frictionless virality is the Holy Grail for any profit-maximizing firm. Nothing drives virality like outrage. And nothing drives opinion production by Homo sapiens sapiens like the opportunity to be liked a whole bunch instantly. Facebook has pulled out all the design stops to encourage attention whores to feed the social graph and has used its market power to usurp news traffic while avoiding a concomitant journalistic responsibility. Twitter => 140 characters => anyone can tweet => often. And of course attention-economy profits are further increased for firms that can also minimize their moderation and editorial costs via automated algorithms and user-operated filters.
 
Is the end result at all surprising?
 
Add to these things the explosion of images, meme gifs and video -- further developments that change the nature of what is communicated and the relative densities of information vs. emotion; we have basically experienced the TV-ification of a once text-based Internet. You will recall that much of the BETTER-WORLD HOPE of the Internet 20 years ago was projected from the idea that the web was a super-fancy set of linked-up electronic books and that email was the means for any scholar to write to any other for the advancement of knowledge [imagine any suitable emoji here]. 
 
Meanwhile, journalism's FACT-finding remains expensive while its business model has been eviscerated. And: few facts go viral.
 
The results for our political discourse: 1) the raw information published today is lower average quality; 2) the mechanisms for information refinement are dramatically weaker; 3) the incentive to be heard drives a Darwinian process towards particular signals that cut through whatever present background exists.
 
I think adding these production-side aspects to the consumption-side points that you made provides a more complete picture. Also important (to my eyes anyway) is that we seem to be entering a period of some paradigmatic uncertainty in the United States and globally; such uncertainties would stress political discourse no matter the technological backdrop. It might only be possible to understand some developments with hindsight.
 

A question for readers: what systems, in any realm, at any point in time, have rewarded quality over provocation for a mass audience? I can think of systems that have done so at very small scales, but when the problem to be tackled has scale on its side, the solution may need to work at that same scale.

The "always on" computer

At one of the early company All Hands meetings during my time at Amazon (1997-2004), during the employee Q&A with Jeff Bezos, someone asked him what change in the world might have the largest positive impact on Amazon's business (I don't remember the exact phrasing of the question, but it was along those lines).

I'll never forget his response, which seemed really strange to me at the time. He said the thing that would be the biggest game changer was an "always on" computer. Kids these days may not remember this, but back then we all worked on laptops or desktops that booted Windows, and each time you turned one on, it took a really long time before it was ready to use, on the order of minutes. My usual morning ritual at Amazon was to boot my computer and then go grab a drink from the break room to make use of the wait.

We were all highly attuned to any friction in the shopping process, but my mind gravitated to all the downstream pain points in ecommerce, like shipping fees and delivery transit times. Bezos, as always, was already thinking far beyond that, looking upstream, to the near future.

With an always on computer, he explained, you could turn it on like a television or a light bulb and it would just be on, immediately, which might be possible if the computer ran mostly off of RAM (yes, this was the age before SSDs, which I guess he also saw coming). This would get more people online more often, growing the potential market of Amazon shoppers, who had to be online to access our store.

Of course, I think he'd even admit that he had no idea the "always on" computer would take the form of a smartphone connected to a cellular network. Not only do these computers turn on instantly, they're actually almost never off. What's more, they're not just always on but always connected.

I don't know if Amazon All Hands meetings these days are still as interesting as they were back then, but I wish I had written down Jeff's responses to the most interesting questions from the ones I attended. He dropped enough wisdom in those Q&A sessions to make for a succinct and brilliant business book.

Tower of Babel

It's been a long time since I've written, and I'm out of shape. Let's go long for this one, to make up for lost time.

This first real post of 2017 has to acknowledge the year that just concluded, which still lingers in the mind like an unwelcome houseguest who vomited on the carpet and is passed out on the living room sofa the next morning.

To begin, let's pull out two responses to the 2017 edition of the Edge annual question, which asks, "What scientific term or concept ought to be more widely known?"

The first response is from Eric Weinstein, who nominates Russell Conjugation, a term with which I'm indeed unfamiliar, and one which plays into my weakness for fascinating concepts with cryptic names.

Russell Conjugation (or “emotive conjugation”) is a presently obscure construction from linguistics, psychology and rhetoric which demonstrates how our rational minds are shielded from understanding the junior role factual information generally plays relative to empathy in our formation of opinions. I frequently suggest it as perhaps the most important idea with which almost no one seems to be familiar, as it showed me just how easily my opinions could be manipulated without any need to falsify facts.
 
...
 
The basic principle of Russell Conjugation is that the human mind is constantly looking ahead well beyond what is true or false to ask “What is the social consequence of accepting the facts as they are?”  While this line of thinking is obviously self-serving, we are descended from social creatures who could not safely form opinions around pure facts so much as around how those facts are presented to us by those we ape, trust or fear. Thus, as listeners and readers our minds generally mirror the emotional state of the source, while in our roles as authoritative narrators presenting the facts, we maintain an arsenal of language to subliminally instruct our listeners and readers on how we expect them to color their perceptions. Russell discussed this by putting three such presentations of a common underlying fact in the form in which a verb is typically conjugated:
 
I am firm. [Positive empathy]
You are obstinate. [Neutral to mildly negative empathy]
He/She/It is pigheaded.  [Very negative empathy]
 
In all three cases, Russell was describing people who did not readily change their minds. Yet by putting these descriptions so close together and without further factual information to separate the individual cases, we were forced to confront the fact that most of us feel positively towards the steadfast narrator and negatively towards the pigheaded fool, all without any basis in fact.
 

The next concept in the recipe is coalitional instincts, nominated by John Tooby. Forming coalitions was one of the skills that elevated Homo sapiens to the top of the animal kingdom.

Every human—not excepting scientists—bears the whole stamp of the human condition. This includes evolved neural programs specialized for navigating the world of coalitions—teams, not groups. (Although the concept of coalitional instincts has emerged over recent decades, there is no mutually agreed term for this concept yet.) These programs enable us and induce us to form, maintain, join, support, recognize, defend, defect from, factionalize, exploit, resist, subordinate, distrust, dislike, oppose, and attack coalitions. Coalitions are sets of individuals interpreted by their members and/or by others as sharing a common abstract identity (including propensities to act as a unit, to defend joint interests, and to have shared mental states and other properties of a single human agent, such as status and prerogatives).  
 
Why do we see the world this way? Most species do not and cannot: Even those that have linear hierarchies do not: Among elephant seals, for example, an alpha can reproductively exclude other males, even though beta and gamma are physically capable of beating alpha—if only they could cognitively coordinate. The fitness payoff is enormous for solving the thorny array of cognitive and motivational computational problems inherent in acting in groups: Two can beat one, three can beat two, and so on, propelling an arms race of numbers, effective mobilization, coordination, and cohesion.
 

As with so many things in life, a source of strength is also the root of weakness. They can be a great people, Kal-El, but not if they keep ganging up on each other for no reason other than to feel the psychological comforts of being in an in-group.

This raises a problem for scientists: Coalition-mindedness makes everyone, including scientists, far stupider in coalitional collectivities than as individuals. Paradoxically, a political party united by supernatural beliefs can revise its beliefs about economics or climate without revisers being bad coalition members. But people whose coalitional membership is constituted by their shared adherence to “rational,” scientific propositions have a problem when—as is generally the case—new information arises which requires belief revision. To question or disagree with coalitional precepts, even for rational reasons, makes one a bad and immoral coalition member—at risk of losing job offers, her friends, and her cherished group identity. This freezes belief revision.  
 
Forming coalitions around scientific or factual questions is disastrous, because it pits our urge for scientific truth-seeking against the nearly insuperable human appetite to be a good coalition member. Once scientific propositions are moralized, the scientific process is wounded, often fatally.  No one is behaving either ethically or scientifically who does not make the best-case possible for rival theories with which one disagrees. 
 

You can see where I'm headed with all of this, especially if you're still staggering out of 2016 like a boxer regaining consciousness after being knocked out. "What happened?" you can read on every fighter's lips as they open their eyes and blink at the concerned faces looking down at them on the canvas. "What happened?!" asked a dazed populace after Election Day 2016.

Our next POTUS, whether wittingly or not, weaponized Russell Conjugation and our coalitional instincts and performed a jiu-jitsu toss on his opponents, leaving much of the country wondering whether winner-take-all elections for the Presidency are a good thing in a country so evenly divided.

It seems darkly appropriate, in this Internet age, that a troll now occupies arguably the most powerful seat in the world. It's a miracle of the worst kind that someone can openly flout nearly every convention of human decency and civility, not to mention statecraft, and walk about untouched. Something's rotten in the state of Denmark, all right, but no play is needed to catch the conscience of this king. It's remarkable how every tweet of his sifts the populace into the same factions: one side screams in disgust and disbelief, the other piles on with glee. Let's call that technique of tweeting the Trump Conjugation. Sad!

Many claim the Internet creates filter bubbles, but I believe the mechanism by which the Internet amplifies tribalism doesn't work the way most people describe it. The common explanation is that we form networks with like-minded people and only hear the opinions of those who agree with us, reinforcing our narrow world views.

My belief is that the Internet has increased our exposure to diverse viewpoints, including those from oppositional tribes. I suspect everyone who uses the Internet regularly encounters more diverse opinions, in absolute terms, than before the rise of the Internet, and there is research (PDF) to support this. Our information diet is more diverse now; compared with the age before social media, or even the Internet itself, we're exposed to more opinions that both strongly confirm AND counter our beliefs.

This shouldn't be so surprising or counterintuitive. Instant access to vast amounts of information is what the Internet does better than any medium in history.

In fact, ask many and they'll admit they yearn for the more peaceful age before they were made aware of how those in other tribes thought. The sliding scale of horror starts, relatively harmlessly, with a Facebook post from a friend you didn't realize was a staunch Republican, or an email forward from an uncle still subscribing to a casual racism acceptable in an earlier generation when his views hardened. Maybe it's a segment from Fox News which is, yes, only seen because it's excerpted on The Daily Show, but seen all the same. How can people think this way?

Move up one notch on the scale of horrors and you might find the vitriol in online comment threads attached to articles and op-eds, which one sometimes scans out of some misplaced optimism and which stuns you with the sudden violence of a drive-by shooting. Pull on that thread of toxicity even further and you may end up encountering direct harassment on services like Twitter. In the pre-Internet age, I don't recall ever having such fine-grained resolution on the opinions of the opposition. What many call a filter bubble might just be a psychic defensive shield.

The pre-Internet age actually felt much more like a filter bubble, one in which we had a comforting if illusory feeling of kinship with our fellow citizens. Many signal their cosmopolitanism by decrying life in the filter bubble, but what few of those admit is that life outside the filter bubble is a brutal wasteland (minus the poetic language of a Cormac McCarthy, who might be the only one to stare into the abyss of 4chan and find in there a new bogeyman to rival Anton Chigurh for pure nihilism).

[Many disagree and still hold resolutely to the thesis that the Internet has cocooned us in filter bubbles, and I'm open to that argument if supporters bring data to the table rather than just their own anecdotal impressions. If there was ever a year that should have made us all suspicious of our feelings about what was happening, 2016 was it. Anecdotal journalism can be marshaled in support of almost any thesis, and that is its fundamental weakness.]

What should terrify us, and what may be the real and deeper problem, is how we reacted to this explosion in diverse thought and information which the Internet unlocked. The Utopian dream was that we'd rethink our hypotheses in the face of all the new ideas and conduct rational debates like civilized adults. A more informed populace would be a wiser one.

Instead, we've regressed, forming teams and grabbing stones to hurl at each other like primates. Welcome to the jungle. 2016 felt like a Hobbesian anarchy of ideological warfare, and it has turned Twitter into a bar where drunken brawls break out every few minutes. It's a downward spiral of almost insufferable negativity from which Twitter may never recover, exacerbated by a 140-character limit, one side effect of which is that even reasonable people sound smug. The English language is capable of nuance, but not often in 140 characters, which is another reason Twitter's absolute refusal to update that outdated rule is so short-sighted.

In contemplating 2016, I went back and reread the myth of the Tower of Babel. Here's the story, as excerpted on Wikipedia:

1. Now the whole world had one language and a common speech. As people moved eastward, they found a plain in Shinar and settled there.
 
2. They said to each other, “Come, let’s make bricks and bake them thoroughly.” They used brick instead of stone, and tar for mortar.
 
3 And they said, “Come, let us build ourselves a city, and a tower whose top is in the heavens; let us make a name for ourselves, lest we be scattered abroad over the face of the whole earth.”
 
4 But the Lord came down to see the city and the tower which the sons of men had built.
 
5 And the Lord said, “Indeed the people are one and they all have one language, and this is what they begin to do; now nothing that they propose to do will be withheld from them.
 
6 Come, let Us go down and there confuse their language, that they may not understand one another’s speech.”
 
7 So the Lord scattered them abroad from there over the face of all the earth, and they ceased building the city.
 
8 Therefore its name is called Babel, because there the Lord confused the language of all the earth; and from there the Lord scattered them abroad over the face of all the earth.
— Genesis 11:1–9
 

So much in one short story, beginning with the recognition of the power of language to produce coordinated action, with which mankind would be capable of anything. As God phrased it, "nothing that they propose to do will be withheld from them." (It's not ironic but intentional that I pull this reference from Wikipedia, one of the modern exemplars of coordinated human action.) Language and money are among mankind's greatest creations, allowing for trust and coordinated action on a scale never possible before.

The Tower of Babel story then concludes with the scattering of mankind all over the Earth, a succinct metaphor for the rise of tribalism, with all its benefits and ills.

What's worth reconsidering is the causality outlined in the second half of the story. The tale of Babel (root of the word babble) claims that by giving humans different languages, God fractured mankind into rival tribes incapable of coordinating with each other.

What if the causality is mistaken? What if, even when we share the same language, we cannot, will not, understand each other? That's what 2016 felt like. Russell Conjugation might be a design flaw of the English language. Even among all of us who speak English, even when we're watching the same data, whether it's video or text or economic charts, we can't seem to agree. If differences in language are what divide us, translation is a solution. But if even a common language can't overcome our tribal instincts or our mood affiliation, the solution is not as clear.

Speaking of mood affiliation, Steven Pinker, in an interview with Tyler Cowen, wondered:

I’m hoping that naming and shaming and arguments will give free speech a greater foothold in academia. The fact that academia is not the only arena in which debates are held, that we also have think tanks and we also have a press. We also have the Internet.
 
How we could set up the rules so that despite all of the quirks of human nature — such as intellectual tribalism — are overcome in our collective arena of discourses is, I think, an absolutely vital question, and I just don’t know the answer because we’re seeing at the same time — there was the hope 20 years ago that the Internet would break down the institutional barriers to the best ideas emerging.
 
It hasn’t worked out that way so far because we have the festering of conspiracy theories and all kinds of kooky beliefs that somehow the Internet has not driven out, but if anything has created space for. How we as a broader culture can tilt the rules or the norms of the expectations so that if you believe something that’s false, eventually you’ll be embarrassed about it, I wish I knew. But that’s obviously what we ought to be striving for.
 

I wish I knew, too.

Plenty of much smarter people fear Artificial Intelligence; I don't know where I fall in that debate. However, I'm firmly in the camp hoping cars learn to drive themselves, and that they'll far surpass humans in improving the safety of our roads. But if replacing human fallibility with AI is a path to social good in driving, why not elsewhere?

Humans are capable, at their peak, of being very rational thinkers. But what's concerning for the world is how rarely we operate at the limits of our potential, and in how many contexts we become irrational, or even complete idiots. AI can take many of the best aspects of human logic and scale them, make them reliable.

Some of the best CEOs I've encountered in my life, the Jeff Bezoses and Mark Zuckerbergs of the world, are capable of being rational a much higher percentage of the time than the average person. They seem far less susceptible to the usual cognitive biases. When I say someone thinks like a computer, many interpret that as an insult, whereas I see it as a supreme compliment. This is why most middle managers may someday be replaced by AI.

Too much of our pop culture, especially Hollywood, venerates human emotion, despite its often crippling effect on our thinking. Nothing's wrong with emotion per se, but very few movies have the courage to follow rational thought to its extremes without softening it with some signs of humanity (read: frailty). The closest pop culture icons of rational thinking that leap to mind are Sherlock Holmes, House (who was based on Sherlock Holmes: Holmes --> Home --> House), and Spock of Star Trek. Watch enough movies about them, however, and there is always a moment where each of these characters learns about the virtues of love and humanity from Watson or Captain Kirk or another of the less rational figures around them.

Why pull the punch? If Spock had to confront the trolley problem, he should by all accounts save the lives of the five over the life of the one, even if the one were his friend Kirk (I'm conveniently leaving out the option in which Spock takes Kirk's place on the tracks, because Hollywood would, of course, choose that route). We should applaud that, but we are human, so we shudder.

Which leaves Us versus Them. The darker side of coordinated human action. Us versus Them is so powerful a force that confronting it can feel demoralizing. It is everywhere. Even an artificial construct like sports can set people against each other in ways that incite violence. This leaves us a challenge, one writ small in corporate environments: how do you turn zero-sum games into positive-sum games? Because if you can't, perhaps we're doomed to duke it out for eternity.

Two minor pop culture spoiler alerts here, for those who haven't read through The Dark Forest (book two of The Three-Body Problem trilogy, which I just finished today and which has left me giddy to dive into the concluding book) and Watchmen, by Alan Moore and Dave Gibbons (without thinking too deeply, it's likely my favorite graphic novel of all time). If you haven't read those two, this post ends here for you, though if you've read Watchmen and not The Dark Forest then perhaps you won't mind a minor spoiler.

The Dark Forest's entire story hinges on two axioms of cosmic sociology, described early in the book in a somewhat casual conversation:

“First: Survival is the primary need of civilization. Second: Civilization continuously grows and expands, but the total matter in the universe remains constant.”
 
To derive a basic picture of cosmic sociology from these two axioms, you need two other important concepts: chains of suspicion, and the technological explosion.
 

It's not often that a book gives you all the clues to decipher the plot up front, but one of this book's pleasures is that it does so in such an offhand way that when all becomes clear at the end, you revisit that earlier moment the way viewers scanned back through The Sixth Sense after the ring dropped to the floor. It's relevant here because of the nature of zero-sum games, and those with an interest in game theory will love The Dark Forest.

In Chapter XI of Watchmen (and here I'll say the spoiler is much larger, so if you haven't read it just stop here, do not pass Go, do not collect $200), Adrian Veidt reveals that at a similar moment of despair about human nature, he hatched a plan to harness the power of Us versus Them by turning all of the world into Us. To do so, he creates a fictional Them to unite the world, the famous Watchmen Monster.

It's one of the most mind-blowing mystery reveals in my fiction reading life, and I had a similar moment of delight at the end of The Dark Forest.

If you can't beat Us vs Them, just make a Them so daunting that everyone joins Us (though taken out of context, yes, this looks less daunting and more like a hippie octopus tripping on LSD).


It may seem dire to turn to extreme science fiction plot twists for solutions to our current predicament, but given the quality of the stories cited, perhaps they deserve credit for seeming remotely plausible. If or when humanity evolves past these fundamental flaws in our design, centuries down the road, it may seem, from this vantage point at the start of 2017, as much like science fiction as an iPhone might to a caveman.

Watership Down

A few people I follow on Twitter acknowledged the recent death of Richard Adams, author of Watership Down. I read the book in grade school for a class and remember being thoroughly absorbed by it, but the details of its language have, for the most part, faded from memory.

However, a passage someone cited from the novel was so poetic as to elevate the book in my memory all at once. It might not be the children's book I recall it to be.

“The full moon, well risen in a cloudless eastern sky, covered the high solitude with its light. We are not conscious of daylight as that which displaces darkness. Daylight, even when the sun is clear of clouds, seems to us simply the natural condition of the earth and air. When we think of the downs, we think of the downs in daylight, as we think of a rabbit with its fur on. Stubbs may have envisaged the skeleton inside the horse, but most of us do not: and we do not usually envisage the downs without daylight, even though the light is not a part of the down itself as the hide is part of the horse itself. We take daylight for granted. But moonlight is another matter. It is inconstant. The full moon wanes and returns again. Clouds may obscure it to an extent to which they cannot obscure daylight. Water is necessary to us, but a waterfall is not. Where it is to be found it is something extra, a beautiful ornament. We need daylight and to that extent it is utilitarian, but moonlight we do not need. When it comes, it serves no necessity. It transforms. It falls upon the banks and the grass, separating one long blade from another; turning a drift of brown, frosted leaves from a single heap to innumerable flashing fragments; or glimmering lengthways along wet twigs as though light itself were ductile. Its long beams pour, white and sharp, between the trunks of trees, their clarity fading as they recede into the powdery, misty distance of beech woods at night. In moonlight, two acres of coarse bent grass, undulant and ankle deep, tumbled and rough as a horse's mane, appear like a bay of waves, all shadowy troughs and hollows. The growth is so thick and matted that even the wind does not move it, but it is the moonlight that seems to confer stillness upon it. We do not take moonlight for granted. It is like snow, or like the dew on a July morning. It does not reveal but changes what it covers. And its low intensity---so much lower than that of daylight---makes us conscious that it is something added to the down, to give it, for only a little time, a singular and marvelous quality that we should admire while we can, for soon it will be gone again.”
 

I will have to revisit it this year. I seem to recall a colony of rabbits who choose to live under authoritarian rule. That seems somewhat relevant now, no?