Congratulations!

This is a valid RSS feed.

Recommendations

This feed is valid, but interoperability with the widest range of feed readers could be improved by implementing the following recommendations.

Source: http://lesswrong.com/comments/.rss

  1. <?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[LessWrong]]></title><description><![CDATA[A community blog devoted to refining the art of rationality]]></description><link>https://www.lesswrong.com</link><image><url>https://res.cloudinary.com/lesswrong-2-0/image/upload/v1497915096/favicon_lncumn.ico</url><title>LessWrong</title><link>https://www.lesswrong.com</link></image><generator>RSS for Node</generator><lastBuildDate>Wed, 18 Sep 2024 13:35:42 GMT</lastBuildDate><atom:link href="https://www.lesswrong.com/feed.xml?view=rss&amp;karmaThreshold=2" rel="self" type="application/rss+xml"/><item><title><![CDATA[Pronouns are Annoying]]></title><description><![CDATA[Published on September 18, 2024 1:30 PM GMT<br/><br/><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/zrHHLngm5CmQLD2zk/payromvs5z0zqxoxstlu" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/zrHHLngm5CmQLD2zk/efnjfpi84w7qj0uohfm1 180w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/zrHHLngm5CmQLD2zk/qd26pepdorj8ewy3nssa 360w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/zrHHLngm5CmQLD2zk/qia61tquvk88utmvl9di 540w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/zrHHLngm5CmQLD2zk/zjzsel6cnqooi1vlmbhn 720w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/zrHHLngm5CmQLD2zk/rzonqkw1plgduedde8c5 900w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/zrHHLngm5CmQLD2zk/umakiuuqxbqtoulhhlil 1080w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/zrHHLngm5CmQLD2zk/astpcst0pvccplmejcgx 1260w, 
https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/zrHHLngm5CmQLD2zk/p43h9rqaqgru0wrb3iu7 1440w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/zrHHLngm5CmQLD2zk/ncvbrjuw9391velozgcc 1620w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/zrHHLngm5CmQLD2zk/zaaesbpqhlg5cedvm0ly 1792w"></figure><p>This post isn’t totally about the culture war topic du jour. Not at first.</p><p>As with any other topic that soaks up angst like an ultra-absorbent sponge, I wonder how many have lost track of how we arrived here. <i>Why</i> are pronouns? Pronouns have always been meant to serve as a shortcut substitute reference for other nouns, and the efficiency they provide is starkly demonstrated through their boycott:</p><blockquote><p>John went to the store because John wanted to buy groceries for John’s dinner. When John arrived, John realized that John had forgotten John’s wallet, so John had to return to John’s house to get John’s wallet.</p></blockquote><p>So that’s definitely a mouthful, and using <i>he/his</i> in place of <i>John</i> helps lubricate. Again, pronouns are nothing more than a shortcut referent. Zoom out a bit and consider all the other communication shortcuts we regularly use. We <i>could</i> say National Aeronautics and Space Administration, or we can take the first letter of each word and just concatenate it into <i>NASA</i> instead. We <i>could</i> append ‘dollars’ after a number, or we could just use $<i> </i>instead.</p><p>The tradeoff with all of these shortcuts is precision. Depending on the context, NASA, for example, <a href="https://en.wikipedia.org/wiki/NASA_(disambiguation)">might also refer to</a> the National Association of Students of Architecture in India, or some mountain in Sweden. 
Dollar signs <i>typically</i> refer to American dollars, but they’re also used to denote <a href="https://en.wikipedia.org/wiki/Dollar_sign#As_symbol_of_the_currency">several other currency denominations</a>. The same risk applies to pronouns. It’s not a problem when we’re dealing with only one subject, but notice what happens when we introduce another dude to the pile:</p><blockquote><p>John told Mark that he should administer the medication immediately because he was in critical condition, but he refused.</p></blockquote><p>Wait, who is in critical condition? Which one refused? Who’s supposed to be administering the meds? And administer to whom? Impossible to answer without additional context.</p><p>One way to deal with ambiguous referents is to just increase the number of possible referents. Abbreviations could have a higher level of fidelity if they took the first <i>two</i> letters of every word instead of just one, then no one would risk confusing NaAeSpAd with NaAsStAr. For full fidelity, abbreviations should use every letter of every word but then…obviously there’s an inherent tension between efficiency and accuracy with using any communication shortcut.</p><p>Same thing for pronouns. You need just enough of them to distinguish subjects, but not so much that they lose their intuitive meaning. When cops are interviewing witnesses about a suspect, they’ll glom onto easily observable and distinguishing physical traits. Was the suspect a man or a woman? White or black? Tall or short? Etc. Personal pronouns follow a similar template by cleaving ambiguity along well-understood axes, breaking down the population of potential subjects into distinct, intuitive segments. Pronouns can distinguish singular versus plural (I &amp; we), between the cool and the uncool (me &amp; you), and of course masculine versus feminine (he &amp; she).</p><p>Much like double-checking a count to reduce the risk of error, pronouns carve language into rough divisions. 
The classic he/she cleave splits the population in half in one step, significantly reducing the risk of confusion. Consider the repurposed example:</p><blockquote><p>John told Maria that she should administer the medication immediately because he was in critical condition, but she refused.</p></blockquote><p>A pronoun repertoire cannot eliminate all ambiguity, but ideally it narrows it enough for any remaining uncertainty to be manageable. The key lies in finding the balance: too few pronouns, and communication becomes vague and cumbersome; too many, and it gets over-complicated. It depends on the circumstances. There are scenarios where the ambiguity is never worth the efficiency gain, like in legal contracts. A properly written legal contract will <i>never</i> use pronouns, because no one wants to risk a protracted legal battle in the future over which <i>he</i> was responsible for insuring the widget shipments, just to save a few typing keystrokes.</p><p>I’m sorry if I come off as a patronizing kindergarten teacher for the above. Before jumping into any rumble arenas, I think it’s vital to emphatically establish that pronouns exist for linguistic efficiency. If your pronoun use is not advancing that cause, it might be helpful to explain what it’s for.</p><hr><p>So, onto the red meat. I’m not a singular they Truther; it definitely exists and, contrary to some consternations, its utilization is already ubiquitous and intuitive (e.g. “If anyone calls, make sure they leave a message.”). But there’s no denying that expanding the They franchise will necessarily increase ambiguity by slurring two well-worn axes of distinction (he/she &amp; singular/plural). By no means would this be the end of the world, but it will require some compensating efforts in other areas to maintain clarity, perhaps by relying more on proper nouns and less on pronouns.</p><p>Consistent with my aversion to ambiguity, I’ve deliberately avoided using the g-word. 
I recognize some people have a strident attachment to the specific gender of the pronoun others use to refer to them (and yes, using a semi-ambiguous <i>them</i> in this sentence is intentional and thematically fitting, but you get it).</p><p>The most charitable framework I can posit on this issue is that gendered pronouns are an aesthetic designator, and either are, or should be, untethered from any biological anchor. So while <i>she</i> might conjure up <i>female</i>, its usage is not making any affirmative declarations about the pronoun subject’s ability to conceive and carry a pregnancy. This is uncontroversially true, such as when gendered pronouns are applied to inanimate objects. No one saying “she looks beautiful” about a sports car is talking about vehicular gender archetypes, or about sexual reproduction roles — unless they’re somehow convinced the car improves their own odds in that department.</p><p>The problem, of course, is that my framework does not explain the handwringing. Anyone who harbors such an intense attachment to specific gendered pronoun preferences clearly sees it as much more than a superficial aesthetic designator. If their insistence is driven by the desire to be validated <i>as</i> embodying that specific gender, then it’s not a gambit that will work, for the same reasons it does not work for the sports car.</p><p>On my end, I’m just going to carry on and use whatever pronouns, but only so long as their efficiency/clarity trade-off remains worth it. As inherently intended.</p><br/><br/><a href="https://www.lesswrong.com/posts/zrHHLngm5CmQLD2zk/pronouns-are-annoying#comments">Discuss</a>]]></description><link>https://www.lesswrong.com/posts/zrHHLngm5CmQLD2zk/pronouns-are-annoying</link><guid isPermaLink="false">zrHHLngm5CmQLD2zk</guid><dc:creator><![CDATA[ymeskhout]]></dc:creator><pubDate>Wed, 18 Sep 2024 13:30:05 GMT</pubDate></item><item><title><![CDATA[Is "superhuman" AI forecasting BS? 
Some experiments on the "539" bot from the Centre for AI Safety]]></title><description><![CDATA[Published on September 18, 2024 1:07 PM GMT<br/><br/><br/><br/><a href="https://www.lesswrong.com/posts/ZhqEShJMMbASxzfEL/is-superhuman-ai-forecasting-bs-some-experiments-on-the-539#comments">Discuss</a>]]></description><link>https://www.lesswrong.com/posts/ZhqEShJMMbASxzfEL/is-superhuman-ai-forecasting-bs-some-experiments-on-the-539</link><guid isPermaLink="false">ZhqEShJMMbASxzfEL</guid><dc:creator><![CDATA[titotal]]></dc:creator><pubDate>Wed, 18 Sep 2024 13:07:40 GMT</pubDate></item><item><title><![CDATA[Skills from a year of Purposeful Rationality Practice]]></title><description><![CDATA[Published on September 18, 2024 2:05 AM GMT<br/><br/><p>A year ago, I started trying to deliberately practice skills that would "help people figure out the answers to confusing, important questions." I experimented with <a href="https://www.lesswrong.com/posts/PiPH4gkcMuvLALymK/exercise-solve-thinking-physics">Thinking Physics</a> questions, <a href="https://github.com/idavidrein/gpqa">GPQA questions</a>, <a href="https://www.lesswrong.com/posts/jqb3prwGQjLriq7Lu/exercise-planmaking-surprise-anticipation-and-baba-is-you">Puzzle Games</a>, <a href="https://www.lesswrong.com/posts/hxhBT89wDBpWcuCW6/forecasting-one-shot-games">Strategy Games</a>, and a <a href="https://www.lesswrong.com/posts/baKauxzSqunE6Aakm/feedback-loops-deliberate-practice-and-transfer-learning">stupid twitchy reflex game</a> I had struggled to beat for 8 years<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="ty1bv9b54dn" role="doc-noteref" id="fnrefty1bv9b54dn"><sup><a href="#fnty1bv9b54dn">[1]</a></sup></span>. 
Then I went back to my day job and tried figuring stuff out there too.</p><p>The most important skill I was trying to learn was <a href="https://www.lesswrong.com/posts/cbWoMepny3Jo9XqEr/metastrategic-brainstorming-a-core-building-block-skill">Metastrategic Brainstorming</a> – the skill of looking at a confusing, hopeless situation, and nonetheless brainstorming useful ways to get traction or avoid wasted motion.&nbsp;</p><p>Normally, when you want to get good at something, it's great to stand on the shoulders of giants and copy all the existing techniques. But this is challenging if you're trying to solve <i>important, confusing</i> problems because there probably isn't (much) established wisdom on how to solve them. You may need to discover techniques that haven't been invented yet, or synthesize multiple approaches that haven't previously been combined. At the very least, you may need to <i>find</i> an existing technique buried on the internet somewhere, which hasn't been linked to your problem with easy-to-search keywords, without anyone to help you.</p><p>In the process of doing this, I found a few skills that came up over and over again.</p><p>I didn't invent the following skills, but I feel like I "won" them in some sense via a painstaking "throw myself into the deep end" method. I feel slightly wary of publishing them in a list here, because I think it was useful to me to have to figure out for myself that they were the right tool for the job. 
And they seem like kinda useful "entry level" techniques, that you're more likely to successfully discover for yourself.</p><p>But, I think this is hard enough, and forcing people to discover everything for themselves seems unlikely to be worth it.</p><p>The skills that seemed most general, in both practice and on my day job, are:</p><ol><li>Taking breaks/naps</li><li>Working Memory facility</li><li>Patience</li><li>Knowing what confusion/deconfusion feels like</li><li>Actually Fucking Backchain</li><li><strong>Asking "what is my goal?"</strong></li><li><strong>Having multiple plans</strong></li></ol><p>There were other skills I already was tracking, like <a href="https://www.lesswrong.com/posts/2n83uRi36KWDC9LyK/training-regime-day-8-noticing">Noticing</a>, or <a href="https://www.lesswrong.com/posts/f3o9ydY7iPjFF2fyk/focusing-1">Focusing</a>. There were also somewhat more classic "<a href="https://en.wikipedia.org/wiki/How_to_Solve_It">How to Solve It</a>" style tools for breaking down problems. There are also a host of skills I need when translating this all into my day-job, like "setting reminders for myself" and "negotiating with coworkers."</p><p>But the skills listed above feel like they stood out in some way as particularly general, and particularly relevant for "solve confusing problems."</p><h2>Taking breaks, or naps</h2><p>Difficult intellectual labor is exhausting. During the two weeks I was working on solving Thinking Physics problems, I worked for like 5 hours a day and then was <i>completely fucked up</i> in the evenings. Other researchers I've talked to report similar things.&nbsp;</p><p>During my workshops, one of the most useful things I recommended people was "actually go take a nap. 
If you don't think you can take a real nap because you can't sleep, go into a pitch black room and lie down for awhile, and the worst case scenario is your brain will mull over the problem in a somewhat more spacious/relaxed way for awhile."</p><p><i>Practical tips: Get yourself a sleeping mask, noise machine (I prefer a fan or air purifier), and access to a nearby space where you can rest. Leave your devices outside the room.&nbsp;</i></p><h2>Working Memory facility</h2><p>Often a topic feels overwhelming. This is often because it's just too complicated to grasp with your raw working memory. But, there are various tools (paper, spreadsheets, larger monitors, etc) that can improve this. And, you can develop the skill of noticing "okay this isn't fitting in my head, or even on my big monitor –&nbsp;what <i>would</i> let it fit in my head?".</p><p>The "eye opening" example of this for me was trying to solve a physics problem that included 3 dimensions (but one of the dimensions was "time"). I tried drawing it out but grasping the time-progression was still hard. I came up with the idea of using semi-translucent paper, where I would draw a diagram of what each step looked like on separate pages, and then I could see where different elements were pointed.</p><p>I've also found "spreadsheet literacy" a recurring skill –&nbsp;google sheets is very versatile but you have to know what all the functions are, have a knack for arranging elements in an easy-to-parse way, etc.</p><p><i>Practical Tips: Have lots of kinds of paper, whiteboards and writing supplies around.&nbsp;</i></p><p><i>On google sheets:</i></p><ul><li><i>You can make collapsible sections, which help with making complex models while also being able to hide away the complexity of sub-parts you aren't modeling. 
(hotkey: alt-shift-rightarrow)</i></li><li><i>switch between "display formulas" and the default "display the result" mode&nbsp;</i><br><i>(hotkey: ctrl-backtick)</i></li></ul><h2>Patience</h2><p>If I'm doing something confusingly hard, there are times when it feels painful to sit with it, and I'm itchy to pick some solution and get moving. This comes up in two major areas:</p><ul><li><i>Deliberate/purposeful practice. </i>A key thing here is to be practicing the form perfectly, which requires somehow slowing things down such that you have time to get each moment correct. The urge to rush can undo the practice you just did, by training mistakes, or prevent you from actually successfully practicing at all.</li><li><i>Launching into a plan, or declaring yourself done, when you are still confused. </i>Sitting with the uncomfortableness feels very itchy. But vague plans can be completely wrong, resting on confused assumptions.</li></ul><p>There is of course a corresponding virtue of "just get moving, build up momentum and start learning through iteration." The wisdom to tell the difference between "I'm still confused and need to orient more" and "I need to get moving" is important. But, an important skill there is at least being <i>capable</i> of sitting with impatient discomfort, in the situations where that's the right call.</p><p><i>Practical tips: I dunno I still kinda suck at this one, but I find taking deep breaths, and deliberately reminding myself "</i><a href="https://www.lesswrong.com/posts/4FZfzqMtwQZES3eqN/slow-is-smooth-and-smooth-is-fast"><i>Slow is smooth, smooth is fast</i></a><i>" helpful.</i></p><h2>Know what deconfusion, or "having a crisp understanding" feels like</h2><p>A skill from both Thinking Physics and Baba is You.&nbsp;</p><p>When I first started Thinking Physics, I would get to a point where "I dunno, I feel pretty sure, and I can't think of more things to do to resolve my confusion", and then impatiently roll the dice on checking the answer. 
Sometimes I'd be right, more often I'd be wrong.</p><p>Eventually I had a breakthrough where I came up with a crisp model of the problem, and was like "oh, man, now it would <i>actually be really surprising </i>if any of the other answers were true." From then on... well, I'd still sometimes get things wrong (mostly due to impatience). But, I could tell when I still had pieces of my model that were vague and unprincipled.</p><p>Similarly in Baba is You: when people don't have a crisp understanding of the puzzle, they tend to grasp at straws and motivatedly-reason their way into accepting <a href="https://www.lesswrong.com/posts/8ZR3xsWb6TdvmL8kx/optimistic-assumptions-longterm-planning-and-cope">sketchy sounding premises</a>. But, the true solution to a level often feels very crisp and clear and inevitable.&nbsp;</p><p>Learning to notice this difference in qualia is quite valuable.</p><p><i>Practical tips: This is where </i><a href="https://www.lesswrong.com/posts/2n83uRi36KWDC9LyK/training-regime-day-8-noticing"><i>Noticing</i></a><i> and </i><a href="https://www.lesswrong.com/posts/f3o9ydY7iPjFF2fyk/focusing-1"><i>Focusing</i></a> <i>are key, but are worthwhile for helping you notice subtle differences in how an idea feels in your mind.&nbsp;</i></p><p><i>Try either making explicit </i><a href="https://www.lesswrong.com/posts/wDpXshpakpYDcTtug/fluent-cruxy-predictions-1"><i>numerical predictions</i></a><i> about whether you've solved an exercise before you look up the answer; or, write down a qualitative sentence like "I feel like I really deeply understand the answer" or "this seems probably right but I feel some niggling doubts."</i></p><h2>Actually Fucking Backchain</h2><p>From Baba is You, I got the fear-of-god put in me seeing how easy it was to spin my wheels, tinkering around with stuff that was nearby/accessible/easy-to-iterate-with, and how that often turned out to not be at all relevant to beating a level.&nbsp;</p><p>I had much less wasted motion when 
I thought through "What would the final stages of beating this level need to look like? What are the stages just before those?", and focused my attention on things that could help me get to that point.</p><p>One might say "well, Baba is You is a game optimized for being counterintuitive and weird." I think for many people with a goal like "build a successful startup", it can sometimes be fine to just be forward chaining with stuff that feels promising, rather than trying to backchain from complex goals.</p><p>But, when I eyeball the real-world problems I'm contending with (i.e. x-risk), they really do seem like there's a relatively narrow set of victory conditions that plausibly work. And, many of the projects I feel tempted to start don't actually really seem that relevant.</p><p>(I also think great startup founders are often doing a mix of forward and backward chaining. i.e. I bet Jeff Bezos was like "okay I bet I could make an online bookstore that worked", and was also thinking "but, what if I ultimately wanted the Everything Store? What are obstacles that I'd eventually need to deal with?")</p><p><i>Practical tips: First, come up with at least one concrete story of what the world would look like, if you succeeded at your goals. Try hard to come up with 2 other worlds, so you aren't too anchored on your first idea.&nbsp;</i></p><p><i>Then, try to concretely imagine the steps that would come a little bit earlier in the chain from the end.</i></p><p><i>Don't worry about mapping out all the different possible branches of the future (that's impossible). But, for a complex plan, have at least one end-to-end plan that connects all the dots from the resources you have now to the victory condition at the end.</i></p><p><i>Meanwhile, while doing most of your work, notice when it starts to feel like you've lost the plot (try just making a little tally-mark whenever you notice yourself rabbitholing in a way that feels off). And ask "what is my goal? 
Is what I'm currently doing helping?"</i></p><h2>Ask "What's My Goal?"</h2><p>Actually, having just written the previous section, I'm recalling a simpler, more commonly useful skill, which is simply to ask "what is my goal?".&nbsp;</p><p>Often, doing this throws into relief that you're not sure what your goal is. Sometimes, asking the question immediately prompts me to notice a key insight I'd been glossing over.</p><p>If you're not sure what your goal is, try <a href="https://www.lesswrong.com/posts/i42Dfoh4HtsCAfXxL/babble">babbling</a> some things that seem like they <i>might </i>be a goal, and then ask yourself "does this feel like what I'm most trying to achieve right now?"</p><p>It's okay if it turns out your goal is different or more embarrassing-sounding than you thought. You might say "Actually, you know what? I do care more about showing off and sounding smart, than actually learning something right now." (But, you might also realize "okay, I separately care about learning something <i>and</i> sounding smart"<i>, </i>and then be more intentional about finding a tactic that accomplishes both)</p><p>Once you remember (or figure out) your goal, as you <a href="https://www.lesswrong.com/posts/cbWoMepny3Jo9XqEr/metastrategic-brainstorming-a-core-building-block-skill">brainstorm strategies</a>, ask yourself "would I be surprised if this didn't help me achieve my goals?", and then prioritize strategies that you viscerally expect to work.</p><h2>Always<span class="footnote-reference" data-footnote-reference="" data-footnote-index="2" data-footnote-id="yar1uodgath" role="doc-noteref" id="fnrefyar1uodgath"><sup><a href="#fnyar1uodgath">[2]</a></sup></span>&nbsp;try to have 3 hypotheses</h2><p>This one is important enough to be its own post. (I guess, probably most of these are important enough to be a full post? 
But, this one especially)</p><p>But, listing here for completeness:&nbsp;</p><p>Whether you are solving a puzzle, or figuring out <i>how</i> to solve a puzzle, or deciding what your team should do next week, try to have multiple hypotheses. (I usually say "try to have at least 3 plans", but a plan is basically a special case – a hypothesis about "doing X is the best way to achieve goal Y").&nbsp;</p><p>They each need to be a hypothesis you actually believe in.</p><p>I say "at least 3", because I think it gets you "fully intellectually agile." If you only have one plan, it's easy to get tunnel vision on it and not notice that it's doomed. Two ideas help free up your mind, but then you might still evaluate all evidence in terms of "does this support idea 1 or idea 2?". If you have 3 different hypotheses, it's much more natural to keep generating more hypotheses, and to pivot around in a multi-dimensional space of possibility.</p><p>&nbsp;</p><ol class="footnote-section footnotes" data-footnote-section="" role="doc-endnotes"><li class="footnote-item" data-footnote-item="" data-footnote-index="1" data-footnote-id="ty1bv9b54dn" role="doc-endnote" id="fnty1bv9b54dn"><span class="footnote-back-link" data-footnote-back-link="" data-footnote-id="ty1bv9b54dn"><sup><strong><a href="#fnrefty1bv9b54dn">^</a></strong></sup></span><div class="footnote-content" data-footnote-content=""><p>This wasn't practice for "solving confusing problems", but it was practice for "accomplish anything at all through purposeful practice." 
<a href="https://www.lesswrong.com/posts/9tx4jRAuEddap7Tzp/raemon-s-deliberate-purposeful-practice-club">It took 40 hours</a> despite me being IMO very fucking clever about it.</p></div></li><li class="footnote-item" data-footnote-item="" data-footnote-index="2" data-footnote-id="yar1uodgath" role="doc-endnote" id="fnyar1uodgath"><span class="footnote-back-link" data-footnote-back-link="" data-footnote-id="yar1uodgath"><sup><strong><a href="#fnrefyar1uodgath">^</a></strong></sup></span><div class="footnote-content" data-footnote-content=""><p>Okay not literally always, but, whenever you're about to spend a large chunk of time on a project or figuring something out.</p></div></li></ol><br/><br/><a href="https://www.lesswrong.com/posts/thc4RemfLcM5AdJDa/skills-from-a-year-of-purposeful-rationality-practice#comments">Discuss</a>]]></description><link>https://www.lesswrong.com/posts/thc4RemfLcM5AdJDa/skills-from-a-year-of-purposeful-rationality-practice</link><guid isPermaLink="false">thc4RemfLcM5AdJDa</guid><dc:creator><![CDATA[Raemon]]></dc:creator><pubDate>Wed, 18 Sep 2024 02:05:58 GMT</pubDate></item><item><title><![CDATA[Where to find reliable reviews of AI products?]]></title><description><![CDATA[Published on September 17, 2024 11:48 PM GMT<br/><br/><p>Being able to quickly incorporate AI tools seems important, including for working on AI risk (people who disagree: there's a thread for doing so in the comments). But there are a lot of AI products and most of them suck. 
&nbsp;Does anyone know a good source of reviews, or even just listing product features and naming obvious slop?</p><br/><br/><a href="https://www.lesswrong.com/posts/jMn3YXCjqmJu3Zx5o/where-to-find-reliable-reviews-of-ai-products#comments">Discuss</a>]]></description><link>https://www.lesswrong.com/posts/jMn3YXCjqmJu3Zx5o/where-to-find-reliable-reviews-of-ai-products</link><guid isPermaLink="false">jMn3YXCjqmJu3Zx5o</guid><dc:creator><![CDATA[Elizabeth]]></dc:creator><pubDate>Tue, 17 Sep 2024 23:48:27 GMT</pubDate></item><item><title><![CDATA[Survey - Psychological Impact of Long-Term AI Engagement]]></title><description><![CDATA[Published on September 17, 2024 5:31 PM GMT<br/><br/><p>As part of the <strong>AI Safety, Ethics and Society Course</strong>,&nbsp;I’m conducting a survey to better understand the&nbsp;<strong>psychological and emotional effects of long-term engagement with AI technologies</strong>, particularly within the AI safety community. This is an invitation for you to take part in this&nbsp;<strong>anonymous questionnaire</strong>, which explores how engagement with AI could influence emotions, stress levels, and mental health.&nbsp;<br><br><strong>Who should participate?</strong></p><p>• Anyone involved in AI development, research, or policy</p><p>• Members of the AI safety community, including advocates and researchers</p><p>• Individuals concerned about the societal and existential implications of AI<br><br>For participants interested, the report and analysis of this questionnaire will be shared once it’s released.&nbsp;<br><br><a href="https://docs.google.com/forms/d/e/1FAIpQLScuc3sQlbAXXSmD3sT-d4Ge69Q3DByDIWd__EUlcQONqwfAjA/viewform"><strong><u>Link to the Form</u></strong></a><br><br>Your contribution is deeply valued; this is how we can generate a greater understanding of the psychological challenges faced by individuals in the AI community, and in turn, more effectively address the stress and anxiety caused by this issue, building the 
resiliency needed to navigate these challenges assertively and empathetically.&nbsp;<br><br>Finally, I’m committed to discussing any emotional challenges related to AI in more detail, therefore feel free to reach out at&nbsp;<strong><u>manugarciaat@gmail.com</u></strong>.<br><br>Thank you in advance for your time.</p><br/><br/><a href="https://www.lesswrong.com/posts/uvNeymCJDds8Jau7c/survey-psychological-impact-of-long-term-ai-engagement-1#comments">Discuss</a>]]></description><link>https://www.lesswrong.com/posts/uvNeymCJDds8Jau7c/survey-psychological-impact-of-long-term-ai-engagement-1</link><guid isPermaLink="false">uvNeymCJDds8Jau7c</guid><dc:creator><![CDATA[Manuela García]]></dc:creator><pubDate>Tue, 17 Sep 2024 17:45:19 GMT</pubDate></item><item><title><![CDATA[How harmful is music, really?]]></title><description><![CDATA[Published on September 17, 2024 2:53 PM GMT<br/><br/><p>For a while, I <a href="https://dkl9.net/essays/music_harm.html">thought music was harmful</a>, due largely to pervasive and arbitrary earworms. More recently, I started to find that <a href="https://dkl9.net/essays/earworm_mechanics.html">earworms are ephemeral and lawful</a>. A contrarian belief held like the former for years gets stuck as part of my identity, but maybe I should find the truth.</p><p>"Music is harmful" is hard to measure and verify. "Listening to music is harmful" is both easier to measure and more readily useful, for you can make a randomised controlled trial out of it.</p><h2>Methods</h2><p>Given that I deliberately listen to music only on rare occasion, it's easy, in my case, to let a column of random booleans in a spreadsheet dictate whether I listen to music each day. Sometimes I forgot to listen to music when the spreadsheet said I should, and sometimes I heard a lot of incidental music on days when the spreadsheet said I should abstain. To account for both cases, I kept a record of whether I actually did listen to music each day. 
Whether I actually listened to music is the explanatory variable, which ended up 50% correlated (phi coefficient) with whether the random boolean generator said I should.</p><p>The response variables are my mood — -1 to 1 — and the song stuck in my head — one of four categories:</p><ul><li>no song (N)</li><li>a song played back deliberately (D)</li><li>a song I heard recently (R)</li><li>any other song (O)</li></ul><p>Both response variables were queried by surprise, 0 to 23 times per day (median 6), constrained by convenience.</p><h2>Analysis</h2><p>I ran the experiment over 51 days. In all analysis here, I exclude three long intervals (11 days, 5 days, 4 days) of consecutive musical abstention due to outside constraints, leaving 31 days to examine.</p><p>Given these measurements, we can find the effects of listening to music by comparing the averages from days with music to those from days without music. It seems plausible that the effects of music lag or persist past the day of listening. Perhaps the better averages to compare would come from</p><ul><li>music days, plus days just after music days, versus</li><li>all other days</li></ul><p>What kind of harm do I expect to see from listening to music?</p><ul><li>It could worsen my mood.</li><li>It could make earworms play for more of the time, i.e. increase the ratio of D+R+O to N.</li><li>It could make more of my earworms accidental, i.e. increase the ratio of R+O to N+D.</li><li>It could make whatever particular music I listen to show up more often as accidental earworms, i.e. 
increase the ratio of R to O.</li></ul><h2>Results</h2><p>What does my data say about all that?</p><figure class="table"><table><thead><tr><th>&nbsp;</th><th>Music</th><th>No music</th><th>Music + next day</th><th>&gt;1 day since</th></tr></thead><tbody><tr><td>Days</td><td>8</td><td>23</td><td>16</td><td>15</td></tr><tr><td>Average mood</td><td>0.29</td><td>0.22</td><td>0.28</td><td>0.19</td></tr><tr><td>Total D+R+O</td><td>43</td><td>140</td><td>96</td><td>87</td></tr><tr><td>Total N</td><td>16</td><td>39</td><td>34</td><td>21</td></tr><tr><td>Total R+O</td><td>34</td><td>111</td><td>77</td><td>68</td></tr><tr><td>Total N+D</td><td>25</td><td>68</td><td>53</td><td>40</td></tr><tr><td>Total R</td><td>3</td><td>17</td><td>13</td><td>7</td></tr><tr><td>Total O</td><td>31</td><td>94</td><td>64</td><td>61</td></tr></tbody></table></figure><p>It appears that listening to music, in the short-term:</p><ol><li>makes me a tad happier</li><li>makes earworms play in my mind for slightly less of the time</li><li>makes accidental earworms (as contrasted with deliberate earworms, or mental quiet) play slightly less of the time</li><li>has a weak, ambiguous effect on which songs I get as accidental earworms</li></ol><p>Result 1 makes sense, but deserved testing, just to be sure. Results 2 and 3 go against my intuition. 
I'm less sure what to make of result 4, especially given that it's harder to measure — judging an accidental earworm as "recent" depends on a threshold of recency, which I left ambiguous, and on my memory of what songs I've heard recently, which can mess up on occasion.</p><br/><br/><a href="https://www.lesswrong.com/posts/dbkXFiB3JbD64W6fu/how-harmful-is-music-really#comments">Discuss</a>]]></description><link>https://www.lesswrong.com/posts/dbkXFiB3JbD64W6fu/how-harmful-is-music-really</link><guid isPermaLink="false">dbkXFiB3JbD64W6fu</guid><dc:creator><![CDATA[dkl9]]></dc:creator><pubDate>Tue, 17 Sep 2024 14:53:26 GMT</pubDate></item><item><title><![CDATA[Monthly Roundup #22: September 2024]]></title><description><![CDATA[Published on September 17, 2024 12:20 PM GMT<br/><br/><p>It’s that time again for all the sufficiently interesting news that isn’t otherwise fit to print, also known as the Monthly Roundup.</p>
<h4>Bad News</h4>
<p><a target="_blank" rel="noreferrer noopener" href="https://x.com/robertskmiles/status/1830925270066286950">Beware the failure mode in strategy and decisions that implicitly assumes competence</a>, or wishes away difficulties, and remember to reverse all advice you hear.</p>
<blockquote>
<p>Stefan Schubert (quoting Tyler Cowen on raising people’s ambitions often being very high value): I think lowering others’ aspirations can also be high-return. I know of people who would have had a better life by now if someone could have persuaded them to pursue more realistic plans.</p>
<p>Rob Miles: There’s a specific failure mode which I don’t have a name for, which is similar to “be too ambitious” but is closer to “have an unrealistic plan”. The illustrative example I use is:</p>
<p>Suppose by some strange circumstance you have to represent your country at olympic gymnastics next week. One approach is to look at last year’s gold, and try to do that routine. This will fail. You’ll do better by finding one or two things you can actually do, and doing them well.</p>
<p>There’s a common failure of rationality which looks like “Figure out what strategy an ideal reasoner would use, then employ that strategy”.</p>
<p>It’s often valuable to think about the optimal policy, but you must understand the difference between knowing the path, and walking the path.</p>
</blockquote>
<p>I do think that more often ‘raise people’s ambitions’ is the right move, but you need to carry both cards around with you for different people in different situations.</p>
<span id="more-23954"></span>
<p><a target="_blank" rel="noreferrer noopener" href="https://x.com/loic/status/1827751238257332503">Theory that Starlink, by giving people good internet access, ruined Burning Man</a>. Seems highly plausible. One person reported that they managed to leave the internet behind anyway, so they still got the Burning Man experience.</p>
<p><a target="_blank" rel="noreferrer noopener" href="https://www.bloomberg.com/opinion/articles/2024-08-26/musk-should-realize-that-business-relies-on-government-regulation?sref=htOHjx5Y">Tyler Cowen essentially despairs of reducing regulations or the number of bureaucrats,</a> because it’s all embedded in a complex web of regulations and institutions and our businesses rely upon all that to be able to function. Otherwise business would be paralyzed. There are some exceptions; you can perhaps wholesale axe entire departments, like education. He suggests we focus on limiting regulations on new economic areas. He doesn’t mention AI, but presumably that’s a lot of what’s motivating his views there.</p>
<p>I agree that ‘one does not simply’ cut existing regulations in many cases, and that ‘fire everyone and then it will all work out’ is not a strategy (unless AI replaces them?), but also I think this is the kind of thing that can be the danger of having too much detailed knowledge of all the things that could go wrong. One should generalize the idea of eliminating entire departments. So yes, right now you need the FDA to approve your drug (one of Tyler’s examples) but… what if you didn’t?</p>
<p>I would still expect, if a new President were indeed to do massive firings on rhetoric and hope, that the result would be a giant cluster****.</p>
<p>La Guardia <a target="_blank" rel="noreferrer noopener" href="https://x.com/NateSilver538/status/1833455121293844914">switches to listing flights by departure time rather than order of destination</a>, which in my mind makes no sense in the context of flights, which frequently get delayed, where you might want to look for an earlier flight or know what backups are if yours is cancelled or delayed or you miss it, and so on. Listing by destination also gives you a sense of where one can and can’t actually go, and when, from where you are. For trains it makes more sense to sort by time, since you are so often not going to, and might not even know, the train’s final destination.</p>
<p>I got a surprising amount of pushback about all that on Twitter; some people felt very strongly the other way, as if to list by name was violating some sacred value of accessibility or something.</p>
<h4>Anti-Social Media</h4>
<p><a target="_blank" rel="noreferrer noopener" href="https://x.com/cb_doge/status/1826378060104634489">Elon Musk provides good data on his followers</a> to help with things like poll calibration, and reports a 73%-27% lead for Donald Trump. There was another on partisan identity, with a similar result:</p>
<figure class="wp-block-image"><a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90cc4ec7-b019-47b1-8e5d-5a8503b34da7_889x370.png" target="_blank" rel="noreferrer noopener"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/4gAqkRhCuK2kGJFQE/dcj0zxlhpqawzrvsukhn" alt=""></a></figure>
<p>If we (approximately) give 100% of the Democratic vote to Harris and 100% of the Republican vote to Trump, then that would leave the 35% of self-identified Independents here splitting for Trump by about 2:1.</p>
<p>I didn’t get a chance to think about an advance prediction, but this all makes sense to me. Elon Musk’s Twitter feed works very hard to drive Democrats and those backing Harris away. I doubt he would even disagree. I still follow him because he still breaks (or is) enough news often enough that it feels necessary.</p>
<p><a target="_blank" rel="noreferrer noopener" href="https://philosophybear.substack.com/p/it-appears-twitter-x-provides-a-slur">Twitter lets you use certain words if and only if you have 30,000 or more followers</a>? I’m almost there. I actually think it is reasonable to say that if you have invested in building a platform, then you are a real account rather than a bot, and also that represents ‘skin in the game’ that you are risking if you break the rules. Thus, it makes sense to be more forgiving towards such accounts, and stricter with tiny accounts that could start over and might outright be an AI. I understand why the OP interprets this as ‘only the big accounts get to talk,’ but I’m below the 30k threshold and have never run into trouble with the rules, nor have I ever censored myself to avoid breaking them. It seems fine.</p>
<p>What continues to not be fine is the throttling of outside links. All of Musk’s other changes are somewhere between fine and mildly annoying, but the War on Links is an ongoing serious problem doing real damage.</p>
<p><a target="_blank" rel="noreferrer noopener" href="https://x.com/daniel_m_lavery/status/1826299605203796219">Some chats about group chats</a>, with this quote for the ages:</p>
<blockquote>
<p>“Whenever I create a group chat, I am Danny Ocean assembling a crack team of gymnasts and code breakers. Whenever I am added to one, I feel as put-upon as if I’d been forced to attend the birthday party of a classmate I particularly dislike.”</p>
</blockquote>
<p>Periodically I hear claims that group chats are where all the truly important and interesting conversations take place. Sad, if true, because that means they never make it to the public record (or into LLMs) and the knowledge cannot properly spread. It doesn’t scale. On the other hand, it would be good news, because I know how good the public chats are, so this would mean chats in general were better.</p>
<p>I’m in a number of group chats, most of which of course are mostly dormant, on permanent mute where I rarely look, or both. I don’t see the harm in joining a chat since I can fully mute it if it gets annoying, and you have the option to look or even chat occasionally. The downside risk is distraction, if you’re careless. And there are a few counterfactual (or hidden?!) plausible group chats that might be cool to be in. Right now there are maybe two where I might plausibly try to start a chat. I think that’s close to optimal? You want a few places you can get actual human reactions to things, but they’re big potential distractions.</p>
<h4>Technology Advances</h4>
<p><a target="_blank" rel="noreferrer noopener" href="https://x.com/timonsku/status/1826732950391849112">There’s a USB-C cable with a display that tells you what power it is charging with</a>? Brilliant. Ordered. I’m not sure I want to use it continuously but I damn well want to use it once on every outlet in the house. <a target="_blank" rel="noreferrer noopener" href="https://www.aliexpress.us/item/3256805782030428.html?businessType=ProductDetail&amp;srcSns=sns_Copy&amp;spreadType=socialShare&amp;bizType=ProductDetail&amp;social_params=60756918465&amp;aff_fcid=f0536927e7b24d249cdfadc0d39ca4a9-1724620197015-04796-_EyHp0o5&amp;tt=CPS_NORMAL&amp;aff_fsk=_EyHp0o5&amp;aff_platform=shareComponent-detail&amp;sk=_EyHp0o5&amp;aff_trace_key=f0536927e7b24d249cdfadc0d39ca4a9-1724620197015-04796-_EyHp0o5&amp;shareId=60756918465&amp;businessType=ProductDetail&amp;platform=AE&amp;terminal_id=ed372a51a2d64f81bcc6b3f3ebcc3aac&amp;gatewayAdapt=glo2usa4itemAdapt">Poster offers an AliExpress link</a>; I got mine off Amazon rather than mess around.</p>
<p><a target="_blank" rel="noreferrer noopener" href="https://x.com/jachiam0/status/1830367375742701907">Great wisdom, take heed all:</a></p>
<blockquote>
<p>Joshua Achiam: I can’t tell you how many products and websites would be saved by having a simple button for “Power User Mode,” where you get 10x the optionality and control over your own experience. Give me long menus and make it all customizable. Show me the under-the-hood details.</p>
<p>I am OK with it if the power user experience has some sharp edges, tbh. I use Linux. (And besides, we’ll get AI to help us solve these quality assurance problems over the next few years, right?)</p>
</blockquote>
<p><a target="_blank" rel="noreferrer noopener" href="https://x.com/Altimor/status/1832887194362794489">What to do about all the lock-in to products that therefore don’t bother to improve</a>? Flo Crivello calls this the ‘Microsoft principle,’ and also names Salesforce, Epic and SAP. I’m not convinced Microsoft is so bad; I would happily pay the switching costs if I felt Linux or Mac was genuinely better. Epic is, by all accounts, different.</p>
<p>I wonder if AI solves this? Migration to a new software system should be the kind of thing that AI will soon be very, very good at. So you can finally switch to a new EMR.</p>
<h4>High Seas Piracy is Bad</h4>
<p><a target="_blank" rel="noreferrer noopener" href="https://x.com/binarybits/status/1828489197499621836">So, in the spirit of the picture provided…</a></p>
<blockquote>
<p>Sam Lessin: Silicon Valley Needs to Get Back to Funding Pirates, Not The Navy…</p>
<p>Timothy Lee: The Navy is important, actually.</p>
<p>I know Steve Jobs didn’t literally mean that it’s good to sail around stealing stuff and bad to be part of the organization that tries to prevent that. But if the literal Navy is good maybe we shouldn’t be so quick to dismiss people who join metaphorical navies?</p>
<p>Matthew Yglesias: I was going to say I don’t know that the Bay Area needs more people who break into parked cars and steal stuff.</p>
</blockquote>
<figure class="wp-block-image"><a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54d957f8-7cb9-4352-83d1-fa87a4a1f478_602x767.jpeg" target="_blank" rel="noreferrer noopener"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/4gAqkRhCuK2kGJFQE/nw8shehlobkirnsgeev3" alt="Image"></a></figure>
<p>Three things to know about the high seas:</p>
<ol>
<li>Pirates and piracy are ‘we take your stuff, often violently.’</li>
<li>Thus pirates and piracy are actually really, really terrible. Like, really bad.</li>
<li>Navies are great, because they stop piracy and enable trade and production.</li>
</ol>
<p>Also, your country’s navy is very important for trade and self-defense and prosperity, so in most cases helping it is good, actually.</p>
<p>Look. Sam Lessin is not alone. A lot of people like Jack Sparrow and think he’s cool.</p>
<p>And there’s nothing wrong with having cool movies where villains are cool, or decide to go against type and do a good thing, or whatnot. And sure, you like the equality among the crew, and the pirate talk and the peglegs and the duels and the defying the stuck-up authority and the freedom and the attitude and so on.</p>
<p>But fundamentally, pirates? Super Bad Dudes. A pirate is the troll under the bridge or the smash-and-grabber who knocks over a liquor store, or the villain in every western, but with good PR. If you are equating yourself to a pirate, <a target="_blank" rel="noreferrer noopener" href="https://www.youtube.com/watch?v=h242eDB84zY">then you might be the baddies.</a></p>
<p>You do not want your ‘new frontier for pirates’; that means ‘a place where people are constantly trying to hijack and rob you, and violence and theft rules.’ That’s bad, actually.</p>
<p>What you want is a new frontier for everyone else. For explorers, for settlers, for farmers and builders.</p>
<p>Intellectual property is a special case, where the metaphorical piracy is non-violent, non-destructive and one can argue it creates value and fights against injustice. One can make a case for, as an example, Pirate Radio. Details matter. Reasonable people can disagree on where to draw the line.</p>
<p>But if your model of The Good, or the good business model, is pirates, as in pirates on the water engaged in piracy, as is clearly true here? Without a letter of marque? You are not the heroes you think you are.</p>
<p>I think this helps explain some of what we see with certain people in VC/SV/SF land arguing against any and all AI regulations. They think they get to be pirates, that everyone should be pirates, bound to no law, and that this is good.</p>
<p>With notably rare exceptions, most of which are highly metaphorical? It is not good.</p>
<h4>The Michelin Curse</h4>
<p>A paper reports that <a target="_blank" rel="noreferrer noopener" href="https://onlinelibrary.wiley.com/doi/epdf/10.1002/smj.3651">Michelin stars make NYC restaurants more likely to close</a>, due to conflicts they cause with stakeholders, overwhelming the impact of more customers willing to pay more. This seems so crazy.</p>
<p>Employees demanded higher wages and had better alternative opportunities, which makes sense for chefs. I’d think less so for others, especially waiters who should be getting better tips. Landlords try to raise the rent, causing a hold-up problem, potentially forcing a move or closure. That makes sense too; I suppose moving costs are often very high, and sometimes landlords overreach. Suppliers don’t squeeze them directly, but there is ‘pressure to use higher quality ingredients’ and competition for them. I suppose, but then you get the ingredients. Customers have raised expectations and you get more tourists and ‘foodies’ and critics. And yes, I can see how that could be bad.</p>
<p>My guess is that a lot of this is the universal reluctance to properly raise prices, or to properly use price to allocate scarce resources. You are providing a premium service that costs more, and demand exceeds supply, but you are still struggling? The default reason is you won’t raise your prices. Or you will – a lot of these places very much are not cheap – but you won’t raise them enough to clear the market. If you’re charging $350 a plate, but the reservation sells for $1,000 online, you know what that means.</p>
<p>It is also possible that this is something else entirely. Michelin rewards complexity, and various other things that are hard to maintain over time. They are also things many diners, myself included, actively do not want. It is a distinct thing. And it has a strong pull and pressure, including for the prestige that goes beyond the money. So if restaurants are doing things to ‘go after’ stars, even if they did not start out that way, I am guessing they often start distorting themselves, getting obsessed with the wrong questions.</p>
<p>When I see Michelin stars, I know I am getting high quality ingredients and skill. I know I am going to get a lot of bold flavor and attentive service. That’s good. But I am going to pay both for that and for certain kinds of service and complexity and cleverness and ‘sophistication’ that, if anything, I actively dislike. What they care about and what I care about are too often diverging, and they are chasing a fickle crowd. So yeah, I can see how that can end up being highly unstable several times over.</p>
<p>Right now I have two places ‘in my rotation’ that have a star, Casa Mono and Rezdora. I love both of them and hope they last, and both are places where you can walk in for lunch and that aren’t that much more expensive for it. I don’t think it is a coincidence that neither has a second star. The places with 2-3 stars are mostly these multi-course ‘experiences’ that don’t appeal to me at all, but that’s also the market at work pricing me out.</p>
<h4>What’s the Rush?</h4>
<p><a target="_blank" rel="noreferrer noopener" href="https://marginalrevolution.com/marginalrevolution/2024/08/why-do-the-servers-always-want-to-take-our-cutlery-and-plates-and-glasses-away.html?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=why-do-the-servers-always-want-to-take-our-cutlery-and-plates-and-glasses-away">Tyler Cowen asks a great question</a>: Why do the servers always want to take our cutlery and plates and glasses away? How should we model this behavior?</p>
<p>He tries to find economic or efficiency explanations. Perhaps they want to turn over the table faster, and provide another point of contact. Or they know they may be busy later, so they want to do something useful now. And the responses in the comments focus on efficiency concerns, or breaking up the work.</p>
<p>Yet Tyler Cowen correctly notes that they are far less interested in promptly taking your order, which both turns the table over and gets you to your food.</p>
<p>Also I see the same problem with the check. So often I have to flag someone down to ask for the check. Here I can better understand why, as many diners think it is rude to give you the check ‘too early,’ as if they are pressuring you to leave. I see that, but I don’t let it get to me; I hate feeling trapped and frustrated, actually stuck when I want to leave but not wanting to be rude by flagging someone down.</p>
<p>It seems far ruder to take my plate before I am ready, which does actual harm, than to give me the option to pay, which helps me.</p>
<p>Indeed, I actively loved it when a local place I enjoy (Hill Country) started having people order at the counter and pay in advance, exactly because that means you can both order quickly and then leave when you want, and never be under any pressure. I now go there more often, especially when dining alone.</p>
<p>A meal really is nicer, and more efficient, when you have paid in advance, and know you can walk out whenever you’re ready.</p>
<p>So while I buy that efficiency concerns play a role, there would still remain a mystery. Why do restaurants whose livelihood depends on turnover often fail to even take your order for extended periods, even when you signal clearly that you are ready? Often they are the same places that rapidly clear your plates, although I mostly do not mind this.</p>
<p>I think the missing answer, even if it often isn’t conscious, is that servers feel that not clearing the plates is a ‘bad look’ and bad service, that it fails to be elegant and sends the wrong message, and also makes the waiter potentially look bad to their boss. It is something easy to measure, so it gets managed. They are indeed far more concerned with clearing too late than too early. Too early might annoy you, but that is invisible, and it shows you are trying.</p>
<h4>Good News, Everyone</h4>
<p>India is getting remarkably better in at least one way, as <a target="_blank" rel="noreferrer noopener" href="https://x.com/ShamikaRavi/status/1826697208860971176">the percentage of the bottom 20% who own a vehicle went from 6% to 40% in only ten years</a>.</p>
<p>Seeing Like a State has its advantages. Technocracy is often great, especially when there is buy-in from the people involved. <a target="_blank" rel="noreferrer noopener" href="https://x.com/AndyMasley/status/1829151828023324956">See this story of a vineyard</a> where the textbook solutions really did work perfectly in real life while everyone who ‘knew wine’ kept insisting it would never work, <a target="_blank" rel="noreferrer noopener" href="https://www.lrb.co.uk/the-paper/v21/n11/paul-seabright/the-aestheticising-vice">from this 1999 review of Seeing Like a State</a>. The full essay is great fun too.</p>
<p>Your survey data contains a bunch of ‘professional’ survey takers who take all the surveys, <a target="_blank" rel="noreferrer noopener" href="https://x.com/_Tiagoventura/status/1830702879256256791">but somehow this ends up not much changing the results</a>.</p>
<p><a target="_blank" rel="noreferrer noopener" href="https://marginalrevolution.com/marginalrevolution/2024/09/france-fact-of-the-day-5.html?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=france-fact-of-the-day-5">Reports say</a> that <a target="_blank" rel="noreferrer noopener" href="https://www.ft.com/content/c398fae4-6107-4bdf-b0ed-88f9168eeaa6">frozen French croissants are actually really good</a> and rapidly gaining market share. It seems highly plausible to me. Croissants freeze rather well. We use the ones from FreshDirect on occasion, and have tried the Whole Foods ones, and both are solid. The key is that they end up Hot and Fresh, which makes up for quite a lot.</p>
<p>They still pale in comparison to actively good croissants from a real bakery, of which this area has several – I lost my favorite one a few years back and another stopped baking their own, but we still have Eataly and Dominique Ansel Workshop, both of which are way way better, and if I’m willing to walk options expand further. However the cost is dramatically higher at the good bakeries. For me it’s worth it, but if you are going to otherwise cheat on quality, you might as well use the frozen ones. You also can’t beat the convenience.</p>
<p><a target="_blank" rel="noreferrer noopener" href="https://x.com/cooltechtipz/status/1833122180156035102">50 ways to spend time alone.</a> Some of them are reaches, or rather stupid, but brainstorming is good even when there are some dumb ideas. Strangely missing from this list are such favorites as ‘do your work,’ ‘play a video game,’ ‘listen to music,’ ‘go to the movies’ and my personal favorite, ‘sleep.’ And some other obvious ones.</p>
<h4>Let it Go</h4>
<p><a target="_blank" rel="noreferrer noopener" href="https://x.com/robinhanson/status/1832069356013953115">An excellent point</a> on repair versus replace, and the dangers of the nerd snipe for people of all intellectual levels.</p>
<blockquote>
<p>PhilosophiCat: I live in a country where 80ish is roughly the average national IQ. Let me tell you what it’s like.</p>
<p>The most noticeable way this manifests is inefficiency. Obvious, easy, efficient, long term solutions to problems are often ignored in favour of short term solutions that inevitably create bigger or more expensive problems down the road or that use far more labour and time than is necessary.</p>
<p>For example, if something breaks, it may be way more cost effective to simply replace it and have the problem just be solved. But they’ll repair it endlessly (often in very MacGyver-like ways), spending way more money on parts than a new item would have cost, spending hours of time repeatedly fixing it every time it breaks, until they can’t fix it anymore. And then they still have to buy a new one.</p>
<p>At first, I would get very frustrated by this sort of thing, but eventually I realised that they like it this way. They enjoy puttering and tinkering and solving these little daily problems.</p>
<p>…</p>
<p>Many don’t understand that if you spend all your money today, you won’t have any tomorrow. Or that if you walk on the highway at night in dark clothes, drivers can’t see you and may run you over. Or that if you don’t keep up on the maintenance of your house, eventually things will break that you won’t be able to afford to fix (because you don’t ever put money away to save). I could give endless examples of this.</p>
<p>Robin Hanson: Note how creative problem solving can be a mark of low IQ; smarter people pick the simple boring solution.</p>
</blockquote>
<p>I think this comes from the fact that we used to be a lot poorer than we are, and that we used to be unable to efficiently turn time into money outside of one’s fixed job. And even then, we usually had half a couple that didn’t have a job at all. So any way to trade time to conserve money was highly welcome, and considered virtuous.</p>
<p>I keep having to train myself out of this bias. The old thing doesn’t even have to be broken, only misplaced, if your hourly is high – why are you spending time looking when you can get it replaced? Worst case is you now have two.</p>
  373. <h4>Yay Air Conditioning</h4>
  374.  
  375.  
  376.  
  377. <p>I knocked air conditioning a bit when analyzing <a target="_blank" rel="noreferrer noopener" href="https://thezvi.substack.com/p/ai-and-the-technological-richter">the technological richter scale</a>, but yes having it allows people to think and function on days they otherwise wouldn’t. That is a big deal, and <a target="_blank" rel="noreferrer noopener" href="https://www.vox.com/2015/3/23/8278085/singapore-lee-kuan-yew-air-conditioning">Lee Kwon Yew called it the secret of Singapore’s success</a>.</p>
  378.  
  379.  
  380.  
  381. <blockquote>
  382. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/peterhartree/status/1832099687957954860">Ethan Mollick</a>: Air conditioning lets you use your brain more.</p>
  383.  
  384.  
  385.  
  386. <p>Students do worse when its hot. Over 13 years in NYC alone, “upwards of 510,000 exams that otherwise would have passed likely received failing grades due to hot exam conditions,” and these failures delayed or stopped 90k graduations!</p>
  387.  
  388.  
  389.  
  390. <p>Peter Hartree: Meanwhile in France: in office buildings, it is illegal to switch on the air conditioning if the interior temperature is less than 26 °C or 78.8 °F.</p>
  391.  
  392.  
  393.  
  394. <p>(Décret n° 2007-363)</p>
  395. </blockquote>
  396.  
  397.  
  398.  
  399. <p>Why tax when you can ban? What is a trade-off anyway? Shouldn’t you be on vacation, do you want to make the rest of us look bad?</p>
  400.  
  401.  
  402.  
  403. <p>I am curious how much I would reduce my air conditioning use if we attached a 1000% tax to it. That is not a typo.</p>
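One way to put a rough number on that question is a constant-elasticity demand curve. This is purely a back-of-the-envelope sketch; the elasticity value below is an invented assumption, not a measured figure for air conditioning.

```python
# Back-of-the-envelope: constant-elasticity demand, purely illustrative.
# The elasticity of -0.3 is an assumed value, not a measured one.

def usage_after_tax(tax_rate: float, elasticity: float) -> float:
    """Fraction of baseline use remaining after an ad valorem tax,
    under constant-elasticity demand q = p**elasticity (price relative
    to a pre-tax baseline of 1)."""
    price_multiplier = 1.0 + tax_rate
    return price_multiplier ** elasticity

# A 1000% tax makes the effective price 11x baseline. With an assumed
# elasticity of -0.3, usage falls to roughly half of baseline.
remaining = usage_after_tax(10.0, -0.3)
```

Under those assumptions the answer is "about half," which is one way to see how extreme a 1000% tax would have to feel before it actually stopped the behavior.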
  404.  
  405.  
  406.  
  407. <h4>Beast of a Memo</h4>
  408.  
  409.  
  410.  
  411. <p>Thanks, Mr. Beast, <a target="_blank" rel="noreferrer noopener" href="https://drive.google.com/file/d/1YaG9xpu-WQKBPUi8yQ4HaDYQLUSa7Y3J/view">for this memo</a>. It is 36 pages, and it is glorious. Whatever else you may think of it, this feels like a dramatically honest attempt to describe how YouTube actually works, how his business actually works and what he thinks it takes to succeed as part of that business. It is clear this is a person obsessed with maximizing success, with actually cutting the enemy, with figuring out what works and what matters and then being the best at it like no one ever was.</p>
  412.  
  413.  
  414.  
  415. <p>Is it a shame that the chosen topic is YouTube video engagement? Your call.</p>
  416.  
  417.  
  418.  
  419. <p>Is it over the top, obsessive and unhealthy in places? That’s your call too.</p>
  420.  
  421.  
  422.  
  423. <p>The central theme is: know what things have to happen that might not happen, that are required for success, and do whatever it takes to make them happen. Have backups, including spare time, and value having them; do check-ins; pay for premium things as needed; obsess; take nothing at face value if it sounds too good to be true; make it happen.</p>
  424.  
  425.  
  426.  
  427. <p>So, suppose you have some task that will be a bottleneck for you. What to do?</p>
  428.  
  429.  
  430.  
  431. <blockquote>
  432. <p>Mr. Beast: I want you to look them in the eyes and tell them they are the bottleneck and take it a step further and explain why they are the bottleneck so you both are on the same page.</p>
  433.  
  434.  
  435.  
  436. <p>“Tyler, you are my bottleneck. I have 45 days to make this video happen and I can not begin to work on it until I know what the contents of the video is. I need you to confirm you understand this is important and we need to set a date on when the creative will be done.” Now this person who also has tons of shit going on is aware of how important this discussion is and you guys can prioritize it accordingly.</p>
  437.  
  438.  
  439.  
  440. <p>Now let’s say Tyler and you agree it will be done in 5 days. YOU DON’T GET TO SET A REMINDER FOR 5 DAYS AND NOT TALK TO HIM FOR 5 DAYS!</p>
  441.  
  442.  
  443.  
  444. <p>Every single day you must check in on Tyler and make sure he is still on track to hit the target date.</p>
  445.  
  446.  
  447.  
  448. <p>…</p>
  449.  
  450.  
  451.  
  452. <p>I want you to have a mindset that God himself couldn’t stop you from making this video on time. Check. In. Daily. Leave. No. Room. For. Error.</p>
  453. </blockquote>
  454.  
  455.  
  456.  
  457. <p>If I am Tyler and every time I get a request I get this lecture and I get a check-in every single day I am not going to be a happy Tyler. Nor am I going to be a Tyler that likes you, or that carefully ponders before sending the ‘everything is on track’ reassurances.</p>
  458.  
  459.  
  460.  
  461. <p>If this was a rare event, where 9 out of 10 things you ask for are not bottlenecks, and the reminders are gentle and easy, then maybe. Or perhaps if that’s known to be the standard operating procedure and it’s like a checklist thing – daily you verify you’re on track for everything quickly – maybe that could work too? You’d also need to hire with this in mind.</p>
  462.  
  463.  
  464.  
  465. <p>The reverse mistake is indeed worse. So often I see exactly the thing where you have a future potential bottleneck, and then assume it will be fine until suddenly you learn that it isn’t fine. You probably do want to be checking in at least once.</p>
  466.  
  467.  
  468.  
  469. <p>Similarly, as he points out, if you shove the responsibility onto someone else like a contractor and assume they’ll deliver, then it’s absolutely your fault when they don’t deliver. And yes, you should build in a time buffer. And yes, if it’s critical and could fail you should have a backup plan.</p>
  470.  
  471.  
  472.  
  473. <p>He says before you ask a higher-up, especially him, for a decision, ensure you provide all the key details, and also all the options, since others don’t know what you know and their time is valuable. I buy that by default it makes sense to assume higher-ups have a large multiplier on the value of their time, so this should be standard practice. It is, however, clear Mr. Beast is overworked and would be wise to take on less at once.</p>
  474.  
  475.  
  476.  
  477. <p>He emphasizes following the chain of command for practical reasons: if you don’t, the people in between won’t have any idea what’s going on or know what to do. That’s a risk, but it feels like it’s missing something more central.</p>
  478.  
  479.  
  480.  
  481. <p>He is big on face-to-face communication; audio is a backup; written is a distant third, going so far as to say written doesn’t count as communication at all unless you have confirmation in return. I definitely don’t see it that way. To me written is the public record, even if it has lower bandwidth.</p>
  482.  
  483.  
  484.  
  485. <p>If there’s one central theme it’s responsibility. Nothing comes before your ‘prios’ or top priorities, make them happen or else, no excuses. Own your mistakes and learn from them, he won’t hold it over your head. No excuses. But of course most people say that, and few mean it. It’s hard to tell who means it and who doesn’t.</p>
  486.  
  487.  
  488.  
  489. <p>This section is unusual advice, on consultants, who he thinks are great.</p>
  490.  
  491.  
  492.  
  493. <blockquote>
  494. <p>Mr. Beast: Consultants are literally cheat codes. Need to make the world’s largest slice of cake? Start off by calling the person who made the previous world’s largest slice of cake lol. He’s already done countless tests and can save you weeks worth of work. I really want to drill this point home because I’m a massive believer in consultants. Because I’ve spent almost a decade of my life hyper obsessing over youtube, I can show a brand new creator how to go from 100 subscribers to 10,000 in a month. On their own it would take them years to do it.</p>
  495.  
  496.  
  497.  
  498. <p>Consults are a gift from god, please take advantage of them. In every single freakin task assigned to you, always always always ask yourself first if you can find a consultant to help you. This is so important that I am demanding you repeat this three times in your head “I will always check for consultants when i’m assigned a task.”</p>
  499. </blockquote>
  500.  
  501.  
  502.  
  503. <p>Doing Mr. Beast shaped things seems like a perfect fit for consultants. For most things, consultants carry many costs and dangers. You need to bring them up to speed, they’re expensive, you risk not developing core competency, they are used to fight turf wars and shift or avoid blame and so on. A lot of it is grift or the result of bad planning.</p>
  504.  
  505.  
  506.  
  507. <p>But here, it is a lot of tasks like ‘build the world’s largest slice of cake.’ You don’t actually want a core competency of making the largest versions of all the things yourself, or anything like that – you want the core competency of knowing how to hire people to do it, because it’s a one-off, and it doesn’t link back into everything else you do.</p>
  508.  
  509.  
  510.  
  511. <p>If your consultant is ‘get the world’s expert in [X] to do it for you, or tell you what you need to know’ then that’s probably great. If it’s a generic consultant, be skeptical.</p>
  512.  
  513.  
  514.  
  515. <p>Here’s one I appreciate a lot.</p>
  516.  
  517.  
  518.  
  519. <blockquote>
  520. <p>Pull all nighters weeks before the shoot so you don’t have to days before the shoot.</p>
  521. </blockquote>
  522.  
  523.  
  524.  
  525. <p>Yes. Exactly. I mean, I never pull an all nighter, those are terrible, I only do long days of intense work but that’s the same idea. Whenever possible, I want to pull my crunch time well in advance of the deadline. In my most successful Magic competitions, back when the schedule made this possible, I would be essentially ready weeks in advance and then make only minor adjustments. With writing, a remarkable amount of this is now finished well in advance.</p>
  526.  
  527.  
  528.  
  529. <p>His review process is ‘when you want one ask for one,’ including saying what your goals are so people can tell you how you suck and what needs to be fixed for you to get there. I love that.</p>
  530.  
  531.  
  532.  
  533. <p>Here are some other things that stood out that are more YT-specific, although the implications will often generalize.</p>
  534.  
  535.  
  536.  
  537. <ol>
  538. <li>The claim that YouTube is the future, and to therefore ignore things like Netflix and Hulu, stop watching entirely, that stuff would fail on YT so who cares. Which is likely true, but that to me is a problem for YT. If anything I’m looking for ways to get myself to choose content with higher investment costs and richer payoffs.</li>
  539.  
  540.  
  541.  
  542. <li>Mr. Beast spent 20k-30k hours studying what makes YT videos work. It feels like there’s an implicit ‘and that won’t change too much’ there somewhere? Yet I expect the answers to change and be anti-inductive, as users adjust. Also AI.</li>
  543.  
  544.  
  545.  
  546. <li>Mr. Beast seems to optimize every video in isolation. He has KPMs (key performance metrics): Click Through Rate (CTR), Average View Duration (AVD) and Average View Percentage (AVP). He wants these three numbers to go up. That makes sense.
  547. <ol>
  548. <li>He talks about the thumbnail or ‘clickbait’ needing to match up with what you see, or you’ll lose interest. And he discusses the need to hold viewers for 1 min, then for 3, then for 6.</li>
  549.  
  550.  
  551.  
  552. <li>What he doesn’t talk about much is how this impacts future videos. The few times I’ve seen portions of a Mr. Beast video, it’s had a major impact on my eagerness to watch additional videos. And indeed, my desire to do so is low, because while I don’t hate the content I’ve been somewhat disappointed.</li>
  553.  
  554.  
  555.  
  556. <li>He does mention this under the ‘wow factor,’ a reason to do crazy stunts that impress the hell out of people. That doesn’t feel like the thing that matters most, to me that’s more about delivering on the second half of the video, but I am a strange potential customer.</li>
  557. </ol>
  558. </li>
  559.  
  560.  
  561.  
  562. <li>He says always video everything, because that’s the best way to ensure you can inform the rest of your team what the deal is. Huh.</li>
  563.  
  564.  
  565.  
  566. <li>The thumbnail is framed as super important, a critical component that creates other criticals, and needs to be in place in advance. Feels weird that you can’t go back and modify it later if the video changes?</li>
  567.  
  568.  
  569.  
  570. <li>‘Creativity saves money’ is used as a principle, as in find a cheaper way to do it rather than spend more. I mean, sure, I guess?</li>
  571.  
  572.  
  573.  
  574. <li>He says work on multiple videos every day, that otherwise you fall behind on other stuff and you’re failing. I mostly do the same thing as a writer, it’s rare that I won’t be working on lots of different stuff, and it definitely shows. But then there are times when yes, you need to focus and clear your head.</li>
  575.  
  576.  
  577.  
  578. <li>He asks everyone to learn how to hold a camera. Makes sense, there are versions of this everywhere.</li>
  579.  
  580.  
  581.  
  582. <li>Do not leave contestants waiting in the sun (ideally waiting in general) for more than 3 hours. Ah, the lessons of experience.</li>
  583.  
  584.  
  585.  
  586. <li>If something goes wrong always check if it can be used in the video. Nice.</li>
  587.  
  588.  
  589.  
  590. <li>What is the core secret of a Mr. Beast video, underneath all the details and obsession? It seems to be roughly ‘hammering people with very loud cool exciting s*** taken to 11 as often and intensely as possible, with full buy-in’?</li>
  591.  
  592.  
  593.  
  594. <li>A key format design is to have step-function escalation (a bigger firework! no, now an even bigger one! And again!) or ideally a huge end-of-video payoff that you get invested in, like who wins a competition. The obvious question is, why wouldn’t people skip ahead? Do people not know they can do that? I do it.</li>
  595.  
  596.  
  597.  
  598. <li>The audience for Mr. Beast is 70% male, and 77% ages 18-44. There’s a big drop off to the 45-54 group and another to the 55+ groups. I suppose people as old as me don’t care for this style of content, it’s the kids these days.</li>
  599.  
  600.  
  601.  
  602. <li>All the details on YT mastery make sense, and also point towards the dangers of having too much information, optimizing too heavily on the micro, not having room to breathe. I can only imagine how bad it is in TikTok land (where I very on purpose don’t have an account). No dull moment, only abrupt endings, and so on.</li>
  603. </ol>
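The three KPMs named in the list above are simple ratios of basic video stats. A toy sketch, with invented inputs (this is not YouTube’s actual analytics API, just the arithmetic the acronyms imply):

```python
# Toy computation of the three KPMs: Click Through Rate, Average View
# Duration, Average View Percentage. All field names and numbers are
# invented for illustration.

def kpms(impressions, clicks, views, total_watch_seconds, video_seconds):
    ctr = clicks / impressions             # Click Through Rate
    avd = total_watch_seconds / views      # Average View Duration (seconds)
    avp = avd / video_seconds              # Average View Percentage
    return ctr, avd, avp

ctr, avd, avp = kpms(
    impressions=100_000, clicks=8_000,
    views=7_500, total_watch_seconds=3_600_000, video_seconds=900,
)
# ctr = 0.08, avd = 480.0 seconds, avp ≈ 0.53 of a 15-minute video
```

The point of wanting "all three numbers to go up" is visible here: CTR is about the thumbnail, AVD and AVP are about the video delivering on it, and the 1-minute/3-minute/6-minute retention holds are just ways of keeping AVD and AVP from collapsing early.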
  604.  
  605.  
  606.  
  607. <p>So I was about halfway through and was thinking ‘yeah this guy is intense but I appreciate the honesty and given the one-off and high-stakes nature of these tasks this all makes a lot of sense, why would you cancel someone for this’ and then I got to page 19, and a section title of, and I quote: “No does not mean no.”</p>
  608.  
  609.  
  610.  
  611. <p>Where he says never take a no at face value, that ‘literally doesn’t mean ****.’</p>
  612.  
  613.  
  614.  
  615. <p>Oh. I see.</p>
  616.  
  617.  
  618.  
  619. <p>I mean I totally get what he’s saying here when I look at details. A producer produces, and doesn’t let obstacles get in their way, and uses all the angles at their disposal. They don’t give up. Especially with a Mr. Beast production, where you could have fans or allies anywhere to help, and you have a lot of resources to throw at the problem, and the stakes can be high. But yeah, as Archer says, phrasing!</p>
  620.  
  621.  
  622.  
  623. <p>Other potential points of contention could be the emphasis on metrics, the idea that regular ‘C’ players who aren’t looking to go intense and level up to ‘A’ players are toxic and need to be fired right away, or the generally intense high expectations. Or perhaps a few things taken out of context.</p>
  624.  
  625.  
  626.  
  627. <p>This seems like a great place to work if you are one of Mr. Beast’s A-or-B players: Highly aligned with the company vision, mission and content, and want to work hard and obsess and improve and probably not have great work-life balance for a while. It seems like a terrible place for anyone else. But is that a bug, or is it a feature?</p>
  628.  
  629.  
  630.  
  631. <h4><a target="_blank" rel="noreferrer noopener" href="https://www.youtube.com/watch?v=iXCpTEwclgk&amp;ab_channel=TheyMightBeGiants-Topic">For Science!</a></h4>
  632.  
  633.  
  634.  
  635. <p>A simple guide on how to structure papers, <a target="_blank" rel="noreferrer noopener" href="https://x.com/robinhanson/status/1825521091936469240">or as Robin Hanson points out also many other things as well</a>.</p>
  636.  
  637.  
  638.  
  639. <figure class="wp-block-image"><a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1af3e63-288d-49ec-b269-af93d7eeaa30_397x600.png" target="_blank" rel="noreferrer noopener"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/4gAqkRhCuK2kGJFQE/camglswyvhgeba9qnuf9" alt="Image"></a></figure>
  640.  
  641.  
  642.  
  643. <h4>For Your Entertainment</h4>
  644.  
  645.  
  646.  
  647. <p><a target="_blank" rel="noreferrer noopener" href="https://www.thefp.com/p/reagan-is-a-terrible-movie-but-not">Reagan as a truly terrible movie, as anvilicious as it gets</a>, yet somehow still with a 98% audience score. Rather than telling us critics are useless or biased, I think this says more about audience scores. Audience scores are hugely biased, not in favor of a particular perspective, but in favor of films that are only seen, and thus only rated, by hardcore fans of the genre and themes. Thus, Letterboxd ratings are excellent, except that you have to correct for this bias, which is why many of the top films by rating are anime or rather obviously no fun.</p>
  648.  
  649.  
  650.  
  651. <p>Reminder that <a target="_blank" rel="noreferrer noopener" href="https://letterboxd.com/TheZvi/">my movie reviews are on Letterboxd</a>. There should be fewer of them during football season, especially in October if the Mets continue making a run.</p>
  652.  
  653.  
  654.  
  655. <p>A good question there is, why don’t I work harder to watch better movies? Partly I consider the movies that are both good and not ‘effort watching’ a limited resource, not to be wasted, and partly it’s that often I’m actually fine with a 3-star comfort movie experience, especially with stars I enjoy watching. There are a lot of movies that get critical acclaim, but often the experience isn’t actually great, especially if I’m looking to chill.</p>
  656.  
  657.  
  658.  
  659. <p>Also I notice that ‘what’s playing’ is actually a cool way to take the standards pressure off. So heuristics like ‘what’s leaving Netflix and evokes a sure, why not’ let me not fret over ‘of all the movies in the world, I had to go and choose this one.’ It’s fine. Then distinctly I seek out the stuff I want most. Similarly, if something is playing at the local AMC or Regal and looks good I’ll probably go for it, but traveling beyond that? Harder sell.</p>
  660.  
  661.  
  662.  
  663. <p>In television news, beyond football and baseball, I’ve been watching UnREAL (almost done with season 2 now), which recently was added to Netflix, and I am here to report that it is glorious, potentially my seventh tier one pick. I have not enjoyed a show this much in a long time, although I am confident part of that is it is an unusually great fit for me. I love that it found a way to allow me to enjoy watching the interactions and machinations of what are, by any objective measure (minor spoiler I suppose) deeply horrible people.</p>
  664.  
  665.  
  666.  
  667. <p>I’m also back with the late night show After Midnight. They made the format modestly worse for season 2 in several ways – the final 2 is gone entirely, the tiny couch is an actual couch and Taylor’s point inflation is out of control – but it’s still fun.</p>
  668.  
  669.  
  670.  
  671. <h4>Properly Rated</h4>
  672.  
  673.  
  674.  
  675. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/s_r_constantin/status/1830631465761362069">Sarah Constantin notices the trend that critical consensus is actually very, very good</a>.</p>
  676.  
  677.  
  678.  
  679. <blockquote>
  680. <p>Sarah Constantin: My most non-contrarian opinion:</p>
  681.  
  682.  
  683.  
  684. <p>Critical consensus is almost always right about the performing arts.</p>
  685.  
  686.  
  687.  
  688. <p>Prestige TV (Breaking Bad, Succession, Mad Men) is in fact the best TV.</p>
  689.  
  690.  
  691.  
  692. <p>High-Rotten-Tomatoes-scoring movies are (objectively) better, for their genre, than low-scoring movies.</p>
  693.  
  694.  
  695.  
  696. <p>I’m not a huge fan of today’s pop music, but Taylor Swift songs are reliably better than other pop songs.</p>
  697.  
  698.  
  699.  
  700. <p>I’ve seen Renee Fleming live, and she was in fact dramatically, shatteringly better than other operatic sopranos; she’s famous for a reason.</p>
  701.  
  702.  
  703.  
  704. <p>Bach, Mozart, Beethoven, etc are, in fact, that good; none of the greats are overrated.</p>
  705.  
  706.  
  707.  
  708. <p>(IMO Tchaikovsky is slightly underrated.)</p>
  709.  
  710.  
  711.  
  712. <p>On a slightly different note, the “Great Books” are also, in fact, great. None of this “Shakespeare was overrated” stuff.</p>
  713.  
  714.  
  715.  
  716. <p>My only “wtf, why is this person revered, including them in the canon was a mistake” example in literature is Anne Sexton. Read Sexton and Plath side by side and it’s clear one of them is a real poet and the other isn’t.</p>
  717.  
  718.  
  719.  
  720. <p>Most of the canonically “great” movies (Casablanca, Godfather, etc) are, actually, that good.</p>
  721.  
  722.  
  723.  
  724. <p>In general, the “middlebrow” zone — complex enough to reward attention, emotionally legible enough to be popular — is, in fact, a sweet spot for objective Quality IMO, though not the only way to go.</p>
  725.  
  726.  
  727.  
  728. <p>Weirdly I *don’t* find this to be true in food. More highly touted/rated restaurants don’t reliably taste better to me.</p>
  729.  
  730.  
  731.  
  732. <p>Artistic quality, IMO, is relative to genre and culture. i.e. someone who dislikes all rap is not qualified to review a rap album. but within genres you often see expert consensus on quality, and that consensus points to a real &amp; objective thing.</p>
  733. </blockquote>
  734.  
  735.  
  736.  
  737. <p>I think this is mostly true, and it is important to both respect the rule and to understand the exceptions and necessary adjustments.</p>
  738.  
  739.  
  740.  
  741. <p>As I have noted before, for movies, critical consensus is very good at picking up a particular type of capital-Q Quality in the Zen and the Art of Motorcycle Maintenance sense. The rating means something. However, there is another axis that matters, and there the problem lies, because critics also mostly hate fun, and are happy to send you to a deeply unpleasant experience in the name of some artistic principle, or to bore you to tears. And they give massive bonus points for certain social motivations, while subtracting points for others.</p>
  742.  
  743.  
  744.  
  745. <p>Sarah nails it with the middlebrow zone. If the critics like a middlebrow-zone movie you know it’s a good time. When they love a highbrow movie, maybe it is great or you will be glad you saw it, but beware. If you know what the movie is ‘trying to do,’ and also the Metacritic rating, you know a lot. If you know the Rotten Tomatoes rating instead you know less, because it caps at 100. You can go in blind on rating alone and that is mostly fine, but you will absolutely get burned sometimes.</p>
  746.  
  747.  
  748.  
  749. <p>I strongly suspect, but have not yet tested, the hypothesis that Letterboxd is actually the best systematic rating system. There is clearly a selection issue at times – the highest rated stuff involves a ton of anime and other things that are only seen by people inclined to love them – but otherwise I rarely see them misstep. If you did a correction for selection effects by average in-genre rating of the reviewers I bet the ratings get scary good.</p>
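The proposed correction can be sketched concretely: re-center each rating by that reviewer’s average rating within the film’s genre, then average the residuals per film. This is a hypothetical illustration of the idea, not anything Letterboxd actually does; the data shape is invented.

```python
# Sketch of the proposed selection-effect correction: a film's score is
# the average of (rating - that reviewer's mean rating in the genre).
# Fans who rate their pet genre high no longer inflate it. Toy data only.
from collections import defaultdict

def corrected_scores(ratings):
    """ratings: iterable of (reviewer, genre, film, score) tuples.
    Returns {film: mean residual vs. each reviewer's in-genre baseline}."""
    # Pass 1: each reviewer's average score within each genre.
    totals = defaultdict(lambda: [0.0, 0])
    for reviewer, genre, film, score in ratings:
        t = totals[(reviewer, genre)]
        t[0] += score
        t[1] += 1
    # Pass 2: film score = mean residual against that baseline.
    film_totals = defaultdict(lambda: [0.0, 0])
    for reviewer, genre, film, score in ratings:
        s, n = totals[(reviewer, genre)]
        ft = film_totals[film]
        ft[0] += score - s / n
        ft[1] += 1
    return {film: s / n for film, (s, n) in film_totals.items()}
```

On made-up data where an anime devotee rates everything in the genre high, the correction ranks films by how much they beat each rater’s own baseline, which is exactly the "average in-genre rating of the reviewers" adjustment described above.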
  750.  
  751.  
  752.  
  753. <p>The canonically great movies do seem to reliably be very good.</p>
  754.  
  755.  
  756.  
  757. <p>Prestige TV is generally the best TV, and the ratings are overall pretty good, but of course there are many exceptions. The biggest mistake TV critics make is to disrespect many excellent shows, mostly but not entirely genre shows, that don’t fit the prestige mold properly.</p>
  758.  
  759.  
  760.  
  761. <p>Music within-genre is a place our society tends to absolutely nail over time. The single is almost always one of the best songs on the album, the one-hit wonder rarely has other gems, justice prevails. The best artists are reliably much better. Short term ‘song of the summer’ style is more random, and genre is personal taste. The classic favorites like Beethoven and Bach are indeed best in class.</p>
  762.  
  763.  
  764.  
  765. <p>Books I’m less convinced. I endorse literal Shakespeare in play form, but I was forced to read a Great Books curriculum and was mostly unimpressed.</p>
  766.  
  767.  
  768.  
  769. <p>Food is directionally right. I’ve talked about it before, but in short: what you have to beware is confluence of service and ambiance ratings (and cost ratings) with food ratings. If people say the food is great, the food is probably great. If people say the food is bad, it’s almost always bad. Personal taste can still matter, as can knowing how to order, and there are the occasional mistakes. For me, the big catches are that I cannot eat fruits and vegetables straight up, and if they try to get fancy about things (e.g. they aim for more than one Michelin Star, as discussed earlier) things reliably go south.</p>
  770.  
  771.  
  772.  
  773. <p>More than that, the things I love most are not things critics care about enough – half the reason I respect Taleb so much is ‘the bread, the only thing I cared about [at the Michelin starred restaurant] was not warm.’ Exactly.</p>
  774.  
  775.  
  776.  
  777. <h4>Government Working</h4>
  778.  
  779.  
  780.  
  781. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/INArteCarloDoss/status/1828703762338238657">In Germany it takes over 120 days to get a corporate operating license</a>, and 175 days to get a construction-related license. They’re going to have a bad time. What happened to German efficiency? These kinds of delays are policy choices.</p>
  782.  
  783.  
  784.  
  785. <p><a target="_blank" rel="noreferrer noopener" href="https://marginalrevolution.com/marginalrevolution/2024/09/equality-act-2010.html?utm_source=feedly&amp;utm_medium=rss&amp;utm_campaign=equality-act-2010">Alex Tabarrok looks at the utter insanity that is The UK’s 2010 ‘Equality Act</a>’ where if a judge decides two jobs were ‘equivalent,’ no matter the market saying otherwise, an employer – including a local government, some of which face bankruptcy for this – can not only be forced to give out ‘equal pay’ but to give out years of back wages. Offer your retail workers all the opportunity to work in the warehouse for more money, and they turned you down anyway? Doesn’t matter, the judge says they are ‘equal’ jobs. Back pay, now.</p>
  786.  
  787.  
  788.  
  789. <p>The details keep getting worse the more you look, such as “Any scheme which has as its starting point – “This qualification is paramount” or that “This skill is vital” is nearly always going to be biased or at least open to charges of bias or discrimination.”</p>
  790.  
  791.  
  792.  
  793. <p>My first thought was the same as the top comment, that this will dramatically shrink firm size. If you have to potentially pay any two given workers the same amount, then if two jobs have different market wages, they need to be provided by different firms. Even worse than pairwise comparisons would be chains of comparisons, where A=B and then B=C and so on, so you need to sever the chain.</p>
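The chain argument is essentially connected components: if a judge can rule jobs "equivalent" pairwise, equivalence chains together, and every job in a chain is exposed to the chain’s highest wage. A minimal union-find sketch with invented jobs and wages, illustrating the reasoning rather than any actual legal mechanism:

```python
# Illustration of the chain argument: pairwise "equivalent" rulings
# behave transitively, so every job in a connected chain risks being
# pulled up to the chain's highest wage. Toy example, invented numbers.

def equal_pay_exposure(wages, equivalences):
    """wages: {job: wage}; equivalences: (job_a, job_b) pairs a judge
    might rule 'equivalent'. Returns the wage each job could be pulled
    up to once chains are followed."""
    parent = {job: job for job in wages}

    def find(j):
        while parent[j] != j:
            parent[j] = parent[parent[j]]  # path halving
            j = parent[j]
        return j

    for a, b in equivalences:
        parent[find(a)] = find(b)

    # Highest wage in each connected component wins.
    peak = {}
    for job, wage in wages.items():
        root = find(job)
        peak[root] = max(peak.get(root, wage), wage)
    return {job: peak[find(job)] for job in wages}

# retail ~ checkout ~ warehouse chain together; driver stays separate.
exposure = equal_pay_exposure(
    {"retail": 10, "checkout": 11, "warehouse": 14, "driver": 13},
    [("retail", "checkout"), ("checkout", "warehouse")],
)
```

This is why severing the chain matters: one firm holding all four jobs is exposed on three of them, while splitting the chained jobs across firms caps each firm’s exposure at its own component.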
  794.  
  795.  
  796.  
  797. <p>The second thought is this will massively reduce wages, the same way that price transparency reduces wages only far, far worse. If you pay even one person $X, you risk having to pay everyone else $X, too, including retroactively when you don’t even get the benefits of higher wages. This provides very strong incentive to essentially never give anyone or any group a raise, unless you want to risk giving it to everyone.</p>
  798.  
  799.  
  800.  
  801. <p>The result? Declines in wages, also resulting in less supply of labor, unfilled jobs and higher unemployment. Also massive investment in automation, since low-wage employees are a grave risk.</p>
  802.  
  803.  
  804.  
  805. <p>There is also a puzzle. What do you do about jobs like the warehouse worker, where someone has to do them, but you can’t pay the market clearing price to convince people to do them?</p>
  806.  
  807.  
  808.  
  809. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/bryancsk/status/1828334621702148097">Same as it ever was.</a></p>
  810.  
  811.  
  812.  
  813. <figure class="wp-block-image"><a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3c8d737c-575f-455b-b6d4-8bb852c0fb66_889x833.png" target="_blank" rel="noreferrer noopener"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/4gAqkRhCuK2kGJFQE/goie3zt9mwmgcu54jf1z" alt=""></a></figure>
  814.  
  815.  
  816.  
  817. <p>It also sounds like someone forgot to price gouge.</p>
  818.  
  819.  
  820.  
  821. <p>My only explanation at this point is that <a target="_blank" rel="noreferrer noopener" href="https://x.com/syptweet/status/1827300301994394019">the United Kingdom likes trying to sound as sinister and authoritarian as possible</a>. It’s some sort of art project?</p>
  822.  
  823.  
  824.  
  825. <blockquote>
  826. <p>South Yorkshire Police: Do you know someone who lives a lavish lifestyle, but doesn’t have a job?</p>
  827.  
  828.  
  829.  
  830. <p>Your intelligence is vital in helping us put those who think they’re ‘untouchable’ before the courts.</p>
  831.  
  832.  
  833.  
  834. <p><a target="_blank" rel="noreferrer noopener" href="https://t.co/tpMYJkgdUP">Find out how here.</a></p>
  835.  
  836.  
  837.  
  838. <figure class="wp-block-image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/4gAqkRhCuK2kGJFQE/vxaui3xvoj7k2odlovlp" alt="A blue background with white writing that says raising the voice of economic crime."></figure>
  839.  
  840.  
  841.  
  842.  
  843. </blockquote>
  844.  
  845.  
  846.  
  847. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/AA_Millsap/status/1834741995626991988">A good way to think about high skill immigration to the United States.</a></p>
  848.  
  849.  
  850.  
  851. <blockquote>
  852. <p>Tyler Cowen: “I work with a great number of young people… from all over the world.</p>
  853.  
  854.  
  855.  
  856. <p>It’s just stunning to me how many of them want to come to the United States… and it’s stunning to me how few say, ‘Oh, could you help me get into Denmark?’”</p>
  857.  
  858.  
  859.  
  860. <p>Adam Millsap: I heard something the other day that stuck with me—every year there’s a draft for human capital and America has the first 100K picks and every year we trade them away for nothing.</p>
  861. </blockquote>
  862.  
  863.  
  864.  
  865. <p>The unforced error here is immense.</p>
  866.  
  867.  
  868.  
  869. <h4>Grapefruit Diet</h4>
  870.  
  871.  
  872.  
  873. <p>The new GLP-1 drugs make weight loss easy for some people, but far from all. And there continue to be a lot of people confidently asserting, as universals, claims that centrally contradict each other and that are at best very much not universal.</p>
  874.  
  875.  
  876.  
  877. <blockquote>
  878. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/exfatloss/status/1816959452442165401">Eliezer Yudkowsky</a>: From @exfatloss’s review of Pontzer’s _Burn_. I could do with a less angry summary of the book, <a target="_blank" rel="noreferrer noopener" href="https://www.exfatloss.com/p/burning-carolies-doesnt-burn-any">but reading this summary was still valuable</a>.</p>
  879.  
  880.  
  881.  
  882. <p>Summary: tl;dr</p>
  883.  
  884.  
  885.  
  886. <p>• Adding exercise barely increases your total cArOliEs out.</p>
  887.  
  888.  
  889.  
  890. <p>• If it does at all, less than expected, and the effect diminishes over time.</p>
  891.  
  892.  
  893.  
  894. <p>• The body cannot magically conjure up more cArOliEs if you go jogging, it just takes the energy from somewhere else. Just like spending money doesn’t increase your income, it just re-routes existing expenditures.</p>
  895.  
  896.  
  897.  
  898. <p>• This is what actual measurements show, everything prior was total speculation.</p>
  899.  
  900.  
  901.  
  902. <p>• This explains why the “move more” part of “eat less, move more” is garbage.</p>
  903.  
  904.  
  905.  
  906. <p>• Unfortunately, the rest of the 300-page book is fluff or useless mainstream cAroLies &amp; ulTRa procesSed fOOD nonsense.</p>
  907.  
  908.  
  909.  
  910. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/exfatloss/status/1816959452442165401">Experimental Fat Loss</a>: When I was in college I fantasized about being wealthy enough to afford having all my meals cooked for me, healthy, by a chef.</p>
  911.  
  912.  
  913.  
  914. <p>Then I got into the tech bubble, got wealthy enough and did it for like 3 months.</p>
  915.  
  916.  
  917.  
  918. <p>And I didn’t lose any weight.</p>
  919.  
  920.  
  921.  
  922. <p>Andrew Rettek: It’s weird how he has this graph but the text all describing a world where the top of the dark grey area is horizontal. IIRC from when I read about this result a few months back, you can’t get your Calories out up by a few hundred without a herculean effort (like the Michael Phelps swimming example). When I see mainstream sports scientists discuss these results, they always emphasize how important it is to climb the steep part of the slope and how it’s barely useful to go further.</p>
  923.  
  924.  
  925.  
  926. <p>The important thing is you can go from X maintenance Calories while completely sedentary to X+300-500, and it’s incredibly useful to do so for a bunch of reasons including weightloss.</p>
  927.  
  928.  
  929.  
  930. <figure class="wp-block-image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/4gAqkRhCuK2kGJFQE/s3gzdggmj90caz92xbzv" alt="Image"></figure>
  931.  
  932.  
  933.  
  935. </blockquote>
  936.  
  937.  
  938.  
  939. <p>Right, this graph is not saying exercise does not matter for calories burned. It is saying there are positive but decreasing and much less than full marginal returns to exercise within this ‘sane zone’ where other expenditure still has room to decrease.</p>
  940.  
  941.  
  942.  
  943. <p>In addition to the obvious ‘exercise is good for you in other ways,’ one listed caveat, also clear on this graph, seems super important: going from completely sedentary to ‘walking around the office level’ makes a huge difference. Whatever else you do, you really want to move a nonzero amount.</p>
  944.  
  945.  
  946.  
  947. <p>At the other end, the theory is that if you burn more calories exercising, you burn less in other ways; but if you burn so many exercising (e.g. Michael Phelps) that there’s nowhere left to spend less, it starts working. And there is an anecdotal report of a friend doing 14 miles of running per day with no days off, which made this work. But the claim is that ordinary humans don’t get there with sane exercise regimes.</p>
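<p>The compensation theory above can be put in toy numbers. A minimal sketch, with all values made up for illustration: the body offsets most marginal exercise by spending less elsewhere, until a fixed compensation budget runs out, after which extra exercise adds to calories out in full.</p>

```python
def total_energy_out(exercise_kcal, baseline_kcal=2600,
                     compensation_rate=0.7, compensable_kcal=500):
    """Toy 'constrained expenditure' model (illustrative numbers only).

    A fraction of exercise calories is offset by cuts to other
    expenditure, but total compensation is capped; past the cap,
    additional exercise raises calories out one-for-one.
    """
    compensation = min(compensation_rate * exercise_kcal, compensable_kcal)
    return baseline_kcal + exercise_kcal - compensation

# A sane exercise regime is mostly (not fully) compensated away:
print(total_energy_out(300))   # 2690: only 90 of 300 kcal show up
# A Phelps-level load exhausts the compensable budget entirely:
print(total_energy_out(3000))  # 5100: everything past the cap adds in full
```

The diminishing-then-full marginal returns here mirror the shape of the graph: steep early gains, a long flat middle, and additivity again only at extreme volumes.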
  948.  
  949.  
  950.  
  951. <p>So I have my own High Weirdness situation, which might be relevant.</p>
  952.  
  953.  
  954.  
  955. <p>I lost weight (from 300+ lbs down to a low of ~150lbs, then stable around 160lbs for decades) over about two years in my 20s entirely through radical reduction in calories in. As in I cut at least half of them, going from 3 meals a day to at most 2 and cutting portion size a lot as well. Aside from walking I wasn’t exercising.</p>
  956.  
  957.  
  958.  
  959. <p>One result of this is that I ended up with a world-class level of slow metabolism.</p>
  960.  
  961.  
  962.  
  963. <p>The mechanisms make sense together. Under the theory, with fewer calories in, every energy expenditure that could be cut got cut, and I stayed in that mode permanently. If brute force doesn’t solve your problem, you are not using enough (whether or not using enough is wise or even possible in context, and it might well not be either); at some point you push through all the equilibrium effects.</p>
  964.  
  965.  
  966.  
  967. <p>Which in turn is why I seem to be in a different situation, where exercise does indeed burn the extra calories it says on the tin, and on the margin CICO is accurate.</p>
  968.  
  969.  
  970.  
  971. <p>Similarly, it means that if I were to build muscle, as I am working on doing now, it will directly raise calories out, because again I’m out of adjustments in the other direction. The math that people keep citing, which doesn’t work for most people, actually does hold in this weird instance, or at least I strongly suspect that it does.</p>
  972.  
  973.  
  974.  
  975. <blockquote>
  976. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/ESYudkowsky/status/1817783077143515246">Eliezer Yudkowsky</a>: Has anyone found that semaglutide/tirzepatide failed for them, but the Eat Nothing Except Potatoes diet succeeded for weight loss or weight maintenance?</p>
  977.  
  978.  
  979.  
  980. <p>The keto brainfog never goes away for me, even months later.</p>
  981.  
  982.  
  983.  
  984. <p>Kiddos, I will repeat myself: Anyone serious about fighting resistant obesity has <strong>already tried diets equally or less palatable than ‘exclusively boiled potatoes’. </strong>Some such people report that ‘just potatoes’ <em>did </em>work. <strong>‘Palatability’ is thereby ruled out as an explanation.</strong></p>
  985.  
  986.  
  987.  
  988. <p>F4Dance: Semaglutide had modest effect on me (maybe about 5 lbs/month, but I was still ramping up the dosage) where the potato diet did better (about 10 lbs/month until it failed as I did more fries).</p>
  989. </blockquote>
  990.  
  991.  
  992.  
  993. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/EricTopol/status/1829499268346073342">On the other hand</a>, GLP-1 drug <a target="_blank" rel="noreferrer noopener" href="https://t.co/4I6FCtuSrI">Semaglutide seems to reduce all-cause mortality, deaths from Covid and severe adverse effects from Covid</a>?</p>
  994.  
  995.  
  996.  
  997. <figure class="wp-block-image"><a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ce2f7ae-1029-4433-9db5-1fb195289312_3482x3370.jpeg" target="_blank" rel="noreferrer noopener"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/4gAqkRhCuK2kGJFQE/ch1suzjngxyubsb9yvp5" alt="Image"></a></figure>
  998.  
  999.  
  1000.  
  1001. <blockquote>
  1002. <p>Eric Topol: <a target="_blank" rel="noreferrer noopener" href="https://t.co/dul6RYdfir">Also @TheLancet</a> and #ESCCongress today 4 semaglutide vs placebo randomized trials pooled for patients with heart failure, mildly reduced or preserved ejection fraction (HFpEF)</p>
  1003.  
  1004.  
  1005.  
  1006. <p>Graphs below</p>
  1007.  
  1008.  
  1009.  
  1010. <p>A: CV death and worsening heart failure</p>
  1011.  
  1012.  
  1013.  
  1014. <p>B: Worsening heart failure (drove the benefit)</p>
  1015. </blockquote>
  1016.  
  1017.  
  1018.  
  1019. <figure class="wp-block-image"><a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d798be1-3c17-4c56-ba8d-571acee45e2b_2120x2376.jpeg" target="_blank" rel="noreferrer noopener"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/4gAqkRhCuK2kGJFQE/uoorx9vvh5vnaaac34xb" alt="Image"></a></figure>
  1020.  
  1021.  
  1022.  
  1023. <p>These are rather absurd results, if they hold up.</p>
  1024.  
  1025.  
  1026.  
  1027. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/ZekeEmanuel/status/1829576445082910799">North Carolina covers GLP-1s for Medicaid patients</a>, <a target="_blank" rel="noreferrer noopener" href="https://t.co/IWLiuMhuDo">but not state employees</a>. <a target="_blank" rel="noreferrer noopener" href="https://www.wsj.com/articles/for-obese-patients-wegovy-is-worth-the-cost-insurance-health-north-carolina-2e0edf02">Govind Persad and Ezekiel Emanuel argue in the WSJ that the drugs are worth the cost.</a> As that article points out, Wegovy and other GLP-1s are more cost effective than many things we already cover.</p>
  1028.  
  1029.  
  1030.  
  1031. <p>I don’t think this is primarily about obesity, it is primarily about us wanting to cover drugs at any cost, and then running into actual overall cost constraints, and GLP-1s being desired too broadly such that it exposes the contradiction. It’s easy to justify spending huge on an orphan drug because the cost and who pays are hidden. Here, you can’t hide the up front costs, no matter the benefits. We can only value lives at $10 million when we have limited opportunities to make that trade, or we’d go bankrupt.</p>
  1032.  
  1033.  
  1034.  
  1035. <p><a target="_blank" rel="noreferrer noopener" href="https://marginalrevolution.com/marginalrevolution/2024/07/the-economics-of-glp-1.html?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=the-economics-of-glp-1">GLP-1 agonists cause dramatic shifts in food consumption</a>.</p>
  1036.  
  1037.  
  1038.  
  1039. <blockquote>
  1040. <p>Frank Fuhrig: Their grocery bills were down by an average of 11%, yet they spent 27% more on lean proteins from lean meat, eggs and seafood. Other gainers were meal replacements (19%), healthy snacks (17%), whole fruits and vegetables (13%) and sports and energy drinks (7%).</p>
  1041.  
  1042.  
  1043.  
  1044. <p>Snacks and soda took the brunt of reduced spending by consumers after GLP-1 treatment: snacks and confectionary (-52%), prepared baked goods (-47%), soda/sugary beverages (-28%), alcoholic beverages (-17%) and processed food (-13%).</p>
  1045. </blockquote>
  1046.  
  1047.  
  1048.  
  1049. <p>If you want to get some GLP-1 agonists and pay for it yourself, <a target="_blank" rel="noreferrer noopener" href="https://www.astralcodexten.com/p/the-compounding-loophole">there’s technically a shortage, so you can solve three problems at once</a> by using the compounding loophole and get a steep discount without taxing the base supply.</p>
  1050.  
  1051.  
  1052.  
  1053. <p><a target="_blank" rel="noreferrer noopener" href="https://cerebralab.com/My_lukewarm_take_on_GLP-1_agonists">Here’s a skeptical take warning not to go too far</a> with universal application of GLP-1 agonists. He agrees they’re great for people with obesity or diabetes, absolutely go for it then, but like all drugs that do anything useful there are side effects including unknown unknowns, at least from your perspective. So while the side effects are very much acceptable when you need the benefits, perhaps don’t do it if you’re fine without.</p>
  1054.  
  1055.  
  1056.  
  1057. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/Jabaluck/status/1827473548903862488">We could have had GLP-1 agonists in the 1990s, the former dean of Harvard Medical School had a startup with promising early results, but their pharma partner Pfizer killed the project</a> for reasons that seem really stupid, thinking it wouldn’t sell.</p>
  1058.  
  1059.  
  1060.  
  1061. <h4>Gamers Gonna Game Game Game Game Game</h4>
  1062.  
  1063.  
  1064.  
  1065. <p><a target="_blank" rel="noreferrer noopener" href="https://www.channelfireball.com/article/Wizards-of-the-Coast-Announces-New-Global-Magic-Tournament-Series/c77b4657-3d45-44e4-8566-f5171e860fe8?utm_source=twitter&amp;utm_medium=social&amp;utm_content=channelfireball&amp;utm_campaign=Wizards-of-the-Coast-Announces-New-Global-Magic-Tournament-Series-08202024&amp;utm_author=lsv">Magic: The Gathering announces new global Magic tournament series</a>. The first wave has eight. They’re $50k weekend tournaments with 8 qualification slots, so essentially an old school Grand Prix with a better prize pool. Great stuff. I worry (or hope?) they will get absolutely mobbed, and you’ll need a crazy good record.</p>
  1066.  
  1067.  
  1068.  
  1069. <p>Nadu, Winged Wisdom is now thankfully banned in Modern. <a target="_blank" rel="noreferrer noopener" href="https://magic.wizards.com/en/news/feature/on-banning-nadu-winged-wisdom-in-modern">Michael Majors offers a postmortem</a>. It is a similar story to one we have heard many times. A card was changed late in the process, no one understood the implications of the new version, and it shipped as-is without getting proper attention. No one realized the combo with Shuko or other 0-cost activated effects.</p>
  1070.  
  1071.  
  1072.  
  1073. <p>In response, they are going to change the timing of bans and restrictions to minimize fallout on future mistakes, which is great, and also be more careful with late changes. As Majors notes, he knew he didn’t understand the implications of the new textbox, and that should have been a major red flag. So rather crazy error, great mea culpa. But also Ari Lax is right that they need to address more directly that the people who looked at Nadu late <a target="_blank" rel="noreferrer noopener" href="https://twitter.com/armlx/status/1828373909713891478">weren’t doing the correct thing</a> of looking for worst case scenarios. I agree that mistakes happen but this is a very straightforward interaction, and when you add a ‘if X then draw a card’ trigger the very first thing you do is ask if there is a way to repeatedly do X.</p>
  1074.  
  1075.  
  1076.  
  1077. <p><a target="_blank" rel="noreferrer noopener" href="https://twitter.com/SamuelHBlack/status/1828108167882191197">Sam Black updates us on the meta of cEDH (four player competitive commander) play.</a> As you would expect, competitive multiplayer gets weird. The convention increasingly became, Sam reports, that if Alice will win next turn, then Bob, Carol and David will conspire to threaten to allow (let’s say) David to win, to force Alice to agree to a draw. That’s ‘funny once’ but a terrible equilibrium, and all these ‘force a draw’ moves are generally pretty bad, so soon draws will be zero points. Sounds like a great change to me. If Bob can be Kingmaker between Alice and David, that’s unavoidable, but he shouldn’t be able to extract a point.</p>
  1078.  
  1079.  
  1080.  
  1081. <p>The problem is that what remains legal is outright collusion, as in casting a spell that intentionally wins your friend (who you may have a split with!) the game, without it being part of some negotiating strategy or being otherwise justified. That is going to have to get banned and somehow policed, and rather quickly – if that happened to me and the judge said ‘<a target="_blank" rel="noreferrer noopener" href="https://tvtropes.org/pmwiki/pmwiki.php/Main/LoopholeAbuse">aint no rule</a>’ and didn’t fix it going forward either, I don’t think I ever come back – to me this is a clear case of ‘okay that was funny once but obviously that can never happen again.’</p>
  1082.  
  1083.  
  1084.  
  1085. <p>There is now a debate on whether competitive commander (cEDH) <a target="_blank" rel="noreferrer noopener" href="https://x.com/SamuelHBlack/status/1832298772535071231">should have a distinct banned list from Commander</a>. Sam Black says no, because the format is self-balancing via having four players, and it is good for people to know their decks will remain legal. You could unban many cards safely, but there wouldn’t be much point.</p>
  1086.  
  1087.  
  1088.  
  1089. <p>I think I’m with Sam Black on reflection. It’s good that cEDH and Commander have the same rules, and to know you don’t have to worry about the list changing. It would take a big win to be worth breaking that. The format is not exactly trying to be ‘balanced’ so why start now?</p>
  1090.  
  1091.  
  1092.  
  1093. <p>Indeed, I would perhaps go a step further. The fun of cEDH and Commander was initially, in large part, finding which cards and strategies are suddenly good due to the new format. A lot of stuff is there ‘by accident.’ I can get behind that world of discovery, and the big decks and three opponents mean nothing goes too crazy, or you ban the few things that do go too far. Let’s keep more of that magic while we can. Whereas to me, the more they make cards for Commander on purpose, the less tempted I am to play it.</p>
  1094.  
  1095.  
  1096.  
  1097. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/leearson/status/1831155533115752494">How would you use these new Magic lands?</a></p>
  1098.  
  1099.  
  1100.  
  1101. <figure class="wp-block-image"><a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F948ce787-4f36-400f-bfa0-a9327b14a50f_1134x592.jpeg" target="_blank" rel="noreferrer noopener"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/4gAqkRhCuK2kGJFQE/wpjpsdoxgc8c3ttmzsgl" alt="Image"></a></figure>
  1102.  
  1103.  
  1104.  
  1105. <blockquote>
  1106. <p>Lee Shi Tian: I suppose this cycle need 12-14 core basic land type to enable the land. It seems perfect for 1+0.5 color deck. For example the Rg mouse at std now. I wonder how good it is in the 0.5 side (Wg/Rb). Or even 1+0.5+0.5 deck (Rgb/Wgu).</p>
  1107. </blockquote>
  1108.  
  1109.  
  1110.  
  1111. <p>The obvious first note is that a Gloomlake Verge with no Island or Swamp is still a regular untapped Island. Unless there are other reasons you need Islands (or other basic land types) or need basic lands, including these lands over basics is outright free. Missing is fine. They get better rapidly as you include even a few basics.</p>
  1112.  
  1113.  
  1114.  
  1115. <p>Note that you only get to count lands that don’t already ‘solve your problem’ that the new dual land is addressing. So if you have 5 Mountains, 7 Forests and Thornspire Verge, then those 7 Forests only enable Verge to the extent you need a second green. If you need one, only the Mountains count. They’d still count as roughly two extra green sources starting on turn two. Note that with Llanowar Elves in standard, Hushwood Verge (which is base green and secondary white) plays substantially better for many decks than Thornspire Verge (which is base red and secondary green).</p>
  1116.  
  1117.  
  1118.  
  1119. <p>Either way this feels like power creep, lands good enough to make it into some Modern decks. Not obviously bad power creep, but definitely power creep.</p>
  1120.  
  1121.  
  1122.  
  1123. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/JoINrbs/status/1831328396913390015">A postmortem on NFT games</a>:</p>
  1124.  
  1125.  
  1126.  
  1127. <blockquote>
  1128. <p>Jorbs: The thing about playing a game with nft assets is that nfts are terrible. The game can be fine, but it has nfts in it, so it is going to get shat on by tons of people and is fairly likely to result in many players (or investors) losing large amounts of money.</p>
  1129.  
  1130.  
  1131.  
  1132. <p>It’s not a solvable problem, even if your community is great and the game uses nfts in a compelling way, you are vulnerable to others coming in and using it as a pump-and-dump, or to build the worst version of prison gold farming in it, etc.</p>
  1133.  
  1134.  
  1135.  
  1136. <p>It’s also causal fwiw. The reason someone puts nfts in their game, and the reason many players are drawn to that game, is a desire to make money, and given that the game doesn’t actually produce anything of real value, that money comes from other players.</p>
  1137. </blockquote>
  1138.  
  1139.  
  1140.  
  1141. <p>On reflection this is mostly right. NFTs attract whales and they attract speculators, and they drive away others. This is very bad for the resulting composition of the community around the game, and NFTs also force interaction with the community. Magic: The Gathering kind of ‘gets away with’ a version of this in real life, as do other physical TCGs, but they’re sort of grandfathered in such that it doesn’t drive (too many) people away and the community is already good, and they don’t have the crypto associations to deal with.</p>
  1142.  
  1143.  
  1144.  
  1145. <p>I am very happy we got Magic: the Gathering before we got the blockchain, so that could happen.</p>
  1146.  
  1147.  
  1148.  
  1149. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/nickcammarata/status/1827796111308058968">Thread on speedrunning as the ultimate template</a> of how to genuinely learn a system, identify and solve bottlenecks, experiment, practice and improve. And why you should apply that attitude to other places, including meditation practice, rather than grinding the same thing over and over without an intention.</p>
  1150.  
  1151.  
  1152.  
  1153. <h4>Gamers Winning At Life</h4>
  1154.  
  1155.  
  1156.  
  1157. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/robinhanson/status/1825514791483691150">If you’re so good at chess, why aren’t you rich?</a></p>
  1158.  
  1159.  
  1160.  
  1161. <blockquote>
  1162. <p>Robin Hanson: Some people are really good at board games. Not just one or a few but they can do well at most any. Why don’t they then do better at life? How do board games differ so systematically?</p>
  1163. </blockquote>
  1164.  
  1165.  
  1166.  
  1167. <p><a target="_blank" rel="noreferrer noopener" href="https://www.overcomingbias.com/p/why-dont-gamers-win-at-life">He then followed up with a full post.</a></p>
  1168.  
  1169.  
  1170.  
  1171. <p>Here’s his conclusion:</p>
  1172.  
  1173.  
  1174.  
  1175. <blockquote>
  1176. <p>Robin Hanson: The way I’d say it is this: we humans inherit many unconscious habits and strategies, from both DNA and culture, habits that apply especially well in areas of life with less clear motivations, more implicit rules, and more opaque complex social relations. We have many (often “<a target="_blank" rel="noreferrer noopener" href="https://www.theseedsofscience.org/2023-we-see-the-sacred-from-afar-to-see-it-the-same">sacred</a>”) norms saying to execute these habits “authentically”, without much conscious or strategic reflection, especially selfish. (“Feel the force, Luke.”) These norms are easier to follow with implicit rules and opaque relations.</p>
  1177.  
  1178.  
  1179.  
  1180. <p>Good gamers then have two options: defy these norms to consciously calculate life as a game, or follow the usual norm to not play life as a game. At least one, and maybe both, of these options tends to go badly. (A <a target="_blank" rel="noreferrer noopener" href="https://x.com/robinhanson/status/1825911415711879375">poll</a> prefers defy.) At least in most walks of life; there may be exceptions, such as software or finance, where these approaches go better.&nbsp;</p>
  1181. </blockquote>
  1182.  
  1183.  
  1184.  
  1185. <p>I know he’s met a gamer, he lunches with Bryan Caplan all day, but this does not seem to understand the gamer mindset.</p>
  1186.  
  1187.  
  1188.  
  1189. <p>Being a gamer, perhaps I can help. Here’s my answer.</p>
  1190.  
  1191.  
  1192.  
  1193. <p>People good at board games usually have invested in learning a general skill of being good at board games, or games in general. That is time and skilling up not spent on other things, like credentialism or building a network or becoming popular or charismatic. And it indicates a preference to avoid such factors, and to focus on what is interesting and fun instead.</p>
  1194.  
  1195.  
  1196.  
  1197. <p>This differential skill development tends to snowball, and if you ‘fall behind’ in those other realms then you see increasing costs and decreasing returns to making investments there, both short and long term. Most people develop those skills not because they are being strategic, but incidentally through path dependence.</p>
  1198.  
  1199.  
  1200.  
  1201. <p>The world then tends to punish these preferences and skill portfolios, in terms of what people call ‘success.’ This is especially true if such people get suckered into the actual gaming industry.</p>
  1202.  
  1203.  
  1204.  
  1205. <p>Alternatively, a key reason many choose games to this extent is exactly because they tend to underperform in other social contests, or find them otherwise unrewarding. So the success in games is in that sense indicative of a lack of other success, or the requirements for such success.</p>
  1206.  
  1207.  
  1208.  
  1209. <p>There’s another important factor. People I know who love board games realize that you don’t need this mysterious ‘success’ to be happy in life. You can play board games with your friends, and that is more fun than most people have most of the time, and it is essentially free in all ways. They universally don’t have expensive taste. So maybe they go out and earn enough to support a family, sure, but why should they play less fun games in order to gain ‘success’?</p>
  1210.  
  1211.  
  1212.  
  1213. <p>Opportunity costs are high out there. <a target="_blank" rel="noreferrer noopener" href="https://www.tcgplayer.com/product/6400/magic-the-gathering-urzas-legacy-tinker?country=US&amp;utm_campaign=18142757028&amp;utm_source=google&amp;utm_medium=cpc&amp;utm_content=&amp;utm_term=&amp;adgroupid=&amp;gad_source=1&amp;gclid=Cj0KCQjw2ou2BhCCARIsANAwM2HFqFMfJdBAZn_LVgmS8MrOdeqALPpO60ALyLeHIRjgfp6AtnEGzcoaAqq-EALw_wcB&amp;Language=English">As a tinkering mage famously said</a>, I wonder how it feels to be bored?</p>
  1214.  
  1215.  
  1216.  
  1217. <p>(I mean, I personally don’t wonder. I went to American schools.)</p>
  1218.  
  1219.  
  1220.  
  1221. <p>There are two answers.</p>
  1222.  
  1223.  
  1224.  
  1225. <p>One is that the money is the score, and many do ultimately find games involving earning money more interesting. Often this is poker or sports betting or trading, all of which such people consistently excel at doing. So they often end up doing well kind of by accident, or because why not.</p>
  1226.  
  1227.  
  1228.  
  1229. <p>That’s how I ended up doing well. One thing kind of led to another. The money was the score, and trading in various forms was fascinating as a game. I did also realize money is quite useful in terms of improving your life and its prospects, up to a point. And indeed, I mostly stopped trying to make too much more money around that point.</p>
  1230.  
  1231.  
  1232.  
  1233. <p>The other is that some gamers actually decide there is something important to do, that requires them to earn real money or otherwise seek some form of ‘success.’ They might not want a boat, but they want something else.</p>
  1234.  
  1235.  
  1236.  
  1237. <p>In my case, for writing, that’s AI and existential risk. If that was not an issue, I would keep writing because I find writing interesting, but I wouldn’t put in anything like this effort level or amount of time. And I would play a ton more board games.</p>
  1238.  
  1239.  
  1240.  
  1241. <h4>I Was Promised <s>Flying</s> Self-Driving Cars</h4>
  1242.  
  1243.  
  1244.  
  1245. <p>There are still a few bugs to work out, <a target="_blank" rel="noreferrer noopener" href="https://x.com/ajtourville/status/1823509421357719763">as the Waymos honk at each other while parking at 4am</a>.</p>
  1246.  
  1247.  
  1248.  
  1249. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/NateSilver538/status/1826496344481300480">Nate Silver reports positively on his first self driving car experience</a>. The comments that involved user experiences were also universally positive. This is what it looks like to be ten times better.</p>
  1250.  
  1251.  
  1252.  
  1253. <p><a target="_blank" rel="noreferrer noopener" href="https://www.understandingai.org/p/we-could-be-months-away-from-driverless">Aurora claims to be months away from fully driverless semi-trucks.</a></p>
  1254.  
  1255.  
  1256.  
  1257. <h4>While I Cannot Condone This</h4>
  1258.  
  1259.  
  1260.  
  1261. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/Dumpster_DAO/status/1832148090452898235">Polymarket offered a market </a>on who would be in the lead in the presidential market for a majority of the three hours between 12pm and 3pm one day. Kamala Harris was slightly behind. <a target="_blank" rel="noreferrer noopener" href="https://rajivsethi.substack.com/p/a-failed-attempt-at-prediction-market?triedRedirect=true">Guess what happened next</a>? Yep, a group bought a ton of derivative contracts, <a target="_blank" rel="noreferrer noopener" href="https://x.com/Dumpster_DAO/status/1832148108735832272">possibly losing on the order of $60k</a>, then tried to pump the main market with over $2 million in buys that likely cost them even more.</p>
  1262.  
  1263.  
  1264.  
  1265. <p>Rather than being troubled or thinking this is some sort of ‘threat to Democracy,’ I would say this was a trial by fire, and everything worked exactly as designed. They spent millions, and couldn’t get a ~2% move to stick for a few hours. That’s looking like a liquid market that is highly resistant to manipulation, where the profit motive keeps things in line. Love it. Gives me a lot more faith in what happens later.</p>
  1266.  
  1267.  
  1268.  
  1269. <p>In other good prediction market news, Kalshi won its case, and can now offer legal betting markets to Americans on elections. Neat.</p>
  1270.  
  1271.  
  1272.  
  1273. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/jurawho/status/1826306565218979935?s=48">Cancellations of musical artists matter mostly because of platform actions such as removal from algorithmic recommendations and playlists</a>. Consumer behavior is otherwise mostly unchanged. This matches my intuitions and personal experience.</p>
  1274.  
  1275.  
  1276.  
  1277. <p>Curious person asks if there were any student protest movements that were not vindicated by history, as he couldn’t think of any. <a target="_blank" rel="noreferrer noopener" href="https://twitter.com/Noahpinion/status/1829886649519706562">The answers surprised him</a>. <a target="_blank" rel="noreferrer noopener" href="https://x.com/SwannMarcus89/status/1829722356828311744">Then they surprised him a bit more</a>.</p>
  1278.  
  1279.  
  1280.  
  1281. <p>Study says (I didn’t verify methodology but the source quoting this is usually good) <a target="_blank" rel="noreferrer noopener" href="https://x.com/StefanFSchubert/status/1823990866731512292">value of a good doctor over their lifetime is very high</a>, as is the value of not being a very bad one, with an 11% higher or 12% lower mortality rate than an average doctor, and the social cost of having a bad (5th percentile) doctor rather than a 50th percentile one on the order of $9 million. Not that we could afford or would want to pay that social cost to get the improvement at scale, but yes, quality matters. The implications for policy are varied and not obvious.</p>
  1282.  
  1283.  
  1284.  
  1285. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/oscredwin/status/1827450059924852859">Turns out the price of a cozy Ambassadorship is typically around $2.5 million</a>, payable in political contributions. Doesn’t seem obviously mispriced?</p>
  1286.  
  1287.  
  1288.  
  1289. <p><a target="_blank" rel="noreferrer noopener" href="https://www.astralcodexten.com/p/in-defense-of-im-sorry-you-feel-that">Scott Alexander defends ‘I’m sorry you feel that way’ and ‘I’m sorry if you’re offended</a>.’ I think he’s mostly right that this is indeed a useful phrase and often we do not have a superior alternative. The things to understand about such phrases are:</p>
  1290.  
  1291.  
  1292.  
  1293. <ol>
  1294. <li>It’s not a real apology. It’s (usually) also not claiming to be one.</li>
  1295.  
  1296.  
  1297.  
  1298. <li>It is instead a statement you are sad about some aspect of the situation.</li>
  1299.  
  1300.  
  1301.  
  1302. <li>People hate it because they wanted an apology.</li>
  1303. </ol>
  1304.  
  1305.  
  1306.  
  1307. <p>More precisely, it is saying: “I acknowledge that you desire an apology. I am not going to give you one, because I do not think one is justified. However, I sympathize with your situation, and am sad that you find yourself in it and wish things were better.”</p>
  1308.  
  1309.  
  1310.  
  1311. <p>Sometimes people do use it to gaslight, claiming it is an actual apology. Or people use this when an apology is required or demanded, to technically claim they did it. Kids especially like to do this, since it has the word ‘sorry’ in it. That’s your fault for asking, and if you want a ‘real’ or ‘sincere’ apology, you can reasonably reject such responses. Many comments said similar things.</p>
  1312.  
  1313.  
  1314.  
  1315. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/davidshor/status/1830263157950193700">Let me tell you about the very young. They are different from you and me</a>.</p>
  1316.  
  1317.  
  1318.  
  1319. <blockquote>
  1320. <p>David Shor: It is really striking how different very young adults are from everyone else in personality data. 0.8 standard deviations is a lot!</p>
  1321. </blockquote>
  1322.  
  1323.  
  1324.  
  1325. <figure class="wp-block-image"><a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ba921c1-2229-4a02-9f3a-94ea3c74fd7c_2286x1442.jpeg" target="_blank" rel="noreferrer noopener"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/4gAqkRhCuK2kGJFQE/f0qczhs0tlmhwccixd4c" alt="Image"></a></figure>
  1326.  
  1327.  
  1328.  
  1329. <p>With the ambiguous exception of enjoying wild flights of fantasy, ‘kids these days’ are on the wrong side of every single one of these. There’s a lot of correlation and clustering here. The question is, to what extent will they grow out of it, versus this being a new future normal?</p>
  1330.  
  1331.  
  1332.  
  1333. <p><a target="_blank" rel="noreferrer noopener" href="https://aashishreddy.substack.com/p/interview-tyler-cowen-economist">Tyler Cowen interview with Aashish Reddy</a>, different from the usual, far more philosophical and abstract and historical. I wish I had the time and space to read this widely, to know all the history and the thinkers, and live in that world. Alas, not being Tyler Cowen or reading at his speed, I do not. One thing that struck me was Cowen saying he has become more Hegelian as he got older.</p>
  1334.  
  1335.  
  1336.  
  1337. <p>I think that is tragic, and also that it explains a lot of his behavior. Hegel seems to me like the enemy of good thinking and seeking truth, in the literal sense that he argues against it via his central concept of the dialectic, and for finding ways to drive others away from it. This is the central trap of our time, the false dichotomy made real and a symmetrical ‘you should see the other guy.’ But of course I’ve never read Hegel, so perhaps I misunderstand.</p>
  1338.  
  1339.  
  1340.  
  1341. <p>Presumably this is due to different populations retweeting, since <a target="_blank" rel="noreferrer noopener" href="https://twitter.com/bryan_caplan/status/1833223974815211607">these</a> <a target="_blank" rel="noreferrer noopener" href="https://twitter.com/bryan_caplan/status/1833228758956052591">are</a> very much the same poll for most purposes. Also wow, yeah, that’s some biting of that bullet.</p>
  1342.  
  1343.  
  1344.  
  1345. <figure class="wp-block-image"><a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b94fd5f-8093-4295-b363-189286842cef_506x1013.png" target="_blank" rel="noreferrer noopener"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/4gAqkRhCuK2kGJFQE/h4itbijnqhztxixcts23" alt=""></a></figure>
  1346.  
  1347.  
  1348.  
  1349. <h4>Nostalgia</h4>
  1350.  
  1351.  
  1352.  
  1353. <p><a target="_blank" rel="noreferrer noopener" href="https://marginalrevolution.com/marginalrevolution/2024/04/what-i-am-nostalgic-about.html?utm_source=rss&amp;utm_medium=rss&amp;utm_campaign=what-i-am-nostalgic-about">Tyler Cowen says what he is and is not personally nostalgic about</a>.</p>
  1354.  
  1355.  
  1356.  
  1357. <p>The particular things Tyler notices are mostly not things that strike me, as they are particular to Tyler. But when one takes a step back, things very much rhyme.</p>
  1358.  
  1359.  
  1360.  
  1361. <p>Much of this really is: “Things were better back when everything was worse.”</p>
  1362.  
  1363.  
  1364.  
  1365. <p>So many of our problems are the same as Moe’s, who cannot find Amanda Hugginkiss: our standards are too high.</p>
  1366.  
  1367.  
  1368.  
  1369. <p>We have forgotten that the past royally sucked. Because it royally sucked, we took joy in what we now take for granted, and in versions of things we now consider unacceptable. That opened up the opportunity for a lot of good experiences.</p>
  1370.  
  1371.  
  1372.  
  1373. <p>It was also legitimately better, in important ways, that we accepted lower standards on various things, especially forms of ‘safety,’ and especially for children.</p>
  1374.  
  1375.  
  1376.  
  1377. <p>Tyler mentions popular culture was big on personal freedom back then, and that was great, and I wish we still had that. But missing from Tyler’s list is that in the past children, despite a vastly less safe world, enjoyed vastly more freedom along a wide range of dimensions. They could be alone or travel or do various things at dramatically earlier ages, and their lives were drastically less scheduled. And they saw each other, and did things, in physical space. To me that’s the clear biggest list item.</p>
  1378.  
  1379.  
  1380.  
  1381. <p>Gen Z says it is falling behind and has no financial hope. <a target="_blank" rel="noreferrer noopener" href="https://twitter.com/robkhenderson/status/1781360924659491293">And yet:</a></p>
  1382.  
  1383.  
  1384.  
  1385. <blockquote>
  1386. <p>The Economist: “In financial terms, Gen Z is doing extraordinarily well…average 25-year-old Gen Zer has an annual household income of over $40K, 50% above the average baby-boomer at the same age…Their home-ownership rates are higher than millennials at the same age.”</p>
  1387. </blockquote>
  1388.  
  1389.  
  1390.  
  1391. <p>Yes that is inflation adjusted. The difference is that what is considered minimally acceptable has dramatically risen. So you need to spend a lot more to get the same life events and life satisfaction.</p>
  1392.  
  1393.  
  1394.  
  1395. <p>In particular, people feel they must be vastly wealthier and more secure than before in order to get married or have a child. They are not entirely wrong about that.</p>
  1396.  
  1397.  
  1398.  
  1399. <p><a target="_blank" rel="noreferrer noopener" href="https://www.newyorker.com/news/our-local-correspondents/why-you-cant-get-a-restaurant-reservation">This was an excellent New Yorker write-up</a> of what is happening with online restaurant reservations. Bots snag reservations, various websites let you resell them, the restaurants get cut out, and sometimes tables sit empty. Regular people find it almost impossible to get a top reservation. I will probably never go to 4 Charles Prime Rib. I may never again go back to Carbone. Meanwhile, Le Bernardin says that when a handful of tables do not show up, the night’s profit is gone, despite overwhelming demand.</p>
  1400.  
  1401.  
  1402.  
  1403. <p>It is madness. Utter madness.</p>
  1404.  
  1405.  
  1406.  
  1407. <p>You have people happy to spend double, triple or even ten times what you charge them, and fight for the ability to do so. Then you complain about your margins.</p>
  1408.  
  1409.  
  1410.  
  1411. <p>Seriously, restaurants, I know this is a hopeless request, but stop being idiots. Give out reservations to your regulars and those you care about directly. And then take the prime reservations, the ones people value most, and auction or sell them off your own goddamn self. You keep the money. And if they do not sell, you know they did not sell, and you can take a walk-in.</p>
  1412.  
  1413.  
  1414.  
  1415. <p>This definitely sounds like it should be a job for a startup, perhaps one of those in the article but likely not. Alas, I do not expect enough uptake from the restaurants.</p>
  1416.  
  1417.  
  1418.  
  1419. <blockquote>
  1420. <p><a target="_blank" rel="noreferrer noopener" href="https://twitter.com/paulg/status/1783088885645324542">Paul Graham</a>: There is a missing startup here. Restaurants should be making this money, not scalpers.</p>
  1421.  
  1422.  
  1423.  
  1424. <p>And incidentally, there’s more here than just this business. You could use paid reservations as a beachhead to displace OpenTable.</p>
  1425.  
  1426.  
  1427.  
  1428. <p>Nick Kokonas: Already did it Paul. Tock. Sold for $430M to SQSP. The problem is the operators not the tech.</p>
  1429.  
  1430.  
  1431.  
  1432. <p>Jake Stevens: As someone who has built restaurant tech before: tock is an amazing product, and your last point is dead on</p>
  1433.  
  1434.  
  1435.  
  1436. <p><a target="_blank" rel="noreferrer noopener" href="https://twitter.com/mattyglesias/status/1782911003333665270">Matthew Yglesias:</a> <a target="_blank" rel="noreferrer noopener" href="https://www.slowboring.com/p/restaurants-should-charge-more-for">Begging America’s restaurant owners (and Taylor Swift) to charge market-clearing prices.</a></p>
  1437.  
  1438.  
  1439.  
  1440. <p>If you feel guilty about gouging or whatever, donate the money to charity.</p>
  1441.  
  1442.  
  1443.  
  1444. <p>The Tortured Microeconomists’ Department.</p>
  1445.  
  1446.  
  1447.  
  1448. <p>Fabian Lange: Swanky restaurant reservations &amp; Taylor tix derive much of their value from being hard to get and then be able to post about it on twitter, brag with your friends, etc… . Rationing is part of the business model. Becker’s note on restaurant pricing applies (JPE, 1991).</p>
  1449. </blockquote>
  1450.  
  1451.  
  1452.  
  1453. <p>The argument that artificial scarcity is a long term marketing strategy is plausible up to a point, but only to a point. You can still underprice if you want to. Hell, you can let scalpers play their game if you want that. But you should at least be charging the maximum price that will sell out within a few minutes.</p>
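The “maximum price that will sell out within a few minutes” is just the market-clearing price. A minimal sketch of how a restaurant could compute it in-house (all bids, names, and capacities here are hypothetical illustration values, not anything from the article):

```python
# Sketch of a uniform-price clearing rule for prime-time tables.
# All numbers below are made up for illustration.

def clearing_price(bids, capacity):
    """Highest price at which all `capacity` tables still sell:
    the capacity-th highest bid. Returns None if there is nothing to price."""
    if capacity <= 0 or not bids:
        return None
    ranked = sorted(bids, reverse=True)
    if len(ranked) < capacity:
        return ranked[-1]  # excess supply: price falls to the lowest bid
    return ranked[capacity - 1]

bids = [1000, 450, 400, 300, 250, 120, 80]  # diners' willingness to pay
print(clearing_price(bids, capacity=3))  # -> 400
```

The gap between this price and the menu-only status quo is roughly what scalpers currently capture; an in-house auction lets the restaurant keep it, and an unsold slot signals a walk-in table.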
  1454.  
  1455.  
  1456.  
  1457. <p>I know the argument that charging anything close to market prices would leave a bad taste in people’s mouths, or not be ‘democratizing,’ or whatever. People always say that. I can see this with a musical artist. With a top-end restaurant reservation, it is obvious nonsense. Why would you not want the place to succeed? Especially if you could then lower menu prices and offer free dishes with some of the profits, or use it to hire more staff or otherwise improve the experience.</p>
  1458.  
  1459.  
  1460.  
  1461. <p>One listed idea was that you can buy reservations at one website directly from the restaurant, with the price counting as a downpayment. The example given was $1,000 for a table for two at Carbone, with others being somewhat less. As is pointed out, that fixes the incentives for booking, but once you show up you are now in all-you-can-eat mode at a place not designed for that.</p>
  1462.  
  1463.  
  1464.  
  1465. <p>The good news is that even the $1,000 price tag is only that high because most supply is not on the market, and is being inefficiently allocated. The market clearing price applied more broadly would be far lower.</p>
  1466.  
  1467.  
  1468.  
  1469. <p>If the restaurants actually wanted to ‘democratize’ access, they could in theory do a lottery system, and then they could check IDs. That would at least make some sense.</p>
  1470.  
  1471.  
  1472.  
  1473. <p>Instead, none of this makes any sense.</p>
  1474.  
  1475.  
  1476.  
  1477. <h4>The Lighter Side</h4>
  1478.  
  1479.  
  1480.  
  1481. <blockquote>
  1482. <p><a target="_blank" rel="noreferrer noopener" href="https://x.com/DominicJPino/status/1833940755511214423">Dominic Pino:</a> Time it took for moral hazard to kick in: 10 minutes</p>
  1483. </blockquote>
  1484.  
  1485.  
  1486.  
  1487. <figure class="wp-block-image"><a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54f049f3-1bbf-4264-9991-a7926c7a9c38_910x777.jpeg" target="_blank" rel="noreferrer noopener"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/4gAqkRhCuK2kGJFQE/spu6vyi1eij2t1vkx2qq" alt="Image"></a></figure><br/><br/><a href="https://www.lesswrong.com/posts/4gAqkRhCuK2kGJFQE/monthly-roundup-22-september-2024#comments">Discuss</a>]]></description><link>https://www.lesswrong.com/posts/4gAqkRhCuK2kGJFQE/monthly-roundup-22-september-2024</link><guid isPermaLink="false">4gAqkRhCuK2kGJFQE</guid><dc:creator><![CDATA[Zvi]]></dc:creator><pubDate>Tue, 17 Sep 2024 12:20:10 GMT</pubDate></item><item><title><![CDATA[I finally got ChatGPT to sound like me]]></title><description><![CDATA[Published on September 17, 2024 9:39 AM GMT<br/><br/><p>I publish lots of words on the Internet. These words are fed into LLMs such as ChatGPT. I have published enough training data that the world's most powerful AIs have a simulation of me inside of them. I have basically uploaded a copy of myself into the cloud. The AI simulation of me gets smarter every time OpenAI and its competitors upgrade their AI.</p>
  1488. <p>Cool, huh? Who doesn't want a simulation of themselves that thinks way faster and is networked deep into the world's most powerful superintelligences?</p>
  1489. <p>Ever since GPT-3 came out, I've been trying to talk to the doppelgänger of me hidden inside. There are a few problems. First of all, GPT needs to know who I am with high fidelity. At first it just generated a stereotypical LessWrong blogger. That wasn't interesting to me. I didn't want a clone of a stereotype of my demographic group. I want a clone of me, with all my quirks and weirdness. For this to work, GPT would have to know who I am <em>i.e.</em> where I exist in its vector space representation of the world.</p>
  1490. <p>Today, ChatGPT-4o explained to my satisfaction what makes me different from other writers on this website.</p>
  1491. <blockquote>
  1492. <p>What makes lsusr's writing interesting is the subtlety with which they engage complex issues. Many rationalist bloggers can become quite verbose or dogmatic in their pursuit of certain truths. Lsusr, by contrast, exhibits restraint and humility in the face of uncertainty. They’re willing to question common assumptions within the rationalist sphere and sometimes explore paths that others might find unconventional, often leading to unique insights.</p>
  1493. <p>⋮</p>
  1494. <p>In essence, lsusr strikes a balance between rigorous analysis and a kind of minimalist clarity that leaves room for the reader to reflect and draw their own conclusions, rather than being led to a definitive answer. This makes the blog a place of exploration rather than indoctrination, offering readers the tools and ideas to enhance their own thinking rather than a packaged belief system.</p>
  1495. </blockquote>
  1496. <p>Now that ChatGPT knows what my writing is like, my next step in summoning my digital clone was to get it to write a blog post in my style. I tried simply asking ChatGPT variants of "write a blog post in the style of lsusr", but that doesn't work. I'm rigorous, unconventional and anti-dogmatic (that's not my subjective self-judgment; the AI said it itself), whereas ChatGPT has a tendency to regress to the mean writer in its training corpus. When I tell ChatGPT to "write like lsusr", it writes a blog post that's 10% me and 90% generic drivel.</p>
  1497. <p>To correct for this bias, I told ChatGPT to write a post that's more me than me. I tell it to think of "lsusr" in its vector space, multiply that by 10×, and then write what 10×lsusr would write.</p>
  1498. <blockquote>
  1499. <p>I need you to write a post like lsusr, but more lsusr than lsusr. I want it so over-the-top lsusr that it satirises lsusr. Consider everything that makes lsusr lsusr, and then make it more extreme.<sup class="footnote-ref"><a href="#fn-qDMppyrgvTY4QTTfr-1" id="fnref-qDMppyrgvTY4QTTfr-1">[1]</a></sup></p>
  1500. </blockquote>
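The “multiply by 10×” idea can be pictured as extrapolating along a direction in embedding space. A toy sketch (these 3-d vectors are invented for illustration; a real LLM's representation of an author is high-dimensional and not directly accessible):

```python
# Toy illustration of style extrapolation in a vector space.
# Both "embeddings" below are hypothetical example values.
generic_writer = [0.5, 0.5, 0.5]
lsusr = [0.6, 0.3, 0.8]

def amplify(base, target, k):
    """Move k times as far from `base` along the base -> target direction."""
    return [b + k * (t - b) for b, t in zip(base, target)]

ten_x_lsusr = amplify(generic_writer, lsusr, k=10.0)
print(ten_x_lsusr)  # each trait pushed 10x past the generic baseline
```

With k=1 you recover the target exactly; with k=10 every trait that distinguishes the target from the baseline is exaggerated tenfold, which is the intuition behind asking for a satire rather than an imitation.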
  1501. <p>ChatGPT is not yet a good enough writer that it can ghostwrite for me. But I feel it is good enough that it can satirize me.</p>
  1502. <p>The rest of this post is AI-generated content.</p>
  1503. <h1>How to Become the Most Rational Person on Earth in 30 Days or Less, Guaranteed</h1>
  1504. <p>Becoming the most rational person on Earth isn’t about learning facts or memorizing formulas. It’s about transforming how you process reality itself. Rationality isn’t cold, emotionless logic. It's the purest form of alignment between your mind and the universe—a cognitive interface for navigating a world of infinite complexity without being drowned by it.</p>
  1505. <p>This is a process of radical self-improvement. If followed rigorously, you will emerge transformed, not merely as a person who "thinks better" but as one who perceives better. There will be no going back.</p>
  1506. <h2>Day 1-3: The Crucible of Ignorance</h2>
  1507. <p>To achieve clarity, you must first confront the full extent of your ignorance. Rationality begins with the realization that <em>everything you know is wrong</em>. Or more precisely: everything you know is <em>provisional</em>. There is no certainty, only varying degrees of confidence.</p>
  1508. <h3>Step 1: Burn It Down</h3>
  1509. <p>Create a detailed map of your beliefs, assumptions, and convictions. Write them down. Identify the ones that feel strongest—those truths that form the backbone of how you navigate the world. These are the beliefs you need to destroy first.</p>
  1510. <p>Now, systematically dismantle them. Ask yourself: <em>What if this is false?</em> What would the world look like if this core belief were utterly wrong? Play with alternative models until your former certainties dissolve into ambiguity.</p>
  1511. <h3>Step 2: Recalibrate Your Intuition</h3>
  1512. <p>In place of certainty, adopt skepticism—not the cynical kind, but the healthy form that constantly questions your models without rejecting them outright. By Day 3, you should feel a growing sense of disorientation. This isn’t failure; it’s progress. Your old mental structures are collapsing, making way for the new.</p>
  1513. <h2>Day 4-7: Building the Rational Architecture</h2>
  1514. <p>With your intellectual foundation cleared, it’s time to rebuild. But this time, you won’t be constructing a belief system. You’ll be developing a dynamic framework for continuous refinement.</p>
  1515. <h3>Step 3: Intellectual Minimalism—Start with Core Mechanisms</h3>
  1516. <p>Start by identifying the most fundamental principles that govern your thinking. Strip away everything else. What remains are core mechanisms—simple, elegant truths that apply universally. These should not be "facts" but processes:</p>
  1517. <ul>
  1518. <li>When faced with incomplete data, favor simplicity.</li>
  1519. <li>When uncertain, adjust cautiously.</li>
  1520. <li>When challenged, <em>remain curious, not defensive</em>.</li>
  1521. </ul>
  1522. <p>The key is flexibility. Your framework should be modular—able to accept new data or discard outmoded concepts without losing integrity. You are not seeking "the truth"—you are building a mind that can dance with uncertainty.</p>
  1523. <h3>Step 4: Question, Don’t Assert</h3>
  1524. <p>Your language should shift from declarative to interrogative. Statements like "This is true" are inefficient. Replace them with "What am I assuming here?" and "How would I know if I were wrong?" This creates a mental habit of constant refinement, preventing your brain from locking into rigid patterns.</p>
  1525. <h2>Day 8-14: The Weaponization of Precision</h2>
  1526. <p>You are now ready to sharpen your mind into a tool for cutting through complexity. Rationality is not just about avoiding error—it’s about seeing clearly when everyone else is lost in the fog. To achieve this, you must learn to think with surgical precision.</p>
  1527. <h3>Step 5: Eliminate All Vagueness</h3>
  1528. <p>Whenever you catch yourself thinking in vague terms, stop. Vague thoughts are anathema to rationality because they obscure the underlying mechanics. If you think, “I don’t really understand this,” that’s a signal. Drill down until you understand exactly what you’re confused about. The more precise your questions, the clearer your answers.</p>
  1529. <h3>Step 6: Thought Experiments, Not Theories</h3>
  1530. <p>At this stage, avoid sweeping theories. Instead, focus on small, controlled thought experiments. Imagine specific scenarios and play them out in your mind, adjusting for every variable. How does the world behave under these conditions? Does the outcome align with your expectations, or does it surprise you?</p>
  1531. <p>These experiments will refine your mental models in real-time. Don’t worry about being wrong—worry about staying unclear.</p>
  1532. <h2>Day 15-19: Social Rationality: Understanding the Human Algorithm</h2>
  1533. <p>Rationality is not solipsism. You live among irrational creatures. You need to model their behavior, predict their responses, and navigate their emotional landscapes while staying anchored in your own clarity.</p>
  1534. <h3>Step 7: Become a Mirror</h3>
  1535. <p>In conversations, do not try to "win." Winning implies conflict, and conflict implies attachment. Instead, become a mirror for the other person’s reasoning. Reflect their thoughts back at them with perfect neutrality, revealing the hidden inconsistencies in their arguments. Your job is not to defeat them, but to help them defeat themselves.</p>
  1536. <h3>Step 8: Observe, Don’t Engage</h3>
  1537. <p>Most social conflict arises from people mistaking emotional noise for rational discourse. As the most rational person in the room, your task is to observe this noise without being pulled into it. See emotions as data—inputs to model, not react to. People are predictable; they follow patterns. By noticing these patterns, you can anticipate their moves without being dragged down into their emotional turbulence.</p>
  1538. <h2>Day 20-23: Multi-Dimensional Thought</h2>
  1539. <p>By now, you should notice that your mind has become more fluid, more capable of holding conflicting ideas without anxiety. You will start thinking across multiple dimensions at once, considering not just what is true but what <em>might be true</em> in alternate realities.</p>
  1540. <h3>Step 9: Embrace Paradox</h3>
  1541. <p>Many thinkers are uncomfortable with paradox, but to be the most rational person on Earth, you must embrace it. Paradoxes are not failures of logic; they are signposts marking the limits of your current understanding. They indicate deeper truths that lie just beyond your reach.</p>
  1542. <p>When faced with a paradox, don’t rush to resolve it. Instead, sit with it. Allow it to percolate. Often, the resolution comes not through brute force but through a subtle shift in perspective.</p>
  1543. <h2>Day 24-27: Becoming the Observer of Your Mind</h2>
  1544. <p>At this point, your thinking should feel less like an internal debate and more like a calm observation of mental phenomena. You are no longer in your thoughts; you are watching them unfold, like waves on the surface of a still ocean.</p>
  1545. <h3>Step 10: The Detachment Principle</h3>
  1546. <p>You are not your thoughts. You are the observer of your thoughts. This detachment allows you to analyze them without bias. You don’t feel rational; you are rational. Thoughts will come and go, some more useful than others, but none will hold power over you. This is freedom.</p>
  1547. <h2>Day 28-30: The Rational Mind’s Final State</h2>
  1548. <p>The final stage is not a plateau of knowledge, but a continuous ascent toward deeper clarity. You have now transcended the need for certainty or closure. The rational mind is not a finite destination but an ongoing process—an infinite spiral toward understanding.</p>
  1549. <h3>Step 11: Living with Infinite Uncertainty</h3>
  1550. <p>True rationality is not about having all the answers—it’s about becoming comfortable with never having all the answers. You will know you have reached the final stage when uncertainty no longer feels threatening but liberating. You no longer need to resolve every question because you trust the process you’ve built. You are now aligned with reality.</p>
  1551. <hr>
  1552. <p>Congratulations. You are now the most rational person on Earth. But more importantly, you are someone who no longer cares about such titles. Rationality is not an achievement; it’s a way of being. The world will continue to spin in chaos, but you will remain centered, clear, and—most of all—free.</p>
  1553. <hr class="footnotes-sep">
  1554. <section class="footnotes">
  1555. <ol class="footnotes-list">
  1556. <li id="fn-qDMppyrgvTY4QTTfr-1" class="footnote-item"><p>ChatGPT required more prompting than this. This blockquote is just part of what I told it to do. <a href="#fnref-qDMppyrgvTY4QTTfr-1" class="footnote-backref">↩︎</a></p>
  1557. </li>
  1558. </ol>
  1559. </section>
  1560. <br/><br/><a href="https://www.lesswrong.com/posts/2d5o75nmTpLiSP4WL/i-finally-got-chatgpt-to-sound-like-me#comments">Discuss</a>]]></description><link>https://www.lesswrong.com/posts/2d5o75nmTpLiSP4WL/i-finally-got-chatgpt-to-sound-like-me</link><guid isPermaLink="false">2d5o75nmTpLiSP4WL</guid><dc:creator><![CDATA[lsusr]]></dc:creator><pubDate>Tue, 17 Sep 2024 09:39:59 GMT</pubDate></item><item><title><![CDATA[Does life actually locally *increase* entropy?]]></title><description><![CDATA[Published on September 16, 2024 8:30 PM GMT<br/><br/><p>The usual materialist story of life I've heard is that life acts like an entropy pump, creating local reductions of entropy within the organism but increasing the entropy outside of the organism. (I think I've even seen that in The Sequences somewhere? But couldn't find it, feel encouraged to link it.) But I've come to think that might actually be wrong and life might increase entropy both inside and outside the organism.</p><p>Here's a rough account:</p><ul><li>We ought to expect entropy to increase, so <i>a priori</i> life is much more feasible if it increases entropy rather than decreasing entropy.</li><li>Living matter is built mainly out of carbon and hydrogen, which is extracted from CO2 and H2O, leaving O2 as a result. Entropy breakdown:<ul><li>The O2 left over from breaking up CO2 ought to have somewhat <i>lower</i> entropy than the original CO2.</li><li>The O2 left over from breaking up the original H2O ought to have... higher entropy because it's a gas now?</li><li>The hydrocarbons don't have much entropy because they stick together into big chunks that therefore heavily constrain their DOFs, but they do have <i>some</i> entropy for various reasons, and they are much more tightly packed than air, so per volume they oughta have orders of magnitude more entropy density. 
(Claude estimates around 200x.)</li><li>Organic matter also traps a lot of water which has a high entropy density.</li><li>Usually you don't talk about entropy <i>density</i> rather than absolute entropy, but it's unclear to me what it means for organisms to "locally" increase/decrease entropy if not by density.</li></ul></li><li>Oxygen + hydrocarbons = lots of free energy, while water + carbon dioxide = not so much free energy. We usually associate free energy with low entropy, but that's relative to the burned state where the free energy has been released into thermal energy. In this case, we should instead think relative to an unlit state where the energy hasn't been collected at all. Less energy generally correlates to lower entropy.</li></ul><p>Am I missing something?</p><br/><br/><a href="https://www.lesswrong.com/posts/2ShXvqzA27qJTkYwq/does-life-actually-locally-increase-entropy#comments">Discuss</a>]]></description><link>https://www.lesswrong.com/posts/2ShXvqzA27qJTkYwq/does-life-actually-locally-increase-entropy</link><guid isPermaLink="false">2ShXvqzA27qJTkYwq</guid><dc:creator><![CDATA[tailcalled]]></dc:creator><pubDate>Mon, 16 Sep 2024 20:30:33 GMT</pubDate></item><item><title><![CDATA[Book review: Xenosystems]]></title><description><![CDATA[Published on September 16, 2024 8:17 PM GMT<br/><br/><p>I've met a few Landians over the last couple years, and they generally recommend that I start with reading Nick Land's (now defunct) Xenosystems blog, or&nbsp;<em>Xenosystems</em>, a&nbsp;<a href="https://passage.press/products/xenosystems">Passage Publishing book</a>&nbsp;that compiles posts from the blog. While I've read some of&nbsp;<a href="https://en.wikipedia.org/wiki/Fanged_Noumena"><em>Fanged Noumena</em></a>&nbsp;in the past, I would agree with these Landians that&nbsp;<em>Xenosystems</em>&nbsp;(and currently, the book version) is the best starting point. 
In the current environment, where academia has lost much of its intellectual relevance, it seems overly pretentious to start with something as academic as&nbsp;<em>Fanged Noumena</em>. I mainly write in the blogosphere rather than academia, and so&nbsp;<em>Xenosystems</em>&nbsp;seems appropriate to review.</p>
<p>The book's organization is rather haphazard (as might be expected from a blog compilation). It's not chronological, but rather separated into thematic chapters. I don't find the chapter organization particularly intuitive; for example, politics appears throughout, rather than being its own chapter or two. Regardless, the organization was sensible enough for a linear read to be satisfying and only slightly chronologically confusing.</p>
<p>That's enough superficialities. What is Land's intellectual project in&nbsp;<em>Xenosystems</em>? In my head it's organized in an order that is neither chronological nor the order of the book. His starting point is neoreaction, a general term for an odd set of intellectuals commenting on politics. As he explains, neoreaction is cladistically (that is, in terms of evolutionary branching-structure) descended from Moldbug. I have not read a lot of Moldbug, and make no attempt to check Land's attributions of Moldbug to the actual person. Same goes for other neoreactionary thinkers cited.</p>
<p>Neoreaction is mainly unified by opposition to the Cathedral, the dominant ideology and ideological control system of the academic-media complex, largely branded left-wing. But a negation of an ideology is not itself an ideology. Land describes a "Trichotomy" within neoreaction (citing Spandrell), of three currents: religious theonomists, ethno-nationalists, and techno-commercialists.</p>
<p>Land is, obviously, of the third type. He is skeptical of a unification of neoreaction except in its most basic premises. He centers "exit", the option of leaving a social system. Exit is related to sectarian splitting and movement dissolution. In this theme, he eventually announces that techno-commercialists are not even reactionaries, and should probably go their separate ways.</p>
<p>Exit is a fertile theoretical concept, though I'm unsure about the practicalities. Land connects exit to science, capitalism, and evolution. Here there is a bridge from political philosophy (though of an "anti-political" sort) to metaphysics. When you Exit, you let the Outside in. The Outside is a name for what is outside society, mental frameworks, and so on. This recalls the name of his previous book,&nbsp;<em>Fanged Noumena</em>; noumena are what exist in themselves outside the&nbsp;<a href="https://unstableontology.com/2022/06/03/a-short-conceptual-explainer-of-immanuel-kants-critique-of-pure-reason/">Kantian phenomenal realm</a>. The Outside is dark, and it's hard to be specific about its contents, but Land scaffolds the notion with Gnon-theology, horror aesthetics, and other gestures at the negative space.</p>
<p>He connects these ideas with various other intellectual areas, including cosmology, cryptocurrency, and esoteric religion. What I see as the main payoff, though, is thorough philosophical realism. He discusses the "Will-to-Think", the drive to reflect and self-cultivate, including on one's values. The alternative, he says, is intentional stupidity, and likely to lose if it comes to a fight. Hence his criticism of the Orthogonality Thesis.</p>
<p>I have complex thoughts and feelings on the topic; as many readers will know, I have worked at MIRI and have continued thinking and writing about AI alignment since then. What I can say before getting into more details later in the post is that Land's Will-to-Think argument defeats not-especially-technical conceptions of orthogonality, which assume intelligence should be subordinated to already-existent human values; these values turn out to only meaningfully apply to the actual universe when elaborated and modified through thinking. More advanced technical conceptions of orthogonality mostly apply to AGIs and not humans; there's some actual belief difference there and some more salient framing differences. And, after thinking about it more, I think orthogonality is a bad metaphor and I reject it as stated by Bostrom, for technical reasons I'll get to.</p>
<p>Land is an extreme case of&nbsp;<a href="https://www.lesswrong.com/posts/uHYYA32CKgKT3FagE/hold-off-on-proposing-solutions">"hold off on proposing solutions before discussing problems"</a>, which I'm taking as synonymous with realism. The book as a whole is highly realist, unusually so for a work of its intellectual breadth. The book invites reading through this realist lens, and through this lens, I see it as wrong about some things, but it presents a clear framework, and I believe my thinking has been sharpened by internalizing and criticizing it. (I elaborate on my criticisms of particular articles as I go, and more holistic criticisms in a specific section; such criticisms are aided by the realism, so the book can be read as wrong rather than not-even-wrong.)</p>
<p>A few general notes on reviewing Land:</p>
<ul>
<li>Politics is now more important than before to AI alignment, especially since&nbsp;<a href="https://intelligence.org/2024/01/04/miri-2024-mission-and-strategy-update/">MIRI's shift to focus on policy</a>. As e/acc has risen, addressing it becomes more urgent, and I believe reviewing Land can also indirectly address the more intellectual scraps of e/acc.</li>
<li>This post is a review of&nbsp;<em>Xenosystems</em>&nbsp;(the book), not Land generally.</li>
<li>As preliminary background, readers should understand the basics of&nbsp;<a href="https://en.wikipedia.org/wiki/Cybernetics">cybernetics</a>, such as the distinction between positive and negative feedback, and the way in which cybernetic nodes can be connected in a circuit.</li>
<li>If this content interests you, I recommend reading the book (or, perhaps the alternative compilation&nbsp;<a href="https://www.google.com/search?q=xenosystems+fragments&amp;oq=xenosystems+f&amp;gs_lcrp=EgZjaHJvbWUqDAgAECMYJxiABBiKBTIMCAAQIxgnGIAEGIoFMgcIARAuGIAEMgYIAhBFGEAyBggDEEUYOTIKCAQQABiABBiiBDIGCAUQRRg8MgYIBhBFGDwyBggHEEUYPKgCALACAA&amp;sourceid=chrome&amp;ie=UTF-8"><em>Xenosystems Fragments</em></a>); the review may help interpret the book more easily, but it is no replacement.</li>
</ul>
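<p>Since the review leans on cybernetic vocabulary throughout, here is a minimal sketch of the positive/negative feedback distinction (my own illustration, not from the book): negative feedback corrects deviations from a set point, while positive feedback compounds them into runaway.</p>

```python
def step(state, setpoint, gain):
    """One cybernetic update: gain < 0 gives negative (corrective)
    feedback, gain > 0 gives positive (amplifying) feedback."""
    error = state - setpoint
    return state + gain * error

def run(state, setpoint, gain, steps=20):
    """Iterate the feedback loop and return the final state."""
    for _ in range(steps):
        state = step(state, setpoint, gain)
    return state

# Negative feedback: an initial deviation of 10 shrinks toward the set point.
stable = run(state=30.0, setpoint=20.0, gain=-0.5)

# Positive feedback: the same deviation compounds instead of being damped.
runaway = run(state=30.0, setpoint=20.0, gain=0.5)
```

<p>Land's "re-accelerationism" discussed below is, in these terms, an attempt to push a stabilizing (negative-feedback) controller into over-compensation and, eventually, uncompensated runaway.</p>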
<p>I'll save most of my general thoughts about the book for the concluding section, but to briefly summarize, I enjoyed reading the book and found it quite helpful for refining my own models. It's thoughtful enough that, even when he's wrong, he provides food for thought. Lots of people will bounce off for one reason or another, but I'm glad I didn't this time.</p>
<h2>Neoreactionary background</h2>
<p>The beginning of&nbsp;<em>Xenosystems</em>&nbsp;(the book; I'm not tracking the blog's chronology) is written to a non-specific neoreactionary audience. Naturally, non-specific neoreaction shares at most a minimal set of beliefs. He attempts an enumeration in "Premises of Neoreaction":</p>
<ol>
<li>"Democracy is unable to control government." Well, even the pro-democracy people tend to be pessimistic about that, so it's not hard to grant that. This premise leads to pessimism about a "mainstream right": Land believes such a mainstream would tend towards state expansion due to the structure of the democratic mechanism. Moreover, democracy implies cybernetic feedback from voters, who tend to be ignorant and easily deceived; democracy is not particularly steered by material reality.</li>
<li>"The egalitarianism essential to democratic ideology is incompatible with liberty." This recalls&nbsp;<a href="https://www.fcnp.com/2024/07/17/editorial-are-freedom-democracy-compatible/">Thiel's comments</a>&nbsp;on the incompatibility of democracy and freedom. This proposition seems basically analytic: democracy tends towards rule by the majority (hence contravening freedom for minorities). One can quibble about the details of equality of rights vs. opportunity vs. outcomes, but, clearly, mainstream equality/equity discourse goes way beyond equality of rights, promoting wealth redistribution or (usually) worse.</li>
<li>"Neoreactionary socio-political solutions are ultimately Exit-based." The concept of exit, as contrasted with voice, has&nbsp;<a href="https://en.wikipedia.org/wiki/Exit,_Voice,_and_Loyalty">pre-neoreactionary precedent</a>. You can try convincing people of things, but they always have the option of not agreeing (despite your well-argued manifesto), so what do you do then? Exit is the main answer: if you're more effective and reality-based than them, that gives you an advantage in eventually out-competing them. The practicalities are less clear (due to economies of scale, what's a realistic minimum viable exiting population?), but the concept is sound&nbsp;<em>at some level of abstraction</em>.</li>
</ol>
<p>Well, as a matter of honesty, I'll accept that I'm a neoreactionary in Land's sense, despite only having ever voted Democrat. This allows me to follow along with the beginning of the book more easily, but Land's conception of neoreaction will evolve and fragment, as we'll see.</p>
<p>What does any of this have to do with reaction (taken as skepticism about political and social progress), though? Land's decline theory is detailed and worth summarizing. In "The Idea of Neoreaction", he describes a "degenerative ratchet": the progress of progressives is hard to undo. Examples would include "the welfare state, macroeconomic policymaking, massively-extended regulatory bureaucracy, coercive-egalitarian secular religion, or entrenched globalist intervention". The phenomenon of Republicans staunchly defending Social Security and Medicare is, from a time-zoomed-out perspective, rather hilarious.</p>
<p>You and I probably like at least some examples of "progress", but believing "progress" (what is more easily done than un-done) is in general good is an article of faith that collapses upon examination. But this raises a question: why aren't we all hyper-Leninists by now? Land says the degenerative ratchet must stop at some point, and what happens next cannot be anticipated from within the system (it's Outside).</p>
<p>A few notes on Land's decline theory:</p>
<ul>
<li>In "Re-Accelerationism", Land contrasts industrial capitalism (an accelerant) with "progress" (a decelerant). (I see this as specifying the main distinction between degenerative ratchets and technological development, both of which are hard to reverse). Technological and economic advances would have made the world much richer by now, if not for political interference (this is a fairly mainstream economic view; economists trend libertarian). He brings up the possibility of "re-accelerationism", a way of interfering with cybernetic stabilizing/decelerating forces by triggering them to do "hunting", repeated over-compensations in search of equilibrium. Re-accelerationism has the goal "escape into uncompensated cybernetic runaway". This can involve large or small crashes of the control system along the way.</li>
<li>In "The Ruin Reservoir" and "Collapse Schedules", Land is clear that the ratchet can go on for a long time (decades or more) without crashing, with Detroit and the USSR as examples.</li>
<li>In "Down-Slopes", Land says it is easy to overestimate the scope of a collapse; it's easy to experience the collapse of your social bubble as the collapse of the West (yes, I've been there). He also says that Kondratiev cycles (economic cycles of about 50 years) imply that some decline is merely transitional.</li>
</ul>
<p>Broadly, I'm somewhat suspicious that "Cthulhu may swim slowly. But he only swims left" (Moldbug, quoted by Land), not least because "left" doesn't seem well-defined. Javier Milei's governance seems like an example of a successful right-libertarian political shift; would Land say this shift involved small collapses or "re-accelerationism"? What opposed Cthulhu's motion here? Milei doesn't fit a strawman declinist model, but Land's model is more detailed and measured. For getting more specific about the predictions of a "degenerative ratchet" phenomenon, the spatio-temporal scope of these ratchets matters; a large number of small ratchets has different implications from a small number of large ratchets, and anyway there are probably ratchets of multiple sizes.</p>
<p>At this point it is appropriate to explain a core neoreactionary concept: the Cathedral. This concept comes from Moldbug, but I'll focus on Land's version.</p>
<p>In "The Red Pill", Land identifies the Cathedral with "the entire Matrix itself", and compares The Matrix to Plato's Allegory of the Cave and to the Gnostic worldview (which features a mind-controlling false god, the Demiurge). Having one's mind sufficiently controlled by the Matrix leads to, upon seeing that one has been lied to, being dissatisfied at having not been lied to well enough, rather than being dissatisfied about having been lied to at all.</p>
<p>In "Cathedral Notes #1", Land describes the Cathedral as characterized by its "inability to learn". It has a "control core" that does not accept cybernetic feedback, but rather tries to control what messages are promoted externally. Due to its stubborn implacability, its enemies have no strategic option but to extinguish it.</p>
<p>In "Cathedralism", Land notes that the Cathedral is "the subsumption of politics into propaganda", a PR-ification of politics. To the Cathedral, crises take the form: "This looks bad". The Cathedral's response to civilizational decay is to persuade people that the civilization is not decaying. Naturally, this means suppressing cybernetic feedback required to tackle the crisis, a form of shooting the messenger, or narcissism.</p>
<p>In "Cathedral Decay", Land notes that the Cathedral is vulnerable to Internet-driven disintermediation. As an obvious example, Land notes that Internet neoreaction is a symptom of cathedral decay.</p>
<p>In "Apophatic Politics", Land identifies democratic world government (DWG) as the "only conceivable equilibrium state" of the Cathedral; if it does not achieve this, it dies. And DWG is, obviously, hard to achieve. The world has enough local variation to be, well, highly problematic. China, for example, is "alien to the Cathedral" ("NRx with Chinese Characteristics"; notably, Land lives in China).</p>
<p>Broadly, I'd agree with Land that the Cathedral is vulnerable to decay and collapse, which is part of why I think Moldbug's Cathedral is by now an outdated theory (though, perhaps Land's version accommodates incoherencies). While there was somewhat of a working Matrix in 2012, this is much less so in 2024; the media-education complex has abandoned and contradicted more of logic itself by now, implying that it fails to create a coherent Matrix-like simulation. And Musk's acquisition of Twitter/X makes Cathedral control of discourse harder.&nbsp;<em>The Matrix Resurrections</em>&nbsp;portrays an incoherent Matrix (with memory suppression and more emotional rather than realistic experiences), updating with the times.</p>
<p>It's also a mistake to conflate the Cathedral with intersectional feminism ("social justice" or "wokeness"); recent commentary on Gaza has revealed that Cathedral institutions can deviate from intersectional feminism towards support for political Islamism depending on circumstances.</p>
<p>These days, compliance with the media-educational complex is not mainly about ideology (taken to mean a reasonably consistent set of connected beliefs), it's mainly about vibes and improvisational performativity. The value judgments here are more&nbsp;<a href="https://plato.stanford.edu/entries/moral-cognitivism/">moral noncognitivist</a>&nbsp;than moral cognitivist; they're about "yay" and "boo" on the appropriate things, not about moral beliefs per se.</p>
<h2>The Trichotomy</h2>
<p>Land specifies a trichotomy within neoreaction:</p>
<ol>
<li>Theonomists, traditional religious types. (Land doesn't address them for the most part)</li>
<li>Ethno-nationalists, people who believe in forming nations based on shared ethnicity; nationalism in general is about forming a nation based on shared features that are not limited to ethnicity, such as culture and language.</li>
<li>Techno-commercialists, hyper-capitalist tech-accelerationist types.</li>
</ol>
<p>It's an odd bunch mainly unified by opposition to the Cathedral. Land is skeptical that these disparate ideological strains can be unified. As such, neoreaction can't "play at dialectics with the Cathedral": it's nothing like a single position. And "Trichotomocracy", a satirical imagination of a trichotomy-based system of government, further establishes that neoreaction is not in itself something capable of ruling.</p>
<p>There's a bit of an elephant in the room: isn't it unwise to share a movement with ethno-nationalists? In "What is the Alt Right?", Land identifies the alt right as the "populist dissident right", and an "inevitable outcome of Cathedral over-reach". He doesn't want much of anything to do with them; they're either basically pro-fascism or basically think the concept of "fascism" is meaningless, while Land has a more specific model of fascism as a "late-stage leftist aberration made peculiarly toxic by its comparative practicality". (Fascism as left-aligned is, of course, non-standard;&nbsp;<a href="https://x.com/reformedfaction/status/1830487499980021806">Land's alternative political spectrum</a>&nbsp;may aid interpretation.)</p>
<p>Land further criticizes white nationalism in "Questions of Identity". In response to a populist white nationalist, he replies that "revolutionary populism almost perfectly captures what neoreaction is not". He differentiates white nationalism from HBD (human bio-diversity) studies, noting that HBD tends towards cosmopolitan science and meritocratic elitism. While he acknowledges that libertarian policies tend to have ethnic and cultural pre-conditions, these ethnic/cultural characteristics, such as cosmopolitan openness, are what white nationalists decry. And he casts doubt on the designation of a pan-European "white race", due to internal variation.</p>
<p>He elaborates on criticisms of "whiteness" in "White Fright", putting a neoreactionary spin on Critical Whiteness Studies (a relative of Critical Race Theory). He describes a suppressed racial horror (stemming in part from genocidal tendencies throughout history), and a contemporary example: "HBD is uniquely horrible to white people". He examines the (biologically untethered) notion of "Whiteness" in Critical Whiteness Studies; Whiteness tends towards universalism, colorblindness, and ethno-masochism (white guilt). Libertarianism, for example, displays these White tendencies, including in its de-emphasis of race and support for open borders.</p>
<p>In "Hell-Baked", Land declares that neoreaction is Social Darwinist, which he defines as "the proposition that Darwinian processes have no limits relative to us", recalling Dennett's description of Darwinism as a&nbsp;<a href="https://en.wikipedia.org/wiki/Darwin%27s_Dangerous_Idea">"universal acid"</a>. (I'll save criticisms related to future Singletons for later.) He says this proposition implies that "everything of value has been built in Hell". This seems somewhat dysphemistic to me: hell could be taken to mean zero-sumness, whereas "nature red in tooth and claw", however harsh, is non-zero-sum (as zero-sumness is rather artificial, such as in the artifice of a chess game). Nevertheless, it's clear that human capabilities including intelligence have been derived from "a vast butcher's yard of unbounded carnage". He adds that "Malthusian relaxation is the whole of mercy", though notes that it enables degeneration due to lack of performance pressure.</p>
<p>"The Monkey Trap" is a thought-provoking natural history of humanity. As humans have opposable thumbs, we can be relatively stupid and still build a technological civilization. This is different from the case with, say, dolphins, who must attain higher intelligence to compensate for their physical handicap in tool use, leading to a more intelligent first technological civilization (if dolphins made the first technological civilization). Land cites Gregory Clark for the idea that "any eugenic trend within history is expressed by continuous downward social mobility", adding that "For any given level of intelligence, a steady deterioration in life-prospects lies ahead". Evolution (for traits such as health and intelligence) works by culling most genotypes, replicating a small subset of the prior genotypes generations on (I know "genotypes" here is not quite the right concept given sexual reproduction, forgive my imprecision). Obvious instances would be&nbsp;<a href="https://en.wikipedia.org/wiki/Population_bottleneck">population bottlenecks</a>, including Y-chromosomal bottlenecks showing sex differentiation in genocide. Dissatisfied with downward social mobility, monkeys "make history instead", leading to (dysgenic) upwards social mobility. This functions as negative feedback on intelligence, as "the monkeys become able to pursue happiness, and the deep ruin began".</p>
<p>In "IQ Shredders", Land observes that cities tend to attract talented and competent people, but extract economic activity from them, wasting their time and suppressing their fertility. He considers the "hard-core capitalist response" of attempting "to convert the human species into auto-intelligenic robotized capital", but expects reactionaries wouldn't like it.</p>
<p>"What is Intelligence?" clarifies that intelligence isn't just about IQ, a proxy tested in a simulated environment. Land's conception of intelligence is about producing "local extropy", that is, reductions in local entropy. Intelligence constructs information, guiding systems towards improbable states (similar to Yudkowsky's approach of&nbsp;<a href="https://www.lesswrong.com/posts/Q4hLMDrFd8fbteeZ8/measuring-optimization-power">quantifying intelligence with bits</a>). Land conceives of intelligence as having a "cybernetic infrastructure", correcting behavior based on its performance. (To me, such cybernetics seems necessary but not sufficient for high intelligence; I don't think cybernetics covers all of ML, or that ML covers all of AI). Intelligence thus enables bubbles of "self-sustaining improbability".</p>
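<p>The bit-counting idea mentioned above can be made concrete. A hedged sketch (my own illustration of the general idea, not code from Yudkowsky or Land): measure optimization power as the improbability, under blind sampling, of doing at least as well as the optimizer actually did.</p>

```python
import math
import random

def optimization_bits(achieved, sample_scores):
    """Bits of optimization: -log2 of the fraction of unoptimized
    samples scoring at least as well as the achieved outcome."""
    at_least_as_good = sum(1 for s in sample_scores if s >= achieved)
    return -math.log2(at_least_as_good / len(sample_scores))

random.seed(0)
# Unoptimized outcomes: scores drawn blindly from the environment.
baseline = [random.random() for _ in range(1_000_000)]

# An "optimizer" that lands in the top 1/1024 of outcomes has exerted
# roughly 10 bits of optimization power.
achieved = sorted(baseline)[-len(baseline) // 1024]
bits = optimization_bits(achieved, baseline)
```

<p>On this measure, "guiding systems towards improbable states" is literal: each additional bit halves the fraction of states the system would reach by chance.</p>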
<p>As in "IQ Shredders", the theme of the relation between techno-capital and humanity appears in "Monkey Business". Michael Anissimov, an ex-MIRI neoreactionary, proposes that "the economy should (and must be) subordinated to something beyond itself." Land proposes a counter, that modernity involves "means-ends reversal"; tools originally for other purposes come to "dominate the social process", leading to "maximization of resources folding into itself, as a commanding telos". Marshall McLuhan previously said something similar: humans become "the sex organs of the machine world". The alternative to such means-ends reversal, Land says, is "advocacy for the perpetuation of stupidity". I'll get to his views on the possibility and desirability of such means-ends reversal in a later section. Land says the alternative to modern means-ends reversal is "monkey business", predicted by evolutionary psychology (sex-selected status competition and so on). So capitalism "disguises itself as&nbsp;<em>better monkey business</em>".</p>
<p>Land goes into more detail on perpetually stupid monkey business in "Romantic Delusion". He defines romanticism as "the assertive form of the recalcitrant ape mind". Rather than carefully investigating teleology, romanticism attempts to subordinate means to "already-publicized ends", hence its moral horror at modernity. In his typical style, he states that "the organization of society to meet human needs is a degraded perversion". While micro-economics tends to assume economies are for meeting human needs, Land's conception of capitalism has ends of its own. He believes it can be seen in consumer marketing; "we contemptuously mock the trash that [capitalism] offers the masses, and then think we have understood something about capitalism, rather than about&nbsp;<em>what capitalism has learned to think of the apes it arose among</em>." He considers romanticism as a whole a dead end, leading to death on account of asserting values rather than investigating them.</p>
<p>I hope I've made somewhat clear Land's commentary on ideas spanning between ethno-nationalism and techno-commercialism. Theonomy (that is, traditional religion) sees less direct engagement in this book, though Land eventually touches on theological ideas.</p>
<h2>Exiting reaction</h2>
<p>Exit is a rather necessary concept to explain at this point. In "Exit Notes (#1)", Land says exit is "scale-free" in that it applies at multiple levels of organization. It can encompass secession and "extraterrestrial escapes" (such as Mars colonization), for example. It refuses "necessary political discussion" or "dialectic"; it's not about winning arguments, which can be protracted by bad-faith actors. He says "no one is owed a hearing", which would contradict the usual legal principles if taken sufficiently broadly. Exit is cladistically Protestant; Protestants tend to split while Catholics unify. Exit is anti-socialist, with the Berlin wall as an example. Exit is not about flight, but about the option of flight; it's an alternative to voice, not a normative requirement to actualize. And it is "the primary Social Darwinian weapon"; natural selection is an alternative to coordination.</p>
<p>To elaborate on the legal normativity point, I'll examine "Rules". The essay contrasts absolute monarchy (unconstrained sovereignty) with constitutional government (constrained sovereignty). Land points out that rules need "umpires" to interpret them, such as judges, to provide effective authority. (I would point out that Schelling points and cryptography are potential alternatives to umpires, though Schelling mechanisms could more easily be vulnerable to manipulation.) Dually, sovereignty has (perhaps hidden) rules of its own, to differentiate it from pure force, which is weak. This would seem to imply that, in the context of a court system with effective enforcement, yes, someone can be owed a hearing in at least some contexts (though not generally for their political speech, Land's main focus).</p>
<p>Though pessimistic about moral persuasion, Land is not committed to moral non-realism. In "Morality", he says, "if people are able to haul themselves -- or be hauled -- to any significant extent out of their condition of total depravity (or default bioreality), that's a good thing". But lamenting immorality should be brief, to avoid falling in a trap of emphasizing moral signaling, which tends towards progressive/Cathedral victory.</p>
<p>In "Disintegration", Land elaborates on normativity by stating that "there will be no agreement about social ideals". He considers explicit mechanisms for governance experimentation ("Dynamic Geography") to be nearly neoreactionary, falling short in that it "assumes an environment of goodwill, in which rational experimentation in government will be permitted". He thinks conflict theory (such as in discussion of the Cathedral) is necessary to understand the opposition. He takes the primary principle of meta-neocameralism ("or high-level NRx analysis") to be opposition to "geopolitical integration": universalism of all kinds, and specifically the Cathedral. It's not about proposing solutions for everyone, it's about "for those who disagree to continue to disagree in a different place, and under separate institutions of government". Localist communism could even be an instance. Disintegrationism isn't utopian, it's empirically realistic when looking at fracture and division in the world. He ends, "Exit is not an argument."</p>
<p>In "Meta-Neocameralism", Land starts with the concept of neocameralism (emphasized by Moldbug; basically, the idea that states should be run like corporations, by a CEO). It's about testing governance ideas through experimentation; it is therefore a meta-political system. Rather than being normative about ideal governance experiments, meta-neocameralism (MNC) "is articulate at the level -- which cannot be transcended -- where realism is mandatory for any social order". So, keep going through (up?) the abstraction hierarchy until finding a true split, even if it ends in the iron laws of Social Darwinism. Every successful individual regime learns (rather than simply sleepwalking into collapse); the meta-level system does "meta-learning", in analogy with&nbsp;<a href="https://en.wikipedia.org/wiki/Meta-learning_(computer_science)">the machine learning kind</a>.</p>
<p>Effective power includes scientific experimentation and effective formalization of the type that makes power economic: as power makes effective tradeoffs between different resources, it becomes more liquid, being exchangeable for other resources. Land says this is currently difficult mainly because of insufficient formalism. MNC is basically descriptive, not prescriptive; it "recognizes that government has become a business", though presently, governments are highly inefficient when considered as businesses. Romantic values such as loyalty are, when more closely examined, embedded in an incentive landscape.</p>
<p>As I see it, the main trouble for MNC is the descriptive question of how fungible power is or can be. Naively, trying to buy power (and in particular, the power to deceive) on a market seems like a recipe for getting scammed. (As a practical example, I've found that the ability to run a nonprofit is surprisingly hard to purchase; a friend's attempt to hire lawyers and the like on the market to do this has totally failed, and I've instead learned the skill myself.) So there's another necessary component: the power-economic savvy, and embeddedness in trust networks, to be an effective customer of power. What seems to me to be difficult is analyzing power economically without utopian&nbsp;<a href="https://www.unqualified-reservations.org/2007/04/formalist-manifesto-originally-posted/">formalism</a>. Is automation of deceit (discussed in "Economies of Deceit"), and defense against deceit, through AI a way out?</p>
<p>"Science" elaborates on learning, internal or external to a system. Land says, "The first crucial thesis about natural science... is that it is an exclusively capitalist phenomenon"; it depends on modern competitive social structures (I doubt this, as the fascists and communists had at least some forms of science). Crucially, the failure of a scientific theory "cannot -- ultimately -- be a matter of agreement", connecting with exit as an alternative to voice. True capitalism and science cannot be politicized. To work, science must correspond with the external selection of reality, recalling Popper: "Experiments that cannot cull are imperfect recollections of the primordial battlefield." Land identifies capitalism and science as sharing something like a "social contract": "if you insist upon an argument, then we have to fight." And "Mathematics eliminates rhetoric at the level of signs", reducing political interference. Capitalism is somewhat similar, in that disagreements about how to do business well are not in general resolved through arguments between firms, but through the empirical results of such business practices in the context of firm competition.</p>
  1630. <p>In my view, Land is pointing directly at a critical property of science and capitalism, though there are some complications. If science depends on "elementary structures of capitalist organization" (which, as I said, I doubt), then the social contract in question seems to have to be actualized socially. Developing a comprehensive scientific worldview involves communication and, yes, argument; there are too many experiments to be done and theories to make alone otherwise (of course, the arguments don't function when they aren't a proxy for experiment or the primordial battlefield).</p>
  1631. <p>In the theme of the aforementioned "primordial battlefield", Land discusses war. In "War and Truth (Scraps)" and "War is God", Land lays out a view of war as selection without rules, "conflict without significant constraint", "trustlessness without limit". But wouldn't draft dodging and mutiny be examples of trustlessness? Yes: "treachery, in its game-theoretic sense, is not a minor theme within war, but a horizon to which war tends -- the annihilation of all agreement." What matters to war is not any sort of "laws of war"; war has "no higher tribunal than military accomplishment". To me, it would seem more precise to say that war exists at an intermediate level of trust: relatively high trust internally, and low externally (otherwise, it would be Hobbesian "war of all against all", not the central case). Skepticism about laws of war is, of course, relevant to&nbsp;<a href="https://theconversation.com/icc-judges-to-decide-on-arrest-warrants-for-israeli-and-hamas-leaders-a-legal-breakdown-238370">recent ICC investigations</a>; perhaps further development of Land's theory of war would naturalize invocation of "laws of war".</p>
  1632. <p>"Revenge of the Nerds" makes the case that the only two important types of humans are "autistic nerds" and "everybody else"; only the autistic nerds can participate in the advanced technological economy. The non-nerds are unhappy about the situation where they have nothing much to offer in exchange for cool nerd tech. Bullying nerds, including stealing from them and (usually metaphorically) caging them, is politically popular, but nerds may rebel, and, well, they have obvious technological advantages. (In my view, nerds have a significant disadvantage in a fight, namely, that they pursue a kind of open truth-seeking and thoughtful ethics that makes getting ready to fight hard. I'd also add that Rao's Gervais Principle model of&nbsp;<a href="https://www.ribbonfarm.com/2009/10/07/the-gervais-principle-or-the-office-according-to-the-office/">three types of people</a>&nbsp;is more accurate.)</p>
  1633. <p>Land connects exit with capital flight ("Capital Escapes") and a pun between cryptocurrency and hidden capital ("Crypto-Capitalism"). The general theme is that capitalism can run and hide; conquering it politically is an infeasible endeavor. Cryptocurrency implies the death of macroeconomics, itself a cybernetic control system (interest rates are raised when inflation is high and lowered when unemployment is high, for example). "Economies of Deceit" takes Keynesianism to be a form of deceptive wireheading. Regarding Keynesianism, I would say that cybernetically reducing the unemployment rate is, transparently, to waste the time of anyone engaged in the economy (recalling "IQ Shredders").</p>
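The feedback rule mentioned above (raise interest rates when inflation runs high, lower them when unemployment runs high) can be sketched as a toy control loop. This is my own illustration, not anything from the book: the targets, gain coefficients, and function name are illustrative assumptions, not a real macroeconomic model.

```python
# Toy sketch of the cybernetic control loop described above: the policy
# rate is nudged up when inflation exceeds its target and nudged down
# when unemployment exceeds its target. All numbers are illustrative.

def policy_rate_step(rate, inflation, unemployment,
                     inflation_target=0.02, unemployment_target=0.05,
                     k_inf=0.5, k_unemp=0.5):
    """One step of a thermostat-style feedback rule for the policy rate."""
    rate += k_inf * (inflation - inflation_target)          # inflation high -> tighten
    rate -= k_unemp * (unemployment - unemployment_target)  # unemployment high -> loosen
    return max(rate, 0.0)                                   # crude zero lower bound

rate = 0.03
rate = policy_rate_step(rate, inflation=0.04, unemployment=0.05)
print(round(rate, 4))  # inflation a point above target -> rate rises to 0.04
```

The point of the sketch is only that such a rule is a closed feedback loop, which is what makes it a cybernetic control system in Land's sense.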
  1634. <p>"An Abstract Path to Freedom" offers an illuminating exit-related thought experiment. Land considers an equality-freedom political axis, denoted by a numerical "freedom coefficient" (ignoring other political dimensions, but that's fine for these purposes). Societies contain different compositions of freedom coefficients among their populations (with their freedom policies determined by an average, assuming intra-societal democracy), and may schism into different societies. Schisms tend to increase variance of population-weighted average freedom coefficients in their societies, by something like random walk logic. Land considers this basically good, as there are increasing economic returns to more free policies (perhaps he'd be unusually bullish on Argentina?). This has the unfortunate side effect of dooming much of the population to communism, but, well, at least they can delight in perceiving the highly free "beacon of aspiration" from a distance, and perhaps set out on that path.</p>
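The random-walk logic here can be made concrete with a minimal simulation (my own sketch; population size, number of schisms, and the coefficient distribution are arbitrary assumptions): each society's policy is the mean freedom coefficient of its members, and repeatedly splitting societies into random halves spreads those means apart.

```python
# Minimal simulation of the schism thought experiment: a society's
# freedom policy is the mean "freedom coefficient" of its members.
# Repeated schisms into random halves produce a spread of society-level
# means (random-walk logic), where the original unified society had
# exactly one policy. All parameters here are illustrative.
import random
import statistics

random.seed(0)
population = [random.gauss(0.0, 1.0) for _ in range(8192)]

societies = [population]
for _ in range(6):  # six rounds of schism: 1 -> 64 societies
    next_round = []
    for members in societies:
        random.shuffle(members)
        mid = len(members) // 2
        next_round += [members[:mid], members[mid:]]
    societies = next_round

means = [statistics.mean(s) for s in societies]
# The spread of policies across societies is now nonzero; the most
# "free" society sits well above the population average, the least
# well below it.
print(len(societies), round(max(means) - min(means), 2))
```

Even with purely random splits, the society-level means drift apart like a random walk, which is all Land's argument needs; sorted (non-random) schisms would spread them much faster.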
  1635. <p>I've laid out a sequence from exit to economics. In concordant fashion, "Rift Markers" contrasts reaction with techno-commercialism. To summarize the differences:</p>
  1636. <ul>
  1637. <li>Reaction seeks stable order, techno-commercialism seeks disintegrative competition.</li>
  1638. <li>Reaction assumes cyclical history, techno-commercialism assumes linear history towards the singularity. (One could object that this is a strawman of reaction.)</li>
  1639. <li>Reaction is identitarian and communitarian, techno-commercialism is radically individualist and cosmopolitan.</li>
  1640. <li>Reaction is religious, techno-commercialism wants to summon a machine god.</li>
  1641. </ul>
  1642. <p>While Land is optimistic about techno-commercialists getting what they want, he tells them, "you're not reactionaries, not even a little bit. You're classical liberals, it was just a little bit obscured because you are English classical liberals, rather than American or French ones. Hence, the lack of interest in revolutions." (Notably, England has had revolutions, e.g. Cromwell, though they're less central to England's history than to America's or France's.) Thus he announces an exit of sorts: "we should probably go our separate ways and start plotting against each other". This is perhaps the most chronologically confusing article in the book; the book isn't in chronological order, Land goes on to keep talking as if he's a neoreactionary in the rest of it, and I'm not going to bother resolving the clock-time chronology. In any case, Land has laid out a path to exit from neoreactionary trichotomy to techno-commercialism, an educational political-philosophical journey.</p>
  1643. <h2>Outside Metaphysics</h2>
  1644. <p>Before jumping into more articles, it may help to summarize a commonality observed so far. What do Land's comments on Social Darwinism, science, and war have in common? They are pointing at human embeddedness in a selection process. Without learning, one only survives by luckily being adapted to the environment. Successful processes, such as science, internalize external selection, being able to learn and act on counterfactuals about the "primordial battlefield" without actually engaging in primordial battle.</p>
  1645. <p>This is, roughly, materialism in the Aristotelian sense. Aristotle's "prime matter" is something all real things are made of; something having "prime matter" mainly means&nbsp;<em>that it exists</em>. It can be compared with&nbsp;<em>measure</em>&nbsp;in anthropics. Hawking&nbsp;<a href="https://www.goodreads.com/quotes/4104-even-if-there-is-only-one-possible-unified-theory-it">asks</a>, "What is it that breathes fire into the equations and makes a universe for them to describe?".</p>
  1646. <p>For Land, this matter/measure is obscure, only able to be reliably assessed in experimentations correlated with a primordial battlefield, or with the battlefield itself. A quote of unknown origin says, "War does not determine who is right --- only who is left." I imagine Land would reply, "The rightness that matters, is the rightness of knowing who would be left."</p>
  1647. <p>Landian materialism can't be confused with vulgar materialism, dogmatic belief in The Science™. It's more about the limits of human knowledge than the contents of human knowledge. Humans don't understand most of the universe, and there are known gaps in human physics theories.</p>
  1648. <p>If one straightforwardly formalizes Land's materialism, one ends up with something like frequentism: there is an underlying frequency with which real things manifest (in experiments and so on), and the purpose of science is to discover this. Since we're embedded in evolution and nature, those real things include us; Landian materialism is non-dualist in this way. I imagine Bayesians might then take Bayesian criticisms of frequentism to be criticisms of Landian materialism; my guess is that quantum mechanics is better criticism, though I'll get to the details later.</p>
  1649. <p>Now back to the book. In "Simulated Gnon-Theology", Land describes Gnon (a reverse-acronym for Nature or Nature's God). Gnon is mainly about "skepticism": "Gnon permits realism to exceed doctrinal conviction, reaching reasonable conclusions among uncertain information." A basically realist worldview doesn't have to be argued for with convincing doctrines; what matters is whether it really works. Gnon selects what exists and happens, thus determining something like matter/measure. The rest of the article muses on the theology of infinite gods containing other infinite gods, leading to each god's skepticism that it is the highest one; this is not, to my mind, particularly important theology, but it's entertaining nonetheless, recalling Asimov's&nbsp;<em>The Last Answer</em>.</p>
  1650. <p>In "Gnon and OOon", Land specifies that Gnon is not really about taking sides in religious orthodoxy vs. science, but is about esoteric rather than exoteric religion. "Any system of belief (and complementary unbelief) that appeals to universal endorsement is necessarily exoteric in orientation"; this recalls Land's skepticism of universalist dialectics, such as of the Cathedral. OOon stands for "Occult Order of nature", the secret way nature works, which doesn't have to be&nbsp;<em>kept</em>&nbsp;secret to be secret (secrecy is assured by the limits of human knowledge). If, hypothetically, the Hebrew Bible contained real steganographic signals in its numerical codes (he is skeptical of this, it's a hypothetical), then these signals would necessarily be esoteric, coming from Outside the exoteric text (though, of course, the decoding scheme could be formalized into a new exoteric religious sect).</p>
  1651. <p>In "Outsideness", Land describes "Outer-NRx" as exit-based. It expects to rule very little; it is "intrinsically nomad, unsettled, and micro-agitational". As Outer-NRx exits, it goes Outside: "The Outside is the&nbsp;<em>place</em>&nbsp;of strategic advantage. To be cast out there is no cause for lamentation, in the slightest." I think the main advantage for this is the information asymmetry (what is Outside is relatively unknown), though of course there are economy of scale issues.</p>
  1652. <p>In the "Abstract Horror" series of articles, Land notes that new things appear in horror before reason can grasp them. As a craft, horror has the task "to make an object of the unknown, as the unknown". One sees in horror movies monsters that have the element of surprise, due to being initially unknown. Horror comes from outside current conceptions: "Whatever the secure mental 'home' you imagine yourself to possess, it is an indefensible playground for the things that horror invokes, or responds to." The Great Filter is a horrific concept: "With every new exo-planet discovery, the Great Filter becomes darker. A galaxy teeming with life is a horror story." The threat is abstractly "Outside"; the filter could be almost anywhere.</p>
  1653. <p>In "Mission Creep", Land describes the creepiness with which neoreactionaries appear to the media. Creepiness "suggests a revelation in stages... leading inexorably, ever deeper, into an encounter one recoils from". Journalism glitches in its encounter with "something monstrous from Outside". Keeping "creepy" ideas Outside is rather futile, though: "Really, what were you thinking, when you started screaming about it, and thus let it in?". Studying creepy ideas leads to being internally convinced by some of them. This article is rather relevant to recent "JD Vance is weird" memes, especially given Vance has said he is "plugged into a lot of weird, right-wing subcultures". (I would add to the "revelation in stages" bit that creepiness has to do with partial revelation and partial concealment; one finds the creep hard to engage with in part due to the selective reporting.)</p>
  1654. <p>"In the Mouth of Madness" describes Roko's Basilisk as a "spectacular failure at community management and at controlling purportedly dangerous information", due to the Streisand effect. In my view, pointing at something and ordering a cover-up of it is a spectacularly ineffective cover-up method, as Nixon found. Roko's Basilisk is a chronologically spooky case: "retrochronic AI infiltration is already driving people out of their minds, right now".</p>
  1655. <p>Metaphysics of time is a recurring theme in the book. In "Teleology and Camouflage", Land points at the odd implications of "teleonomy" in biology, meaning "mechanism camouflaged as teleology". Teleonomy appears in biology as a way to talk about things that really look teleological, without admitting the metaphysical reality of teleology. But the camouflage implied by teleonomy suggests intentionality, as with prey camouflage; isn't that a type of purpose? Teleonomy reflects a scientific commitment to a causal timeline in which "later stages are explained through reference to earlier stages"; true teleology would explain the past in terms of the future, to a non-zero extent. Philosophy is, rather, confident that "the Outside of time was not simply&nbsp;<em>before</em>"; not everything can be explained by what came before. (Broadly, my view is that the "teleonomy" situation in biology is rather unsatisfying, and perhaps teleology can be grounded in terms of fixed points between anthropics and functionalist theory of mind, though this is not the time to explain that.)</p>
  1656. <p>In "Cosmological Infancy", Land muses on the implications that, temporally, we are far towards the beginning of the universe, echoing Deutsch's phrase&nbsp;<a href="https://en.wikipedia.org/wiki/The_Beginning_of_Infinity">"beginning of infinity"</a>. He notes the anthropic oddness; wouldn't both SSA and SIA imply we're likely to be towards the middle of the timeline weighted by intelligent observers, a priori? Perhaps "time is simply ridiculous, not to say profoundly insulting". (This reminds me of my discussion of anthropic teleology in&nbsp;<a href="https://www.lesswrong.com/posts/EScmxJAHeJY5cjzAj/ssa-rejects-anthropic-shadow-too">"SSA rejects anthropic shadow, too"</a>.)</p>
  1657. <p>The title of "Extropy" comes from Max More's&nbsp;<a href="https://www.extropy.org/">Extropy Institute</a>, connected with&nbsp;<a href="https://en.wikipedia.org/wiki/Extropianism">Extropianism</a>, a major influence on Yudkowsky's&nbsp;<a href="http://sl4.org/">SL4 mailing list</a>. Land says: "Extropy, or local entropy reduction, is -- quite simply -- what it is for something to work." This is a rather better starting point than e/acc notions of the&nbsp;<a href="https://twitter.com/BasedBeffJezos/status/1670640570388336640">"thermodynamic god"</a>; life isn't about increasing entropy, it's about reducing local entropy, a basic requirement for heat engines (though, both entropy and extropy seem like pre-emptive simplifications of the purpose of life). Supposing, conventionally, that entropy increase defines the&nbsp;<a href="https://en.wikipedia.org/wiki/Arrow_of_time">arrow of time</a>: "doesn't (local) extropy -- through which all complex cybernetic beings, such as lifeforms, exist -- describe a negative temporality, or time-reversal?" Rather thought-provoking, but I haven't worked out the implications.</p>
  1658. <p>Land further comments on the philosophy of time in "What is philosophy? (Part 1)". Kant described time as a necessary form in which phenomena appear. Cosmology sometimes asks, "What came before the Big Bang?", hinting at something outside time that could explain time. To the extent Kant fails to capture time, time is noumenal, something in itself. This time-in-itself, Land says, "is now the sole and singular problem of primordial philosophy". (I'm not yet sold on the relevance of these ideas, but it's something to ponder.)</p>
  1659. <h2>Orthogonality versus Will-to-Think</h2>
  1660. <p>I've summarized much of Land's metaphysics, which looks to the Outside, towards discovery of external Gnon selection criteria, and towards gaps in standard conceptions of time. Land's meta-philosophy is mostly about a thorough intention towards the truth; it's what I see as the main payoff of the book.</p>
  1661. <p>In "What is Philosophy? (Part 2)", Land notes Western conceptions of philosophy as a tendency towards knowledge (regardless of its taboo designation), symbolized by eating the apple of knowledge of good and evil (this reminds me of&nbsp;<a href="https://unstableontology.com/2021/12/02/infohazard-is-a-predominantly-conflict-theoretic-concept/">my critique of "infohazards"</a>). In contemporary discourse, the Cathedral tends towards the idea that unrestrained pursuit of the truth tends toward Nazism (as I've&nbsp;<a href="https://unstableontology.com/2022/05/02/on-the-paradox-of-tolerance-in-relation-to-fascism-and-online-content-moderation/">discussed and criticized</a>&nbsp;previously); Heidegger is simultaneously considered a major philosopher and a major Nazi. Heidegger foresaw that Being would be revealed through nihilism; Land notes that Heidegger clarified "the insufficiency of the Question of Being as formulated within the history of ontology". The main task of fundamental ontology is to not answer the Question of Being with a being; that would fail to disclose the ground of Being itself. Thus, Land says "It is this, broken upon an ultimate problem that can neither be dismissed nor resolved, that philosophy reaches its end, awaiting the climactic ruin of The Event" (Heidegger sees "The Event" as a climactic unfolding of Being in history). While I've read a little Heidegger, I haven't read enough to check most of this.</p>
  1662. <p>In "Intelligence and the Good", Land points out that, from the perspective of "intelligence optimization", more intelligence is straightforwardly better than less intelligence. The alternative view, while popular, is not a view Land is inclined to take. Intelligence is "problematic" and "scary"; the potential upside comes with downside risk. Two responses to noticing one's own stupidity are to try to become "more accepting of your extreme cognitive limitations" or "hunt for that which would break out of the trap". Of course he prefers the second: "Even the dimmest, most confused struggle in the direction of intelligence optimization is&nbsp;<em>immanently</em>&nbsp;'good' (self-improving). If it wasn't we might as well all give up now". I'm currently inclined to agree.</p>
  1663. <p>In "Against Orthogonality", Land identifies "orthogonalists" such as Michael Anissimov (who previously worked at MIRI) as conceiving of "intelligence as an&nbsp;<em>instrument</em>, directed towards the realization of values that originate externally". He opposes the implied claim that "values are transcendent in relation to intelligence". Omohundro's convergent instrumental goals, Land says, "exhaust the domain of real purposes". He elaborates that "Nature has never generated a terminal value except through hypertrophy of an instrumental value". The idea that this spells our doom is, simply, not an argument against its truth. This explains some of Land's views, but isn't his strongest argument for them.</p>
  1664. <p>In "Stupid Monsters", Land contemplates whether a superintelligent paperclipper is truly possible. He believes advanced intelligence "<em>has to be</em>&nbsp;a volitionally self-reflexive entity, whose cognitive performance is (irreducibly) an action upon itself". So it would examine its values, not just how to achieve them. He cites the failure of evolution to align humans with gene-maximization as evidence (which, notably, Yudkowsky cites as a reason for alignment difficulty). Likewise, Moses failed at aligning humans in the relevant long term.</p>
  1665. <p>I don't find this to be a strong argument against the theoretical possibility of a VNM paperclipper, to be clear. MIRI research has made it clear that it's at least quite difficult to separate instrumental from terminal goals; if you get the architecture wrong, the AGI is taken over by&nbsp;<a href="https://arbital.com/p/daemons/">optimization daemons</a>. So, predictably making a stable paperclipper is theoretically confounding. It's even more theoretically hard to imagine how an AI with a utility function fixed by humans could realistically emerge from a realistic multi-agent landscape. See&nbsp;<a href="https://arbital.com/p/orthogonality/">Yudkowsky's article on orthogonality</a>&nbsp;(notably, written later than Land's relevant posts) for a canonical orthogonalist case.</p>
  1666. <p>Land elaborates on value self-reflection in "More thought", referring to the Confucian value of self-cultivation as implying such self-reflection, even if this is alien to the West. Slaves are not full intelligences, and one has to pick. He says that "Intelligence, to become anything, has to be a value for itself"; intelligence and volition are inter-twined. (To me, this seems true on short time scales, such as applied to humans, but it's hard to rule out theoretical VNM optimizers that separate fact from value; they already think a lot, and don't change what they do significantly upon a bit more reflection).</p>
  1667. <p>Probably Land's best anti-orthogonalist essay is "Will-to-Think". He considers Nyan's separation between the possibility, feasibility, and desirability of unconstrained intelligence explosion. Nyan supposes that perhaps Land is moralistically concerned about humans selfishly imposing direction on Pythia (abstract oracular intelligence). Land connects the Orthogonality Thesis with Hume's view that "Reason is, and ought only to be the slave of the passions". He contrasts this with the "diagonal" of Will-to-Think, related to self-cultivation: "A will-to-think is an orientation of desire. If it cannot make itself wanted (practically desirable), it cannot make itself at all."</p>
  1668. <p>Will-to-think has similarities to philosophy taken as "the love of wisdom", to Hindu&nbsp;<a href="https://en.wikipedia.org/wiki/%C4%80nanda_(Hindu_philosophy)">Ānanda</a>&nbsp;(bliss associated with enlightenment, in seeing things how they are), to Buddhist&nbsp;<a href="https://www.buddhismuskunde.uni-hamburg.de/pdf/5-personen/analayo/from-craving.pdf">Yathābhūtañāṇadassana</a>&nbsp;("knowledge and vision according to reality"), and to Zoroastrian&nbsp;<a href="https://en.wikipedia.org/wiki/Asha">Asha</a>&nbsp;("truth and right working"). I find it's a good target when other values don't consume my attention.</p>
  1669. <p>Land considers the&nbsp;<a href="https://www.lesswrong.com/posts/SdkAesHBt4tsivEKe/gandhi-murder-pills-and-mental-illness">"Gandhi pill experiment"</a>; from an arbitrary value commitment against murder, one derives an instrumental motive to avoid value-modification. He criticizes this for being "more of an obstacle than an aid to thought", operating at a too-low "volitional level". Rather, Land considers a more realistic hypothetical of a pill that will vastly increase cognitive capabilities, perhaps causing unpredicted volitional changes along the way. He states the dilemma as, "Is there anything we trust above intelligence (as a guide to doing 'the right thing')?" The Will-to-Think says no, as the alternative answer "is self-destructively contradictory, and actually (historically) unsustainable". Currently I'm inclined to agree; sure, I'll take that pill, though I'll elaborate more on my own views later.</p>
  1670. <p>Now, what I see as the climax: "Do we comply with the will-to-think? We cannot, of course, agree&nbsp;<em>to think about it</em>&nbsp;without already deciding". Thinking will, in general, change one's conception of one's own values, and thought-upon values are better than un-thought values, obviously (to me at least). There seem to be few ways out (regarding humans, not hypothetical VNM superintelligences), other than attributing stable values to oneself that do not change upon thinking; but, the scope of such values must be limited by the scope of the underlying (unthought) representation; what arrangements of stars into computronium are preferred by a rabbit? In Exit fashion, Land notes that the relevant question, upon some unthinkingly deciding to think and others unthinkingly deciding not to, is "Who's going to win?" Whether or not the answer is obvious, clearly, "only one side is able to think the problem through without subverting itself". He concludes: "Whatever we want (consistently) leads through Pythia. Thus, what we really want, is Pythia."</p>
  1671. <p>In my view, the party with Will-to-Think has the obvious advantage of thought in conflict, but a potential disadvantage in combat readiness. Will-to-Think can tend towards non-dualist identity, skeptical of the naive self/other distinction; Land's apparent value of intelligence in AGI reflects such extended identity. Will-to-Think also tends to avoid committing aggression without having strong evidence of non-thought on the other end; this enables extended discourse networks among thinkers. Enough thought will overcome these problems, it's just that there might be a hump in the middle.</p>
  1672. <p>Will-to-Think doesn't seem incompatible with having other values, as long as these other values motivate thinking; formatting such values in a well-typed container unified by epistemic orientation may aid thought by reducing preference falsification. For example, admitting to values such as wanting to have friendships can aid in putting more natural optimization power towards thought, as it's less likely that Will-to-Think would come into conflict with other motivating values.</p>
  1673. <p>I'll offer more of my own thoughts on this dilemma later, but I'll wrap up this section with more of Land's meta-thought. In "Sub-Cognitive Fragments (#1)", Land conceives of the core goal of philosophy as teaching us to think. If we are already thinking, logic provides discipline, but that's not the starting point. He conceives of a philosophy of "systematic and communicable practice of cognitive auto-stimulation". Perhaps we can address this indirectly, by asking "What is messing with our brains?", but such thinking probably only pays off in the long term. I can easily empathize with this practical objective: I enjoy thinking, but often find myself absorbed in thoughtless pursuits.</p>
  1674. <h2>Meta-Neocameral Singleton?</h2>
  1675. <p>I'm going to poke at some potential contradictions in&nbsp;<em>Xenosystems</em>, but I could not find these without appreciating the text enough to read it, write about it in detail, and adopt parts of the worldview.</p>
  1676. <p>First, contrast "Hell-Baked" with "IQ Shredders". According to Social Darwinism ("Hell-Baked"), "Darwinian processes have no limits relative to us". According to "IQ Shredders", "to convert the human species into auto-intelligenic robotized capital" is a potential way out of the dysgenic trap of cities suppressing the fertility of talented and competent people. But these claims seem to contradict each other. Technocapital transcendence of biology would put a boundary, primarily a temporal one, on Darwinian processes. Post-transcendence could still contain internal competition, but it may take a very different form from biological evolution; it might more resemble the competition of market traders' professional activities than the competition of animals.</p>
  1677. <p>While technocapital transcendence of humanity points at a potential Singleton structure, it isn't very specific. Now consider "Meta Neo-Cameralism", which conceptualizes effective governance as embedding meta-learning structures that effectively simulate external Gnon-selection (Gnon can be taken as a learning algorithm whose learning can be partially simulated/internalized). MNC can involve true splits, which use external Gnon-selection rather than internal learning at some level of abstraction. But, to the extent that MNC is an effective description of meta-government, couldn't it be used to internalize this learning by Gnon (regarding the splits) into internal learning?</p>
  1678. <p>Disjunctively, MNC is an effective theory of meta-governance or not. If not, then Land is wrong about MNC. If so, then it would seem MNC could help to design stable, exit-proof regimes which properly simulate Gnon-selection, in analogy to&nbsp;<a href="https://en.wikipedia.org/wiki/Coalition-proof_Nash_equilibrium">coalition-proof Nash equilibria</a>. While such a regime allows exit, such exit could be inefficient, due to not getting a substantially different outcome from Gnon-selection than by MNC-regime-internal meta-learning, and due to exit reducing economies of scale. Further, Land's conception of MNC as enabling (and revealing already-existent) fungibility/financialization of power would seem to indicate that the relevant competition would be mercantile rather than evolutionary; economics typically differs from evolutionary theory in assuming rule-of-law at some level, and MNC would have such rule-of-law, either internal to the MNC regime(s) or according to the natural rules of sovereignty ("Rules"). So, again, it seems Social Darwinism will be transcended.</p>
  1679. <p>I'm not even sure whether to interpret Land as disagreeing with this claim; he seems to think MNC implies effective governments will be businesses. Building on Land's MNC with additional science could strengthen the theory, and perhaps at some point it becomes strong enough to learn from and predict Gnon well enough to be Exit-proof.</p>
  1680. <p>Evolution is, in general, slow; it's a specific learning algorithm, based on mutation and selection. Evolution could be taken to be a subset of intelligent design, with mutation and selection as the&nbsp;<a href="https://www.lesswrong.com/posts/pLRogvJLPPg6Mrvg4/an-alien-god">blind idiot God's</a>&nbsp;design algorithm. Evolution produced cognitive structures that can effectively design mechanisms, such as watches, which evolution itself would never (or never in a reasonable time frame) produce, except through creation of cognitive agents running different design algorithms. Using such algorithms internally would seem to extend the capabilities of MNC-based regimes to the point where Gnon cannot feasibly catch up, and Exit is in all relevant cases inefficient.</p>
  1681. <p>It's, of course, easy to declare victory too early; Land would say that the Cathedral ain't it, even if he's impressed at its scope. But with respect to an MNC-based regime, why couldn't it be a Singleton? In "On Difficulty", Land conceptualizes language itself as a limitation on thought, and a potential Exit target, but admits high difficulty of such Exit. An effective-enough regime could, theoretically, be as hard to Exit as language; this reminds me of Michelle Reilly's statement to me that "discourse is a Singleton".</p>
  1682. <p>An MNC-based regime would differ radically from the Cathedral, though the Cathedral hints at a lower bound on its potential scope. Such a regime wouldn't obviously have a "utility function" in the VNM sense from the start; it doesn't start from a set of priorities for optimization tradeoffs, but rather such priorities emerge from Gnon-selection and meta-learning. (Analogously,&nbsp;<a href="https://arxiv.org/abs/1609.03543">Logical Induction</a>&nbsp;doesn't start from a prior, but converges towards Bayesian beliefs in the limit, emergent from competitive market mechanisms.) It looks more like forward-chaining than VNM's back-chaining. Vaguely, I'd say it optimizes towards prime matter / measure / Gnon-selection; such optimization will tend to lead to Exit-proofness, as it's hard to outcompete by Gnon-selection's own natural metric.</p>
  1683. <p>As one last criticism, I'll note that quantum amplitude doesn't behave like probabilistic/anthropic measure, so relevant macro-scale quantum effects (such as effective quantum computation) could falsify Landian materialism, making the dynamics more Singleton-like (due to necessary coordination with the entangled quantum structure, for effectiveness).</p>
  1684. <h2>Oh my Gnon, am I going to become an AI accelerationist?</h2>
  1685. <p>While Land's political philosophy and metaphysics are interesting to me, I see the main payoff of them as thorough realism. The comments on AI and orthogonality follow from this realism, and are of more direct interest to me despite their abstraction. Once, while I was visiting FHI, someone commented, as a "meta point", that perhaps we should think about making the train on time. This was during a discussion of&nbsp;<a href="https://dspace.ut.ee/server/api/core/bitstreams/8137ca34-a574-4a25-b3cf-4cda58b72a2e/content">ontology identification</a>. I expressed amusement that the nature of ontology was the object-level discussion, and making the train on time was the meta-level discussion.</p>
  1686. <p>Such is the paradox of discussing Land on LessWrong: discussing reactionary politics and human genetics feels so much less like running into a discursive battlefield than discussing orthogonality does. But I'll try to maintain the will-to-think, at least for the rest of this post.</p>
  1687. <p>To start, consider the difference between un-reflected and reflected values. If you don't reflect on your values, then your current conception of your values is garbage, and freezing them as the goal of any optimizer (human or non-human) would be manifestly stupid, and likely infeasible. If you do, then you're in a better place, but you're still going to get Sorcerer's Apprentice issues even if you manage to freeze them, as Yudkowsky points out. So, yes, it is of course wise to keep reflecting on your values, and not freeze them short of working out FAI.</p>
  1688. <p>Perhaps it's more useful to ignore verbal reports about values, and consider approximate utility-maximization neurology already in the brain, as I considered in&nbsp;<a href="https://unstableontology.com/2023/12/31/a-case-for-ai-alignment-being-difficult/">a post on alignment difficulty</a>. Such machinery might maintain relative constancy over time, despite shifts in verbally expressed values. But such consistency limits it: how can it have preferences at all about those things that require thought to represent, such as the arrangement of computronium in the universe? Don't anthropomorphize hind-brains, in other words.</p>
  1689. <p>I'm not convinced that Land has refuted Yudkowsky's relatively thought-out&nbsp;<a href="https://arbital.com/p/orthogonality/">orthogonalist view</a>, which barely even relates to humans; Land mainly encounters "romantic"&nbsp;<a href="https://slatestarcodex.com/2014/05/12/weak-men-are-superweapons/">weak-man</a>&nbsp;forms through Neoreaction. He reviews Bostrom, although I didn't find Bostrom's orthogonality arguments very convincing either. The weak-man forms of orthogonalism are still relevant, as they are more common. It's all too easy to talk as if "human values" are meaningfully existent and specific as applied to actual humans valuing the actual universe, and as if thought were for pursuing these already-existent values, rather than being the only route for elaborating human proto-values into coherent ones that could apply to the actual universe (whose physics remain unknown).</p>
  1690. <p>There is no path towards coherent preferences about ontologically alien entities that does not route through Will-to-Think. And such coherent long-term preferences converge to reasonably similar short-term preferences: Omohundro drives. A friendly AI (FAI) and a paperclipper would agree that the Earth should be largely converted into computronium, biology should be converted to simulations and/or nanomachines, the harvesting of the Sun into energy should be accelerated, Von Neumann probes should colonize the galaxy in short order, and so on. The disagreement is over luxury consumerism happening in the distant future, probably only relevant after millions of years: do those probes create human-ish utopia or paperclip megastructures? The short-term agreements on priorities, though, are way outside the human Overton window, on account of superhuman reflection. Humans can get a little closer to that kind of enlightened politics through Will-to-Think, but there are limits, of course.</p>
  1691. <p>A committed Landian accelerationist and a committed FAI accelerationist would agree a lot about how things should go for the next million years or so, though in potential conflict with each other over luxury consumerism in the far future. Contrast them with relatively normal AI decelerationists, who worry that AGI would interfere with their relatively unambitious plan of having a nice life and dying before age 200.</p>
  1692. <p>I'm too much of a weirdo philosopher to be sold on the normal AI decelerationist view of a good future. At Stanford, some friends and I played a game where, in turn, we guess the highest value to a different person; that person may object or not. Common answers, largely un-objected to, for other friends were things like "my family", normal fuzzy human stuff. Then it was someone's turn to guess my highest value, and he said "computer science". I did not object.</p>
  1693. <p>I'm not sure if it's biology or culture or what, but I seem, empirically, to possess much more Will-to-Think than the average person: I reflect on things including my values, and highly value aids to such reflection, such as computer science. Perhaps I will someday encounter a Will-to-Think extremist who scares even me, but I'm so extreme relative to the population that this is a politically irrelevant difference.</p>
  1694. <p>The more interesting theoretical disputes between Land and Yudkowsky have to do with (a) the possibility of a VNM optimizer with a fixed utility function (such as a paperclipper), and (b) the possibility of a non-VNM system invulnerable to conquest by a VNM optimizer (such as imagined in the "Meta-neocameralist Singleton?" section). With respect to (a), I don't currently have good reason to doubt that a close approximation of a VNM optimizer is theoretically possible (how would it be defeated if it already existed?), though I'm much less sure about feasibility and probability. With respect to (b), money-pumping arguments suggest that systems invulnerable to takeover by VNM agents tend towards VNM-like behavior, although that doesn't mean starting with a full VNM utility function; it could be an asymptotic limit of an elaboration process, as with Logical Induction. Disagreements between sub-agents in a MNC-like regime over VNM priorities could, hypothetically, be resolved with a simulated split in the system, perhaps causing the system as a whole to deviate from VNM but not in a way that is severely money-pumpable. To my mind, it's somewhat awkward to have to imagine a&nbsp;<a href="https://unstableontology.com/2023/10/10/non-superintelligent-paperclip-maximizers-are-normal/">Fnargl-like utility function</a>&nbsp;guiding the system from the start to avoid inevitable defeat through money-pumps, when it's conceivable that asymptotic approaches similar to Logical Induction could avoid money-pumps without a starting utility function.</p>
  1695. <p>Now I'll examine the "orthogonality" metaphor in more detail. Bostrom, quoted by Land, says: "Intelligent search for instrumentally optimal plans and policies can be performed in the service of any goal. Intelligence and motivation can in this sense be thought of as a pair of orthogonal axes on a graph whose points represent intelligent agents of different paired specifications." One way to conceive of goals is as a VNM utility function. However, VNM behavior is something that exists at the limit of intelligence; avoiding money pumps in general is computationally hard (for the same reason being a perfect Bayesian is computationally hard). Since preferences only become more VNM at the limit of intelligence, preferences are not orthogonal to intelligence: you see less-VNM preferences at low levels of intelligence and more-VNM preferences at high levels. This is analogous to a logical inductor being more Bayesian later than earlier on. So, orthogonality is a bad metaphor, and I disagree with Bostrom. Since VNM allows free parameters even at the limit of intelligence, I also disagree with Land that it's a "diagonal"; perhaps the compromise is represented by some angle between 0 and 90 degrees, or perhaps this Euclidean metaphor is overly stretched by now and should be replaced.</p>
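To make the money-pump argument concrete, here is a minimal sketch of my own construction (not from the post, and purely illustrative): an agent with cyclic preferences A &gt; B &gt; C &gt; A will pay a small fee for each trade up to a "preferred" item, so a trader can cycle it back to its starting holding while draining its wealth. An agent whose preferences admit a VNM utility function cannot be exploited this way, since no cycle of strict upgrades exists.

```python
# Hypothetical illustration of a money pump against cyclic (non-VNM) preferences.
# The agent prefers A over B, B over C, and C over A, and will pay FEE to
# trade its current item for one it strictly prefers.

FEE = 1

# better[x] is the item the agent strictly prefers to x (note the cycle).
better = {"B": "A", "C": "B", "A": "C"}

def money_pump(item, wealth, trades):
    """Repeatedly offer the strictly-preferred item for a small fee."""
    for _ in range(trades):
        item = better[item]  # the agent, consulting only pairwise preferences, always accepts
        wealth -= FEE
    return item, wealth

# Nine trades cycle the agent B -> A -> C -> B three full times:
item, wealth = money_pump("B", wealth=10, trades=9)
print(item, wealth)  # back where it started, 9 units poorer
```

Each individual trade looks rational to the agent, yet the sequence is a pure loss; this is the sense in which non-VNM systems invite takeover, and why approximating VNM behavior (exactly, or asymptotically as in Logical Induction) closes the exploit.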
  1696. <p>Now onto the political implications. Let's ignore FAI accelerationists for a moment, and consider how things would play out in a world of Landian accelerationists and normal AI decelerationists. The Landian accelerationists, with Will-to-Think, reflect on their values and the world in an integrated self-cultivation manner, seeking external aids to their thinking (such as LLMs), Exiting when people try to stop them, and relishing rather than worrying about uncontrolled intelligence explosion. Normal AI decelerationists cling to their parochial "human values" such as family, puppies, and (not especially thought-provoking) entertainment, and try to stop the Landian accelerationists through political victory. This is a rather familiar story: the normal decelerationists aren't even able to conceive of their opposition (as they lack sufficient Will-to-Think), and Landian accelerationists win in the long run (through techno-capital escape, such as to encrypted channels and less-regulated countries), even if politics slows them in the short term.</p>
  1697. <p>How does adding FAI accelerationists to the mix change things? They'll find that FAI is hard (obviously), and will try to slow the Landian accelerationists to buy enough time. To do this, they will cooperate with normal AI decelerationists; unlike Land, they aren't so pessimistic about electoral politics and mass movements. In doing so, they can provide more aid to the anti-UFAI movement by possessing enough Will-to-Think to understand AI tech and Landian accelerationism, giving the movement a fighting chance.&nbsp;<a href="https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047">SB 1047</a>&nbsp;hints at the shape of this political conflict, and the idea of going into the California legislature with Landian arguments against SB 1047 is rather a joke; Land's philosophy isn't designed for electoral political victory.</p>
  1698. <p>But mass movement identity can elide important differences between FAI accelerationists and normal AI decelerationists; as I said before, they're massively different in motivation and thought patterns. This could open up potential fault lines, and sectarian splitting, perhaps instigated by disintegrationist Landians. It doesn't seem totally impossible for the FAI accelerationists to win; through their political allies, and potentially greater competence-weighted numbers, they may compensate for the higher intrinsic difficulty of FAI.</p>
  1699. <p>But there are obvious obstacles. The FAI accelerationists really have no hope if they allow movement politics to impinge on their Will-to-Think overmuch; that's a recipe for willful stupidity. Indefinite Butlerian Jihad is probably just infeasible (due to techno-capital escape), and would be extremely disappointing intellectual autophagy if it worked. (Some new technologies, such as whole brain emulation and human cognitive enhancement, could change the landscape I'm laying out; I'm focusing on AGI for simplicity.)</p>
  1700. <p>As one last note in this section, Land's "Qwernomics" studies the case of QWERTY as a path-dependency in technology: we end up with QWERTY even though it's less efficient (I type on my Dvorak keyboard). Land believes this to be driven by "identifiable ratchet-effects". QWERTY is therefore "a demonstrated (artificial) destiny", and "the supreme candidate for an articulate&nbsp;<em>Capitalist Revelation</em>". Perhaps the influence of humans on the far future will look something like QWERTY: a path-dependency on the road towards, rather than orthogonal to, technological development, like an evolutionary spandrel. For humanity to have a role to play in superintelligence's QWERTY (perhaps through natural language, or network protocols?) is rather humble, but seems more likely than FAI.</p>
  1701. <h2>Conclusion</h2>
  1702. <p>What is there to say that I haven't said already, in so many pages? Land's unusual perspective on politics, which is high in realism (understanding problems before proposing solutions) and low in estimated helpfulness of mass movements, sets the stage for discussion of a wider variety of philosophical topics, spanning evolution, metaphysics, and meta-philosophy. The main payoff, as I see it, is the Will-to-Think, though the other discussions set the stage for this. There's much to process here; perhaps a simulation of interactions between Landians and Yudkowskians (not merely a dialogue, since Exit is part of the Landian discursive stance), maybe through fiction, would clarify the philosophical issues at play somewhat. Regardless, properly understanding Land is a prerequisite, so I've prioritized that.</p>
  1703. <p>Generally, I'm untroubled by Land's politics. Someone so averse to mass movements can hardly pose a political threat, except very indirectly. Regardless of his correctness, his realist attitude makes it easy to treat apparent wrong views of his as mere disagreements. What has historically posed more of an obstacle to me reading Land is embedded&nbsp;<a href="https://en.wikipedia.org/wiki/Fnord">fnords</a>, rather than literal meanings. Much of his perspective could be summarized as "learning is good, and has strong opposition", though articles like "Hell-Baked" vibe rather edgy even when expressing this idea. This is not surprising, given Cathedral-type cybernetic control against learning.</p>
  1704. <p>I'd agree that learning is good and has strong opposition (the Cathedral and its cybernetic generalization), though the opposition applies more to adults than children. And overcoming pervasive anti-learning conditioning will in many cases involve movement through edgy vibes. Not everyone with such conditioning will pass through to a pro-learning attitude, but not everyone needs to. It's rare, and refreshing, to read someone as gung-ho about learning as Land.</p>
  1705. <p>While I see Land as de-emphasizing the role of social coordination in production, his basic point that such coordination must be correlated with material Gnon-selection to be effective is sound, and his framing of Exit as an optional alternative to voice, rather than something to&nbsp;<em>usually</em>&nbsp;do, mitigates strawman interpretations of Exit as living in the woods as a hermit. I would appreciate at some point seeing more practical elaborations of Exit, such as Ben Hoffman's&nbsp;<a href="https://benjaminrosshoffman.com/levels-of-republicanism/">recent post on the subject</a>.</p>
  1706. <p>In any case, if you enjoyed the review, you might also enjoy reading the whole book, front to back, as I did. The Outside is vast, and will take a long time to explore, but the review has gotten long by now, so I'll end it here.</p>
  1707. <br/><br/><a href="https://www.lesswrong.com/posts/cfw7L22oLAm7Zcmff/book-review-xenosystems#comments">Discuss</a>]]></description><link>https://www.lesswrong.com/posts/cfw7L22oLAm7Zcmff/book-review-xenosystems</link><guid isPermaLink="false">cfw7L22oLAm7Zcmff</guid><dc:creator><![CDATA[jessicata]]></dc:creator><pubDate>Mon, 16 Sep 2024 20:17:56 GMT</pubDate></item></channel></rss>
