Congratulations!

[Valid Atom 1.0] This is a valid Atom 1.0 feed.

Recommendations

This feed is valid, but interoperability with the widest range of feed readers could be improved by implementing the following recommendations.

Source: http://plasmasturm.org/feeds/plasmasturm/

  1. <?xml version="1.0" encoding="utf-8"?>
  2. <feed xmlns="http://www.w3.org/2005/Atom" xml:base="http://plasmasturm.org/feed"><title>plasmasturm.org</title><subtitle>musings in human and machine language</subtitle><author><name>Aristotle Pagaltzis</name><email>pagaltzis@gmx.de</email></author><link href="http://plasmasturm.org/"/><id>urn:uuid:41632386-0f0d-11da-9fcb-dd680b0526e0</id><icon>/favicon.ico</icon><updated>2024-02-12T01:38:42+01:00</updated><entry><title>How to eject external hard disks used for APFS TimeMachine backups (I’m not making this up)</title><link href="/log/apfs-timemachine-killall-mds/"/><id>urn:uuid:2e108a89-8e31-4641-bfca-1fea292c101c</id><published>2023-01-20T00:32:44+01:00</published><updated>2024-02-12T01:38:42+01:00</updated><content xml:base="/log/apfs-timemachine-killall-mds/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  3.  <p>Ever since upgrading to a recent Mac that came with the disk formatted with APFS, a perennial irritation has been Time Machine. I use a USB hard drive for backups, which of course needs unplugging when I want to take the machine with me somewhere. There are long stretches of time when I don’t even think about this because it works just fine. And then there are the other stretches of time when this has been impossible: clicking the eject button in Finder does nothing for a few ponderous moments and then shows a force eject dialog. (And of course the command line tools and other methods all equally fail.)</p>
  4.  <center><p><img src="/log/apfs-timemachine-killall-mds/force-eject-dialog.png" width="507" height="173" alt=""/></p></center>
  5.  <p>I could of course forcibly eject the disk, as the dialog offers. And maybe I would, if this were just a USB stick I was using to shuffle around some files. But doing this with my backup disk seems rather less clever. This disk I want to unmount in good order.</p>
  6.  <p>Unfortunately when this happens, there is no help for it: even closing all applications does not stop the mystery program from using it. So what is the program which is using the disk? The Spotlight indexer, it turns out.</p>
  7. <pre><samp><b>$ sudo lsof +d /Volumes/TimeMachine\ XXXXXX/</b>
  8. COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
  9. mds     1234 root    5r   DIR   1,24      160    2 /Volumes/TimeMachine XXXXXX
  10. mds     1234 root    6r   DIR   1,24      160    2 /Volumes/TimeMachine XXXXXX
  11. mds     1234 root    7r   DIR   1,24      160    2 /Volumes/TimeMachine XXXXXX</samp></pre>
  12.  <p>How do you ask this to stop?</p>
  13.  <p>Beats me. I have not found any documented, official way of doing so. Not the Spotlight privacy settings, not removing the disk from the list of backup disks in the Time Machine settings, not the combination of those, no conceivable variation of using <code>tmutil</code> on the command line, not a number of other things – nothing. Even <code>killall -HUP mds</code> does not help: obviously the Spotlight service notices, and just respawns the processes.</p>
  14.  <p>And this state will persist for hours and days – literally. On one occasion, I wanted but didn’t <em>need</em> the machine with me, so I left it to its own devices out of curiosity. It took over 2 days before ejecting the Time Machine volume worked again.</p>
  15.  <p>For a purportedly <em>portable</em> computer, this is… you know… a bit of a showstopper.</p>
  16.  <p>So after suffering this issue long enough, I finally tried something stupid the other day – and whaddayaknow, it works:</p>
  17.  <pre><samp>$ sudo killall -HUP mds ; sudo umount /Volumes/TimeMachine\ XXXXXX/</samp></pre>
  18.  <p>This will not always work the first time; it may need a repeat or two. But sooner rather than later it does take. Evidently the <tt>mds</tt> process respawn is not so quick that it wouldn’t leave a window during which the disk can be unmounted properly.</p>
  19.  <p>And so I put the following in <tt>~/bin/macos/unmount-despite-mds</tt> and made it executable:</p>
  20.  <pre><code style="ft-sh">#!/bin/bash
  21. if (( $# != 1 )) ; then echo "usage: $0 &lt;mountpoint&gt;" 1&gt;&amp;2 ; exit 1 ; fi
  22. parent_dev=$( stat -f %d "$1"/.. ) || exit $?
  23. while [[ -d $1 ]] &amp;&amp; [[ "$( stat -qf %d "$1" )" != $parent_dev ]] &amp;&amp; ! umount "$1" ; do
  24.  killall -HUP mds
  25.  lsof +d "$1"
  26. done</code></pre>
  27.  <p>Now I can invoke it at any time from the terminal like so:</p>
  28.  <pre><samp>$ sudo ~/bin/macos/unmount-despite-mds /Volumes/TimeMachine\ XXXXXX/</samp></pre>
  29.  <p>What this does is check whether the given path is a mount point that fails to unmount. If so, it sends a signal to terminate the Spotlight indexer processes and immediately retries. In a loop.</p>
  30.  <p>Or to put it more colloquially, it machineguns Spotlight until <tt>umount</tt> can slip in under the suppressing fire and pull the disk out from under it.</p>
  31.  <p>This is not a solution. It is the bluntest of instruments. But this is what works. And as far as I have been able to find, this is the only thing that works.</p>
  32.  <p>Three cheers for Apple software quality, I guess.</p>
  33.  <ins datetime="2024-02-12T01:38:42+01:00"><p><b>Update</b>: Sending <code>HUP</code> instead of <code>TERM</code> is a better idea. Both cause the <code>mds</code> processes to shut down, but <code>TERM</code> seems to leave Spotlight in a bad state: for some time after, some search results take longer to show up and search result order gets messed up, which painfully disrupts my muscle memory. With <code>HUP</code> I have not observed the same issue. I have added it to all the examples in the article.</p></ins>
  34. </div></content></entry><entry><title>Low-brow clipboard integration with remote X</title><link href="/log/ssh-xclip/"/><id>urn:uuid:a2a78dee-1593-4cf4-8e62-cf8b2c6c38ab</id><published>2024-02-11T21:15:11+01:00</published><updated>2024-02-11T21:15:11+01:00</updated><content xml:base="/log/ssh-xclip/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  35.  <p>This is another one of those small-lightbulb moments like <a href="/log/dtach-twice/" title="Doubly detached IRC">this one</a> was. The setup for this one is that I run some X programs on my home server for remote use from the machine I actually work on – but via <abbr>VNC</abbr>, where I never got clipboard integration working.<sup><a href="#fn-ssh-xclip-1" id="fn-ssh-xclip-1-ref">1</a></sup> I only rarely need to paste in X, so I’ve just been living with this… until the obvious recently occurred to me:</p>
  36.  <p>The low-brow but perfectly serviceable solution is to just use the clipboard commands of both systems, strung together with a pipe over an <abbr title="Secure Shell">SSH</abbr> connection. Of course this needs to be done manually to transfer the clipboard every single time I copy something on one side and want to paste it on the other – the low-brow bit. But it is far more convenient than pasting into terminals like I was doing before, to the point I won’t feel the lack any more.</p>
  37.  <p>Only… it didn’t quite work right.</p>
  38.  <p>I searched the web for a fix and unsurprisingly found that plenty of people have had the same idea:</p>
  39.  <pre><code style="ft-sh"># copy in X, run this, then paste locally<br/>ssh server.home xclip -selection clipboard -out | pbcopy</code></pre>
  40.  <p>(If you’re not using a Mac locally, just replace <a href="https://ss64.com/mac/pbcopy.html"><code>pbcopy</code></a> (and <a href="https://ss64.com/mac/pbpaste.html"><code>pbpaste</code></a>) with your system’s equivalent.)</p>
  41.  <p>The trouble is, I was looking for the other direction – and far from novel though the idea may be, I didn’t find a command written up anywhere. There turns out to be a minor trick to it (and maybe that’s why), which I ultimately had to figure out for myself:</p>
  42.  <pre><code style="ft-sh"># copy locally, run this, then paste in X<br/>pbpaste | exec ssh server.home 'exec xclip -selection clipboard &amp;&gt; /dev/null'</code></pre>
  43.  <p>Namely, this won’t work as desired without the “<code>&amp;&gt; /dev/null</code>” bit.</p>
  44.  <p>It will work, except without returning to the prompt. It just hangs. This comes down to the way that selections work in X: the program the selected content came from must grab that selection and then answer requests to paste it – no program, no paste. So the only way <code>xclip</code> can work is to stick around as a background process after putting something on the clipboard – until something else is copied, then it can exit. And because <code>xclip</code> doesn’t close stdout and stderr, <code>ssh</code> won’t know to quit any sooner than that, so it sits there waiting. To create a compound command that immediately returns to the prompt after shipping over the local clipboard contents, it is therefore necessary to close stdout and stderr on <code>xclip</code> explicitly.</p>
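 <p>To save retyping, the two pipelines can be wrapped in a pair of small shell functions. A minimal sketch – the names <code>clip-pull</code> and <code>clip-push</code> are made up, and <code>server.home</code> stands in for the host from the examples above:</p>
 <pre><code style="ft-sh"># pull the remote X clipboard into the local (Mac) clipboard
clip-pull() { ssh server.home xclip -selection clipboard -out | pbcopy ; }
# push the local clipboard to the remote X clipboard
clip-push() { pbpaste | ssh server.home 'exec xclip -selection clipboard &amp;&gt; /dev/null' ; }</code></pre>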
  45.  <p>With that, I have a solution I can happily live with.<sup><a href="#fn-ssh-xclip-2" id="fn-ssh-xclip-2-ref">2</a></sup></p>
  46.  <hr/>
  47.  <ol>
  48.    <li id="fn-ssh-xclip-1">
  49.      <p>This is after trying all the usual suggestions (like running <code>vncconfig</code> on the server). Presumably they didn’t work because I am using the Screen Sharing app that comes with MacOS, which apparently is not actually a <abbr>VNC</abbr> client but just uses that protocol for most of its functionality.</p>
  50.      <p>The backstory to that is that I used to use <a href="https://xpra.org">Xpra</a> for remote X because it makes that rather neat: individual windows on the server are displayed remotely as individual windows on the client, complete with native local windowing UI, so there is none of the ungainly window-in-window hassle and no need for an X window manager. The catch is that <a href="https://xpra.org">Xpra</a> requires reasonably matching versions on server and client, and I have at times fallen well behind running the latest <abbr title="Operating System">OS</abbr> version on either side, which on a few occasions has made them tricky to align. A while ago I failed to find any working constellation at all, at which point I decided I was tired of doing that and would switch to something less bespoke. Now I no longer need to install anything on the Mac and have multiple highly compatible options on the server. <a href="#fn-ssh-xclip-1-ref">↩</a></p>
  51.    </li>
  52.    <li id="fn-ssh-xclip-2"><p>Or maybe I’ll go back to Xpra, who knows… <a href="#fn-ssh-xclip-2-ref">↩</a></p></li>
  53.  </ol>
  54. </div></content></entry><entry><title>Closing for trouble in the land of streams</title><summary>Closing stderr/stdout/stdin outright is a bad idea</summary><link href="/log/stdclose/"/><id>urn:uuid:fcaa9710-02f6-4ee6-909e-0f4f9973b6df</id><published>2024-02-08T17:22:22+01:00</published><updated>2024-02-08T17:22:22+01:00</updated><content xml:base="/log/stdclose/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  55.  <p>Recently I learned of a mistake I had been making without realizing it. <cite><a href="https://www.jwz.org/blog/2023/11/how-did-apple-manage-to-break-redirects-on-all-versions-of-bash/#comment-245047">Zygo</a></cite>:</p>
  56.  <blockquote cite="https://www.jwz.org/blog/2023/11/how-did-apple-manage-to-break-redirects-on-all-versions-of-bash/#comment-245047">
  57.    <p>Closing FD 2, without simultaneously opening something else in its place (like a new TTY, if you’re trying to shed a controlling TTY or drop the last references to your old namespace/<code>chroot</code> parent), is weird and occasionally dangerous.</p>
  58.    <p>It’s asking for trouble because libraries are chatty and they write noise to <code>stderr</code> and it’s nearly impossible to stop them. Libraries also like to make their own open file descriptors, and they tend not to notice when the thing they’ve opened/connected to and expect to exclusively control/have a private conversation with happens to also be <code>stderr</code> and full of noise from random library functions. Chaos follows when those patterns collide.</p>
  59.  </blockquote>
  60.  <p><cite><a href="https://www.jwz.org/blog/2023/11/how-did-apple-manage-to-break-redirects-on-all-versions-of-bash/#comment-245050">Jamie Zawinski</a></cite>:</p>
  61.  <blockquote cite="https://www.jwz.org/blog/2023/11/how-did-apple-manage-to-break-redirects-on-all-versions-of-bash/#comment-245050">
  62.    <p>XScreenSaver has had code in it since the beginning to re-open FDs 1 and 2 as <code>/dev/null</code> lest <code>XOpenDisplay</code> put the connection to the X server on FD 2 with hilarious results.</p>
  63.  </blockquote>
  64.  <p>In other words, closing one of the standard streams might lead to flooding somewhere else. You must ensure drainage, even if only to the void.</p>
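 <p>In shell terms, a minimal sketch of the difference – closing the stream outright versus keeping it occupied, even if only by the void:</p>
 <pre><code class="ft-sh"># risky: closes stderr outright; the next file descriptor the process opens may land on FD 2
exec 2&gt;&amp;-
# safer: FD 2 stays open, and any library chatter drains into /dev/null
exec 2&gt;/dev/null</code></pre>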
  65. </div></content></entry><entry><title type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">Doubly detached <abbr title="Internet Relay Chat">IRC</abbr></div></title><summary>Reaping Mosh’s full benefits for long-running terminal sessions</summary><id>urn:uuid:aaf421c6-fe58-4433-aba5-289ea30ed4b1</id><published>2018-02-08T05:13:25+01:00</published><updated>2023-02-12T11:47:11+01:00</updated><link href="/log/dtach-twice/"/><content xml:base="/log/dtach-twice/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  66.  <p>For long-running terminal sessions such as <abbr title="Internet Relay Chat">IRC</abbr>, I’ve long used <a href="http://dtach.sourceforge.net/">dtach</a> (or <a href="https://www.gnu.org/software/screen/">Screen</a>, or <a href="https://tmux.github.io/">tmux</a>, it doesn’t matter) to be able to leave them running on a server without having to be connected to it… just like everyone else does. But recently I put together some simple facts in a retrospectively obvious way that I haven’t heard of anyone else doing, and thus achieved a little quality of life improvement.</p>
  67.  <p>I’ve written before about <a href="/log/mosh/" title="Endorsing Mosh">my love for Mosh during times of need</a>. And when I had the insight I am writing about, I was in a situation where I seriously needed it. (If you don’t know what <a href="https://mosh.org/">Mosh</a> is, read about it first, because ⓐ you’re missing out and ⓑ the rest of this article won’t make much sense otherwise.)</p>
  68.  <p>Here’s the thing about Mosh, though: it still has to go through regular <abbr title="Secure Shell">SSH</abbr> in order to bring up a Mosh session. And on a severely packet-lossy connection, that alone can be hell.</p>
  69.  <p>The real beauty of Mosh comes to the fore only if you keep the session around once it’s set up. As long as it’s up, then no matter how bad the packet loss, you get decent interactivity. But to fully benefit from that, you have to avoid tearing the session down.</p>
  70.  <p>My problem is that try as I might, I have never been able to break with my compulsion to close terminal windows once I am done with them. For <abbr title="Internet Relay Chat">IRC</abbr> that means sooner or later I’ll want to detach from a session I’m not actively chatting in. And because I use Mosh to run dtach on the remote end, detaching from <abbr title="Internet Relay Chat">IRC</abbr> means that the dtach client exits on the remote end… which tears down the Mosh session.</p>
  71.  <p>The simple fact that suddenly occurred to me is that I can <em>also</em> use dtach on <em>my</em> end of the connection, <em>in front</em> of Mosh:</p>
  72.  <pre><code class="ft-sh">dtach -A ~/.dtach/irssi mosh <i>hostname</i> dtach -A ~/.dtach/irssi irssi</code></pre>
  73.  <p>Now when I detach, it is only from my local dtach session, not the one on the server. So the Mosh session behind it sticks around – without me having to keep the terminal window open.</p>
  74.  <p>The upshot is a dtach ↔ Mosh ↔ dtach sandwich which gives me the full benefits of Mosh.</p>
  75.  <hr/>
  76.  <p>Should you want to use this yourself, you will need the last piece of the puzzle, namely how to bring down the Mosh session while keeping the IRC session around.</p>
  77.  <ins datetime="2023-02-12T11:47:11+01:00"><p><b>Update</b>: The obvious answer is of course to just quit the Mosh session using <kbd>Ctrl</kbd><kbd>^</kbd> <kbd>.</kbd> so the following is only of academic interest.</p></ins>
  78.  <p>To do that you can detach on the remote end, and the simplest way of doing that is this:</p>
  79.  <pre><code class="ft-sh">dtach -a ~/.dtach/irssi -E # and then press the detach shortcut</code></pre>
  80.  <p>The <code>-E</code> switch disables the keyboard shortcut for detaching in the local dtach client. This means when you press the shortcut it gets sent to the remote dtach client.</p>
  81.  <p>(What follows then is exactly the same chain of events as always when you detach from the remote dtach session: the remote dtach client exits, so the remote Mosh session ends, so the local Mosh client exits – and therefore the local dtach session ends as well. Detaching from the remote end thus brings the whole edifice down.)</p>
  82. </div></content></entry><entry><title>Comparing the contents of gzipped tarballs</title><link href="/log/gzcmp/"/><id>urn:uuid:f528bb2c-892a-4a0f-804b-c49d947e71ed</id><published>2022-12-18T18:12:51+01:00</published><updated>2022-12-18T18:12:51+01:00</updated><content xml:base="/log/gzcmp/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  83.  <p>Some time ago I had a pile of tarballs which were created periodically by a cron job on a machine, regardless of whether anything had changed between runs, and they were eating up all the storage space. To free up space I wanted to get rid of the redundant ones, so I needed a quick way to identify which of them had (no) changes relative to the respective preceding tarball.</p>
  84.  <p>I was hoping I could just compare the tarballs themselves rather than doing anything more complicated that involved actually extracting their contents (and then, I don’t know, doing some kind of fingerprinting on top). Unfortunately some naïve attempts using <code>cmp</code> failed and seemed to indicate that I was going to have to take the kind of more complicated approach I was hoping to avoid.</p>
  85.  <p>Now, there is quite a bit of discussion online about <a href="https://reproducible-builds.org/docs/archives/" title="Reproducible Builds: Archive metadata">how to make Tar generate reproducible (so in a sense, canonical) tarballs</a>. Of course that isn’t much use in hindsight (such as in my case), when a pile of tarballs is already sitting around on disk.</p>
  86.  <p>But in my case, the part of the filesystem that all of the tarballs were made from was an area where, in the case of the tarballs that were redundant, absolutely nothing would have happened. By that I don’t just mean logical non-changes like creating and then deleting a temporary file. I mean that nothing was writing to that part of the filesystem in any capacity. Therefore Tar should have encountered files and directories in the exact same order each time it iterated that directory tree. Why then should it ever generate non-identical tarballs? Exasperated, I dug into the question of archive reproducibility for quite a while.</p>
  87.  <p>Spoiler: that was a waste of time.</p>
  88.  <p>I finally discovered that Tar is not the culprit at all…</p>
  89.  <p>Gzip is! Namely, the Gzip file header includes a timestamp.</p>
  90.  <p>So my instincts were right: Tar <em>should</em> have been creating the exact same archive over and over, and in fact it <em>was</em>. I just hadn’t thought to suspect Gzip at all.</p>
  91.  <p>Luckily, the timestamp is found at a fixed location in a gzipped file: it is the 32-bit value at offset 4. And handily, <code>cmp</code> has a switch to tell it to seek past the start of the file(s) it is comparing.</p>
  92.  <p>So to make a long story short:</p>
  93.  <pre><code>cmp -i8 file1.tar.gz file2.tar.gz</code></pre>
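 <p>And to weed out a whole directory of such tarballs in one pass, a rough sketch along these lines will do (the <code>backup-*.tar.gz</code> pattern is made up, and the filenames are assumed to sort chronologically):</p>
 <pre><code># delete every tarball that is identical, past the Gzip header, to the last one kept
prev=
for f in backup-*.tar.gz ; do
    if [ -n "$prev" ] &amp;&amp; cmp -s -i8 "$prev" "$f" ; then
        rm "$f"
    else
        prev=$f
    fi
done</code></pre>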
  94.  <p>(This entry is brought to you by the hope of not having to figure this all out a third time in my life. Some time after the events described above, it came back to me that I had already figured this out years before but lost all memory of it by the next occasion to use the knowledge.)</p>
  95. </div></content></entry><entry><title>The Programmers’ Credo</title><summary>Maciej Cegłowski identifies the programmer’s attitude</summary><link href="/log/programmers-credo/"/><id>urn:uuid:1d74dcbd-32ac-489c-aa7d-c39876426d2e</id><published>2022-12-18T17:25:18+01:00</published><updated>2022-12-18T17:25:18+01:00</updated><content xml:base="/log/programmers-credo/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  96.  <p><cite><a href="https://twitter.com/pinboard/status/761656824202276864">Maciej Cegłowski</a></cite>:</p>
  97.  <blockquote cite="https://twitter.com/pinboard/status/761656824202276864">
  98.    <p>The Programmers’ Credo:</p>
  99.    <p>We do these things not because they are easy,<br/>but because we thought they were going to be easy.</p>
  100.  </blockquote>
  101. </div></content><category scheme="http://plasmasturm.org/" term="seen" label="Seen"/></entry><entry><title>“CAPTCHA”</title><link href="/log/philosophical-captcha/"/><id>urn:uuid:2cee765f-6585-4151-ba71-09f87824a667</id><published>2022-12-06T14:41:57+01:00</published><updated>2022-12-18T17:18:11+01:00</updated><content xml:base="/log/philosophical-captcha/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  102.  <p><cite><a href="https://www.schneier.com/blog/archives/2022/12/captcha.html" title="CAPTCHA">Bruce Schneier</a></cite>:</p>
  103.  <blockquote cite="https://www.schneier.com/blog/archives/2022/12/captcha.html">
  104.    <p>This is an actual CAPTCHA I was shown when trying to log into PayPal.</p>
  105.    <p>[…]</p>
  106.    <p>As an actual human and not a bot, I had no idea how to answer. Is this a joke? (Seems not.) […] I stared at the screen, paralyzed, for way too long.</p>
  107.  </blockquote>
  108.  <ins datetime="2022-12-18T17:18:11+01:00"><p><b>Update</b>: <a href="https://www.schneier.com/blog/archives/2022/12/as-long-as-were-on-the-subject-of-captchas.html" title="As Long as We’re on the Subject of CAPTCHAs">some follow-up humour</a>.</p></ins>
  109. </div></content><category scheme="http://plasmasturm.org/" term="seen" label="Seen"/></entry><entry><title>“POSIX hardlink heartache”</title><link href="/log/hardlinksafety/"/><link rel="via" href="https://rachelbythebay.com/w/2022/03/15/link/" title="rachelbythebay: Dumb things you can sometimes do with hard links"/><id>urn:uuid:e80e5c24-5c08-44b4-b3ca-7823f12154eb</id><published>2022-04-03T15:53:04+02:00</published><updated>2022-04-03T15:53:04+02:00</updated><content xml:base="/log/hardlinksafety/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  110.  <p><cite><a href="http://michael.orlitzky.com/articles/posix_hardlink_heartache.xhtml" title="POSIX hardlink heartache">Michael Orlitzky</a></cite>:</p>
  111.  <blockquote cite="http://michael.orlitzky.com/articles/posix_hardlink_heartache.xhtml">
  112.    <p>It follows that, on POSIX systems without any non-standard protections, it’s unsafe for anyone (but in particular, root) to do anything sensitive in a directory that is writable by another user. Cross-platform programs designed to do so are simply flawed.</p>
  113.  </blockquote>
  114. </div></content><category scheme="http://plasmasturm.org/" term="seen" label="Seen"/></entry><entry><title>“Learning to Read, Again”</title><summary>David Kolb’s reflections on how reading and its meaning has evolved</summary><link href="/log/toreadagain/"/><link rel="via" href="https://markbernstein.org/Mar22/LearningToReadAgain.html" title="Mark Bernstein"/><id>urn:uuid:ea915248-5e13-4a16-99a6-8729d2591bb0</id><published>2022-03-27T15:37:48+02:00</published><updated>2022-03-27T15:37:48+02:00</updated><content xml:base="/log/toreadagain/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  115.  <p><cite><a href="https://dkolb.org/wp-content/uploads/2022/03/Learning-to-Read-Again-v2.pdf" title="Learning to Read, Again">David Kolb</a></cite>:</p>
  116.  <blockquote cite="https://dkolb.org/wp-content/uploads/2022/03/Learning-to-Read-Again-v2.pdf">
  117.    <p>I picked my own case because the focus on words and print makes various stages of learning to navigate a media world easily identifiable. […]</p>
  118.    <p>When I moved to Bates College in Maine in the late ’70s, the computer revolution was just beginning. I rented a standalone word processing device the size of a small refrigerator from Digital Equipment Corporation and used it to begin a book. A few years later I bought my first computer. The fledgling Internet was gradually coming up and everything I knew about reading, research, and paper was about to be challenged.</p>
  119.    <p>At first I didn't notice how much was changing[.]</p>
  120.  </blockquote>
  121.  <p>His summary of the developments is succinct:</p>
  122.  <blockquote cite="https://dkolb.org/wp-content/uploads/2022/03/Learning-to-Read-Again-v2.pdf">
  123.    <p>The media have become predatory, grasping; you’re not in control. […]</p>
  124.    <p>Recall the old advertising tactic: stimulate people into a constant state of low level unfulfilled sexual excitement, and into that gap you can pour a infinity of products. That still continues, but now add another tactic: stimulate people into a constant state of unsatisfied anger and resentment, and out of that gap you can pull an infinity of votes and cash contributions. So we get exposes, fake news, endless conspiracy theories, all with urgent appeals.</p>
  125.    <p>And, as the salesmen say, “but wait, there’s more.” Aside from all those predatory grasping Internet manipulators there is something diffuse and deceptive going on, a general hyping and intensifying of our everyday encounters.</p>
  126.  </blockquote>
  127.  <p>His thoughts on what the answer must be are that wall-building isolation cannot possibly work. And so he proposes something else.</p>
  128. </div></content><category scheme="http://plasmasturm.org/" term="seen" label="Seen"/></entry><entry><title>A proposed adequate definition of a data breach</title><link href="/log/databreach-defined/"/><id>urn:uuid:dd2963b4-3b7a-4db3-8f08-312e13e91c81</id><published>2021-12-18T14:05:46+01:00</published><updated>2021-12-18T14:05:46+01:00</updated><content xml:base="/log/databreach-defined/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  129.  <p><cite><a href="https://www.troyhunt.com/when-is-a-scrape-a-breach/" title="When is a Scrape a Breach?">Troy Hunt</a></cite>:</p>
  130.  <blockquote cite="https://www.troyhunt.com/when-is-a-scrape-a-breach/">
  131.    <p>A data breach occurs when information is obtained by an unauthorised party in a fashion in which it was not intended to be made available.</p>
  132.  </blockquote>
  133.  <p>Eminently sensible; the definition shouldn’t hinge on technicalities about how the data got away. (I wonder if that means we should be using a different word than “breach”?)</p>
  134. </div></content><category scheme="http://plasmasturm.org/" term="seen" label="Seen"/></entry><entry><title>Stop The Humour</title><summary>Straight talk from Jeff Johnson</summary><link href="/log/lapcat-abbr-not-acronym/"/><id>urn:uuid:649986ea-31a9-4e47-a086-383a57c300a1</id><published>2021-04-10T21:49:07+02:00</published><updated>2021-04-10T21:49:07+02:00</updated><content xml:base="/log/lapcat-abbr-not-acronym/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  135.  <p><cite><a href="https://lapcatsoftware.com/articles/NSURL.html" title="NSURL is a bad host">Jeff Johnson</a></cite>:</p>
  136.  <blockquote cite="https://lapcatsoftware.com/articles/NSURL.html">
  137.    <p>To be more specific, NSURL is based on an obsolete RFC. (Note: RFC is an acronym for Read the F-ing Commandment, while NS is an acronym for No Swift.)</p>
  138.  </blockquote>
  139. </div></content><category scheme="http://plasmasturm.org/" term="seen" label="Seen"/></entry><entry><title>Early Signs</title><summary>An anniversary reminder from Jamie Zawinski</summary><link href="/log/1993-09-10k/"/><id>urn:uuid:13ab0872-7dfa-46a9-aaa2-91974bc7e0c8</id><published>2021-01-17T16:15:45+01:00</published><updated>2021-01-17T16:15:45+01:00</updated><content xml:base="/log/1993-09-10k/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  140.  <p><cite><a href="https://www.jwz.org/blog/2021/01/10k-september/" title="10K September">Jamie Zawinski</a></cite>:</p>
  141.  <blockquote cite="https://www.jwz.org/blog/2021/01/10k-september/">
  142.    <p>Today is day 10,000 of The September That Never Ended.</p>
  143.    <p>The Internet: Mistakes Were Made.™</p>
  144.  </blockquote>
  145. </div></content><category scheme="http://plasmasturm.org/" term="seen" label="Seen"/></entry><entry><title type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">Useful GitHub Issues overviews</div></title><link href="/log/myghissues/"/><id>urn:uuid:25d510b2-e34e-432a-8109-48a829234ccf</id><published>2016-04-24T14:06:46+02:00</published><updated>2020-10-02T08:09:29+02:00</updated><content xml:base="/log/myghissues/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  146.  <p>I’ve always found the default, easily available views of GitHub Issues inadequate for my purposes. I want to separate issues by the kind of action I’ll want to take, but the interface is fundamentally oriented around a single list of issues, and by default that is just a big dump of every issue that involves you in some way. Luckily all the buttons are just <abbr title="User Interface">UI</abbr> over a query language, and the query language turns out to be just barely powerful enough to allow me to get the overviews I really want.</p>
  147.  <p>So here are the queries I’ve arrived at. Together they approximate a basic dashboard. Unfortunately there is not, to my knowledge, a keyword in the query language to refer to “whoever the currently logged in user is”, so I cannot demonstrate them as effectively as I’d like: you will have to manually edit them to substitute your username for mine.</p>
  148.  <ul>
  149.    <li>
  150.      <h3><a href="https://github.com/issues?q=user%3Aap+-author:ap+is%3Aopen+sort%3Aupdated-desc"><code>user:ap -author:ap</code></a></h3>
  151.      <p>This shows all issues filed by others against my own repositories.</p>
  152.      <p>Semantically, this one is “stuff waiting for me to fix”.</p>
  153.    </li>
  154.    <li>
  155.      <h3><a href="https://github.com/issues?q=user%3Aap+author:ap+is%3Aopen+sort%3Aupdated-desc"><code>user:ap author:ap</code></a></h3>
  156.      <p>This shows all issues I have filed on my own repositories.</p>
  157.      <p>Semantically, this one is “my personal todo list”.</p>
  158.    </li>
  159.    <li>
  160.      <h3><a href="https://github.com/issues?q=author%3Aap+-user%3Aap+is%3Aopen+sort%3Aupdated-desc"><code>author:ap -user:ap</code></a></h3>
  161.      <p>This shows all issues I have filed against repositories I do not own.</p>
  162.      <p>Semantically, this one is “stuff I need to keep bugging others about”.</p>
  163.    </li>
  164.    <li>
  165.      <h3><a href="https://github.com/issues?q=commenter%3Aap+-author%3Aap+-user%3Aap+is%3Aopen+sort%3Aupdated-desc"><code>commenter:ap -author:ap -user:ap</code></a></h3>
  166.      <p>This shows all issues filed by others against repositories I do not own, which I have nevertheless commented on.</p>
  167.      <p>Semantically, this one is “stuff I care about as a bystander”.</p>
  168.    </li>
  169.    <li>
  170.      <h3><a href="https://github.com/issues?q=involves%3Aap+-commenter%3Aap+-author%3Aap+-user%3Aap+is%3Aopen+sort%3Aupdated-desc"><code>involves:ap -commenter:ap -author:ap -user:ap</code></a></h3>
  171.      <p>This shows all issues filed against repositories I do not own, which I have been mentioned in but have not commented on. There can be dross in here; I have a short username, and people importing content into GitHub sometimes trigger bogus mentions by having <code>@ap</code> somewhere in it. By isolating the things passively attached to me, I gain more use of the other queries.</p>
  172.      <p>Semantically, this one is “stuff someone considers me relevant to (or maybe spam)”.</p>
  173.    </li>
  174.    <li>
  175.      <h3><a href="https://github.com/notifications/subscriptions?reason=manual">Manual Subscriptions</a></h3>
  176.      <p>This one is not a GitHub Issues search query, but is useful to include in this context.</p>
  177.      <p>Obviously this is stuff I’m not involved with but want to stay informed about.</p>
  178.    </li>
  179.  </ul>
  180.  <p>That collection gives me a reasonable handle on everything I need to take care of one way or another, which I could not get from GitHub’s own built-in views.</p>
  181.  <ins datetime="2017-08-28T16:00:42+02:00"><p><b>Update</b>: I’ve split the last query, “<code>involves:ap -author:ap -user:ap</code>”, in two. Now it is divided on the source of my involvement as a bystander: myself or others.</p></ins>
  182.  <ins datetime="2020-10-02T08:09:29+02:00"><p><b>Update</b>: I’ve split the first query, “<code>user:ap</code>”, in two, to divide it on the origin of the issue: others or myself. I have also added a link to the issue subscriptions page.</p></ins>
  183. </div></content></entry><entry><title type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><code>rename</code> 1.601</div></title><link href="/log/rename1601/"/><id>urn:uuid:7ec79c77-de02-4988-bc9a-030fe1797022</id><published>2019-08-30T20:35:32+02:00</published><updated>2019-08-30T20:35:32+02:00</updated><content xml:base="/log/rename1601/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  184.  <p>Over 6 years have gone by since I cut the last release of <a href="/code/rename/" title="rename">rename</a> because no new features have been needed in the time since. Unfortunately a number of small bugfixes and documentation additions have therefore sat unreleased in the repository for years. You have <a href="https://github.com/ap/rename/issues/6">a bug report about a long-fixed issue</a> to thank for me finally noticing. I am hereby rectifying this situation:</p>
  185.  <p><a href="/code/rename/" title="rename">Share and enjoy</a>.</p>
  186. </div></content></entry><entry><title>Canonical Log Lines</title><summary>A simple pattern to simplify harnessing the power of logs</summary><link href="/log/canonloglines/"/><id>urn:uuid:80cb5906-dd12-448e-971b-542bb0848e35</id><published>2019-08-09T01:35:14+02:00</published><updated>2019-08-09T01:35:14+02:00</updated><content xml:base="/log/canonloglines/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  187.  <p><cite><a href="https://stripe.com/gb/blog/canonical-log-lines" title="Fast and flexible observability with canonical log lines">Brandur Leach</a></cite>:</p>
  188.  <blockquote cite="https://stripe.com/gb/blog/canonical-log-lines">
  189.    <p>Although logs offer additional flexibility in the examples above, we’re still left in a difficult situation if we want to query information <em>across</em> the lines in a trace. [… At Stripe, we] use <strong>canonical log lines</strong> to help address this. They’re a simple idea: in addition to their normal [<a href="https://brandur.org/logfmt">logfmt</a>-structured] log traces, requests […] also emit one long log line at the end that pulls all its key telemetry into one place. [… They] are a simple enough idea that implementing them tends to be straightforward regardless of the tech stack in use. […] Over the years our implementation has been hardened to maximize the chance that canonical log lines are emitted for <em>every</em> request, even if an internal failure or other unexpected condition occurred.</p>
  190.  </blockquote>
  191. </div></content><category scheme="http://plasmasturm.org/" term="seen" label="Seen"/></entry><entry><title>Glimmer of hope</title><summary>Fraser Speirs on the long-term effect of the tablet</summary><link href="/log/tabletimpact/"/><id>urn:uuid:d840d2d4-540b-47cb-91ef-d795e84f7651</id><published>2019-07-31T12:23:59+02:00</published><updated>2019-07-31T12:23:59+02:00</updated><content xml:base="/log/tabletimpact/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  192.  <p><cite><a href="https://www.speirs.org/blog/2019/7/27/on-switching-from-ipad-to-chromebook-in-school" title="On Switching from iPad to Chromebook in School">Fraser Speirs</a></cite>:</p>
  193.  <blockquote cite="https://www.speirs.org/blog/2019/7/27/on-switching-from-ipad-to-chromebook-in-school">
  194.    <p>It’s also worth noting the significant impact that the rise of tablets has had on the design and capability of laptops. In 2010, laptops weighed four-plus pounds – not including a weighty charger – and got 3–4 hours of battery life. Today, they’ve halved in weight and more than doubled in battery life while getting faster, more robust and more flexible. In the final analysis, I think that the long-term effect of tablets will be that they forced laptops to get better.</p>
  195.  </blockquote>
  196. </div></content><category scheme="http://plasmasturm.org/" term="seen" label="Seen"/></entry><entry><title type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">“When was this stored” <abbr title="versus">vs</abbr> “when did it happen”</div></title><summary>Metadata about storage entities does not apply to their content</summary><link href="/log/metaconflation/"/><id>urn:uuid:f32043e6-30ce-41e6-ae93-4c770a6fccd3</id><published>2018-11-04T21:15:12+01:00</published><updated>2018-11-04T21:15:12+01:00</updated><content xml:base="/log/metaconflation/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  197.  <p>Over on his weblog, <a href="https://utcc.utoronto.ca/~cks/space/blog/web/FileBasedMetadataInVCS" title="Metadata that you can’t commit into a VCS is a mistake (for file based websites)">Chris Siebenmann talks about metadata in file based websites</a>:</p>
  198.  <blockquote cite="https://utcc.utoronto.ca/~cks/space/blog/web/FileBasedMetadataInVCS"><p>In a file based website engine, any form of metadata that you can’t usefully commit into common version control systems is a mistake.</p></blockquote>
  199.  <p>This is part of a class of mistake I’ve variously made or run into in a number of contexts.</p>
  200.  <p>A prominent example that comes to mind is an application I work on which tracks records of some kind, and in one place, users enter “number of X for the thing recorded here” as data points over time. Originally the point in time used for a data point was the time of entry of the data point. At some point I realised that this was a mistake: the point in time at which the number of X was Y is not the same as the point in time at which that fact was recorded in our system. Among other things, treating them the same means you cannot enter “number of X” data points after the fact (such as when newly capturing a record retroactively). One is metadata about a real-world event, the other is metadata about the usage of our application to record that real-world event.</p>
  201.  <p>So the generalised principle goes maybe something like this:</p>
  202.  <p><b>Metadata about a storage entity should not be confused with metadata about whatever the payload of that storage entity represents.</b></p>
  203.  <p>These may (obviously) coincide but are nevertheless not the same thing. If an article is stored in a file, the creation date of the file just says when that article was saved into that file – not when the article was written. As long as the “writing an article” and “saving it into a file” actions are strongly bound to each other, these metadata will coincide so that it may seem attractive to treat one as the other. But once you introduce ways for them to diverge – such as adding a <abbr title="Version Control System">VCS</abbr> to the mix, which creates/removes/modifies files without a corresponding article-writing/editing action – you have a problem.</p>
  204. </div></content></entry><entry><title type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">The <abbr>HTTPS</abbr> divide</div></title><summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">Eric Meyer on <abbr>HTTPS</abbr> under low development</div></summary><link href="/log/https-divide/"/><id>urn:uuid:6e7e0b5b-139c-4a2a-a6ad-a10a0f284052</id><published>2018-08-11T00:17:15+02:00</published><updated>2018-08-11T00:17:15+02:00</updated><content xml:base="/log/https-divide/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  205.  <p><cite><a href="https://meyerweb.com/eric/thoughts/2018/08/07/securing-sites-made-them-less-accessible/" title="Securing Web Sites Made Them Less Accessible">Eric Meyer</a></cite>:</p>
  206.  <blockquote cite="https://meyerweb.com/eric/thoughts/2018/08/07/securing-sites-made-them-less-accessible/">
  207.    <p>I saw a piece that claimed, “Investing in <abbr>HTTPS</abbr> makes it faster, cheaper, and easier for everyone.” If you define “everyone” as people with gigabit fiber access, sure. Maybe it’s even true for most of those whose last mile is copper. But for people beyond the reach of glass and wire, every word of that claim was wrong.</p>
  208.  </blockquote>
  209.  <p><a href="/log/privconf/" title="Privacy vs confidentiality in protocols">Someone who goes by Roy called this quite a while ago</a>: an <abbr>HTTPS</abbr>-only web is a web without intermediaries, built on a costlier protocol, and that has real costs. <em>This</em> (as opposed to the likes of Dave Winer’s <a href="http://this.how/googleAndHttp/" title="Google and HTTP">barely-sensical</a> <a href="http://scripting.com/liveblog/users/davewiner/2016/01/29/0946.html" title="Questioning Google’s motives re the push to HTTPS">paranoia</a>) is why I am skeptical about the move to <abbr>HTTPS</abbr>, albeit acknowledging the necessity given the lack of better solutions in the near term. Non-benign network operators and universal surveillance are real problems that need to be addressed. We are not in a great place right now.</p>
  210.  <p>There <em>are</em> better options beyond the horizon, though. Don’t miss Eric’s comment section: several people mention ideas and proposals currently in the works. There is hope for… someday.</p>
  211. </div></content><category scheme="http://plasmasturm.org/" term="seen" label="Seen"/></entry><entry><title type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">Who knows an <abbr title="Extensible Markup Language">XML</abbr> document from a hole in the ground?</div></title><summary>Namespace droppings and bozotic aggregator developers</summary><link href="/log/376/"/><id>urn:uuid:6ac67b4a-862e-11da-9fcb-dd680b0526e0</id><published>2006-01-16T02:21:27+01:00</published><updated>2018-05-22T17:32:26+02:00</updated><content xml:base="/log/376/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  212.  <p>Hello back, everyone. (With a tip o’ the titular hat to <a href="http://weblog.philringnalda.com/2005/12/18/who-knows-a-title-from-a-hole-in-the-ground" title="Phil Ringnalda: Who knows a &lt;title&gt; from a hole in the ground?">Phil Ringnalda</a>.) Some of you who’re seeing this will now be scampering to catch up on 20 of my entries, posted since the beginning of last December.</p>
  213.  <p>Reticent a person though I am, in general, I am not <em>that</em> reticent. For those of you who wondered why it’s gotten so silent around here, the reason is simple: because you didn’t notice that you need to file a bug report against the aggregator of your choice.</p>
  214.  <p>Yes, that’s right. The software you use is broken. And make no mistake, you’re not alone. There is a wide range of choices among bug trackers (or customer support forms), and their associated software, to file reports against.</p>
  215.  <p>This all happened because I tried to be a clever and good citizen and, in the process, save a bit of space in a smart way. But let’s backtrack a bit, and let me tell you what happened, and what the effect has been.</p>
  216.  <p>Until the beginning of December, the structure of my feed looked like this (and from today on looks like so again):</p>
  217.  <pre>&lt;feed xmlns="http://www.w3.org/2005/Atom"&gt;
  218.  &lt;title&gt;plasmasturm.org&lt;/title&gt;
  219.  &lt;!-- additional feed metadata elided --&gt;
  220.  &lt;!-- things such as subtitle, author etc --&gt;
  221.  &lt;entry&gt;
  222.    &lt;title type="xhtml"&gt;
  223.      &lt;div xmlns="http://www.w3.org/1999/xhtml"&gt;Foo&lt;/div&gt;
  224.    &lt;/title&gt;
  225.    &lt;summary type="xhtml"&gt;
  226.      &lt;div xmlns="http://www.w3.org/1999/xhtml"&gt;Bar&lt;/div&gt;
  227.    &lt;/summary&gt;
  228.    &lt;content type="xhtml"&gt;
  229.      &lt;div xmlns="http://www.w3.org/1999/xhtml"&gt;&lt;p&gt;Baz&lt;/p&gt;&lt;/div&gt;
  230.    &lt;/content&gt;
  231.    &lt;!-- additional entry metadata elided --&gt;
  232.  &lt;/entry&gt;
  233.  &lt;!-- more entries follow --&gt;
  234. &lt;/feed&gt;</pre>
  235.  <p>You can see that there are <code>xmlns="http://www.w3.org/1999/xhtml"</code> bits strewn everywhere; in practice the effect is less drastic than it appears here where I’m eliding a bunch of Atom tags and any sensible content.</p>
  236.  <p>This setup means that in the document at large, the default namespace is <code>http://www.w3.org/2005/Atom</code> – so tags like <code>&lt;feed&gt;</code> are to be interpreted according to <a href="http://tools.ietf.org/html/rfc4287" title="The Atom Syndication Format"><abbr title="Request for Comments">RFC</abbr> 4287</a> –, but for the <code>&lt;div&gt;</code> tags and their contents, the default namespace is <code>http://www.w3.org/1999/xhtml</code> – so that the tags are to be interpreted according to the <a href="http://www.w3.org/TR/html/"><abbr title="Extensible Hypertext Markup Language">XHTML</abbr> Recommendation</a>.</p>
  237.  <p>This works perfectly in everything that claims support for Atom.</p>
  238.  <p>Then, at the beginning of December, I read an enlightening piece on <abbr title="Extensible Markup Language">XML</abbr> citizenship by <a href="http://www.flightlab.com/~joe/">Joe English</a>, titled <a href="http://lists.xml.org/archives/xml-dev/200204/msg00170.html">A plea for Sanity</a>, whose focus is how to structure <abbr title="Extensible Markup Language">XML</abbr> documents with regard to namespaces such that software to process them can be kept simple, and which sets forth a few definitions of documents that are not sane. According to his definitions, my feed was <i>neurotic</i>: two different namespaces are mapped to the same prefix (<abbr title="that is,">i.e.</abbr> the null prefix) at different points in the document.</p>
  239.  <p>I chose the neurotic structure I outlined above because it seemed logical to choose the Atom namespace as the default namespace for an Atom document at large. Since I author my weblog by editing a master feed (which contains the entire archive of the log) by hand, however, I definitely want the <abbr title="Extensible Hypertext Markup Language">XHTML</abbr> namespace to be the default for the sections of the document which contain <abbr title="Hypertext Markup Language">HTML</abbr>: having to write namespace prefixes in every tag would make the already tiresome experience of hand-written markup truly painful.</p>
  240.  <p>An added bonus is that despite the frequent repetition of the default namespace declaration, it actually saves space compared to declaring a prefix for the namespace once at the top of the feed and then having to write <code>&lt;h:p&gt;Foo&lt;/h:p&gt;</code> everywhere. Across the whole feed, these two characters per tag easily add up to much more than the fixed cost of a namespace declaration in every text/content construct.</p>
  241.  <p>However, reading the plea for sanity inspired me to try something counterintuitive: declare the <abbr title="Extensible Hypertext Markup Language">XHTML</abbr> namespace as the default <em>for the entire document</em> and instead declare a prefix for the Atom namespace. This leads to a structure like so:</p>
  242.  <pre>&lt;a:feed xmlns:a="http://www.w3.org/2005/Atom" xmlns="http://www.w3.org/1999/xhtml"&gt;
  243.  &lt;a:title&gt;plasmasturm.org&lt;/a:title&gt;
  244.  &lt;!-- additional feed metadata elided --&gt;
  245.  &lt;!-- things such as subtitle, author etc --&gt;
  246.  &lt;a:entry&gt;
  247.    &lt;a:title type="xhtml"&gt;&lt;div&gt;Foo&lt;/div&gt;&lt;/a:title&gt;
  248.    &lt;a:summary type="xhtml"&gt;&lt;div&gt;Bar&lt;/div&gt;&lt;/a:summary&gt;
  249.    &lt;a:content type="xhtml"&gt;&lt;div&gt;&lt;p&gt;Baz&lt;/p&gt;&lt;/div&gt;&lt;/a:content&gt;
  250.    &lt;!-- additional entry metadata elided --&gt;
  251.  &lt;/a:entry&gt;
  252.  &lt;!-- more entries follow --&gt;
  253. &lt;/a:feed&gt;</pre>
  254.  <p>This is a <i>sane</i> <abbr title="Extensible Markup Language">XML</abbr> document. And according to <abbr title="Extensible Markup Language">XML</abbr> specifications, both this form of the document and the previous one are semantically exactly identical. They mean the exact same thing, and any compliant software which can process one of them will produce the exact same results given the other.</p>
  255.  <p>It’s also worth noting that the number of Atom tags within an Atom entry is small and varies very little. So even though I’m having to prefix every Atom tag, this actually saves space compared to redeclaring the <abbr title="Extensible Hypertext Markup Language">XHTML</abbr> namespace over and over. (Right now the master feed is about 1.2% smaller in its sane version than in the neurotic one, even though the savings are particularly diminished in it because it contains the entire archive and thus has a much higher <abbr title="Hypertext Markup Language">HTML</abbr>-to-Atom ratio than the newsfeed on the site. For the on-site feed, the savings are a tad larger; not much, but with frequently requested documents like newsfeeds, a penny saved is a penny got.)</p>
  256.  <p>I was very pleased with myself for figuring out a way to improve my feed while reducing its size in the process.</p>
  257.  <p>Too pleased.</p>
  258.  <p>As a matter of fact, within a few days, Scott Arrington got in touch on <abbr title="Internet Relay Chat">IRC</abbr> and informed me that my feed was suddenly throwing an error in Safari: <q>Safari can’t open the page “feed://plasmasturm.org/feeds/plasmasturm/”</q>, it said, and misleadingly continued: <q>The error was: “unknown error” (NSURLErrorDomain:-1)</q> (after all, the problem has nothing to do with a domain in the internet sense; though maybe <i>error domain</i> is Apple framework lingo for <i>error type</i> or <i>error class</i>).</p>
  259.  <p>I shrugged. Surely, this was an outlier. Certainly, breakage like that wouldn’t go unnoticed. Likely, it would be fixed with the next batch of updates.</p>
  260.  <p>As a matter of fact, yes, it <em>is</em> nice to live with the illusion that the basics of <abbr title="Extensible Markup Language">XML</abbr> are at least moderately well understood, in this year 2006 of the Lord. Thanks for asking.</p>
  261.  <p>Until tonight delivered a rude awakening from Jagath Narayan to my inbox. He informed me that my feed failed to parse in a number of aggregators. So I got on <abbr title="Internet Relay Chat">IRC</abbr> and asked for Scott’s assistance in test-driving a few more Mac aggregators with the feed.</p>
  262.  <p>Here’s the list of known broken aggregators as of this writing:</p>
  263.  <ul>
  264.    <li>Safari 2.0.3</li>
  265.    <li>Firefox 1.5</li>
  266.    <li>Thunderbird 1.5</li>
  267.    <li><a href="http://www.newsfirerss.com/">NewsFire</a> 1.2 (v45)</li>
  268.    <li><a href="http://feedreader.com/features.php">Feedreader</a> 2.90</li>
  269.    <li><a href="http://www.cincomsmalltalk.com/main/community/product-portal/contributed/bottomfeeder/">BottomFeeder</a> 4.1</li>
  270.    <li><a href="https://en.wikipedia.org/wiki/Bloglines">Bloglines</a> (big surprise…)</li>
  271.  </ul>
  272.  <p>What about the remaining major browser? Opera 8.51 is not broken – because it doesn’t support Atom 1.0 at all. (The upcoming version 9.0 will.)</p>
  273.  <p><strong>None</strong> of the major browsers with feed support are compliant in their latest version. How depressing.</p>
  274.  <p>Here’s a list of known working aggregators:</p>
  275.  <ul>
  276.    <li><a href="https://lzone.de/liferea/">Liferea</a></li>
  277.    <li><a href="http://netnewswireapp.com/">NetNewsWire</a></li>
  278.    <li><a href="https://en.wikipedia.org/wiki/Google_Reader">Google Reader</a></li>
  279.    <li><a href="https://pythonhosted.org/feedparser/">Universal Feed Parser</a></li>
  280.  </ul>
  281.  <p>And here’s <a href="/attic/atom-tests/nondefaultnamespace.atom" type="application/atom+xml">a test case</a>. Please <a href="mailto:pagaltzis@gmx.de">mail me</a> further results, and of course, file bugs avidly.</p>
  282.  <p>So now my feed is back to its old, neurotic form, and works for a lot more people. It feels good to know I’m back, even though I never knew I was gone.</p>
  283. </div></content></entry><entry><title>Mutually assured intel</title><summary>A soundbite on the anti-parochial complexity of high-tech</summary><link href="/log/e-n-trust/"/><id>urn:uuid:84888867-eebd-4bbb-a80c-6235eb03b7ae</id><published>2018-05-10T19:18:44+02:00</published><updated>2018-05-10T19:18:44+02:00</updated><content xml:base="/log/e-n-trust/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  284.  <p><cite><a href="https://www.schneier.com/blog/archives/2018/05/supply-chain_se.html" title="Supply-Chain Security">Bruce Schneier</a></cite>:</p>
  285.  <blockquote cite="https://www.schneier.com/blog/archives/2018/05/supply-chain_se.html">
  286.    <p>Supply-chain security is an incredibly complex problem. [National]-only design and manufacturing isn’t an option; the tech world is far too internationally interdependent for that.</p>
  287.    <p>We can’t trust anyone, yet we have no choice but to trust everyone.</p>
  288.  </blockquote>
  289. </div></content><category scheme="http://plasmasturm.org/" term="seen" label="Seen"/></entry><entry><title>Don’t be a problem-solver</title><summary>Resist tools and abstractions</summary><link href="/log/begets-complexity/"/><id>urn:uuid:a04e8b14-228b-4f3f-a7fa-de41b907cbe2</id><published>2017-12-24T12:45:28+01:00</published><updated>2017-12-24T12:45:28+01:00</updated><content xml:base="/log/begets-complexity/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  290.  <p><cite><a href="https://twitter.com/halvarflake/status/943836378797928449">halvarflake</a></cite>:</p>
  291.  <blockquote cite="https://twitter.com/halvarflake/status/943836378797928449"><p>The one rule of thumb is: If you allow complexity into a place that should be simple, more complexity will follow.</p></blockquote>
  292. </div></content><category scheme="http://plasmasturm.org/" term="seen" label="Seen"/></entry><entry><title>Backoff</title><summary>A simple, robust algorithm for backoff</summary><link href="/log/backoff/"/><id>urn:uuid:3011239c-7f24-43f9-b0bc-af63ff8ab2cd</id><published>2017-09-15T03:12:06+02:00</published><updated>2017-09-19T09:11:58+02:00</updated><content xml:base="/log/backoff/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  293.  <p>Some time ago I had occasion to implement (mostly exponential) back-off in an application for the first time. This is not a hard problem, but at the outset I expected it to be one of those annoying cases where the code is only clear to read when you are immersed in the problem it solves.</p>
  294.  <p>Not so. It turns out there is a trivially simple algorithm if you pick the right form of stored state – namely, a pair of timestamps: <code>(last_success, next_retry)</code>. The essential form of the algorithm goes like this:</p>
  295.  <pre><code>if succeeded { last_success = now }
  296. next_retry = last_success, i = 0
  297. until next_retry &gt; now {
  298.  next_retry += step_size(i)
  299.  i += 1
  300. }</code></pre>
  301.  <p>Because this recalculates the scheduling for the next retry by starting over from the previous success, every time, it is totally resilient against all fluctuations in the environment. Belated or missing retry attempts have no effect on its output. Even swapping the <code>step_size</code> function for an entirely different one mid-flight just works!</p>
  302.  <p>At the same time, it is trivial to reason out and verify that this algorithm works correctly.</p>
  303.  <p>I was quite pleased.</p>
  304.  <hr/>
  305.  <p>(In practice you will likely not have a separate <code>step_size</code> function and <code>i</code> counter but rather some kind of <code>step</code> variable iterated along with <code>next_retry</code>. But here I wanted to abstract away from the specific formula used.)</p>
  306.  <ins datetime="2017-09-19T09:11:58+02:00">
  307.    <p><b>Update</b>: As prompted by a question from Matthew Persico, let me clarify that my use case is scheduling polls that succeed only intermittently, meaning that I always want to wait at least once between attempts, which is why I used “<code>until next_retry &gt; now</code>”.</p>
  308.    <p>If instead you want to add backoff to an operation that only <em>fails</em> intermittently (e.g. draining a buffer to I/O) then you’ll want to use “<code>while next_retry &lt; now</code>” for your loop, so you can have zero-delay back-to-back attempts.</p>
  309.  </ins>
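 <p>For concreteness, a minimal runnable sketch of the same idea in Python – the class name, the <code>record</code>/<code>due</code> interface and the exponential-with-cap <code>step_size</code> are illustrative choices only, not part of the algorithm itself:</p>
 <pre><code>import time

class Backoff:
    """Stores only (last_success, next_retry), as in the pseudocode above."""

    def __init__(self, step_size=lambda i: min(2 ** i, 3600)):
        now = time.time()
        self.step_size = step_size   # illustrative: exponential, capped at 1h
        self.last_success = now
        self.next_retry = now

    def record(self, succeeded, now=None):
        """Call after every attempt; recomputes next_retry from scratch."""
        now = time.time() if now is None else now
        if succeeded:
            self.last_success = now
        # Start over from the previous success, every time.
        self.next_retry = self.last_success
        i = 0
        # "until next_retry &gt; now": always wait at least one step between
        # attempts (the polling case). For the buffer-draining case from the
        # update above, loop "while self.next_retry &lt; now" instead.
        while not (self.next_retry &gt; now):
            self.next_retry += self.step_size(i)
            i += 1

    def due(self, now=None):
        now = time.time() if now is None else now
        return now &gt;= self.next_retry</code></pre>
 <p>A polling loop would then check <code>due()</code> before each attempt and call <code>record()</code> with the outcome afterwards.</p>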
  310. </div></content></entry><entry><title>Six Stages of Debugging</title><link href="/log/6debug/"/><id>urn:uuid:7dda2d30-7650-11e1-aae4-f2348ec7b0b6</id><published>2012-03-25T09:59:47+02:00</published><updated>2017-08-23T11:41:08+02:00</updated><content xml:base="/log/6debug/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  311.  <style type="text/css">
  312.  #sixdebug { margin-left: 0 }
  313.  #sixdebug li { font-size: 1.5em; font-weight: bold; margin-left: -.7em }
  314.  #sixdebug li p { font-size: 0.766667em /* 1.15/1.5 */; font-weight: normal }
  315.  </style>
  316.  <ol id="sixdebug">
  317.    <li><p>That can’t happen.</p></li>
  318.    <li><p>That doesn’t happen on my machine.</p></li>
  319.    <li><p>That shouldn’t happen.</p></li>
  320.    <li><p>Why does that happen?</p></li>
  321.    <li><p>Oh, I see.</p></li>
  322.    <li><p>How did that ever work?</p></li>
  323.  </ol>
  324.  <p>[<i>This is not mine. I posted it in the interest of personal archival because the oldest mention I could track down on the web appeared <a href="http://web.archive.org/web/20051027173148/http://www.68k.org/~jrc/old-blog/archives/000198.html" title="Hard core debugging">on a now-defunct weblog</a>. In the meantime, Mike W. Cremer (who bills himself </i>The Newton™ Scapegoat <span class="smiley">☺</span><i>) <a href="https://news.ycombinator.com/item?id=6477752">has claimed credit</a> for coining it <q cite="https://news.ycombinator.com/item?id=6477752">after a particularly frustrating <abbr title="Direct Memory Access">DMA</abbr> debugging session</q> <q cite="http://mwcremer.blogspot.com/2007/06/six-stages-of-debugging.html">while slaving away on Dante</q> (Newton <abbr title="Operating System">OS</abbr> 2.0). <a href="http://mwcremer.blogspot.com/2007/06/six-stages-of-debugging.html" title="The Six Stages of Debugging ">According to his account</a>, this took place in Apple’s building at 5 Infinite Loop (nicknamed <abbr>RD5</abbr> or <abbr>IL5</abbr>). The list was later to be found taped to Mike Engber’s door in <abbr>IL2</abbr>.</i>]</p>
  325. </div></content></entry><entry><title>Fixing a Google Chrome failure to save passwords</title><link href="/log/chromepwstore/"/><id>urn:uuid:5a088add-b78a-48db-bc02-bfd9c9470753</id><published>2017-07-02T00:33:41+02:00</published><updated>2017-07-02T00:33:41+02:00</updated><content xml:base="/log/chromepwstore/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  326.  <p>This is a kind of post that people used to write back in the heady early days of blogging and a more communal web: putting something out there to help Google help other people.</p>
  327.  <h2>The problem</h2>
  328.  <p>For some time I had been having an irritating persistent failure with Google Chrome that I could not find an answer for:</p>
  329.  <ul>
  330.    <li>After logging into some website, Chrome would offer to save the password, as usual.</li>
  331.    <li>I would click on the save button.</li>
  332.    <li>Chrome would not show any kind of error.</li>
  333.    <li>But the password would not be saved:<ul>
  334.      <li>It was not filled in automatically next time I went to the same site.</li>
  335.      <li>No password at all showed up in the list on <code>chrome://settings/passwords</code> – the list just stayed blank no matter what I did.</li></ul></li>
  336.  </ul>
  337.  <p>Mysteriously, a handful of passwords did get stored, somehow, somewhere. Chrome could fill those in, even as it wouldn’t list them on the settings screen. I checked the <abbr>MacOS</abbr> keychain and did not find them there, so they had to be stored by Chrome, even though it refused to show them to me.</p>
  338.  <h2>The quest</h2>
  339.  <p>When I searched the web about my problem, almost all the answers I could find related to the case of people who are logged into Google within Chrome and use its password syncing service… which I don’t. I simply want my passwords saved locally.</p>
  340.  <p>The few answers I did find that seemed to relate to my situation invariably suggested resetting one’s profile. Now, that approach does appear not to be mere superstition: of the people I found who had this problem, the ones who reported back all wrote that resetting their profile fixed the problem. So I had a way of making the problem go away – but I also have a lot of data in my profiles. It’s not just my bookmarks. I have tweaked many of the settings, individually for each profile (the whole point of using profiles, after all), and I also use a number of extensions, many of which themselves have extensive configurations. Recreating all of that is a big task.</p>
  341.  <p><strong>I want password auto-fill fixed while keeping my profiles intact. I am only willing to lose my stored passwords.</strong> (Because I save them in a password manager first anyway.)</p>
  342.  <p>So I went poking around in the directories where Chrome stores its profiles and other user data. There’s no need to look far: there’s a file called <code>Login Data</code>. This is an <abbr>SQ</abbr>Lite database (like most of Chrome’s user data files). It can be opened using the <code>sqlite3</code> command line utility and examined using <abbr title="Structured Query Language">SQL</abbr> queries. I did that, and there they were, my mysteriously saved passwords… plus a bunch more.</p>
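 <p>If you want to do the same spelunking yourself without risking the original file, a sketch along these lines works – the <code>logins</code> table and its column names are simply what current Chrome builds happen to use, not a documented, stable schema:</p>
 <pre><code>import shutil, sqlite3, tempfile
from pathlib import Path

# Default-profile path on a Mac; adjust for your platform and profile
# (see the directory list further down).
src = Path.home() / "Library/Application Support/Google/Chrome/Default/Login Data"

# Work on a copy so the real database is never touched (and never locked).
with tempfile.TemporaryDirectory() as tmp:
    copy = Path(tmp) / "Login Data"
    shutil.copy2(src, copy)
    con = sqlite3.connect(copy)
    # password_value is the scrambled blob mentioned below, so only the
    # plain-text columns are printed here.
    for origin, username in con.execute(
        "SELECT origin_url, username_value FROM logins ORDER BY origin_url"
    ):
        print(origin, username)
    con.close()</code></pre>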
  343.  <p>I also discovered that some of the data in those tables is scrambled in some form. Presumably there are several separately stored pieces of data required to unscramble it, and some of those pieces somehow became mismatched on my system – I don’t know how or why, and was too lazy to research it. All I cared about was that this looked like the right vicinity.</p>
  344.  <p>As an experiment, I moved the <code>Login Data</code> file and its <code>Login Data-journal</code> pair out of one profile… and bingo, password auto-fill started working there as expected. After saving a password, Chrome would subsequently successfully auto-fill it as well as list it on the saved passwords screen.</p>
  345.  <p>Good enough for me.</p>
  346.  <h2>The solution</h2>
  347.  <p>Deleting the files <code>Login Data</code> and <code>Login Data-journal</code> from a profile fixes password saving in that profile – without affecting any other data in it. A full profile reset is not necessary – you can reset just the password storage by deleting just the files that it uses.</p>
  348.  <p>This does mean you lose any passwords you had stored previously, unfortunately. But since you cannot really access them any more anyway, that data loss has effectively already happened by the time you delete the files.</p>
  349.  <h2>Instructions</h2>
  350.  <ol>
  351.    <li><p>Quit Chrome.</p></li>
  352.    <li>
  353.      <p>Go to the directory where Chrome stores its user-specific data, below your user home directory:</p>
  354.      <dl style="display: grid; grid-auto-columns: min-content 1fr; grid-auto-flow: column">
  355.        <dt style="grid-column: 1">Mac</dt><dd><code>~/Library/Application Support/Google/Chrome</code></dd>
  356.        <dt style="grid-column: 1">Linux</dt><dd><code>~/.config/google-chrome</code></dd>
  357.        <dt style="grid-column: 1">Windows</dt><dd><code>%UserProfile%\AppData\Local\Google\Chrome\User Data</code></dd>
  358.      </dl>
  359.    </li>
  360.    <li><p>From there, go into the directory called <code>Default</code> if you want to fix your main profile, or into <code>Profile 1</code> or <code>Profile 2</code> <abbr title="et cetera">etc.</abbr> to fix one of your extra profiles.</p></li>
  361.    <li><p>Delete the files <code>Login Data</code> and <code>Login Data-journal</code>.</p></li>
  362.    <li><p>Repeat for other profiles as necessary. (The same steps are sketched as a small script below.)</p></li>
  363.  </ol>
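 <p>For anyone with several profiles to fix, the same steps as a small Python sketch – the per-platform paths mirror the list above, and the assumption that <code>%UserProfile%</code> resolves to your home directory on Windows is the only thing you might have to adjust:</p>
 <pre><code>import platform
from pathlib import Path

PROFILE = "Default"  # or "Profile 1", "Profile 2", ...

# Per-platform user data directory (same list as above).
base = {
    "Darwin":  Path.home() / "Library/Application Support/Google/Chrome",
    "Linux":   Path.home() / ".config/google-chrome",
    "Windows": Path.home() / "AppData/Local/Google/Chrome/User Data",
}[platform.system()]

# Quit Chrome first, then remove the password store of the chosen profile.
for name in ("Login Data", "Login Data-journal"):
    target = base / PROFILE / name
    if target.exists():
        target.unlink()
        print("deleted", target)</code></pre>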
  364. </div></content></entry><entry><title>Drunk driving on the information superhighway</title><summary>Metaphor of the year, by Ev Williams</summary><link href="/log/webdui/"/><id>urn:uuid:6a624c03-fcb9-4d67-8471-e2a5270158c8</id><published>2017-05-23T21:12:44+02:00</published><updated>2017-05-23T21:12:44+02:00</updated><content xml:base="/log/webdui/" type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">
  365.  <p><cite><a href="https://www.nytimes.com/2017/05/20/technology/evan-williams-medium-twitter-internet.html" title="‘The Internet Is Broken’: @ev Is Trying to Salvage It">The New York Times</a></cite>:</p>
  366.  <blockquote cite="https://www.nytimes.com/2017/05/20/technology/evan-williams-medium-twitter-internet.html"><p>The trouble with the internet, Mr. [Evan] Williams says, is that it rewards extremes. Say you’re driving down the road and see a car crash. Of course you look. Everyone looks. The internet interprets behavior like this to mean everyone is asking for car crashes, so it tries to supply them.</p></blockquote>
  367. </div></content><category scheme="http://plasmasturm.org/" term="seen" label="Seen"/></entry></feed>
