<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="https://nedbatchelder.com/rssfull2html.xslt" media="screen" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns="http://purl.org/rss/1.0/">
<channel rdf:about="https://nedbatchelder.com/blog">
<title>Ned Batchelder's blog</title>
<link>https://nedbatchelder.com/blog</link>
<description>Ned Batchelder's personal blog.</description>
<dc:language>en-US</dc:language>
<image rdf:resource="https://nedbatchelder.com/pix/rss-banner.gif"/>
<items>
<rdf:Seq>
<rdf:li resource="https://nedbatchelder.com/blog/202509/hobby_hilbert_simplex.html"/><rdf:li resource="https://nedbatchelder.com/blog/202509/testing_is_better_than_dsa.html"/><rdf:li resource="https://nedbatchelder.com/blog/202508/finding_unneeded_pragmas.html"/><rdf:li resource="https://nedbatchelder.com/blog/202508/starting_with_pytests_parametrize.html"/><rdf:li resource="https://nedbatchelder.com/blog/202507/coveragepy_regex_pragmas.html"/><rdf:li resource="https://nedbatchelder.com/blog/202507/coverage_7100_patch.html"/><rdf:li resource="https://nedbatchelder.com/blog/202507/2048_iterators_and_iterables.html"/><rdf:li resource="https://nedbatchelder.com/blog/202506/math_factoid_of_the_day_63.html"/><rdf:li resource="https://nedbatchelder.com/blog/202506/digital_equipment_corporation_no_more.html"/><rdf:li resource="https://nedbatchelder.com/blog/202505/pycon_summer_camp.html"/>
</rdf:Seq>
</items>
</channel>
<image rdf:about="https://nedbatchelder.com/pix/rss-banner.gif">
<title>Ned Batchelder's blog</title>
<link>https://nedbatchelder.com/blog</link>
<url>https://nedbatchelder.com/pix/rss-banner.gif</url>
</image>
<item rdf:about="https://nedbatchelder.com/blog/202509/hobby_hilbert_simplex.html">
<title>Hobby Hilbert Simplex</title>
<link>https://nedbatchelder.com/blog/202509/hobby_hilbert_simplex.html</link>
<dc:date>2025-09-26T08:14:04-04:00</dc:date>
<dc:creator>Ned Batchelder</dc:creator>
<description><![CDATA[<p>I saw a generative art piece I liked and wanted to learn how it was made.
Starting with the artist’s Kotlin code, I dug into three new algorithms, hacked
together some Python code, experimented with alternatives, and learned a lot.
Now I can explain it to you.</p><p>It all started with this post by
<a href="https://genart.social/@hamoid/115125620138280715" rel="external noopener">aBe on Mastodon</a>:</p><blockquote class="mastodon-post" lang="en" cite="https://genart.social/@hamoid/115125620138280715" data-source="fediverse">
<p>I love how these lines separate and reunite. And the fact that I can express this idea in 3 or 4 lines of code.</p><p>For me they’re lives represented by closed paths that end where they started, spending part of the journey together, separating while we go in different directions and maybe reconnecting again in the future.</p><p><a href="https://genart.social/tags/CreativeCoding" rel="nofollow noopener" class="mention hashtag" target="_blank">#<span>CreativeCoding</span></a> <a href="https://genart.social/tags/algorithmicart" rel="nofollow noopener" class="mention hashtag" target="_blank">#<span>algorithmicart</span></a> <a href="https://genart.social/tags/proceduralArt" rel="nofollow noopener" class="mention hashtag" target="_blank">#<span>proceduralArt</span></a> <a href="https://genart.social/tags/OPENRNDR" rel="nofollow noopener" class="mention hashtag" target="_blank">#<span>OPENRNDR</span></a> <a href="https://genart.social/tags/Kotlin" rel="nofollow noopener" class="mention hashtag" target="_blank">#<span>Kotlin</span></a></p>
<figure><figure><img src="https://media.hachyderm.io/cache/media_attachments/files/115/125/620/285/265/947/small/5a73d40e6a4a81c1.png" alt="80 wobbly black hobby curves with low opacity. In some places the curves travel together, but sometimes they split in 2 or 3 groups and later reunite. Due to the low opacity, depending on how many curves overlap the result is brighter or darker." width="480" height="480"></figure></figure>
<footer>
— aBe (@hamoid@genart.social) <a href="https://genart.social/@hamoid/115125620138280715" rel="external noopener"><time datetime="2025-08-31T21:59:13.000Z">8/31/2025, 5:59:13 PM</time></a>
</footer>
</blockquote><p>The drawing is made by choosing 10 random points, drawing a curve through
those points, then slightly scooching the points and drawing another curve.
There are 40 curves, each slightly different than the last. Occasionally
the next curve makes a jump, which is why they separate and reunite.</p><p>Eventually I made something similar:</p><div class="figurep"><figure><picture><source type="image/webp" srcset="https://nedbatchelder.com/iv/webp/pix/fluidity/repro_139.png.webp"><img src="https://nedbatchelder.com/pix/fluidity/repro_139.png" alt="" width="600" height="600" class="hairline"></picture></figure></div><p>Along the way I had to learn about three techniques I got from the Kotlin
code: Hobby curves, Hilbert sorting, and simplex noise.</p><p>Each of these algorithms tries to do something “natural” automatically, so
that we can generate art that looks nice without any manual steps.</p><h1 id="h_hobby_curves">Hobby curves<a class="headerlink" aria-label="Link to this header" href="#h_hobby_curves"></a></h1><p>To draw swoopy curves through our random points, we use an algorithm
developed by John Hobby as part of Donald Knuth’s Metafont type design system.
Jake Low has a <a rel="external noopener" href="https://www.jakelow.com/blog/hobby-curves">great interactive page for playing with Hobby
curves</a>; you should try it.</p><p>Here are three examples of Hobby curves through ten random points:</p><div class="figurep"><figure><picture><source type="image/webp" srcset="https://nedbatchelder.com/iv/webp/pix/fluidity/hobby_unsorted.png.webp"><img src="https://nedbatchelder.com/pix/fluidity/hobby_unsorted.png" alt="" width="600" height="200" class="hairline"></picture></figure></div><p>The curves are nice, but kind of a scribble, because we’re joining points
together in the order we generated them (shown by the green lines). If you
asked a person to connect random points, they wouldn’t jump back and forth
across the canvas like this. They would find a nearby point to use next,
producing a more natural tour of the set.</p><p>We’re generating everything automatically, so we can’t manually intervene
to choose a natural order for the points. Instead we use Hilbert sorting.</p><h1 id="h_hilbert_sorting">Hilbert sorting<a class="headerlink" aria-label="Link to this header" href="#h_hilbert_sorting"></a></h1><p>The Hilbert space-filling fractal visits every square in a 2D grid.
<a rel="external noopener" href="https://doc.cgal.org/latest/Spatial_sorting/index.html">Hilbert sorting</a> uses a Hilbert fractal traversing
the canvas, and sorts the points by when their square is visited by the fractal.
This gives a tour of the points that corresponds more closely to what people
expect. Points that are close together in space are likely (but not guaranteed)
to be close in the ordering.</p><p>If we sort the points using Hilbert sorting, we get much nicer curves. Here
are the same points as last time:</p><div class="figurep"><figure><picture><source type="image/webp" srcset="https://nedbatchelder.com/iv/webp/pix/fluidity/hobby_sorted.png.webp"><img src="https://nedbatchelder.com/pix/fluidity/hobby_sorted.png" alt="" width="600" height="200" class="hairline"></picture></figure></div><p>Here are pairs of the same points, unsorted and sorted side-by-side:</p><div class="figurep"><figure><picture><source type="image/webp" srcset="https://nedbatchelder.com/iv/webp/pix/fluidity/hilbert_compared.png.webp"><img src="https://nedbatchelder.com/pix/fluidity/hilbert_compared.png" alt="" width="400" height="800" class="hairline"></picture></figure></div><p>If you compare closely, the points in each pair are the same, but the sorted
points are connected in a better order, producing nicer curves.</p><h1 id="h_simplex_noise">Simplex noise<a class="headerlink" aria-label="Link to this header" href="#h_simplex_noise"></a></h1><p>Choosing random points would be easy to do with a random number generator,
but we want the points to move in interesting graceful ways. To do that, we use
simplex noise. This is a 2D function (let’s call the inputs u and v) that
produces a value from -1 to 1. The important thing is the function is
continuous: if you sample it at two (u,v) coordinates that are close together,
the results will be close together. But it’s also random: the continuous curves
you get are wavy in unpredictable ways. Think of the simplex noise function as
a smooth hilly landscape.</p><p>To get an (x,y) point for our drawing, we choose a (u,v) coordinate to
produce an x value and a completely different (u,v) coordinate for the y. To
get the next (x,y) point, we keep the u values the same and change the v values by
just a tiny bit. That makes the (x,y) points move smoothly but interestingly.</p><p>Here are the trails of four points taking 50 steps using this scheme:</p><div class="figurep"><figure><picture><source type="image/webp" srcset="https://nedbatchelder.com/iv/webp/pix/fluidity/point_motion.png.webp"><img src="https://nedbatchelder.com/pix/fluidity/point_motion.png" alt="" width="400" height="400" class="hairline"></picture></figure></div><p>If we use seven points taking five steps, and draw curves through the seven
points at each step, we get examples like this:</p><div class="figurep"><figure><picture><source type="image/webp" srcset="https://nedbatchelder.com/iv/webp/pix/fluidity/small_runs.png.webp"><img src="https://nedbatchelder.com/pix/fluidity/small_runs.png" alt="" width="600" height="300" class="hairline"></picture></figure></div><p>I’ve left the points visible, and given them large steps so the lines are
very widely spaced to show the motion. Taking out the points and drawing more
lines with smaller steps gives us this:</p><div class="figurep"><figure><picture><source type="image/webp" srcset="https://nedbatchelder.com/iv/webp/pix/fluidity/large_runs.png.webp"><img src="https://nedbatchelder.com/pix/fluidity/large_runs.png" alt="" width="600" height="300" class="hairline"></picture></figure></div><p>With 40 lines drawn wider with some transparency, we start to see the smoky
fluidity:</p><div class="figurep"><figure><picture><source type="image/webp" srcset="https://nedbatchelder.com/iv/webp/pix/fluidity/larger_runs.png.webp"><img src="https://nedbatchelder.com/pix/fluidity/larger_runs.png" alt="" width="600" height="300" class="hairline"></picture></figure></div><h1 id="h_jumps">Jumps<a class="headerlink" aria-label="Link to this header" href="#h_jumps"></a></h1><p>In his Mastodon post, aBe commented on the separating of the lines as one of
the things he liked about this. But why do they do that? If we are moving the
points in small increments, why do the curves sometimes make large jumps?</p><p>The first reason is the Hobby curves. They do a great job drawing a
curve through a set of points as a person might. But a downside of the
algorithm is that changing a point by a small amount can sometimes make the
entire curve take a different route. If you play around with the interactive examples on
<a rel="external noopener" href="https://www.jakelow.com/blog/hobby-curves">Jake Low’s page</a>, you will see the curve can unexpectedly
take a different shape.</p><p>As we inch our points along, sometimes the Hobby curve jumps.</p><p>The second reason is Hilbert sorting. Each of our lines is sorted
independently of how the previous line was sorted. If a point’s small motion
moves it into a different grid square, it can change the sorting order, which
changes the Hobby curve even more.</p><p>If we sort the first line, and then keep that order of points for all the
lines, the result has fewer jumps, but the Hobby curves still act
unpredictably:</p><div class="figurep"><figure><picture><source type="image/webp" srcset="https://nedbatchelder.com/iv/webp/pix/fluidity/first_line_runs.png.webp"><img src="https://nedbatchelder.com/pix/fluidity/first_line_runs.png" alt="" width="600" height="300" class="hairline"></picture></figure></div><h1 id="h_colophon">Colophon<a class="headerlink" aria-label="Link to this header" href="#h_colophon"></a></h1><p>This was all done with Python, using other people’s implementations of the
hard parts:
<a href="https://github.com/ltrujello/Hobby_Curve_Algorithm/blob/main/python/hobby.py" rel="external noopener">hobby.py</a>,
<a href="https://pypi.org/project/hilbertcurve/" rel="external noopener">hilbertcurve</a>, and
<a href="https://pypi.org/project/super-simplex/" rel="external noopener">super-simplex</a>. My code
is on GitHub
(<a href="https://github.com/nedbat/fluidity" rel="external noopener">nedbat/fluidity</a>), but it’s a
mess. Think of it as a woodworking studio with half-finished pieces and wood
chips strewn everywhere.</p><p>A lot of the learning and experimentation was in
<a href="https://github.com/nedbat/fluidity/blob/main/play.ipynb" rel="external noopener">my Jupyter
notebook</a>. Part of the process for work like this is playing around with
different values of tweakable parameters and seeds for the random numbers to get
the effect you want, either artistic or pedagogical. The notebook shows some of
the thumbnail galleries I used to pick the examples to show.</p><p>I went on to play with animations, which led to other learnings, but those
will have to wait for another blog post.</p>
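<p>To make the point-motion scheme above concrete, here is a minimal sketch of
the idea. It is only an illustration: the <code>noise2</code> stand-in below is
a placeholder for a real simplex-noise function such as the one in the
super-simplex package used for these drawings.</p><blockquote class="code"><pre class="python">
import math

def noise2(u, v):
    # Placeholder for a real 2D simplex-noise function: any smooth
    # function returning values in [-1, 1] shows the idea.
    return math.sin(1.7 * u + 2.3 * v) * math.cos(0.9 * u - 1.1 * v)

def point(u_x, u_y, v):
    # One (u, v) sample produces x, a completely different sample produces y.
    return noise2(u_x, v), noise2(u_y, v)

def trail(u_x, u_y, steps=50, dv=0.02):
    # Keep the u values fixed and nudge v a tiny bit each step,
    # so the (x, y) point moves smoothly but unpredictably.
    return [point(u_x, u_y, i * dv) for i in range(steps)]
</pre></blockquote>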
]]></description>
</item>
<item rdf:about="https://nedbatchelder.com/blog/202509/testing_is_better_than_dsa.html">
<title>Testing is better than DSA</title>
<link>https://nedbatchelder.com/blog/202509/testing_is_better_than_dsa.html</link>
<dc:date>2025-09-22T12:04:08-04:00</dc:date>
<dc:creator>Ned Batchelder</dc:creator>
<description><![CDATA[<p>I see new learners asking about “DSA” a lot. Data Structures and Algorithms
are of course important: considered broadly, they are the two ingredients that
make up all programs. But in my opinion, “DSA” as an abstract field of study
is over-emphasized.</p><p>I understand why people focus on DSA: it’s a concrete thing to learn about,
there are web sites devoted to testing you on it, and most importantly,
job interviews often involve DSA coding questions.</p><p>Before I get to other opinions, let me make clear that anything you can do to
help you get a job is a good thing to do. If grinding
<a rel="external noopener" href="https://leetcode.com/">leetcode</a> will land you a position, then do it.</p><p>But I hope companies hiring entry-level engineers aren’t asking them to
reverse linked lists or balance trees. Asking about techniques that can be
memorized ahead of time won’t tell them anything about how well you can work.
The stated purpose of those interviews is to see how well you can figure out
solutions, in which case memorization will defeat the point.</p><p>The thing new learners don’t understand about DSA is that actual software
engineering almost never involves implementing the kinds of algorithms that
“DSA” teaches you. Sure, it can be helpful to work through some of these
puzzles and see how they are solved, but writing real code just doesn’t involve
writing that kind of code.</p><p>Here is what I think in-the-trenches software engineers should know about
data structures and algorithms:</p><ul>
<li>Data structures are ways to organize data. Learn some of the basics: linked
list, array, hash table, tree. By “learn” I mean understand what it does
and why you might want to use one.</li>
<li>Different data structures can be used to organize the same data in different
ways. Learn some of the trade-offs between structures that are similar.</li>
<li>Algorithms are ways of manipulating data. I don’t mean named algorithms
like Quicksort, but algorithms as any chunk of code that works on data and
does something with it.</li>
<li>How you organize data affects what algorithms you can use to work with the
data. Some data structures will be slow for some operations where another
structure will be fast.</li>
<li>Algorithms have a “time complexity” (Big O): <a rel="external noopener" href="https://nedbatchelder.com/text/bigo.html">how the code
slows as the data grows</a>. Get a sense of what this means.</li>
<li>Python has a number of built-in data structures. Learn how they work, and
the time complexity of their operations.</li>
<li>Learn how to think about your code to understand its time complexity.</li>
<li>Read a little about more esoteric things like <a rel="external noopener" href="https://systemdesign.one/bloom-filters-explained/">Bloom
filters</a>, so you can find them later in the unlikely case you need them.</li>
</ul><p>Here are some things you don’t need to learn:</p><ul>
<li>The details of a dozen different sorting algorithms. Look at two to see
different ways of approaching the same problem, then move on.</li>
<li>The names of “important” algorithms. Those have all been implemented for
you.</li>
<li>The answers to all N problems on some quiz web site. You won’t be asked
these exact questions, and they won’t come up in your real work. Again: try a
few to get a feel for how some algorithms work. The exact answers are not what
you need.</li>
</ul><p>Of course some engineers need to implement hash tables, or sorting algorithms
or whatever. We love those engineers: they write libraries we can use off the
shelf so we don’t have to implement them ourselves.</p><p>There have been times when I implemented something that felt like An
Algorithm (for example, <a href="https://nedbatchelder.com/blog/201707/finding_fuzzy_floats.html">Finding fuzzy floats</a>), but it was
more about considering another perspective on my data, looking at the time
complexity, and moving operations around to avoid quadratic behavior. It wasn’t
opening a textbook to find the famous algorithm that would solve my problem.</p><p>Again: if it will help you get a job, deep-study DSA. But don’t be
disappointed when you don’t use it on the job.</p><p>If you want to prepare yourself for a career, and also stand out in job
interviews, learn how to write tests:</p><ul>
<li>This will be a skill you use constantly. Real-world software means writing
tests much more than school teaches you to.</li>
<li>In a job search, testing experience will stand out more than DSA depth. It
shows you’ve thought about what it takes to write high-quality software instead
of just academic exercises.</li>
<li>It’s not obvious how to test code well. It’s a puzzle and a problem to
solve. If you like figuring out solutions to tricky questions, focus on how to
write code so that it can be tested, and how to test it.</li>
<li>Testing not only gives you more confidence in your code, it helps you write
better code in the first place.</li>
<li>Testing applies everywhere, from tiny bits of code to entire architectures,
assisting you in design and implementation at all scales.</li>
<li>If pursued diligently, testing is an engineering discipline in its own
right, with a fascinating array of tools and techniques.</li>
</ul><p>Less DSA, more testing.</p>
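<p>To make the time-complexity bullet in the first list above concrete, here is
a small illustrative sketch (not part of the original advice): the same
membership check against a list and a set scales very differently as the data
grows.</p><blockquote class="code"><pre class="python">
import timeit

# The same data organized two ways: a list scans its elements for "in",
# while a set uses a hash table, so membership checks stay fast as data grows.
items_list = list(range(1_000_000))
items_set = set(items_list)

print(timeit.timeit("999_999 in items_list", globals=globals(), number=100))
print(timeit.timeit("999_999 in items_set", globals=globals(), number=100))
</pre></blockquote>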
]]></description>
</item>
<item rdf:about="https://nedbatchelder.com/blog/202508/finding_unneeded_pragmas.html">
<title>Finding unneeded pragmas</title>
<link>https://nedbatchelder.com/blog/202508/finding_unneeded_pragmas.html</link>
<dc:date>2025-08-24T17:28:12-04:00</dc:date>
<dc:creator>Ned Batchelder</dc:creator>
<description><![CDATA[<p>To answer a <a rel="external noopener" href="https://github.com/nedbat/coveragepy/issues/251">long-standing coverage.py feature request</a>, I
threw together an experiment: a tool to identify lines that have been excluded
from coverage, but which were actually executed.</p><p>The program is a standalone file in the coverage.py repo. It is unsupported.
I’d like people to try it to see what they think of the idea. Later we can
decide what to do with it.</p><p>To try it: copy <a rel="external noopener" href="https://github.com/nedbat/coveragepy/blob/master/lab/warn_executed.py">warn_executed.py</a> from
GitHub. Create a .toml file that looks something like this:</p><blockquote class="code"><pre class="toml"><div class="source"><span class="c1"># Regexes that identify excluded lines:</span>
<br><span class="n">warn-executed</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="p">[</span>
<br><span class="w">    </span><span class="s2">"pragma: no cover"</span><span class="p">,</span>
<br><span class="w">    </span><span class="s2">"raise AssertionError"</span><span class="p">,</span>
<br><span class="w">    </span><span class="s2">"pragma: cant happen"</span><span class="p">,</span>
<br><span class="w">    </span><span class="s2">"pragma: never called"</span><span class="p">,</span>
<br><span class="w">    </span><span class="p">]</span>
<br>
<br><span class="c1"># Regexes that identify partial branch lines:</span>
<br><span class="n">warn-not-partial</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="p">[</span>
<br><span class="w">    </span><span class="s2">"pragma: no branch"</span><span class="p">,</span>
<br><span class="w">    </span><span class="p">]</span>
<br></div>
</pre></blockquote><p>These are exclusion regexes that you’ve used in your coverage runs. The
program will print out any line that matches one of the patterns but that ran
during your tests anyway. It might be that you don’t need to exclude the line, because it ran.</p><p>In this file, none of your coverage settings or the default regexes are
assumed: you need to explicitly specify all the patterns you want flagged.</p><p>Run the program with Python 3.11 or higher, giving the name of the coverage
data file and the name of your new TOML configuration file. It will print the
lines that might not need excluding:</p><blockquote class="code"><pre class="shell"><div class="source">$<span class="w"> </span>python3.12<span class="w"> </span>warn_executed.py<span class="w"> </span>.coverage<span class="w"> </span><span>warn.toml</span>
<br></div>
</pre></blockquote><p>The reason for a new list of patterns instead of just reading the existing
coverage settings is that some exclusions are “don’t care” rather than “this
will never happen.” For example, I exclude “def __repr__” because some
__repr__’s are just to make my debugging easier. I don’t care if the test suite
runs them or not. It might run them, so I don’t want it to be a warning that
they actually ran.</p><p>This tool is not perfect. For example, I exclude “if TYPE_CHECKING:” because
I want that entire clause excluded. But the if-line itself is actually run. If
I include that pattern in the warn-executed list, it will flag all of those
lines. Maybe I’m forgetting a way to do this: it would be good to have a way to
exclude the body of the if clause while understanding that the if-line itself is
executed.</p><p>Give <a rel="external noopener" href="https://github.com/nedbat/coveragepy/blob/master/lab/warn_executed.py">warn_executed.py</a> a try and comment on
<a rel="external noopener" href="https://github.com/nedbat/coveragepy/issues/251">the issue</a> about what you think of it.</p>
]]></description>
</item>
<item rdf:about="https://nedbatchelder.com/blog/202508/starting_with_pytests_parametrize.html">
<title>Starting with pytest’s parametrize</title>
<link>https://nedbatchelder.com/blog/202508/starting_with_pytests_parametrize.html</link>
<dc:date>2025-08-13T06:14:46-04:00</dc:date>
<dc:creator>Ned Batchelder</dc:creator>
<description><![CDATA[<p>Writing tests can be difficult and repetitive. Pytest has a feature called
parametrize that can help reduce duplication, but it can be hard to
understand if you are new to the testing world. It’s not as complicated as it
seems.</p><p>Let’s say you have a function called <code>add_nums()</code> that adds up a list of
numbers, and you want to write tests for it. Your tests might look like
this:</p><blockquote class="code"><pre class="python"><div class="source"><span class="k">def</span><span class="w"> </span><span class="nf">test_123</span><span class="p">():</span>
<br>    <span class="k">assert</span> <span class="n">add_nums</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">])</span> <span class="o">==</span> <span class="mi">6</span>
<br>
<br><span class="k">def</span><span class="w"> </span><span class="nf">test_negatives</span><span class="p">():</span>
<br>    <span class="k">assert</span> <span class="n">add_nums</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">3</span><span class="p">])</span> <span class="o">==</span> <span class="mi">0</span>
<br>
<br><span class="k">def</span><span class="w"> </span><span class="nf">test_empty</span><span class="p">():</span>
<br>    <span class="k">assert</span> <span class="n">add_nums</span><span class="p">([])</span> <span class="o">==</span> <span class="mi">0</span>
<br></div>
</pre></blockquote><p>This is great: you’ve tested some behaviors of your <code>add_nums()</code>
function. But it’s getting tedious to write out more test cases. The names of the
functions have to be different from each other, and they don’t mean anything, so
it’s extra work for no benefit. The test functions all have the same structure,
so you’re repeating uninteresting details. You want to add more cases but it
feels like there’s friction that you want to avoid.</p><p>If we look at these functions, they are very similar. In any software, when
we have functions that are similar in structure, but differ in some details, we
can refactor them to be one function with parameters for the differences. We can
do the same for our test functions.</p><p>Here the functions all have the same structure: call <code>add_nums()</code> and
assert what the return value should be. The differences are the list we pass to
<code>add_nums()</code> and the value we expect it to return. So we can turn those
into two parameters in our refactored function:</p><blockquote class="code"><pre class="python"><div class="source"><span class="k">def</span><span class="w"> </span><span class="nf">test_add_nums</span><span class="p">(</span><span class="n">nums</span><span class="p">,</span> <span class="n">expected_total</span><span class="p">):</span>
<br>    <span class="k">assert</span> <span class="n">add_nums</span><span class="p">(</span><span class="n">nums</span><span class="p">)</span> <span class="o">==</span> <span class="n">expected_total</span>
<br></div>
</pre></blockquote><p>Unfortunately, tests aren’t run like regular functions. We write the test
functions, but we don’t call them ourselves. That’s the reason the names of the
test functions don’t matter. The test runner (pytest) finds functions named
<code>test_*</code> and calls them for us. When they have no parameters, pytest can
call them directly. But now that our test function has two parameters, we have
to give pytest instructions about how to call it.</p><p>To do that, we use the <code>@pytest.mark.parametrize</code> decorator. Using it
looks like this:</p><blockquote class="code"><pre class="python"><div class="source"><span class="kn">import</span><span class="w"> </span><span class="nn">pytest</span>
<br>
<br><span class="nd">@pytest</span><span class="o">.</span><span class="n">mark</span><span class="o">.</span><span class="n">parametrize</span><span class="p">(</span>
<br>    <span class="s2">"nums, expected_total"</span><span class="p">,</span>
<br>    <span class="p">[</span>
<br>        <span class="p">([</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">],</span> <span class="mi">6</span><span class="p">),</span>
<br>        <span class="p">([</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">3</span><span class="p">],</span> <span class="mi">0</span><span class="p">),</span>
<br>        <span class="p">([],</span> <span class="mi">0</span><span class="p">),</span>
<br>    <span class="p">]</span>
<br><span class="p">)</span>
<br><span class="k">def</span><span class="w"> </span><span class="nf">test_add_nums</span><span class="p">(</span><span class="n">nums</span><span class="p">,</span> <span class="n">expected_total</span><span class="p">):</span>
<br>    <span class="k">assert</span> <span class="n">add_nums</span><span class="p">(</span><span class="n">nums</span><span class="p">)</span> <span class="o">==</span> <span class="n">expected_total</span>
<br></div>
</pre></blockquote><p>There’s a lot going on here, so let’s take it step by step.</p><p>If you haven’t seen a decorator before, it starts with <code>@</code> and is like a
prologue to a function definition. It can affect how the function is defined or
provide information about the function.</p><p>The parametrize decorator is itself a function call that takes two arguments.
The first is a string (“nums, expected_total”) that names the two arguments to
the test function. Here the decorator is instructing pytest, “when you call
<code>test_add_nums</code>, you will need to provide values for its <code>nums</code> and
<code>expected_total</code> parameters.”</p><p>The second argument to <code>parametrize</code> is a list of the values to supply
as the arguments. Each element of the list will become one call to our test
function. In this example, the list has three tuples, so pytest will call our
test function three times. Since we have two parameters to provide, each
element of the list is a tuple of two values.</p><p>The first tuple is <code>([1, 2, 3], 6)</code>, so the first time pytest calls
<code>test_add_nums</code>, it will call it as <code>test_add_nums([1, 2, 3], 6)</code>. Altogether,
pytest will call us three times, like this:</p><blockquote class="code"><pre class="python"><div class="source"><span class="n">test_add_nums</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">],</span> <span class="mi">6</span><span class="p">)</span>
<br><span class="n">test_add_nums</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="o">-</span><span class="mi">3</span><span class="p">],</span> <span class="mi">0</span><span class="p">)</span>
<br><span class="n">test_add_nums</span><span class="p">([],</span> <span class="mi">0</span><span class="p">)</span>
<br></div>
</pre></blockquote><p>This will all happen automatically. With our original test functions, when
we ran pytest, it showed the results as three passing tests because we had three
separate test functions. Now even though we only have one function, it still
shows as three passing tests! Each set of values is considered a separate test
that can pass or fail independently. This is the main advantage of using
parametrize instead of writing three separate assert lines in the body of a
simple test function.</p><p>What have we gained?</p><ul>
<li>We don’t have to write three separate functions with different names.</li>
<li>We don’t have to repeat the same details in each function (<code>assert</code>,
<code>add_nums()</code>, <code>==</code>).</li>
<li>The differences between the tests (the actual data) are written succinctly
all in one place.</li>
<li>Adding another test case is as simple as adding another line of data to the
decorator.</li>
</ul>
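<p>For instance (an illustrative extra case, not from the original example),
covering another behavior is just one more line of data:</p><blockquote class="code"><pre class="python">
import pytest

# add_nums() is the same function under test as above.
@pytest.mark.parametrize(
    "nums, expected_total",
    [
        ([1, 2, 3], 6),
        ([1, 2, -3], 0),
        ([], 0),
        ([0.5, 0.25], 0.75),  # the new case: one more tuple, no new function
    ]
)
def test_add_nums(nums, expected_total):
    assert add_nums(nums) == expected_total
</pre></blockquote>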
]]></description>
</item>
<item rdf:about="https://nedbatchelder.com/blog/202507/coveragepy_regex_pragmas.html">
<title>Coverage.py regex pragmas</title>
<link>https://nedbatchelder.com/blog/202507/coveragepy_regex_pragmas.html</link>
<dc:date>2025-07-28T12:04:12-04:00</dc:date>
<dc:creator>Ned Batchelder</dc:creator>
<description><![CDATA[<p><a rel="external noopener" href="https://coverage.readthedocs.io/">Coverage.py</a> lets you indicate code to exclude from
measurement by adding comments to your Python files. But coverage implements
them differently than other similar tools. Rather than having fixed syntax for
these comments, they are defined using regexes that you can change or add to.
This has been surprisingly powerful.</p><p>The basic behavior: coverage finds lines in your source files that match the
regexes. These lines are excluded from measurement, that is, it’s OK if they
aren’t executed. If a matched line is part of a multi-line statement, the
whole multi-line statement is excluded. If a matched line introduces a block of
code, the entire block is excluded.</p><p>At first, these regexes were just to make it easier to implement the basic
“here’s the comment you use” behavior for pragma comments. But it also enabled
pragma-less exclusions. You could decide (for example) that you didn’t care to
test any <code>__repr__</code> methods. By adding <code>def __repr__</code> as an exclusion
regex, all of those methods were automatically excluded from coverage
measurement without having to add a comment to each one. Very nice.</p><p>Not only did this let people add custom exclusions in their projects, but
it enabled third-party plugins that could configure regexes in other interesting
ways:</p><ul>
<li><a href="https://pypi.org/project/covdefaults/" rel="external noopener">covdefaults</a> adds a
bunch of default exclusions, and also platform- and version-specific comment
syntaxes.</li>
<li><a href="https://pypi.org/project/coverage-conditional-plugin/" rel="external noopener">coverage-conditional-plugin</a>
gives you a way to create comment syntaxes for entire files, for whether other
packages are installed, and so on.</li>
</ul><p>Then about a year ago, <a rel="external noopener" href="https://github.com/nedbat/coveragepy/pull/1807">Daniel Diniz contributed a
change</a> that amped up the power: regexes could match multi-line patterns.
This sounds like not that large a change, but it enabled much more powerful
exclusions. As a sign, it made it possible to support <a rel="external noopener" href="https://coverage.readthedocs.io/en/latest/changes.html#version-7-6-0-2024-07-11">four
different feature requests</a>.</p><p>To make it work, Daniel changed the matching code. Originally, it was a loop
over the lines in the source file, checking each line for a match against the
regexes. The new code uses the entire source file as the target string, and
loops over the matches against that text. Each match is converted into a set of
line numbers and added to the results.</p><p>The power comes from being able to use one pattern to match many lines. For
example, one of the four feature requests was <a rel="external noopener" href="https://github.com/nedbat/coveragepy/issues/118">how to exclude an
entire file</a>. With configurable multi-line regex patterns, you can do this
yourself:</p><blockquote class="code"><pre>\A(?s:.*# pragma: exclude file.*)\Z<br></pre></blockquote><p>With this regex, if you put the comment “# pragma: exclude file” in your
source file, the entire file will be excluded. The <code>\A</code> and <code>\Z</code>
match the start and end of the target text, which remember is the entire file.
The <code>(?s:...)</code> means the <a rel="external noopener" href="https://docs.python.org/3/library/re.html#re.S">s/DOTALL</a> flag is in
effect, so <code>.</code> can match newlines. This pattern matches the entire source
file if the desired pragma is somewhere in the file.</p><p>Another requested feature was <a rel="external noopener" href="https://github.com/nedbat/coveragepy/issues/1803">excluding code between two
lines</a>. We can use “# no cover: start” and “# no cover: stop” as delimiters
with this regex:</p><blockquote class="code"><pre># no cover: start(?s:.*?)# no cover: stop<br></pre></blockquote><p>Here <code>(?s:.*?)</code> means any number of any character at all, but as few as
possible. A star in regexes means as many as possible, but star-question-mark
means as few as possible. We need the minimal match so that we don’t match from
the start of one pair of comments all the way through to the end of a different
pair of comments.</p><p>This regex approach is powerful, but is still fairly shallow. For example,
either of these two examples would get the wrong lines if you had a string
literal with the pragma text in it. There isn’t a regex that skips easily over
string literals.</p><p>This kind of difficulty hit home when I added a new default pattern to
exclude empty placeholder methods like this:</p><blockquote class="code"><pre class="python"><div class="source"><span class="k">def</span><span class="w"> </span><span class="nf">not_yet</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span> <span class="o">...</span>
<br>
<br><span class="k">def</span><span class="w"> </span><span class="nf">also_not_this</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<br>    <span class="o">...</span>
<br>
<br><span class="k">async</span> <span class="k">def</span><span class="w"> </span><span class="nf">definitely_not_this</span><span class="p">(</span>
<br>    <span class="bp">self</span><span class="p">,</span>
<br>    <span class="n">arg1</span><span class="p">,</span>
<br><span class="p">):</span>
<br>    <span class="o">...</span>
<br></div>
</pre></blockquote><p>We can’t just match three dots, because ellipses can be used in other places
than empty function bodies. We need to be more delicate. I ended up with:</p><blockquote class="code"><pre>^\s*(((async )?def .*?)?\)(\s*->.*?)?:\s*)?\.\.\.\s*(#|$)<br></pre></blockquote><p>This craziness ensures the ellipsis is part of an (async) def, that the
ellipsis appears first in the body (but no docstring allowed, doh!), allows for
a comment on the line, and so on. And even with a pattern this complex, it
would incorrectly match this contrived line:</p><blockquote class="code"><pre class="python"><div class="source"><span class="k">def</span><span class="w"> </span><span class="nf">f</span><span class="p">():</span> <span class="nb">print</span><span class="p">(</span><span class="s2">"(well): ... #2 false positive!"</span><span class="p">)</span>
<br></div>
</pre></blockquote><p>So regexes aren’t perfect, but they’re a pretty good balance: flexible and
powerful, and will work great on real code even if we can invent weird edge
cases where they fail.</p><p>What started as a simple implementation expediency has turned into a powerful
configuration option that has done more than I would have thought.</p>
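<p>As a rough sketch of the matching mechanism described above (not
coverage.py’s actual code): match each regex against the whole source text,
then convert each match’s character span into line numbers.</p><blockquote class="code"><pre class="python">
import re

def matched_line_numbers(source, patterns):
    """Return the 1-based line numbers touched by any multi-line regex match."""
    lines = set()
    for pattern in patterns:
        for match in re.finditer(pattern, source):
            first = source.count("\n", 0, match.start()) + 1
            last = source.count("\n", 0, match.end()) + 1
            lines.update(range(first, last + 1))
    return lines

src = "x = 1\n# no cover: start\ny = 2\nz = 3\n# no cover: stop\n"
print(matched_line_numbers(src, [r"# no cover: start(?s:.*?)# no cover: stop"]))
# -> {2, 3, 4, 5}
</pre></blockquote>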
]]></description>
</item>
<item rdf:about="https://nedbatchelder.com/blog/202507/coverage_7100_patch.html">
<title>Coverage 7.10.0: patch</title>
<link>https://nedbatchelder.com/blog/202507/coverage_7100_patch.html</link>
<dc:date>2025-07-24T19:03:27-04:00</dc:date>
<dc:creator>Ned Batchelder</dc:creator>
<description><![CDATA[<p>Years ago I greeted a friend returning from vacation and asked how it had
been. She answered, “It was good, I got a lot done!” I understand that feeling.
I just had a long vacation myself, and used the time to clean up some old issues
and add some new features in <a rel="external noopener" href="https://pypi.org/project/coverage/">coverage.py v7.10</a>.</p><p>The major new feature is a configuration option,
<a rel="external noopener" href="https://coverage.readthedocs.io/en/latest/config.html#run-patch"><code>[run] patch</code></a>. With it, you specify named
patches that coverage can use to monkey-patch some behavior that gets in the way
of coverage measurement.</p><p>The first is <code>subprocess</code>. Coverage works great when you start your
program with coverage measurement, but has long had the problem of how to also
measure the coverage of sub-processes that your program created. The existing
solution had been a complicated two-step process of creating obscure .pth files
and setting environment variables. Whole projects appeared on PyPI to handle
this for you.</p><p>Now, <code>patch = subprocess</code> will do this for you automatically, and clean
itself up when the program ends. It handles sub-processes created by the
<a rel="external noopener" href="https://docs.python.org/3/library/subprocess.html#module-subprocess">subprocess</a> module, the
<a rel="external noopener" href="https://docs.python.org/3/library/os.html#os.system">os.system()</a> function, and any of the
<a rel="external noopener" href="https://docs.python.org/3/library/os.html#os.execl">execv</a> or <a rel="external noopener" href="https://docs.python.org/3/library/os.html#os.spawnl">spawnv</a> families of
functions.</p><p>This alone has spurred <a rel="external noopener" href="https://bsky.app/profile/did:plc:yj4vzsbzzkpswr7x5yagzhhx/post/3luqfffiiqk27">one user to exclaim</a>,</p><blockquote><div><p>The latest release of Coverage feels like a Christmas present!
The native support for Python subprocesses is so good!</p></div></blockquote><p>Another patch is <code>_exit</code>. This patches
<a rel="external noopener" href="https://docs.python.org/3/library/os.html#os._exit">os._exit()</a> so that coverage saves its data before
exiting. The os._exit() function is an immediate and abrupt termination of the
program, skipping all kinds of registered clean up code. This patch makes it
possible to collect coverage data from programs that end this way.</p><p>The third patch is <code>execv</code>. The <a rel="external noopener" href="https://docs.python.org/3/library/os.html#os.execl">execv</a> functions
end the current program and replace it with a new program in the same process.
The <code>execv</code> patch arranges for coverage to save its data before the
current program is ended.</p><p>Now that these patches are available, it seems silly that it’s taken so long.
They (mostly) weren’t difficult. I guess it took looking at the old issues,
realizing the friction they caused, and thinking up a new way to let users
control the patching. Monkey-patching is a bit invasive, so I’ve never wanted
to do it implicitly. The patch option gives the user an explicit way to request
what they need without having to get into the dirty details themselves.</p><p>Another process-oriented feature was contributed by Arkady Gilinsky: with
<code>--save-signal=USR1</code> you can specify a user signal that coverage will
attend to. When you send the signal to your running coverage process, it will
save the collected data to disk. This gives a way to measure coverage in a
long-running process without having to end the process.</p><p>There were some other fixes and features along the way, like better HTML
coloring of multi-line statements, and more default exclusions
(<code>if TYPE_CHECKING:</code> and <code>...</code>).</p><p>It feels good to finally address some of these pain points. I also closed
some stale issues and pull requests. There is more to do, always more to do,
but this feels like a real step forward. Give <a rel="external noopener" href="https://coverage.readthedocs.io/en/7.10.0/changes.html#version-7-10-0-2025-07-24">coverage
7.10.0</a> a try and let me know how it works for you.</p>
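<p>For reference, a configuration sketch (an assumption on my part; check the
coverage.py docs for the exact syntax in your config format) asking for all
three patches named above in a pyproject.toml file:</p><blockquote class="code"><pre class="toml">
# pyproject.toml
[tool.coverage.run]
patch = ["subprocess", "_exit", "execv"]
</pre></blockquote>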
]]></description>
</item>
<item rdf:about="https://nedbatchelder.com/blog/202507/2048_iterators_and_iterables.html">
<title>2048: iterators and iterables</title>
<link>https://nedbatchelder.com/blog/202507/2048_iterators_and_iterables.html</link>
<dc:date>2025-07-15T06:52:29-04:00</dc:date>
<dc:creator>Ned Batchelder</dc:creator>
<description><![CDATA[<p>I wrote a <a rel="external noopener" href="https://github.com/nedbat/odds/blob/master/2048/2048.py">low-tech terminal-based version</a> of the
classic <a rel="external noopener" href="https://play2048.co/">2048 game</a> and had some interesting difficulties
with iterators along the way.</p><p>2048 has a 4<span class="times">×</span>4 grid with sliding tiles. Because the tiles can slide
left or right and up or down, sometimes we want to loop over the rows and
columns from 0 to 3, and sometimes from 3 to 0. My first attempt looked like
this:</p><blockquote class="code"><pre class="python"><div class="source"><span class="n">N</span> <span class="o">=</span> <span class="mi">4</span>
<br><span class="k">if</span> <span class="n">sliding_right</span><span class="p">:</span>
<br>    <span class="n">cols</span> <span class="o">=</span> <span class="nb">range</span><span class="p">(</span><span class="n">N</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">)</span>   <span class="c1"># 3 2 1 0</span>
<br><span class="k">else</span><span class="p">:</span>
<br>    <span class="n">cols</span> <span class="o">=</span> <span class="nb">range</span><span class="p">(</span><span class="n">N</span><span class="p">)</span>             <span class="c1"># 0 1 2 3</span>
<br>
<br><span class="k">if</span> <span class="n">sliding_down</span><span class="p">:</span>
<br>    <span class="n">rows</span> <span class="o">=</span> <span class="nb">range</span><span class="p">(</span><span class="n">N</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">)</span>   <span class="c1"># 3 2 1 0</span>
<br><span class="k">else</span><span class="p">:</span>
<br>    <span class="n">rows</span> <span class="o">=</span> <span class="nb">range</span><span class="p">(</span><span class="n">N</span><span class="p">)</span>             <span class="c1"># 0 1 2 3</span>
<br>
<br><span class="k">for</span> <span class="n">row</span> <span class="ow">in</span> <span class="n">rows</span><span class="p">:</span>
<br>    <span class="k">for</span> <span class="n">col</span> <span class="ow">in</span> <span class="n">cols</span><span class="p">:</span>
<br>        <span class="o">...</span>
<br></div>
</pre></blockquote><p>This worked, but those counting-down ranges are ugly. Let’s make it
nicer:</p><blockquote class="code"><pre class="python"><div class="source"><span class="n">cols</span> <span class="o">=</span> <span class="nb">range</span><span class="p">(</span><span class="n">N</span><span class="p">)</span>                 <span class="c1"># 0 1 2 3</span>
<br><span class="k">if</span> <span class="n">sliding_right</span><span class="p">:</span>
<br>    <span class="n">cols</span> <span class="o">=</span> <span class="nb">reversed</span><span class="p">(</span><span class="n">cols</span><span class="p">)</span>       <span class="c1"># 3 2 1 0</span>
<br>
<br><span class="n">rows</span> <span class="o">=</span> <span class="nb">range</span><span class="p">(</span><span class="n">N</span><span class="p">)</span>                 <span class="c1"># 0 1 2 3</span>
<br><span class="k">if</span> <span class="n">sliding_down</span><span class="p">:</span>
<br>    <span class="n">rows</span> <span class="o">=</span> <span class="nb">reversed</span><span class="p">(</span><span class="n">rows</span><span class="p">)</span>       <span class="c1"># 3 2 1 0</span>
<br>
<br><span class="k">for</span> <span class="n">row</span> <span class="ow">in</span> <span class="n">rows</span><span class="p">:</span>
<br>    <span class="k">for</span> <span class="n">col</span> <span class="ow">in</span> <span class="n">cols</span><span class="p">:</span>
<br>        <span class="o">...</span>
<br></div>
</pre></blockquote><p>Looks cleaner, but it doesn’t work! Can you see why? It took me a bit of
debugging to see the light.</p><p><code>range()</code> produces an iterable: something that can be iterated over.
Similar but different is that <code>reversed()</code> produces an iterator: something
that is already iterating. Some iterables (like ranges) can be used more than
once, creating a new iterator each time. But once an iterator like
<code>reversed()</code> has been consumed, it is done. Iterating it again will
produce no values.</p><p>If “iterable” vs “iterator” is already confusing, here’s a quick definition:
an iterable is something that can be iterated, that can produce values in a
particular order. An iterator tracks the state of an iteration in progress. An
analogy: the pages of a book are iterable; a bookmark is an iterator. The
English hints at it: an iter-able is able to be iterated at some point; an
iterator is actively iterating.</p><p>The outer loop of my double loop was iterating only once over the rows, so
the row iteration was fine whether it was going forward or backward. But the
columns were being iterated again for each row. If the columns were going
forward, they were a range, a reusable iterable, and everything worked fine.</p><p>But if the columns were meant to go backward, they were a one-use-only
iterator made by <code>reversed()</code>. The first row would get all the columns,
but the other rows would try to iterate using a fully consumed iterator and get
nothing.</p><p>The simple fix was to use <code>list()</code> to turn my iterator into a reusable
iterable:</p><blockquote class="code"><pre class="python"><div class="source"><span class="n">cols</span> <span class="o">=</span> <span class="nb">list</span><span class="p">(</span><span class="nb">reversed</span><span class="p">(</span><span class="n">cols</span><span class="p">))</span>
<br></div>
</pre></blockquote><p>The code was slightly less nice, but it worked. An even better fix
was to change my doubly nested loop into a single loop:</p><blockquote class="code"><pre class="python"><div class="source"><span class="k">for</span> <span class="n">row</span><span class="p">,</span> <span class="n">col</span> <span class="ow">in</span> <span class="n">itertools</span><span class="o">.</span><span class="n">product</span><span class="p">(</span><span class="n">rows</span><span class="p">,</span> <span class="n">cols</span><span class="p">):</span>
<br></div>
</pre></blockquote><p>That also takes care of the original iterator/iterable problem, so I can get
rid of that first fix:</p><blockquote class="code"><pre class="python"><div class="source"><span class="n">cols</span> <span class="o">=</span> <span class="nb">range</span><span class="p">(</span><span class="n">N</span><span class="p">)</span>
<br><span class="k">if</span> <span class="n">sliding_right</span><span class="p">:</span>
<br>    <span class="n">cols</span> <span class="o">=</span> <span class="nb">reversed</span><span class="p">(</span><span class="n">cols</span><span class="p">)</span>
<br>
<br><span class="n">rows</span> <span class="o">=</span> <span class="nb">range</span><span class="p">(</span><span class="n">N</span><span class="p">)</span>
<br><span class="k">if</span> <span class="n">sliding_down</span><span class="p">:</span>
<br>    <span class="n">rows</span> <span class="o">=</span> <span class="nb">reversed</span><span class="p">(</span><span class="n">rows</span><span class="p">)</span>
<br>
<br><span class="k">for</span> <span class="n">row</span><span class="p">,</span> <span class="n">col</span> <span class="ow">in</span> <span class="n">itertools</span><span class="o">.</span><span class="n">product</span><span class="p">(</span><span class="n">rows</span><span class="p">,</span> <span class="n">cols</span><span class="p">):</span>
<br>    <span class="o">...</span>
<br></div>
</pre></blockquote><p>Once I had this working, I wondered why <code>product()</code> solved the
iterator/iterable problem. The <a rel="external noopener" href="https://docs.python.org/3/library/itertools.html#itertools.product">docs have a sample Python
implementation</a> that shows why: internally, <code>product()</code> is doing just
what my <code>list()</code> call did: it makes an explicit iterable from each of the
iterables it was passed, then picks values from them to make the pairs. This
lets <code>product()</code> accept iterators (like my reversed range) rather than
forcing the caller to always pass iterables.</p><p>If your head is spinning from all this iterable / iterator / iteration talk,
I don’t blame you. Just now I said, “it makes an explicit iterable from each of
the iterables it was passed.” How does that make sense? Well, an iterator is an
iterable. So <code>product()</code> can take either a reusable iterable (like a range
or a list) or it can take a use-once iterator (like a reversed range). Either
way, it populates its own reusable iterables internally.</p><p>Python’s iteration features are powerful but sometimes require careful
thinking to get right. Don’t overlook the tools in itertools, and mind your
iterators and iterables!</p><p class="bulletsep">• • •</p><p>Some more notes:</p><p>1: Another way to reverse a range: you can slice them!</p><blockquote class="code"><pre class="python"><div class="source"><span class="o">>>></span> <span class="nb">range</span><span class="p">(</span><span class="mi">4</span><span class="p">)</span>
<br><span class="nb">range</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">4</span><span class="p">)</span>
<br><span class="o">>>></span> <span class="nb">range</span><span class="p">(</span><span class="mi">4</span><span class="p">)[::</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span>
<br><span class="nb">range</span><span class="p">(</span><span class="mi">3</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">)</span>
<br><span class="o">>>></span> <span class="nb">reversed</span><span class="p">(</span><span class="nb">range</span><span class="p">(</span><span class="mi">4</span><span class="p">))</span>
<br><span class="o"><</span><span class="n">range_iterator</span> <span class="nb">object</span> <span class="n">at</span> <span class="mh">0x10307cba0</span><span class="o">></span>
<br></div>
</pre></blockquote><p>It didn’t occur to me to reverse-slice the range, since <code>reversed</code> is
right there, but the slice gives you a new reusable range object while reversing
the range gives you a use-once iterator.</p><p>2: Why did <code>product()</code> explicitly store the values it would need but
<code>reversed</code> did not? Two reasons: first, <code>reversed()</code> depends on the
<code>__reversed__</code> dunder method, so it’s up to the original object to decide
how to implement it. Ranges know how to produce their values in backward order,
so they don’t need to store them all. Second, <code>product()</code> is going to need
to use the values from each iterable many times and can’t depend on the
iterables being reusable.</p>
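<p>The one-shot nature of iterators is easy to see at the REPL. A minimal
illustration of the bug and the <code>list()</code> fix described above:</p><blockquote class="code"><pre class="python">
cols = reversed(range(4))        # an iterator, not a reusable iterable
print(list(cols))                # [3, 2, 1, 0]
print(list(cols))                # [] -- already consumed, nothing left

cols = list(reversed(range(4)))  # materialize a reusable list
print(list(cols))                # [3, 2, 1, 0]
print(list(cols))                # [3, 2, 1, 0]
</pre></blockquote>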
]]></description>
</item>
<item rdf:about="https://nedbatchelder.com/blog/202506/math_factoid_of_the_day_63.html">
<title>Math factoid of the day: 63</title>
<link>https://nedbatchelder.com/blog/202506/math_factoid_of_the_day_63.html</link>
<dc:date>2025-06-16T00:00:00-04:00</dc:date>
<dc:creator>Ned Batchelder</dc:creator>
<description><![CDATA[<p>63 is a <a rel="external noopener" href="https://en.wikipedia.org/wiki/Centered_octahedral_number">centered octahedral number</a>. That means if you
build an approximation of an octahedron with cubes, one size of octahedron will
have 63 cubes.</p><p>In the late 1700’s <a rel="external noopener" href="https://en.wikipedia.org/wiki/Ren%C3%A9_Just_Ha%C3%BCy">René Just Haüy</a> developed a theory
about how crystals formed: successive layers of fundamental primitives in
orderly arrangements. One of those arrangements was stacking cubes together to
make an octahedron.</p><p>Start with one cube:</p><div class="figurep"><img src="https://nedbatchelder.com/code/diagrams/hauy/0.svg" alt="Just one lonely cube"></div><p>Add six more cubes around it, one on each face. Now we have seven:</p><div class="figurep"><img src="https://nedbatchelder.com/code/diagrams/hauy/1.svg" alt="Seven cubes as a crude octahedron"></div><p>Add another layer, adding a cube to touch each visible cube, making 25:</p><div class="figurep"><img src="https://nedbatchelder.com/code/diagrams/hauy/2.svg" alt="25 cubes arranged like an octahedron five cubes wide"></div><p>One more layer and we have a total of 63:</p><div class="figurep"><img src="https://nedbatchelder.com/code/diagrams/hauy/3.svg" alt="63 cubes arranged like an octahedron seven cubes wide"></div><p>The remaining numbers in <a href="https://oeis.org/A001845" rel="external noopener">the sequence</a>
less than 10,000 are 129, 231, 377, 575, 833, 1159, 1561, 2047, 2625, 3303,
4089, 4991, 6017, 7175, 8473, 9919.</p><p>63 also shows up in the <a rel="external noopener" href="https://en.wikipedia.org/wiki/Delannoy_number">Delannoy numbers</a>: the
number of ways to traverse a grid from the lower left corner to upper right
using only steps north, east, or northeast. Here are the 63 ways of moving on a
3<span class="times">×</span>3 grid:</p><div class="figurep"><img src="https://nedbatchelder.com/code/diagrams/delannoy3.svg" alt="63 different ways to traverse a 3x3 grid"></div><p>(Diagram from <a href="https://en.wikipedia.org/wiki/File:Delannoy3x3.svg" rel="external noopener">Wikipedia</a>)</p><p>In fact, the number of cubes in a Haüy octahedron with N layers is the same
as the number of Delannoy steps on a 3<span class="times">×</span>N grid!</p><p>Since the two ideas are both geometric and fairly simple, I would love to
find a geometric explanation for the correspondence. The octahedron is
three-dimensional, and the Delannoy grids have that tantalizing 3 in them. It
seems like there should be a way to convert Haüy coordinates to Delannoy
coordinates to show how they relate. But I haven’t found one...</p><p class="bulletsep">• • •</p><p>Colophon: I made the octahedron diagrams by asking Claude to write a
<a href="https://nedbatchelder.com/code/diagrams/hauy/hauy_oct.py">Python program</a> to do it.
It wasn’t a fast process because it took pushing and prodding to get the
diagrams to come out the way I liked. But Claude was very competent, and I
could think about the results rather than about projections or color spaces. I
could dip into it for 10 minutes at a time over a number of days without having
to somehow reconstruct a mental context.</p><p>This kind of casual hobby programming is perfect for AI assistance. I don’t
need the code to be perfect or even good, I just want the diagrams to be nice.
I don’t have the focus time to learn how to write the program, so I can leave it
to an imperfect assistant.</p>
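<p>As a quick computational check of the Haüy/Delannoy correspondence above
(my own sketch, not part of the original post), here are a few lines of Python
that count both ways and print matching columns:</p><pre>
from functools import lru_cache

def hauy_cubes(n):
    """Cubes in a Haüy octahedron with n layers around the central cube."""
    # the k-th layer adds 4*k*k + 2 cubes: 6, 18, 38, ...
    return 1 + sum(4 * k * k + 2 for k in range(1, n + 1))

@lru_cache(maxsize=None)
def delannoy(m, n):
    """Lattice paths from (0, 0) to (m, n) using north, east, and northeast steps."""
    if m == 0 or n == 0:
        return 1
    return delannoy(m - 1, n) + delannoy(m, n - 1) + delannoy(m - 1, n - 1)

for n in range(8):
    print(n, hauy_cubes(n), delannoy(3, n))
# both columns give 1, 7, 25, 63, 129, 231, 377, 575
</pre>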
]]></description>
</item>
<item rdf:about="https://nedbatchelder.com/blog/202506/digital_equipment_corporation_no_more.html">
<title>Digital Equipment Corporation no more</title>
<link>https://nedbatchelder.com/blog/202506/digital_equipment_corporation_no_more.html</link>
<dc:date>2025-06-09T09:43:53-04:00</dc:date>
<dc:creator>Ned Batchelder</dc:creator>
<description><![CDATA[<p>Today is the 39th anniversary of my first day working for
<a rel="external noopener" href="https://en.wikipedia.org/wiki/Digital_Equipment_Corporation">Digital Equipment Corporation</a>. It was my first real job in
the tech world, two years out of college. <a href="https://nedbatchelder.com/blog/200606/digital_equipment_corporation.html">I wrote
about it 19 years ago</a>, but it’s on my mind again.</p><p>More and more, I find that people have never heard of Digital (as we called
it) or DEC (as they preferred we didn’t call it but everyone did). It’s
something I’ve had to get used to. I try to relate a story from that time, and
I find that even experienced engineers with deep knowledge of technologies don’t
know of the company.</p><p>I mention this not in a crabby “kids these days” kind of way. It does
surprise me, but I’m taking it as a learning opportunity. If there’s a lesson
to learn, it is this:</p><blockquote><div><p>This too shall pass.</p></div></blockquote><p>I am now working for Netflix, and one of the great things about it is that
everyone has heard of Netflix. I can mention my job to anyone and they are
impressed in some way. Techies know it as one of the FAANG companies, and
“civilians” know it for the entertainment it produces and delivers.</p><p>When I joined Digital in 1986, at least among tech people, it was similar.
Everyone knew about Digital and what they had done: the creation of the
minicomputer, the genesis of Unix and C, the ubiquitous VT100. Many foundations
of the software world flowed directly and famously from Digital.</p><p>These days Digital isn’t quite yet a footnote to history, but it is more and
more unknown even among the most tech-involved. And the tech world carries
on!</p><p>My small team at Netflix has a number of young engineers, less than two years
out of college, and even an intern still in college. I’m sure they felt
incredibly excited to join a company as well-known and influential as Netflix.
In 39 years, when they tell a story from the early days of their career, will they
start with, “Have you heard of Netflix?” and have to adjust to the blank stares
they get in return?</p><p>This too shall pass.</p>
]]></description>
</item>
<item rdf:about="https://nedbatchelder.com/blog/202505/pycon_summer_camp.html">
<title>PyCon summer camp</title>
<link>https://nedbatchelder.com/blog/202505/pycon_summer_camp.html</link>
<dc:date>2025-05-15T07:05:20-04:00</dc:date>
<dc:creator>Ned Batchelder</dc:creator>
<description><![CDATA[<p>I’m headed to PyCon today, and I’m reminded about how it feels like summer
camp, in mostly good ways, but also in a tricky way.</p><p>You take some time off from your “real” life, you go somewhere else, you hang
out with old friends and meet some new friends. You do different things than in
your real life, some are playful, some take real work. These are all good ways
it’s like summer camp.</p><p>Here’s the tricky thing to watch out for: like summer camp, you can make
connections to people or projects that are intense and feel like they could last
forever. You make friends at summer camp, or even have semi-romantic crushes on
people. You promise to stay in touch, you think it’s the “real thing.” When you
get home, you write an email or two, maybe a phone call, but it fades away. The
excitement of the summer is overtaken by your autumnal real life again.</p><p>PyCon can be the same way, either with people or projects. Not a romance,
but the exciting feeling that you want to keep doing the project you started at
PyCon, or be a member of some community you hung out with for those days. You
want to keep talking about that exciting thing with that person. These are
great feelings, but it’s easy to emotionally over-commit to those efforts and
then have them fade away once PyCon is over.</p><p>How do you know which projects are just crushes, and which are permanent
relationships? Maybe it doesn’t matter, and we should just get excited about
things.</p><p>I know I started at least one effort last year that I thought would be done
in a few months but that has since stalled. Now I am headed back to PyCon. Will I
become attached to yet more things this time? Is that bad? Should I temper my
enthusiasm, or is it fine to light a few fires and accept that some will peter
out?</p>
]]></description>
</item>
</rdf:RDF>