Congratulations!

[Valid Atom 1.0] This is a valid Atom 1.0 feed.

Recommendations

This feed is valid, but interoperability with the widest range of feed readers could be improved by implementing the following recommendations.

Source: http://www.cafeconleche.org/today.atom

  1. <?xml version="1.0"?>
  2. <feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-US" xml:base="http://www.cafeconleche.org/today.atom">
  3.  <logo>/images/cup.gif</logo>
  4.  <icon>/favicon.ico</icon>
  5.  <updated>2014-04-17T13:00:02-04:00</updated>
  6.  <id>http://www.cafeconleche.org/</id>
  7.  <title>Cafe con Leche XML News and Resources</title>
  8.  <link rel="self" type="application/atom+xml" href="/today.atom"/>
  9.  <subtitle>Cafe con Leche is the preeminent independent source of XML information on the net. Cafe con Leche is neither beholden to specific companies nor to advertisers. At Cafe con Leche you'll find many resources to help you develop your XML skills, including daily news summaries, examples, book reviews, mailing lists and more.</subtitle>
  10.  <rights>Copyright 2014 Elliotte Rusty Harold</rights>
  11.  <author>
  12.    <name>Elliotte Rusty Harold</name>
  13.  </author>
  14.  <entry>
  15.    <title>Quote of the Day</title>
  16.    <content type="xhtml">
  17.      <div xmlns="http://www.w3.org/1999/xhtml">
  18.        <blockquote cite="http://www.boingboing.net/2010/04/02/why-i-wont-buy-an-ipad-and-think-you-shouldnt-either.html">
  19.          <div>
  20. <p>
  21. I remember the early days of the web -- and the last days of CD ROM -- when there was this mainstream consensus that the web and PCs were too durned geeky and difficult and unpredictable for "my mom" (it's amazing how many tech people have an incredibly low opinion of their mothers). If I had a share of AOL for every time someone told me that the web would die because AOL was so easy and the web was full of garbage, I'd have a lot of AOL shares.
  22. </p>
  23.  
  24. <p>
  25. And they wouldn't be worth much.
  26. </p>
  27.  
  28. </div>
  29.        </blockquote>
  30.        <p>--Cory Doctorow <br class="empty" clear="none"/>
  31. Read the rest in <a href="http://www.boingboing.net/2010/04/02/why-i-wont-buy-an-ipad-and-think-you-shouldnt-either.html" shape="rect">Why I won't buy an iPad (and think you shouldn't, either) </a></p>
  32.      </div>
  33.    </content>
  34.    <author>
  35.      <name>Cory Doctorow</name>
  36.    </author>
  37.    <link rel="alternate" href="http://www.cafeconleche.org/quotes2014.html#quote2010April6"/>
  38.    <link rel="permalink" href="http://www.cafeconleche.org/quotes2014.html#quote2010April6"/>
  39.    <id>http://www.cafeconleche.org/quotes2014.html#quote2010April6</id>
  40.    <updated>2010-04-06T08:45:24Z</updated>
  41.  </entry>
  42.  <entry>
  43.    <title>I've released XOM 1.2.5, my free-as-in-speech (LGPL) dual streaming/tree-based API for processing XML with Java.
  44.          </title>
  45.    <content type="xhtml">
  46.      <div xmlns="http://www.w3.org/1999/xhtml" id="April_6_2010_25616" class="2010-04-06T07:07:56Z">
  47. <p>
  48. I've released <a href="http://www.xom.nu/" shape="rect">XOM 1.2.5</a>,  my free-as-in-speech (LGPL)
  49. dual streaming/tree-based API for processing XML with Java.
  50. 1.2.5 is a very minor release. The only visible change is that Builder.build((Reader) null) now throws a NullPointerException instead of a confusing MalformedURLException.
  51. I've also added support for Maven 2, and hope to get the packages uploaded to the central repository in a week or two.
  52. </p>
  53. </div>
  54.    </content>
  55.    <link rel="alternate" href="http://www.cafeconleche.org/#April_6_2010_25616"/>
  56.    <link rel="permalink" href="http://www.cafeconleche.org/oldnews/news2010April6.html#April_6_2010_25616"/>
  57.    <id>http://www.cafeconleche.org/#April_6_2010_25616</id>
  58.    <updated>2010-04-06T07:07:56Z</updated>
  59.  </entry>
  60.  <entry>
  61.    <title>In other news, I have had very little time to work on this site lately.
  62.          </title>
  63.    <content type="xhtml">
  64.      <div xmlns="http://www.w3.org/1999/xhtml" id="April_6_2010_25791" class="2010-04-06T07:10:51Z">
  65. <p>
  66. In other news, I have had very little time to work on this site lately.
  67. In order to have any time to work on other projects including XOM and Jaxen, I've had to let this site slide.
  68. I expect to have more news about that soon.
  69. </p>
  70. </div>
  71.    </content>
  72.    <link rel="alternate" href="http://www.cafeconleche.org/#April_6_2010_25791"/>
  73.    <link rel="permalink" href="http://www.cafeconleche.org/oldnews/news2010April6.html#April_6_2010_25791"/>
  74.    <id>http://www.cafeconleche.org/#April_6_2010_25791</id>
  75.    <updated>2010-04-06T07:10:51Z</updated>
  76.  </entry>
  77.  <entry>
  78.    <title>Also, speaking of Jaxen, I noticed that the website has been a little out of date for a while now because I neglected to update the releases page when 1.1.2 was released in 2008.
  79.          </title>
  80.    <content type="xhtml">
  81.      <div xmlns="http://www.w3.org/1999/xhtml" id="April_6_2010_26099" class="2010-04-06T07:15:59Z">
  82. <p>
  83. Also, speaking of Jaxen, I noticed that the <a href="http://jaxen.codehaus.org/" shape="rect">website</a> has been a little out of date for a while now because I neglected to update the <a href="http://jaxen.codehaus.org/releases.html" shape="rect">releases page</a> when 1.1.2 was released in 2008. Consequently, a lot of folks have been missing out on the latest bug fixes and optimizations. If you're still using Jaxen 1.1.1 or earlier, please upgrade when you get a minute. Also, note that the official site is http://jaxen.codehaus.org/. jaxen.org is a domain name spammer. I'm not sure who let that one slide, but we'll have to see about grabbing it back one of these days.
  84. </p>
  85. </div>
  86.    </content>
  87.    <link rel="alternate" href="http://www.cafeconleche.org/#April_6_2010_26099"/>
  88.    <link rel="permalink" href="http://www.cafeconleche.org/oldnews/news2010April6.html#April_6_2010_26099"/>
  89.    <id>http://www.cafeconleche.org/#April_6_2010_26099</id>
  90.    <updated>2010-04-06T07:15:59Z</updated>
  91.  </entry>
  92.  <entry>
  93.    <title>Yesterday I figured out how to process form input.
  94.          </title>
  95.    <content type="xhtml">
  96.      <div xmlns="http://www.w3.org/1999/xhtml" id="February_12_2010_24187" class="2010-02-12T07:43:07Z">
  97. <p>
  98. Yesterday I figured out how to process form input.
  99. Today I figured out how to parse strings into nodes in eXist. This is very eXist specific, but briefly:
  100. </p>
  101.  
  102. <pre xml:space="preserve">let $doc := "&lt;html xmlns='http://www.w3.org/1999/xhtml'&gt;
  103. &lt;div&gt;
  104. foo
  105. &lt;/div&gt;
  106. &lt;/html&gt;"
  107.  
  108. let $list := util:catch('*',
  109.            (util:parse($doc)),
  110.            ($util:exception-message))
  111. return
  112. $list</pre>
  113.  
  114. <p>
  115. I'll need this for posts and comments. There's also a <code>parse-html</code>
  116. function, but it's based on the flaky NekoHTML instead of the more reliable TagSoup.
  117. </p>
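 <p>
 For illustration only (assuming <code>parse-html</code> lives in the same util module as <code>util:parse</code> and, like it, takes a single string), the call would look something like this:
 </p>
 <pre xml:space="preserve">(: assumption: util:parse-html(string), analogous to util:parse(string) above :)
let $tag-soup := "&lt;p&gt;foo &lt;br&gt; bar"
let $nodes := util:catch('*',
           (util:parse-html($tag-soup)),
           ($util:exception-message))
return $nodes</pre>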
  118.  
  119. </div>
  120.    </content>
  121.    <link rel="alternate" href="http://www.cafeconleche.org/#February_12_2010_24187"/>
  122.    <link rel="permalink" href="http://www.cafeconleche.org/oldnews/news2010February12.html#February_12_2010_24187"/>
  123.    <id>http://www.cafeconleche.org/#February_12_2010_24187</id>
  124.    <updated>2010-02-12T07:43:07Z</updated>
  125.  </entry>
  126.  <entry>
  127.    <title>I'm slowly continuing to work on the new backend.
  128.          </title>
  129.    <content type="xhtml">
  130.      <div xmlns="http://www.w3.org/1999/xhtml" id="February_10_2010_20915" class="2010-02-10T06:49:35Z">
  131. <p>
  132. I'm slowly continuing to work on the new backend. I've finally gotten indexing to work. It turns out that eXist's namespace handling for index configuration files is broken in 1.4.0, but that should be fixed in the next release. I've also managed to
  133. get the source built and most of the tests to run so I can contribute patches back.
  134. Next up I'm looking into support for the Atom Publishing Protocol.
  135. </p>
  136. </div>
  137.    </content>
  138.    <link rel="alternate" href="http://www.cafeconleche.org/#February_10_2010_20915"/>
  139.    <link rel="permalink" href="http://www.cafeconleche.org/oldnews/news2010February10.html#February_10_2010_20915"/>
  140.    <id>http://www.cafeconleche.org/#February_10_2010_20915</id>
  141.    <updated>2010-02-10T06:49:35Z</updated>
  142.  </entry>
  143.  <entry>
  144.    <title>I spent a morning debugging a problem that I have now boiled down to this test case.
  145.          </title>
  146.    <content type="xhtml">
  147.      <div xmlns="http://www.w3.org/1999/xhtml" id="February_3_2010_24223" class="2010-02-05T07:44:43Z">
  148. <p>
  149. I spent a morning debugging a problem that
  150. I have now boiled down to this test case. The following query
  151. prints 3097:</p>
  152.  
  153. <pre xml:space="preserve">
  154. &lt;html&gt;
  155. {
  156. let $num := count(collection("/db/quotes")/quote)
  157. return $num
  158. }
  159. &lt;/html&gt;
  160. </pre>
  161.  
  162. and this query prints 0:
  163.  
  164. <pre xml:space="preserve">
  165. &lt;html xmlns="http://www.w3.org/1999/xhtml"&gt;
  166. {
  167. let $num := count(collection("/db/quotes")/quote)
  168. return $num
  169. }
  170. &lt;/html&gt;
  171. </pre>
  172.  
  173. <p>
  174. The only difference is the default namespace declaration. In the
  175. documents being queried the <code>quote</code> elements are indeed in no namespace.
  176. Much to my surprise XQuery has broken the
  177. semantics of XPath 1.0 by
  178. applying default namespaces to
  179. unqualified names in path expressions.
  180. Who thought it would
  181. be a good idea to break practice with XSLT, every single XPath
  182. implementation on the planet, and years of experience and
  183. documentation?
  184. There's an argument to be made for default namespaces applying in path
  185. expressions, but the time for that argument to be made was 1998. Once
  186. the choice was made, the cost of switching was far higher than any
  187. incremental improvement you might make. Stare decisis isn't just for
  188. the supreme court.</p>
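 <p>
 A workaround sketch (not from the original post): bind the XHTML namespace to a prefix on the constructed element instead of declaring it as the default element namespace, so that unqualified steps in the path expression still match no-namespace elements:
 </p>
 <pre xml:space="preserve">declare namespace h = "http://www.w3.org/1999/xhtml";

&lt;h:html&gt;
{
let $num := count(collection("/db/quotes")/quote)
return $num
}
&lt;/h:html&gt;</pre>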
  189. </div>
  190.    </content>
  191.    <link rel="alternate" href="http://www.cafeconleche.org/#February_3_2010_24223"/>
  192.    <link rel="permalink" href="http://www.cafeconleche.org/oldnews/news2010February3.html#February_3_2010_24223"/>
  193.    <id>http://www.cafeconleche.org/#February_3_2010_24223</id>
  194.    <updated>2010-02-05T07:44:43Z</updated>
  195.  </entry>
  196.  <entry>
  197.    <title>XQuery executing for about an hour now.
  198.          </title>
  199.    <content type="xhtml">
  200.      <div xmlns="http://www.w3.org/1999/xhtml" id="January_30_2010_35023" class="2010-01-30T10:44:43Z">
  201. <p>
  202. XQuery executing for about an hour now. O(N^2) algorithm perhaps?
  203. Maybe I should learn about indexes? Or is eXist just hung?
  204. </p>
  205.  
  206. <pre xml:space="preserve"><code>declare namespace xmldb="http://exist-db.org/xquery/xmldb";
  207. declare namespace html="http://www.w3.org/1999/xhtml";
  208. declare namespace xs="http://www.w3.org/2001/XMLSchema";
  209. declare namespace atom="http://www.w3.org/2005/Atom";
  210.  
  211. for $date in distinct-values(
  212.    for $updated in collection("/db/news")/atom:entry/atom:updated
  213.    order by $updated descending
  214.    return xs:date(xs:dateTime($updated)))
  215.    
  216. let $entries := collection("/db/news")/atom:entry[xs:date(xs:dateTime(atom:updated)) = $date]
  217. return &lt;div&gt;
  218.  for $entry in $entries
  219.  return $entry/atom:title
  220.  &lt;hr /&gt;
  221. &lt;/div&gt;</code></pre>
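 <p>
 (As an aside, not part of the original post: the FLWOR inside the constructed <code>div</code> above isn't wrapped in an enclosed expression, so it would come out as literal text. A sketch of the presumably intended form, with the braces added:)
 </p>
 <pre xml:space="preserve"><code>declare namespace xs="http://www.w3.org/2001/XMLSchema";
declare namespace atom="http://www.w3.org/2005/Atom";

for $date in distinct-values(
   for $updated in collection("/db/news")/atom:entry/atom:updated
   order by $updated descending
   return xs:date(xs:dateTime($updated)))

let $entries := collection("/db/news")/atom:entry[xs:date(xs:dateTime(atom:updated)) = $date]
return &lt;div&gt;{
   for $entry in $entries
   return $entry/atom:title
 }&lt;hr /&gt;&lt;/div&gt;</code></pre>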
  222. </div>
  223.    </content>
  224.    <link rel="alternate" href="http://www.cafeconleche.org/#January_30_2010_35023"/>
  225.    <link rel="permalink" href="http://www.cafeconleche.org/oldnews/news2010January30.html#January_30_2010_35023"/>
  226.    <id>http://www.cafeconleche.org/#January_30_2010_35023</id>
  227.    <updated>2010-01-30T10:44:43Z</updated>
  228.  </entry>
  229.  <entry>
  230.    <title>I've got a lot of the old data loaded into eXist (news and quotes; readings and other pages I still have to think about).
  231.          </title>
  232.    <content type="xhtml">
  233.      <div xmlns="http://www.w3.org/1999/xhtml" id="January_29_2010_24216" class="2010-01-29T07:44:36Z">
  234. <p>
  235. I've got a lot of the old data loaded into eXist (news and quotes; readings and other pages I still have to think about). I'm now focusing on how to
  236. get it back out again and put it in web pages. Once that's done, the remaining piece is setting up some system for putting new data in. It will probably be a fairly simple HTML form, but some sort of markdown support might be nice. Perhaps I can hack something together that will insert paragraphs if there are no existing paragraphs, and otherwise leave the markup alone. I'm also divided on the subject of whether to store the raw text, the XHTML converted text, or both. This will be even more critical when I add comment support.
  237. </p>
  238. </div>
  239.    </content>
  240.    <link rel="alternate" href="http://www.cafeconleche.org/#January_29_2010_24216"/>
  241.    <link rel="permalink" href="http://www.cafeconleche.org/oldnews/news2010January29.html#January_29_2010_24216"/>
  242.    <id>http://www.cafeconleche.org/#January_29_2010_24216</id>
  243.    <updated>2010-01-29T07:44:36Z</updated>
  244.  </entry>
  245.  <entry>
  246.    <title>I've more or less completed the script that converts the old news into Atom entry documents.</title>
  247.    <content type="xhtml">
  248.      <div xmlns="http://www.w3.org/1999/xhtml" id="January_26_2010_25980" class="2010-01-26T07:13:00Z">
  249. <p>
  250. I've more or less completed the script that converts the old
  251. news into Atom entry documents:
  252. </p>
  253.  
  254. <pre xml:space="preserve"><code>xquery version "1.0";
  255.  
  256. declare namespace xmldb="http://exist-db.org/xquery/xmldb";
  257. declare namespace html="http://www.w3.org/1999/xhtml";
  258. declare namespace xs="http://www.w3.org/2001/XMLSchema";
  259. declare namespace atom="http://www.w3.org/2005/Atom";
  260. declare namespace text="http://exist-db.org/xquery/text";
  261.  
  262. declare function local:leading-zero($n as xs:decimal) as xs:string {
  263.    let $result := if ($n &gt;= 10)
  264.    then string($n)
  265.    else concat("0", string($n))
  266.   return $result
  267. };
  268.  
  269. declare function local:parse-date($date as xs:string) as xs:string {
  270.    let $day := normalize-space(substring-before($date, ","))
  271.    let $string-date := normalize-space(substring-after($date, ","))
  272.    let $y1 := normalize-space(substring-after($string-date, ","))
  273.    (: strip permalink :)
  274.    let $year := if (contains($y1, "("))
  275.                 then normalize-space(substring-before($y1, "("))
  276.                 else $y1
  277.    
  278.    let $month-day := normalize-space(substring-before($string-date, ","))
  279.    let $months := ("January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December")
  280.    
  281.    let $month := substring-before($month-day, " ")
  282.    let $day-of-month := local:leading-zero(xs:decimal(substring-after($month-day, " ")))
  283.    let $monthnum := local:leading-zero(index-of($months,$month))
  284.    (: I don't necessarily know the time so I'll pick something vaguely plausible. :)
  285.    return concat($year, "-", $monthnum, "-", $day-of-month, "T07:00:31-05:00")
  286. };
  287.  
  288.  
  289. declare function local:first-sentence($text as xs:string) as xs:string {
  290.    let $r0 := normalize-space($text)
  291.    let $r1 := substring-before($text, '. ')
  292.    let $penultimate := substring($r1, string-length($r1)-1, 1)
  293.    let $sentence := if ($penultimate != " " or not(contains($r1, ' ')))
  294.                   then concat($r1, ".")
  295.                   else concat($r1, ". ", local:first-sentence($r1))
  296.    return $sentence
  297. };
  298.  
  299. declare function local:make-id($date as xs:string, $position as xs:integer) as xs:string {
  300.    let $day := normalize-space(substring-before($date, ","))
  301.    let $string-date := normalize-space(substring-after($date, ","))
  302.    let $y1 := normalize-space(substring-after($string-date, ","))
  303.    (: strip permalink :)
  304.    let $year := if (contains($y1, "("))
  305.                 then normalize-space(substring-before($y1, "("))
  306.                 else $y1
  307.    let $month-day := normalize-space(substring-before($string-date, ","))
  308.    let $months := ("January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December")
  309.    
  310.    let $month := substring-before($month-day, " ")
  311.    let $day-of-month := local:leading-zero(xs:decimal(substring-after($month-day, " ")))
  312.    let $monthnum := local:leading-zero(index-of($months,$month))
  313.    return concat($month, "_", $day-of-month, "_", $year, "_", $position)
  314. };
  315.  
  316.  
  317. declare function local:permalink-date($date as xs:string) as xs:string {
  318.    let $day := normalize-space(substring-before($date, ","))
  319.    let $string-date := normalize-space(substring-after($date, ","))
  320.    let $y1 := normalize-space(substring-after($string-date, ","))
  321.    (: strip permalink :)
  322.    let $year := if (contains($y1, "("))
  323.                 then normalize-space(substring-before($y1, "("))
  324.                 else $y1
  325.    let $month-day := normalize-space(substring-before($string-date, ","))
  326.    let $month := substring-before($month-day, " ")
  327.    let $day-of-month := xs:decimal(substring-after($month-day, " "))
  328.    return concat($year, $month, $day-of-month)
  329. };
  330.  
  331. for $newsyear in (1998 to 2009)
  332. return
  333. for $dt in doc(concat("file:///Users/elharo/cafe%20con%20Leche/news", $newsyear ,".html"))/html:html/html:body/html:dl/html:dt
  334. let $dd := $dt/following-sibling::html:dd[1]
  335. let $date := string($dt)
  336. let $itemstoday := count($dd/html:div)
  337.  
  338. return
  339.    for $item at $count in $dd/html:div
  340.    let $sequence := $itemstoday - $count + 1
  341.    let $id := if ($item/@id)
  342.               then string($item/@id)
  343.               else local:make-id($date, $sequence)      
  344.              
  345.    let $published := if ($item/@class)
  346.                 then string($item/@class)
  347.                 else local:parse-date($date)
  348.    let $link := concat("http://www.cafeconleche.org/#", $id)
  349.    let $permalink := if ($item/@id)
  350.                      then concat("http://www.cafeconleche.org/oldnews/news", local:permalink-date($date), ".html#", $item/@id)
  351.                      else concat("http://www.cafeconleche.org/oldnews/news", local:permalink-date($date), ".html")
  352.    return
  353.    &lt;atom:entry xml:id="{$id}"&gt;
  354.        &lt;atom:author&gt;
  355.         &lt;atom:name&gt;Elliotte Rusty Harold&lt;/atom:name&gt;
  356.         &lt;atom:uri&gt;http://www.elharo.com/&lt;/atom:uri&gt;
  357.       &lt;/atom:author&gt;
  358.       &lt;atom:id&gt;{$link}&lt;/atom:id&gt;
  359.       &lt;atom:title&gt;{local:first-sentence(string($item))}&lt;/atom:title&gt;
  360.       &lt;atom:updated&gt;{$published}&lt;/atom:updated&gt;
  361.       &lt;atom:content type="xhtml" xml:lang="en"
  362.           xml:base="http://www.cafeconleche.org/"
  363.           xmlns="http://www.w3.org/1999/xhtml"&gt;{$item/node()}&lt;/atom:content&gt;
  364.       &lt;link rel="alternate" href="{$link}"/&gt;
  365.       &lt;link rel="permalink" href="{$permalink}"/&gt;
  366.    &lt;/atom:entry&gt;
  367. </code></pre>
  368.  
  369. <p>
  370. I should probably figure out how to remove some of the duplicate date parsing code, but it's basically a one-off migration script so I may not bother.
  371. </p>
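 <p>
 For example (a sketch only, assuming the same "Weekday, Month Day, Year (permalink)" input format used above), the shared parsing could be pulled into one helper that the three date functions then call:
 </p>
 <pre xml:space="preserve"><code>declare function local:date-parts($date as xs:string) as xs:string+ {
   let $weekday     := normalize-space(substring-before($date, ","))
   let $string-date := normalize-space(substring-after($date, ","))
   let $y1          := normalize-space(substring-after($string-date, ","))
   (: strip permalink :)
   let $year        := if (contains($y1, "("))
                       then normalize-space(substring-before($y1, "("))
                       else $y1
   let $month-day   := normalize-space(substring-before($string-date, ","))
   return ($weekday,
           substring-before($month-day, " "),
           substring-after($month-day, " "),
           $year)
};

(: e.g. local:date-parts("Tuesday, January 26, 2010 (permalink)")
   returns ("Tuesday", "January", "26", "2010") :)</code></pre>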
  372.  
  373. <p>
  374. I think I have enough in place now that I can start setting up the templates for the
  375. main index.html page and the quote and news archives. Then I can start exploring
  376. the authoring half of the equation.
  377. </p>
  378. </div>
  379.    </content>
  380.    <link rel="alternate" href="http://www.cafeconleche.org/#January_26_2010_25980"/>
  381.    <link rel="permalink" href="http://www.cafeconleche.org/oldnews/news2010January26.html#January_26_2010_25980"/>
  382.    <id>http://www.cafeconleche.org/#January_26_2010_25980</id>
  383.    <updated>2010-01-26T07:13:00Z</updated>
  384.  </entry>
  385.  <entry>
  386.    <title>I'm beginning to seriously hate the runtime error handling (or lack thereof) in XQuery.
  387.          </title>
  388.    <content type="xhtml">
  389.      <div xmlns="http://www.w3.org/1999/xhtml" id="January_25_2010_23081" class="2010-01-25T06:25:41Z">
  390. <p>
  391. I'm beginning to seriously hate the runtime
  392. error handling (or lack thereof) in XQuery.
  393. It's just too damn hard to debug what's going wrong where compared to Java.
  394. You can't see where the bad data is coming from, and there's no try-catch facility to help you out.
  395. Now that I think about it, I had very similar problems with Haskell last year. I wonder if this is a common issue with functional languages?  
  396. </p>
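 <p>
 (A stopgap sketch, using the same eXist-specific <code>util:catch</code> extension shown with <code>util:parse</code> earlier in this feed, to at least surface the exception message:)
 </p>
 <pre xml:space="preserve">let $quotes := util:catch('*',
           (collection("/db/quotes")/quote),
           (concat("query failed: ", $util:exception-message)))
return $quotes</pre>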
  397. </div>
  398.    </content>
  399.    <link rel="alternate" href="http://www.cafeconleche.org/#January_25_2010_23081"/>
  400.    <link rel="permalink" href="http://www.cafeconleche.org/oldnews/news2010January25.html#January_25_2010_23081"/>
  401.    <id>http://www.cafeconleche.org/#January_25_2010_23081</id>
  402.    <updated>2010-01-25T06:25:41Z</updated>
  403.  </entry>
  404.  <entry>
  405.    <title>I've just about finished importing all the old quotes into eXist.
  406.          </title>
  407.    <content type="xhtml">
  408.      <div xmlns="http://www.w3.org/1999/xhtml" id="January_21_2010_24565" class="2010-01-21T07:49:25Z">
  409. <p>
  410. I've just about finished importing all the old quotes into eXist.
  411. (There was quite a bit of cleanup work going back 12 years. The format changed slowly over time.) Next up is the news.
  412. </p>
  413.  
  414. <p>
  415. I am wondering if maybe this is backwards. Perhaps first I should build the forms and backend for posting new content, and then import the old data? After all, it's the new content people are interested in. There's not that much call for breaking XML news from 1998. :-)
  416. </p>
  417. </div>
  418.    </content>
  419.    <link rel="alternate" href="http://www.cafeconleche.org/#January_21_2010_24565"/>
  420.    <link rel="permalink" href="http://www.cafeconleche.org/oldnews/news2010January21.html#January_21_2010_24565"/>
  421.    <id>http://www.cafeconleche.org/#January_21_2010_24565</id>
  422.    <updated>2010-01-21T07:49:25Z</updated>
  423.  </entry>
  424.  <entry>
  425.    <title>Parsing a date in the form "Wednesday, January 20, 2010" in XQuery.</title>
  426.    <content type="xhtml">
  427.      <div xmlns="http://www.w3.org/1999/xhtml" id="January_20_2010_18085" class="2010-01-20T05:01:25Z">
  428. <p>Parsing a date in the form "Wednesday, January 20, 2010" in XQuery:</p>
  429.  
  430. <pre xml:space="preserve">xquery version "1.0";
  431.  
  432. declare function local:leading-zero($n as xs:decimal) as xs:string {
  433.    let $result := if ($n &gt;= 10)
  434.    then string($n)
  435.    else concat("0", string($n))
  436.   return $result
  437. };
  438.  
  439. declare function local:parse-date($date as xs:string) as element() {
  440.    let $day := normalize-space(substring-before($date, ","))
  441.    let $string-date := normalize-space(substring-after($date, ","))
  442.    let $year := normalize-space(substring-after($string-date, ","))
  443.    let $month-day := normalize-space(substring-before($string-date, ","))
  444.    let $months := ("January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December")
  445.    
  446.    let $month := substring-before($month-day, " ")
  447.    let $day-of-month := number(substring-after($month-day, " "))
  448.    
  449.    return
  450.      &lt;postdate&gt;
  451.        &lt;day&gt;{$day}&lt;/day&gt;
  452.        &lt;date&gt;{$year}-{local:leading-zero(index-of($months,$month))}-{local:leading-zero($day-of-month)}&lt;/date&gt;
  453.      &lt;/postdate&gt;
  454. };
  455.  
  456.  
  457. local:parse-date("Monday, April 27, 2009")</pre>
  458.  
  459. </div>
  460.    </content>
  461.    <link rel="alternate" href="http://www.cafeconleche.org/#January_20_2010_18085"/>
  462.    <link rel="permalink" href="http://www.cafeconleche.org/oldnews/news2010January20.html#January_20_2010_18085"/>
  463.    <id>http://www.cafeconleche.org/#January_20_2010_18085</id>
  464.    <updated>2010-01-20T05:01:25Z</updated>
  465.  </entry>
  466.  <entry>
  467.    <title>Today I went from merely splitting the quotes files apart into individual quotes to actually storing them back into the database.</title>
  468.    <content type="xhtml">
  469.      <div xmlns="http://www.w3.org/1999/xhtml" id="January_19_2010_24487" class="2010-01-19T07:48:07Z">
  470. <p>
  471. Today I went from merely splitting the quotes files apart into individual quotes to actually storing them back into the database:
  472. </p>
  473.  
  474. <pre xml:space="preserve">xquery version "1.0";
  475. declare namespace xmldb="http://exist-db.org/xquery/xmldb";
  476. declare namespace html="http://www.w3.org/1999/xhtml";
  477.  
  478. for $dt in doc("/db/quoteshtml/quotes2009.html")/html:html/html:body/html:dl/html:dt
  479. let $id := string($dt/@id)
  480. let $date := string($dt)
  481. let $dd := $dt/following-sibling::html:dd[1]
  482. let $quote := $dd/html:blockquote
  483. let $cite := string($quote/@cite)
  484. let $source := $quote/following-sibling::*
  485. let $sourcetext := normalize-space(substring-after($source, "--"))
  486. let $author := if (contains($sourcetext, "Read the"))
  487.               then substring-before($sourcetext, "Read")
  488.               else substring-before($sourcetext, "on the")
  489. let $location := if ($source/html:a)
  490.               then $source/html:a
  491.               else substring-after($sourcetext, "on the")
  492. let $quotedate := if (contains($sourcetext, "list,"))
  493.               then  normalize-space(substring-after($sourcetext, "list,"))
  494.               else ""
  495. let $justlocation := if (contains($location, "list,"))
  496.               then  normalize-space(substring-after(substring-before($sourcetext, ","), "on the"))
  497.               else $location
  498. let $singlequote := &lt;quote&gt;
  499.   &lt;id&gt;{$id}&lt;/id&gt;
  500.   &lt;postdate&gt;{$date}&lt;/postdate&gt;
  501.   &lt;content&gt;{$quote}&lt;/content&gt;
  502.   &lt;cite&gt;{$cite}&lt;/cite&gt;
  503.   &lt;author&gt;{$author}&lt;/author&gt;
  504.   &lt;location&gt;{$justlocation}&lt;/location&gt;
  505.   {
  506.     if ($quotedate)
  507.     then &lt;quotedate&gt;{$quotedate}&lt;/quotedate&gt;
  508.     else ""
  509.   }
  510. &lt;/quote&gt;
  511.  
  512. let $name := concat("quote_", $id)
  513.  
  514. let $store-return := xmldb:store("quotes", $name, $singlequote)
  515.  
  516. return
  517. &lt;store-result&gt;
  518.   &lt;store&gt;{$store-return}&lt;/store&gt;
  519.   &lt;documentname&gt;{$name}&lt;/documentname&gt;
  520. &lt;/store-result&gt;</pre>
  521.  
  522. <p>
  523. I suspect the next thing I should do is work on improving the dates somewhat, since I'll likely want to sort and query by them. Right now they're human readable but not so easy to process. E.g.
  524. </p>
  525.  
  526.    <p><code>&lt;postdate&gt;Monday, April 27, 2009&lt;/postdate&gt;</code></p>
  527.  
  528. <p>
  529. I should try to turn this into
  530. </p>
  531.  
  532.    <pre xml:space="preserve"><code>&lt;postdate&gt;
  533.  &lt;day&gt;Monday&lt;/day&gt;
  534.  &lt;date&gt;2009-04-27&lt;/date&gt;
  535. &lt;/postdate&gt;</code></pre>
  536.  
  537.  
  538. <p>
  539. Time to read up on the <a href="http://www.w3.org/TR/xpath-functions/#durations-dates-times" shape="rect">XQuery date and time functions</a>. Hmm, looks like it's going to be regular expressions after all.
  540. </p>
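 <p>
 (A regex-flavored sketch of that conversion, for illustration only, using <code>tokenize</code> on a date like "Monday, April 27, 2009":)
 </p>
 <pre xml:space="preserve">xquery version "1.0";

declare function local:iso-date($d as xs:string) as xs:string {
   let $parts  := tokenize(normalize-space($d), ",\s*")
   let $months := ("January", "February", "March", "April", "May", "June",
                   "July", "August", "September", "October", "November", "December")
   let $m      := index-of($months, substring-before($parts[2], " "))
   let $day    := xs:integer(substring-after($parts[2], " "))
   let $mm     := if ($m lt 10) then concat("0", string($m)) else string($m)
   let $dd     := if ($day lt 10) then concat("0", string($day)) else string($day)
   return concat($parts[3], "-", $mm, "-", $dd)
};

local:iso-date("Monday, April 27, 2009")
(: returns "2009-04-27" :)</pre>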
  541.  
  542. </div>
  543.    </content>
  544.    <link rel="alternate" href="http://www.cafeconleche.org/#January_19_2010_24487"/>
  545.    <link rel="permalink" href="http://www.cafeconleche.org/oldnews/news2010January19.html#January_19_2010_24487"/>
  546.    <id>http://www.cafeconleche.org/#January_19_2010_24487</id>
  547.    <updated>2010-01-19T07:48:07Z</updated>
  548.  </entry>
  549.  <entry>
  550.    <title>I've converted all the old quotes archives to well-formed (though not necessarily valid) XHTML and uploaded them into eXist.
  551.          </title>
  552.    <content type="xhtml">
  553.      <div xmlns="http://www.w3.org/1999/xhtml" id="January_15_2010_19676" class="2010-01-15T05:28:56Z">
  554. <p>
  555. I've converted all the old quotes archives to well-formed (though not necessarily valid) XHTML and uploaded them into eXist. Now I have to come up with an XQuery that breaks them up into individual quotes. This is proving trickier than expected (and I expected it to be pretty tricky, especially since a lot of the old quotes aren't
  556. in perfectly consistent formats).</p>
  557.  
  558.  
  559. <p> Maybe it's time to try out Oxygen's <a href="http://www.oxygenxml.com/xquery_debugger.html" shape="rect">XQuery debugger</a> since they sent me a freebie? If only the interface weren't such a horror show. They say they have a debugger but I can't find it, and the buttons they're using in the screencast don't seem to be present in the latest version. In the meantime, can anyone see the syntax error in this code?
  560. </p>
  561.  
  562.  
  563. <pre xml:space="preserve">
  564. xquery version "1.0";
  565. declare namespace xmldb="http://exist-db.org/xquery/xmldb";
  566. declare namespace html="http://www.w3.org/1999/xhtml";
  567.  
  568.     for $dt in doc("/db/quoteshtml/quotes2010.html")/html:html/html:body/html:dl/html:dt
  569.        let $id := string($dt/@id)
  570.        let $date := string($dt)
  571.        let $dd := $dt/following-sibling::html:dd
  572.        let $quote := $dd/html:blockquote
  573.        let $cite := string($quote/@cite)
  574.        let $source := $quote/following-sibling::html:p
  575.        let $author := normalize-space(substring-after($source/*[1], "--"))
  576.     return
  577.        &lt;quote&gt;
  578.           &lt;id&gt;{$id}&lt;/id&gt;
  579.           &lt;date&gt;{$date}&lt;/date&gt;
  580.           &lt;quote&gt;{$quote}&lt;/quote&gt;
  581.           &lt;cite&gt;{$cite}&lt;/cite&gt;
  582.           &lt;source&gt;{$quote}&lt;/source&gt;
  583.           &lt;author&gt;{$author}&lt;/author&gt;
  584.        &lt;/quote&gt;
  585. </pre>
  586.  
  587. <p>The error message from eXist is "The actual cardinality for parameter 1 does not match the cardinality declared in the function's signature: string($arg as item()?) xs:string. Expected cardinality: zero or one, got 4."</p>
  588.  
  589. <p>
  590. Found the bug: the debugger wasn't very helpful (once I found it--apparently Author and Oxygen are not the same thing), but Saxon had much better error messages than eXist.
  591. I needed to change <code>let $dd := $dt/following-sibling::html:dd</code> to
  592. <code>let $dd := $dt/following-sibling::html:dd[1]</code>.
  593. eXist didn't tell me which line had the problem so I was looking in the wrong place. Saxon pointed me straight to it. Score 1 for Saxon.
  594. </p>
  595.  
  596. <p>
  597. Here's the finished script. It works for at least the last couple of years.
  598. I still have to test it out on some of the older files:
  599. </p>
  600.  
  601.  
  602. <pre xml:space="preserve">xquery version "1.0";
  603. declare namespace xmldb="http://exist-db.org/xquery/xmldb";
  604. declare namespace html="http://www.w3.org/1999/xhtml";
  605.  
  606. for $dt in doc("/db/quoteshtml/quotes2009.html")/html:html/html:body/html:dl/html:dt
  607.    let $id := string($dt/@id)
  608.    let $date := string($dt)
  609.    let $dd := $dt/following-sibling::html:dd[1]
  610.    let $quote := $dd/html:blockquote
  611.    let $cite := string($quote/@cite)
  612.    let $source := $quote/following-sibling::*
  613.    let $sourcetext := normalize-space(substring-after($source, "--"))
  614.    let $author := if (contains($sourcetext, "Read the"))
  615.                   then substring-before($sourcetext, "Read")
  616.                   else substring-before($sourcetext, "on the")
  617.    let $location := if ($source/html:a)
  618.                   then $source/html:a
  619.                   else substring-after($sourcetext, "on the")
  620.    let $quotedate := if (contains($sourcetext, "list,"))
  621.                   then  normalize-space(substring-after($sourcetext, "list,"))
  622.                   else ""
  623.    let $justlocation := if (contains($location, "list,"))
  624.                   then  normalize-space(substring-after(substring-before($sourcetext, ","), "on the"))
  625.                   else $location
  626. return
  627.    &lt;quote&gt;
  628.       &lt;id&gt;{$id}&lt;/id&gt;
  629.       &lt;postdate&gt;{$date}&lt;/postdate&gt;
  630.       &lt;quote&gt;{$quote}&lt;/quote&gt;
  631.       &lt;cite&gt;{$cite}&lt;/cite&gt;
  632.       &lt;author&gt;{$author}&lt;/author&gt;
  633.       &lt;location&gt;{$justlocation}&lt;/location&gt;
  634.       {
  635.         if ($quotedate)
  636.         then &lt;quotedate&gt;{$quotedate}&lt;/quotedate&gt;
  637.         else ""
  638.       }
  639.    &lt;/quote&gt;
  640. </pre>
  641.  
  642.  
  643. </div>
  644.    </content>
  645.    <link rel="alternate" href="http://www.cafeconleche.org/#January_15_2010_19676"/>
  646.    <link rel="permalink" href="http://www.cafeconleche.org/oldnews/news2010January15.html#January_15_2010_19676"/>
  647.    <id>http://www.cafeconleche.org/#January_15_2010_19676</id>
  648.    <updated>2010-01-15T05:28:56Z</updated>
  649.  </entry>
  650.  <entry>
  651.    <title>The XQuery work continues to roll along.
  652.          </title>
  653.    <content type="xhtml">
  654.      <div xmlns="http://www.w3.org/1999/xhtml" id="January_14_2010_23563" class="2010-01-14T07:33:43Z">
  655. <p>
  656. The XQuery work continues to roll along. I think I've roughly figured out how to configure the server. I found and reported a few more bugs in eXist, none too critical.
  657. I now have eXist serving this entire web site on my local box, though I haven't
  658. changed the server here on IBiblio yet. That's still Apache and PHP. The next step is to convert all the static files from the last 12 years--quotes, news, books, conferences, etc.--into smaller documents in the database. For instance, each quote will be its own document. Then I have to rewrite the pages as XQuery "templates" that query the database. From that point I can add support for new posts, submissions, and comments via a web browser and forms.
  659. </p>
  660. </div>
  661.    </content>
  662.    <link rel="alternate" href="http://www.cafeconleche.org/#January_14_2010_23563"/>
  663.    <link rel="permalink" href="http://www.cafeconleche.org/oldnews/news2010January14.html#January_14_2010_23563"/>
  664.    <id>http://www.cafeconleche.org/#January_14_2010_23563</id>
  665.    <updated>2010-01-14T07:33:43Z</updated>
  666.  </entry>
  667.  <entry>
  668.    <title>I didn't really like the format of yesterday's Twitter dump so today I opened another can of XQuery ass-kicking to improve it.
  669.          </title>
  670.    <content type="xhtml">
  671.      <div xmlns="http://www.w3.org/1999/xhtml" id="January_8_2010_26761" class="2010-01-08T07:26:01Z">
  672. <p>
  673. I didn't really like the format of yesterday's Twitter dump so today I opened another can of XQuery ass-kicking to improve it. First, let's group by date:
  674. </p>
  675.  
  676. <pre xml:space="preserve">
  677. xquery version "1.0";
  678. declare namespace atom="http://www.w3.org/2005/Atom";
  679.  
  680. let $tweets := for $entry in reverse(document("/db/twitter/elharo")/atom:feed/atom:entry)
  681. return
  682.  &lt;div&gt;&lt;date&gt;{substring-before($entry/atom:updated/text(), "T")}&lt;/date&gt; &lt;p&gt; &lt;span&gt;{substring-before(substring-after($entry/atom:updated/text(), "T"), "+")} UTC&lt;/span&gt; {substring-after($entry/atom:title/text(), "elharo:")}&lt;/p&gt;&lt;/div&gt;
  683.  
  684. return
  685.  for $date in distinct-values($tweets/date)
  686.  return &lt;div&gt;&lt;h3&gt;{$date}&lt;/h3&gt;
  687.   {
  688.   for $tweet in $tweets
  689.   where $tweet/date = $date
  690.   return $tweet/p
  691.  }&lt;/div&gt;
  692. </pre>
  693.  
  694. <p>
  695. Now let's hyperlink the URLs:
  696. </p>
  697.  
  698. <pre xml:space="preserve">xquery version "1.0";
  699. declare namespace atom="http://www.w3.org/2005/Atom";
  700.  
  701. let $tweets := for $entry in reverse(document("/db/twitter/elharo")/atom:feed/atom:entry)
  702. return
  703.  &lt;div&gt;&lt;date&gt;{substring-before($entry/atom:updated/text(), "T")}&lt;/date&gt; &lt;p&gt; &lt;span&gt;{substring-before(substring-after($entry/atom:updated/text(), "T"), "+")} &lt;/span&gt;
  704. {replace(substring-after($entry/atom:title/text(), "elharo:"),
  705. "(http://[^\s]+)",
  706. "&lt;a href='http://$1'&gt;http://$1&lt;/a&gt;")}&lt;/p&gt;&lt;/div&gt;
  707.  
  708. return
  709.  for $date in distinct-values($tweets/date)
  710.  return &lt;div&gt;&lt;h3&gt;{$date}&lt;/h3&gt;
  711.   {
  712.   for $tweet in $tweets
  713.   where $tweet/date = $date
  714.   return $tweet/p
  715.  }&lt;/div&gt;
  716. </pre>
  717.  
  718. <p>
  719. Let's do the same for @names:
  720. </p>
  721.  
  722. <pre xml:space="preserve">
  723. xquery version "1.0";
  724. declare namespace atom="http://www.w3.org/2005/Atom";
  725.  
  726. let $tweets := for $entry in reverse(document("/db/twitter/elharo")/atom:feed/atom:entry)
  727. return
  728.  &lt;div&gt;&lt;date&gt;{substring-before($entry/atom:updated/text(), "T")}&lt;/date&gt; &lt;p&gt; &lt;span&gt;{substring-before(substring-after($entry/atom:updated/text(), "T"), "+")} &lt;/span&gt;
  729. {
  730. replace (
  731.    replace(substring-after($entry/atom:title/text(), "elharo:"),
  732.        "(http://[^\s]+)",
  733.        "&lt;a href='$1'&gt;$1&lt;/a&gt;"),
  734.    " @([a-zA-Z]+)",
  735.    " &lt;a href='http://twitter.com/$1'&gt;@$1&lt;/a&gt;"
  736. )
  737. }&lt;/p&gt;&lt;/div&gt;
  738.  
  739. return
  740.  for $date in distinct-values($tweets/date)
  741.  return &lt;div&gt;&lt;h3&gt;{$date}&lt;/h3&gt;
  742.   {
  743.   for $tweet in $tweets
  744.   where $tweet/date = $date
  745.   return $tweet/p
  746.  }&lt;/div&gt;
  747. </pre>
  748.  
  749.  
  750. <p>
  751. And one more time for hash tags:
  752. </p>
  753.  
  754. <pre xml:space="preserve">
  755. xquery version "1.0";
  756. declare namespace atom="http://www.w3.org/2005/Atom";
  757.  
  758. let $tweets := for $entry in reverse(document("/db/twitter/elharo")/atom:feed/atom:entry)
  759. return
  760.  &lt;div&gt;&lt;date&gt;{substring-before($entry/atom:updated/text(), "T")}&lt;/date&gt; &lt;p&gt; &lt;span&gt;{substring-before(substring-after($entry/atom:updated/text(), "T"), "+")} &lt;/span&gt;
  761. {
  762. replace (
  763.    replace (
  764.        replace(substring-after($entry/atom:title/text(), "elharo:"),
  765.            "(http://[^\s]+)",
  766.            "&lt;a href='$1'&gt;$1&lt;/a&gt;"),
  767.        " @([a-zA-Z]+)",
  768.        " &lt;a href='http://twitter.com/$1'&gt;@$1&lt;/a&gt;"
  769.    ),
  770.    " #([a-zA-Z]+)",
  771.    " &lt;a href='http://twitter.com/search?q=#$1'&gt;#$1&lt;/a&gt;"
  772. )
  773. }&lt;/p&gt;&lt;/div&gt;
  774.  
  775. return
  776.  for $date in distinct-values($tweets/date)
  777.  return &lt;div&gt;&lt;h3&gt;{$date}&lt;/h3&gt;
  778.   {
  779.   for $tweet in $tweets
  780.   where $tweet/date = $date
  781.   return $tweet/p
  782.  }&lt;/div&gt;
  783. </pre>
  784.  
  785. <p>And here's the <a href="http://www.elharo.com/blog/software-development/xml/2010/01/08/all-my-tweets-in-2009/" shape="rect">finished result</a>. </p>
  786.  
  787. </div>
  788.    </content>
  789.    <link rel="alternate" href="http://www.cafeconleche.org/#January_8_2010_26761"/>
  790.    <link rel="permalink" href="http://www.cafeconleche.org/oldnews/news2010January8.html#January_8_2010_26761"/>
  791.    <id>http://www.cafeconleche.org/#January_8_2010_26761</id>
  792.    <updated>2010-01-08T07:26:01Z</updated>
  793.  </entry>
  794.  <entry>
  795.    <title>This morning a simple practice exercise to get my toes wet.
  796.          </title>
  797.    <content type="xhtml">
  798.      <div xmlns="http://www.w3.org/1999/xhtml" id="January_7_2010_27569" class="2010-01-07T08:39:29Z">
  799. <p>
  800. This morning a simple practice exercise to get my toes wet. First load my Tweets from their Atom feed into eXist:
  801. </p>
  802.  
  803. <pre xml:space="preserve">xquery version "1.0";
  804. declare namespace xmldb="http://exist-db.org/xquery/xmldb";
  805. let $collection := xmldb:create-collection("/db", "twitter")
  806. let $filename := ""
  807. let $URI := xs:anyURI("file:///Users/elharo/backups/elharo_statuses.xml")
  808. let $retcode := xmldb:store($collection, "elharo", $URI)
  809. return $retcode</pre>
  810.  
  811. <p>
  812. Then generate HTML of each tweet:
  813. </p>
  814.  
  815. <pre xml:space="preserve">xquery version "1.0";
  816. declare namespace atom="http://www.w3.org/2005/Atom";
  817. for $entry in document("/db/twitter/elharo")/atom:feed/atom:entry
  818.   return
  819.  &lt;p&gt;{$entry/atom:updated/text()} {substring-after($entry/atom:title/text(), "elharo:")}&lt;/p&gt;</pre>
  820.  
  821. <p>
  822. Can I reverse them so they go forward in time? Yes, easily:
  823. </p>
  824.  
  825. <pre xml:space="preserve">for $entry in reverse(document("/db/twitter/elharo")/atom:feed/atom:entry)</pre>
  826.  
  827. <p>
  828. Now how do I dump that to a file? Maybe something like this?
  829. </p>
  830.  
  831. <pre xml:space="preserve">xquery version "1.0";
  832. declare namespace atom="http://www.w3.org/2005/Atom";
  833. let $tweets := &lt;html&gt; {for $entry in document("/db/twitter/elharo")/atom:feed/atom:entry
  834.   return
  835.  &lt;p&gt;{$entry/atom:updated/text()} {substring-after($entry/atom:title/text(), "elharo:")}&lt;/p&gt;
  836. } &lt;/html&gt;
  837. return  xmldb:store("/db/twitter", "/Users/elharo/tmp/tweets.html", $tweets)</pre>
  838.  
  839. <p>
  840. Oh damn. Almost, but that puts it back into the database instead of the filesystem. Still, I can now run a query that grabs just that and copy and paste the result, since there's only one. The first query gave almost 1000 results and the query sandbox only shows one at a time.
  841. </p>
  842.  
  843. <p>
  844. Tomorrow: how do I serve that query as a web page?
  845. </p>
  846.  
  847. </div>
  848.    </content>
  849.    <link rel="alternate" href="http://www.cafeconleche.org/#January_7_2010_27569"/>
  850.    <link rel="permalink" href="http://www.cafeconleche.org/oldnews/news2010January7.html#January_7_2010_27569"/>
  851.    <id>http://www.cafeconleche.org/#January_7_2010_27569</id>
  852.    <updated>2010-01-07T08:39:29Z</updated>
  853.  </entry>
  854.  <entry>
  855.    <title>What I've learned about eXist so far.</title>
  856.    <content type="xhtml">
  857.      <div xmlns="http://www.w3.org/1999/xhtml" id="January_6_2010_26327" class="2010-01-06T07:19:47Z">
  858. <p>
  859. What I've learned about eXist so far:
  860. </p>
  861.  
  862. <ul>
  863. <li>I can use virtual hosting to run it, either at Rackspace Cloud, Amazon EC2, or right here on IBiblio; and use Jetty as my web server.</li>
  864.  
  865. <li>However I probably should proxy it behind Apache anyway.</li>
  866.  
  867. <li>I can upload files into the repository.</li>
  868.  
  869. <li>I can execute simple XQueries using the XQuery sandbox.</li>
  870. </ul>
  871.  
  872. <p>
  873. What I still don't know:
  874. </p>
  875.  
  876. <ul>
  877. <li>How to address the documents I upload from inside the XQuery sandbox; and in general how to manage and manipulate collections.</li>
  878. </ul>
  879.  
  880. <p>
  881. Partial answer:
  882. </p>
  883.  
  884.  
  885. <pre xml:space="preserve">xquery version "1.0";
  886. declare namespace xmldb="http://exist-db.org/xquery/xmldb";
  887. for $foo in collection("/db/<i>collectionname</i>")
  888. return $foo</pre>
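 <p>
 (And, as a related sketch with placeholder names: a single uploaded document can be addressed directly by its database path with <code>doc()</code>.)
 </p>
 <pre xml:space="preserve">xquery version "1.0";
doc("/db/<i>collectionname</i>/<i>documentname</i>.xml")</pre>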
  889.  
  890. </div>
  891.    </content>
  892.    <link rel="alternate" href="http://www.cafeconleche.org/#January_6_2010_26327"/>
  893.    <link rel="permalink" href="http://www.cafeconleche.org/oldnews/news2010January6.html#January_6_2010_26327"/>
  894.    <id>http://www.cafeconleche.org/#January_6_2010_26327</id>
  895.    <updated>2010-01-06T07:19:47Z</updated>
  896.  </entry>
  897.  <entry>
  898.    <title>First bug filed against eXist during this project.</title>
  899.    <content type="xhtml">
  900.      <div xmlns="http://www.w3.org/1999/xhtml" id="January_5_2010_19686" class="2010-01-05T05:28:06Z">
  901. <p>
  902. First bug filed against eXist during this project: <a href="https://sourceforge.net/tracker/?func=detail&amp;aid=2926143&amp;group_id=17691&amp;atid=117691" shape="rect">excessive confirmation</a>, a common UI anti-pattern, especially on Windows, though in this case it's cross-platform.
  903. </p>
  904.  
  905. <p>
  906. <a href="https://sourceforge.net/tracker/?func=detail&amp;aid=2926152&amp;group_id=17691&amp;atid=117691" shape="rect">Second bug filed</a>. This one comes with potential for data loss.
  907. </p>
  908.  
  909. <p>
  910. Third bug and I haven't even left the installer yet. Time to check out the source code. (I hope I don't have to fix IzPack too.)
  911. </p>
  912.  
  913. </div>
  914.    </content>
  915.    <link rel="alternate" href="http://www.cafeconleche.org/#January_5_2010_19686"/>
  916.    <link rel="permalink" href="http://www.cafeconleche.org/oldnews/news2010January5.html#January_5_2010_19686"/>
  917.    <id>http://www.cafeconleche.org/#January_5_2010_19686</id>
  918.    <updated>2010-01-05T05:28:06Z</updated>
  919.  </entry>
  920.  <entry>
  921.    <title>At the turn of the new year and a new decade, I've decided to explore some changes here.
  922.          </title>
  923.    <content type="xhtml">
  924.      <div xmlns="http://www.w3.org/1999/xhtml" id="January_4_2010_22139" class="2010-01-04T06:09:59Z">
  925. <p>
  926. At the turn of the new year and a new decade, I've decided to explore some changes here. Several points are behind this:
  927. </p>
  928.  
  929. <ol>
  930. <li>Since starting to work more as a software developer and less as an author, I don't have as much free time to work on these sites as I once did, nor is it as obviously relevant to my day job. When I was a full-time author, these sites gave me new ideas and  new things to write about. They still do, but I no longer have the time to write about those things. </li>
  931.  
  932. <li>Cafe con Leche and Cafe au Lait are some of the oldest blogs on the Web. In fact, I only know a couple that predate Cafe au Lait. When Cafe au Lait started, MySQL wasn't open source, and PHP, XML, and XSLT didn't exist yet. In other words, the technology that powers them is <em>old</em>. </li>
  933.  
  934. <li>WordPress helped me rethink a lot of how I suspect a blog site should work from the user interface side. These sites are a lot more automated and well-formed than they used to be, but it's still basically static HTML driven by some client-side AppleScript and XSLT run out of cron jobs. I'd like to do better. I considered just porting them to WordPress; but, as nice as the WordPress frontend is, it has some flaws, the most fundamental of which is that it's trying to stuff triangular pegs into rectangular holes. </li>
  935.  
  936. </ol>
  937.  
  938. <p>
  939. I don't have a lot of spare time these days, and what I do have is mostly occupied with photography and chasing birds; but I've decided that there's not a lot of point to continuing with this site as it is.
  940. </p>
  941.  
  942. <p>
  943. Don't worry though. It's not going away. I'm just going to focus on building a new infrastructure rather than on posting more news. I'm going to dogfood my work right here on Cafe con Leche. (I will keep Cafe au Lait on the old system until I'm happy with the new one.) I've decided to begin by experimenting with bringing the site up on top of eXist. It may go down in flames. It may not work at all. I may have to revert to the old version. It will probably sometimes be unavailable. There will have to be several iterations. But certainly along the way I'll learn a few things about XQuery databases, and just maybe I'll produce something that's more widely useful than a few bits of AppleScript and XSLT. See you on the other side!
  944. </p>
  945.  
  946. </div>
  947.    </content>
  948.    <link rel="alternate" href="http://www.cafeconleche.org/#January_4_2010_22139"/>
  949.    <link rel="permalink" href="http://www.cafeconleche.org/oldnews/news2010January4.html#January_4_2010_22139"/>
  950.    <id>http://www.cafeconleche.org/#January_4_2010_22139</id>
  951.    <updated>2010-01-04T06:09:59Z</updated>
  952.  </entry>
  953. </feed>
  954.  

If you would like to create a banner that links to this page (i.e. this validation result), do the following:

  1. Download the "valid Atom 1.0" banner.

  2. Upload the image to your own server. (This step is important. Please do not link directly to the image on this server.)

  3. Add this HTML to your page (change the image src attribute if necessary):

If you would like to create a text link instead, here is the URL you can use:

http://www.feedvalidator.org/check.cgi?url=http%3A//www.cafeconleche.org/today.atom

Copyright © 2002-9 Sam Ruby, Mark Pilgrim, Joseph Walton, and Phil Ringnalda