Congratulations!

[Valid RSS] This is a valid RSS feed.

Recommendations

This feed is valid, but interoperability with the widest range of feed readers could be improved by implementing the following recommendations.

Source: https://nike-tech.net/feed/

  1. <?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
  2. xmlns:content="http://purl.org/rss/1.0/modules/content/"
  3. xmlns:wfw="http://wellformedweb.org/CommentAPI/"
  4. xmlns:dc="http://purl.org/dc/elements/1.1/"
  5. xmlns:atom="http://www.w3.org/2005/Atom"
  6. xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
  7. xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
  8. >
  9.  
  10. <channel>
  11. <title>Nike Tech</title>
  12. <atom:link href="https://nike-tech.net/feed/" rel="self" type="application/rss+xml" />
  13. <link>https://nike-tech.net</link>
  14. <description>The Tech Blog</description>
  15. <lastBuildDate>Tue, 30 Apr 2024 07:00:46 +0000</lastBuildDate>
  16. <language>en-US</language>
  17. <sy:updatePeriod>
  18. hourly </sy:updatePeriod>
  19. <sy:updateFrequency>
  20. 1 </sy:updateFrequency>
  21. <generator>https://wordpress.org/?v=6.5.3</generator>
  22.  
  23. <image>
  24. <url>https://nike-tech.net/wp-content/uploads/2023/10/nike-tech-150x150.png</url>
  25. <title>Nike Tech</title>
  26. <link>https://nike-tech.net</link>
  27. <width>32</width>
  28. <height>32</height>
  29. </image>
  30. <item>
  31. <title>neuroClues wants to put high speed eye tracking tech in the doctor&#8217;s office</title>
  32. <link>https://nike-tech.net/neuroclues-wants-to-put-high-speed-eye-tracking-tech-in-the-doctors-office/</link>
  33. <comments>https://nike-tech.net/neuroclues-wants-to-put-high-speed-eye-tracking-tech-in-the-doctors-office/#comments</comments>
  34. <dc:creator><![CDATA[HBR]]></dc:creator>
  35. <pubDate>Tue, 30 Apr 2024 07:00:46 +0000</pubDate>
  36. <category><![CDATA[AI]]></category>
  37. <guid isPermaLink="false">https://nike-tech.net/neuroclues-wants-to-put-high-speed-eye-tracking-tech-in-the-doctors-office/</guid>
  38.  
  39. <description><![CDATA[<p>The eyes aren’t just a window into the soul; tracking saccades can help doctors pick up a range of brain health issues. That’s why French-Belgian medtech startup neuroClues is building accessible, high-speed eye-tracking technology that incorporates AI-driven analysis. It wants to make it easier for healthcare service providers to use eye tracking to support the [&#8230;]</p>
  40. <p>The post <a rel="nofollow" href="https://nike-tech.net/neuroclues-wants-to-put-high-speed-eye-tracking-tech-in-the-doctors-office/">neuroClues wants to put high speed eye tracking tech in the doctor&#8217;s office</a> appeared first on <a rel="nofollow" href="https://nike-tech.net">Nike Tech</a>.</p>
  41. ]]></description>
  42. <content:encoded><![CDATA[<div>
  43. <p id="speakable-summary">The eyes aren’t just a window into the soul; tracking saccades can help doctors pick up a range of brain health issues. That’s why French-Belgian medtech startup <a href="https://neuroclues.com" target="_blank" rel="noopener">neuroClues</a> is building accessible, high-speed eye-tracking technology that incorporates AI-driven analysis. It wants to make it easier for healthcare service providers to use eye tracking to support the diagnosis of neurodegenerative conditions.</p>
  44. <p>The company is starting with a focus on Parkinson’s disease, which already typically incorporates a test of a patient’s eye movement. Today, a doctor asks a patient to “follow my finger,” but neuroClues wants clinicians to use its proprietary, portable headsets to instead capture eye movements at 800 frames per second, after which they can run an analysis of the data in just a few seconds.</p>
  45. <p>The 3.5-year-old outfit’s co-founders — both neuroscience researchers — point to high rates of misdiagnosis of Parkinson’s as one of the factors informing their decision to focus on the disease first. But their <span style="font-size: 1rem; letter-spacing: -0.1px;">ambitions do pan wider. They paint a picture of the future in which their device becomes a “stethoscope for the brain.” </span>Imagine, for example, if your annual trip to the optician could pack in a quick scan of brain health, and compare you against standard benchmarks for your age. According to the startup, which says it aims to help 10 million patients by 2033, eye tracking protocols could also help test for other diseases and conditions including concussion, Alzheimer’s, MS and stroke.</p>
  46. <p>So how does the device work? Today, a patient looks through the headset and sees a screen where dots appear. A clinician then tells them to follow the dots with their eyes, after which the device extracts data that can be used as disease biomarkers by recording and analyzing their eye movements, measuring things like latency and error rate. It also provides the clinician with<span style="font-size: 1rem; letter-spacing: -0.1px;"> a standard value expected from a healthy population to compare with the patient’s results. </span></p>
  47. <p>“The first scientific paper that is using eye tracking to diagnose patients is 1905,” neuroClues co-founder and CEO Antoine Pouppez told Nike Tech in an exclusive interview, noting the technique was initially used for diagnosing schizophrenia. In the 1960s, when video eye trackers arrived, there was a boom in research into the technique for tracking<span style="font-size: 1rem; letter-spacing: -0.1px;"> neurological disorders. </span><span style="font-size: 1rem; letter-spacing: -0.1px;">But decades of research into the usefulness of eye-tracking as a diagnostic technique have not translated into widespread clinical uptake because the tech wasn’t there yet and/or was too expensive, said Pouppez.</span></p>
  48. <p><span style="font-size: 1rem; letter-spacing: -0.1px;">“That’s where this technology comes from: The frustration of my co-founders to see that eye tracking has a lot of value — that’s been demonstrated in research that has been clinically proven on thousands of patients in research setups — and it’s still not used in clinical practice,” he said. </span><span style="font-size: 1rem; letter-spacing: -0.1px;">“Doctors today use their fingers — and literally say ‘follow my finger’ — whereas an eye is moving at 600 degrees per second. You’re doing three eye movements per second. And so it’s very, very difficult — close to impossible — to evaluate how well you’re moving around [by human eye alone].”</span></p>
  49. <p>Others have similarly spotted the potential to do more with eye tracking as a diagnostic aid.</p>
  50. <p>U.S.-based <a href="https://www.neurosync.health" target="_blank" rel="noopener">Neurosync</a>, for example, offers a VR headset combined with FDA-cleared eye tracking software it says can analyze the wearer’s eye movements “as an aid to concussion diagnosis.” The product is geared toward football players and athletes in other contact sports who face elevated risk of head injury.</p>
  51. <p>There are also mobile app makers — such as <a href="https://braineye.com" target="_blank" rel="noopener">BrainEye</a> — pitching consumers on smartphone-based eye-tracking tech for self testing “brain health.” (Such claims are not evaluated by medical device regulators.)</p>
  52. <p>But neuroClues stands out in a variety of ways. First, it says its headset can be located in a regular clinician’s office, without the need for a dark room set-up or specialist computing hardware. It’s not using off-the-shelf hardware but instead developing dedicated eye-tracking headsets designed to record at high speed and control the recording environment. The outfit’s founders further argue that by building its own software, neuroClues enjoys <span style="font-size: 1rem; letter-spacing: -0.1px;">unrivaled speed of data capture in a commercially deployed, non-static device. </span></p>
  53. <p><span style="font-size: 1rem; letter-spacing: -0.1px;">To protect these ostensible advantages, neuroClues has a number of patents granted (or filed) that it says cover various aspects of the design, such as the synchronization of the hardware and software, and its approach to analyzing data.</span> The startup is also in the process of filing an application for FDA approval and hoping to gain clearance for use of its device as a clinical support tool in the US later this year. It is working on the same type of application in the European Union and anticipates gaining regulatory approval in the EU in 2025.</p>
  54. <p>“We are the only one on the market today that is recording an 800 frames per second on a portable device,” said Pouppez, noting that the research “gold standard” is 1,000 frames per second. “There is no clinical or non-clinical product that is doing it at that frame rate, which meant that we had to lift barriers that no one had lifted before.”</p>
  55. <div id="attachment_2697607" style="width: 4010px" class="wp-caption aligncenter"><img fetchpriority="high" decoding="async" aria-describedby="caption-attachment-2697607" class="wp-image-2697607 size-full" src="https://techcrunch.com/wp-content/uploads/2024/04/NeuroClues-Team.jpeg" alt="NeuroClues Team" width="4000" height="2252" srcset="https://techcrunch.com/wp-content/uploads/2024/04/NeuroClues-Team.jpeg 4000w, https://techcrunch.com/wp-content/uploads/2024/04/NeuroClues-Team.jpeg?resize=150,84 150w, https://techcrunch.com/wp-content/uploads/2024/04/NeuroClues-Team.jpeg?resize=300,169 300w, https://techcrunch.com/wp-content/uploads/2024/04/NeuroClues-Team.jpeg?resize=768,432 768w, https://techcrunch.com/wp-content/uploads/2024/04/NeuroClues-Team.jpeg?resize=680,383 680w, https://techcrunch.com/wp-content/uploads/2024/04/NeuroClues-Team.jpeg?resize=1536,865 1536w, https://techcrunch.com/wp-content/uploads/2024/04/NeuroClues-Team.jpeg?resize=2048,1153 2048w, https://techcrunch.com/wp-content/uploads/2024/04/NeuroClues-Team.jpeg?resize=1200,676 1200w, https://techcrunch.com/wp-content/uploads/2024/04/NeuroClues-Team.jpeg?resize=50,28 50w" sizes="(max-width: 4000px) 100vw, 4000px"/>
  56. <p id="caption-attachment-2697607" class="wp-caption-text">Image credit: neuroClues</p>
  57. </div>
  58. <p>neuroClues, which was incubated in the Paris Brain Institute, expects the first eye-tracking headsets to be deployed in specialist settings such as university hospitals, for use on patients who have already been referred to consultants. It notes the service will be reimbursable via existing health insurance codes as eye tracking tests are an established medical intervention. The company says it’s also talking to a number of other outfits in the U.S. and Europe that are interested in its hardware and software.</p>
  59. <p>This first version of the device is designed as a diagnostic aid, meaning that a human clinician is still responsible for interpreting the results. But Pouppez said the team’s goal is to evolve the technology to serve up interpretations of the data, too, so the device can be deployed more broadly.</p>
  60. <p>“Our goal is quickly to move down to bring that diagnostics capabilities to practitioners,” he told us. “We hope to be on the market with such a device in ’26/’27. And so to broaden up our market perspectives and really be in [the toolbox of] every neurologist in US and in Europe.”</p>
  61. <p>The startup is announcing close of a €5 million pre-Series A round of funding, led by White Fund and the European Commission’s EIC Accelerator program. Existing investors Invest.BW, plus a number of business angels, including Fiona du Monceau, former Chair of the Board at UCB, Artwall, and Olivier Legrain, CEO of IBA, also participated. Including this round neuroClues has raised a total of €12M since being founded back in 2020.</p>
  62. <p>Pouppez said it will be looking to raise a Series A in the next 12 to 18 months. “Our existing investors and the European Commission have already shown interest in participating, so basically I’m looking for a lead investor,” he added.</p>
  63. </div>
  64. <p>The post <a rel="nofollow" href="https://nike-tech.net/neuroclues-wants-to-put-high-speed-eye-tracking-tech-in-the-doctors-office/">neuroClues wants to put high speed eye tracking tech in the doctor&#8217;s office</a> appeared first on <a rel="nofollow" href="https://nike-tech.net">Nike Tech</a>.</p>
  65. ]]></content:encoded>
  66. <wfw:commentRss>https://nike-tech.net/neuroclues-wants-to-put-high-speed-eye-tracking-tech-in-the-doctors-office/feed/</wfw:commentRss>
  67. <slash:comments>1</slash:comments>
  68. </item>
  69. <item>
  70. <title>Google Gemini: Everything you need to know about the new generative AI platform</title>
  71. <link>https://nike-tech.net/google-gemini-everything-you-need-to-know-about-the-new-generative-ai-platform/</link>
  72. <comments>https://nike-tech.net/google-gemini-everything-you-need-to-know-about-the-new-generative-ai-platform/#respond</comments>
  73. <dc:creator><![CDATA[HBR]]></dc:creator>
  74. <pubDate>Mon, 29 Apr 2024 23:27:51 +0000</pubDate>
  75. <category><![CDATA[AI]]></category>
  76. <guid isPermaLink="false">https://nike-tech.net/google-gemini-everything-you-need-to-know-about-the-new-generative-ai-platform/</guid>
  77.  
  78. <description><![CDATA[<p>Google’s trying to make waves with Gemini, its flagship suite of generative AI models, apps and services. So what is Gemini? How can you use it? And how does it stack up to the competition? To make it easier to keep up with the latest Gemini developments, we’ve put together this handy guide, which we’ll [&#8230;]</p>
  79. <p>The post <a rel="nofollow" href="https://nike-tech.net/google-gemini-everything-you-need-to-know-about-the-new-generative-ai-platform/">Google Gemini: Everything you need to know about the new generative AI platform</a> appeared first on <a rel="nofollow" href="https://nike-tech.net">Nike Tech</a>.</p>
  80. ]]></description>
  81. <content:encoded><![CDATA[<div>
  82. <p id="speakable-summary">Google’s trying to make waves with Gemini, its flagship suite of generative AI models, apps and services.</p>
  83. <p>So what is Gemini? How can you use it? And how does it stack up to the competition?</p>
  84. <p>To make it easier to keep up with the latest Gemini developments, we’ve put together this handy guide, which we’ll keep updated as new Gemini models, features and news about Google’s plans for Gemini are released.</p>
  85. <h2>What is Gemini?</h2>
  86. <p id="speakable-summary">Gemini is Google’s <a href="https://www.wired.com/story/google-deepmind-demis-hassabis-chatgpt/" target="_blank" rel="noopener" data-mrf-link="https://www.wired.com/story/google-deepmind-demis-hassabis-chatgpt/">long-promised</a>, next-gen GenAI model family, developed by Google’s AI research labs DeepMind and Google Research. It comes in three flavors:</p>
  87. <ul>
  88. <li><strong>Gemini Ultra</strong>, the most performant Gemini model.</li>
  89. <li><strong>Gemini Pro</strong>, a “lite” Gemini model.</li>
  90. <li><strong>Gemini Nano</strong>, a smaller “distilled” model that runs on mobile devices like the Pixel 8 Pro.</li>
  91. </ul>
  92. <p>All Gemini models were trained to be “natively multimodal” — in other words, able to work with and use more than just words. They were pretrained and fine-tuned on a variety of audio, images and videos, a large set of codebases and text in different languages.</p>
  93. <p>This sets Gemini apart from models such as Google’s own LaMDA, which was trained exclusively on text data. LaMDA can’t understand or generate anything other than text (e.g., essays, email drafts), but that isn’t the case with Gemini models.</p>
  94. <h2>What’s the difference between the Gemini apps and Gemini models?</h2>
  95. <div id="attachment_2601757" style="width: 1034px" class="wp-caption aligncenter"><img fetchpriority="high" decoding="async" aria-describedby="caption-attachment-2601757" class="size-full wp-image-2601757" src="https://techcrunch.com/wp-content/uploads/2023/09/Extensions-title.png" alt="Google's Bard" width="1024" height="536" srcset="https://techcrunch.com/wp-content/uploads/2023/09/Extensions-title.png 1200w, https://techcrunch.com/wp-content/uploads/2023/09/Extensions-title.png?resize=150,79 150w, https://techcrunch.com/wp-content/uploads/2023/09/Extensions-title.png?resize=300,157 300w, https://techcrunch.com/wp-content/uploads/2023/09/Extensions-title.png?resize=768,402 768w, https://techcrunch.com/wp-content/uploads/2023/09/Extensions-title.png?resize=680,356 680w, https://techcrunch.com/wp-content/uploads/2023/09/Extensions-title.png?resize=50,26 50w" sizes="(max-width: 1024px) 100vw, 1024px"/>
  96. <p id="caption-attachment-2601757" class="wp-caption-text"><strong>Image Credits:</strong> Google</p>
  97. </div>
  98. <p>Google, proving once again that it lacks a knack for branding, didn’t make it clear from the outset that Gemini is separate and distinct from the Gemini apps on the web and mobile (formerly Bard). The Gemini apps are simply an interface through which certain Gemini models can be accessed — think of it as a client for Google’s GenAI.</p>
  99. <p>Incidentally, the Gemini apps and models are also totally independent from Imagen 2, Google’s text-to-image model that’s available in some of the company’s dev tools and environments.</p>
  100. <h2>What can Gemini do?</h2>
  101. <p>Because the Gemini models are multimodal, they can in theory perform a range of multimodal tasks, from transcribing speech to captioning images and videos to generating artwork. Few of these capabilities have reached the product stage yet (more on that later), and Google’s promising all of them — and more — at some point in the not-too-distant future.</p>
  102. <p>Of course, it’s a bit hard to take the company at its word.</p>
  103. <p>Google seriously underdelivered with the original Bard launch. And more recently it ruffled feathers with a video purporting to show Gemini’s capabilities that turned out to have been heavily doctored and was more or less aspirational.</p>
  104. <p>Still, assuming Google is being more or less truthful with its claims, here’s what the different tiers of Gemini will be able to do once they reach their full potential:</p>
  105. <h3>Gemini Ultra</h3>
  106. <p>Google says that Gemini Ultra — thanks to its multimodality — can be used to help with things like physics homework, solving problems step-by-step on a worksheet and pointing out possible mistakes in already filled-in answers.</p>
  107. <p>Gemini Ultra can also be applied to tasks such as identifying scientific papers relevant to a particular problem, Google says — extracting information from those papers and “updating” a chart from one by generating the formulas necessary to re-create the chart with more recent data.</p>
  108. <p>Gemini Ultra technically supports image generation, as alluded to earlier. But that capability hasn’t made its way into the productized version of the model yet — perhaps because the mechanism is more complex than how apps such as ChatGPT generate images. Rather than feed prompts to an image generator (like DALL-E 3, in ChatGPT’s case), Gemini outputs images “natively,” without an intermediary step.</p>
  109. <p>Gemini Ultra is available as an API through Vertex AI, Google’s fully managed AI developer platform, and AI Studio, Google’s web-based tool for app and platform developers. It also powers the Gemini apps — but not for free. Access to Gemini Ultra through what Google calls Gemini Advanced requires subscribing to the Google One AI Premium Plan, priced at $20 per month.</p>
  110. <p>The AI Premium Plan also connects Gemini to your wider Google Workspace account — think emails in Gmail, documents in Docs, presentations in Sheets and Google Meet recordings. That’s useful for, say, summarizing emails or having Gemini capture notes during a video call.</p>
  111. <h3>Gemini Pro</h3>
  112. <p>Google says that Gemini Pro is an improvement over LaMDA in its reasoning, planning and understanding capabilities.</p>
  113. <p>An independent <a href="https://arxiv.org/pdf/2312.11444.pdf" target="_blank" rel="noopener">study</a> by Carnegie Mellon and BerriAI researchers found that the initial version of Gemini Pro was indeed better than OpenAI’s GPT-3.5 at handling longer and more complex reasoning chains. But the study also found that, like all large language models, this version of Gemini Pro particularly struggled with mathematics problems involving several digits, and users found examples of bad reasoning and obvious mistakes.</p>
  114. <p>Google promised remedies, though — and the first arrived in the form of Gemini 1.5 Pro.</p>
  115. <p>Designed to be a drop-in replacement, Gemini 1.5 Pro is improved in a number of areas compared with its predecessor, perhaps most significantly in the amount of data that it can process. Gemini 1.5 Pro can take in ~700,000 words, or ~30,000 lines of code — 35x the amount Gemini 1.0 Pro can handle. And — the model being multimodal — it’s not limited to text. Gemini 1.5 Pro can analyze up to 11 hours of audio or an hour of video in a variety of different languages, albeit slowly (e.g., searching for a scene in a one-hour video takes 30 seconds to a minute of processing).</p>
  116. <p>Gemini 1.5 Pro entered public preview on Vertex AI in April.</p>
  117. <p>An additional endpoint, Gemini Pro Vision, can process text <em>and</em> imagery — including photos and video — and output text along the lines of OpenAI’s GPT-4 with Vision model.</p>
  118. <div id="attachment_2648159" style="width: 929px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-2648159" class="size-full wp-image-2648159" src="https://techcrunch.com/wp-content/uploads/2024/01/structured_prompt.png" alt="Gemini" width="919" height="600" srcset="https://techcrunch.com/wp-content/uploads/2024/01/structured_prompt.png 919w, https://techcrunch.com/wp-content/uploads/2024/01/structured_prompt.png?resize=150,98 150w, https://techcrunch.com/wp-content/uploads/2024/01/structured_prompt.png?resize=300,196 300w, https://techcrunch.com/wp-content/uploads/2024/01/structured_prompt.png?resize=768,501 768w, https://techcrunch.com/wp-content/uploads/2024/01/structured_prompt.png?resize=680,444 680w, https://techcrunch.com/wp-content/uploads/2024/01/structured_prompt.png?resize=50,33 50w" sizes="(max-width: 919px) 100vw, 919px"/>
  119. <p id="caption-attachment-2648159" class="wp-caption-text">Using Gemini Pro in Vertex AI. <strong>Image Credits:</strong> Gemini</p>
  120. </div>
  121. <p>Within Vertex AI, developers can customize Gemini Pro to specific contexts and use cases using a fine-tuning or “grounding” process. Gemini Pro can also be connected to external, third-party APIs to perform particular actions.</p>
  122. <p>In AI Studio, there are workflows for creating structured chat prompts using Gemini Pro. Developers have access to both Gemini Pro and the Gemini Pro Vision endpoints, and they can adjust the model temperature to control the output’s creative range and provide examples to give tone and style instructions — and also tune the safety settings.</p>
  123. <div class="container__access-control">
  124. <h3>Gemini Nano</h3>
  125. <p>Gemini Nano is a much smaller version of the Gemini Pro and Ultra models, and it’s efficient enough to run directly on (some) phones instead of sending the task to a server somewhere. So far, it powers a couple of features on the Pixel 8 Pro, Pixel 8 and Samsung Galaxy S24, including Summarize in Recorder and Smart Reply in Gboard.</p>
  126. <p>The Recorder app, which lets users push a button to record and transcribe audio, includes a Gemini-powered summary of your recorded conversations, interviews, presentations and other snippets. Users get these summaries even if they don’t have a signal or Wi-Fi connection available — and in a nod to privacy, no data leaves their phone in the process.</p>
  127. <p>Gemini Nano is also in Gboard, Google’s keyboard app. There, it powers a feature called Smart Reply, which helps to suggest the next thing you’ll want to say when having a conversation in a messaging app. The feature initially only works with WhatsApp but will come to more apps over time, Google says.</p>
  128. <p>And in the Google Messages app on supported devices, Nano enables Magic Compose, which can craft messages in styles like “excited,” “formal” and “lyrical.”</p>
  129. <h2>Is Gemini better than OpenAI’s GPT-4?</h2>
  130. <p>Google has several times <a href="https://blog.google/technology/ai/google-gemini-ai/" target="_blank" rel="noopener">touted</a> Gemini’s superiority on benchmarks, claiming that Gemini Ultra exceeds current state-of-the-art results on “30 of the 32 widely used academic benchmarks used in large language model research and development.” The company says that Gemini 1.5 Pro, meanwhile, is more capable at tasks like summarizing content, brainstorming and writing than Gemini Ultra in some scenarios; presumably this will change with the release of the next Ultra model.</p>
  131. <p>But leaving aside the question of whether benchmarks really indicate a better model, the scores Google points to appear to be only marginally better than OpenAI’s corresponding models. And — as mentioned earlier — some early impressions haven’t been great, with users and <a href="https://arxiv.org/pdf/2312.11444.pdf" target="_blank" rel="noopener">academics</a> pointing out that the older version of Gemini Pro tends to get basic facts wrong, struggles with translations and gives poor coding suggestions.</p>
  132. <h2>How much does Gemini cost?</h2>
  133. <p>Gemini 1.5 Pro is free to use in the Gemini apps and, for now, AI Studio and Vertex AI.</p>
  134. </div>
  135. <p>Once Gemini 1.5 Pro exits preview in Vertex, however, input will cost $0.0025 per character while output will cost $0.00005 per character. Vertex customers pay per 1,000 characters (about 140 to 250 words) and, in the case of models like Gemini Pro Vision, per image ($0.0025).</p>
  136. <p>Let’s assume a 500-word article contains 2,000 characters. Summarizing that article with Gemini 1.5 Pro would cost $5. Meanwhile, generating an article of a similar length would cost $0.1.</p>
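A minimal sketch of the arithmetic behind those two figures, using the preview rates quoted above ($0.0025 per input character, $0.00005 per output character); actual Vertex AI billing is per 1,000 characters and may change:

```python
# Reproduce the article's cost arithmetic for Gemini 1.5 Pro preview pricing.
INPUT_RATE = 0.0025    # USD per input character (rate quoted above)
OUTPUT_RATE = 0.00005  # USD per output character (rate quoted above)

article_chars = 2_000  # the article's assumed 500-word piece

# Summarizing: the 2,000-character article is billed as input.
summarize_cost = article_chars * INPUT_RATE

# Generating a similar-length piece: 2,000 characters billed as output.
generate_cost = article_chars * OUTPUT_RATE

print(f"summarize: ${summarize_cost:.2f}")  # summarize: $5.00
print(f"generate:  ${generate_cost:.2f}")   # generate:  $0.10
```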
  137. <p>Ultra pricing has yet to be announced.</p>
  138. <div class="container__access-control">
  139. <h2>Where can you try Gemini?</h2>
  140. <h3>Gemini Pro</h3>
  141. <p>The easiest place to experience Gemini Pro is in the Gemini apps. Pro and Ultra are answering queries in a range of languages.</p>
  142. <p>Gemini Pro and Ultra are also accessible in preview in Vertex AI via an API. The API is free to use “within limits” for the time being and supports certain regions, including Europe, as well as features like chat functionality and filtering.</p>
  143. <p>Elsewhere, Gemini Pro and Ultra can be found in AI Studio. Using the service, developers can iterate prompts and Gemini-based chatbots and then get API keys to use them in their apps — or export the code to a more fully featured IDE.</p>
  144. <p id="speakable-summary">Code Assist (formerly <a href="https://cloud.google.com/duet-ai?hl=en" target="_blank" rel="noopener" data-mrf-link="https://cloud.google.com/duet-ai?hl=en">Duet AI for Developers</a>), Google’s suite of AI-powered assistance tools for code completion and generation, is using Gemini models. Developers can perform “large-scale” changes across codebases, for example updating cross-file dependencies and reviewing large chunks of code.</p>
  145. <p>Google’s brought Gemini models to its dev tools for Chrome and Firebase mobile dev platform, and its database creation and management tools. And it’s launched new security products underpinned by Gemini, like <span style="font-size: 1rem; letter-spacing: -0.1px;">Gemini in Threat Intelligence, a component of Google’s Mandiant cybersecurity platform that can analyze large portions of potentially malicious code and let users perform natural language searches for ongoing threats or indicators of compromise.</span></p>
  146. </div></div>
  147. <p>The post <a rel="nofollow" href="https://nike-tech.net/google-gemini-everything-you-need-to-know-about-the-new-generative-ai-platform/">Google Gemini: Everything you need to know about the new generative AI platform</a> appeared first on <a rel="nofollow" href="https://nike-tech.net">Nike Tech</a>.</p>
  148. ]]></content:encoded>
  149. <wfw:commentRss>https://nike-tech.net/google-gemini-everything-you-need-to-know-about-the-new-generative-ai-platform/feed/</wfw:commentRss>
  150. <slash:comments>0</slash:comments>
  151. </item>
  152. <item>
  153. <title>NIST launches a new platform to assess generative AI</title>
  154. <link>https://nike-tech.net/nist-launches-a-new-platform-to-assess-generative-ai/</link>
  155. <comments>https://nike-tech.net/nist-launches-a-new-platform-to-assess-generative-ai/#respond</comments>
  156. <dc:creator><![CDATA[HBR]]></dc:creator>
  157. <pubDate>Mon, 29 Apr 2024 21:17:26 +0000</pubDate>
  158. <category><![CDATA[AI]]></category>
  159. <guid isPermaLink="false">https://nike-tech.net/nist-launches-a-new-platform-to-assess-generative-ai/</guid>
  160.  
  161. <description><![CDATA[<p>The National Institute of Standards and Technology (NIST), the U.S. Commerce Department agency that develops and tests tech for the U.S. government, corporations and the broader public, today announced the launch of NIST GenAI, a new program spearheaded by NIST to assess generative AI technologies, including text- and image-generating AI. A platform designed to evaluate [&#8230;]</p>
  162. <p>The post <a rel="nofollow" href="https://nike-tech.net/nist-launches-a-new-platform-to-assess-generative-ai/">NIST launches a new platform to assess generative AI</a> appeared first on <a rel="nofollow" href="https://nike-tech.net">Nike Tech</a>.</p>
  163. ]]></description>
  164. <content:encoded><![CDATA[<div id="">
  165. <div class="article__featured-image-wrapper breakout">
  166. <img decoding="async" src="https://techcrunch.com/wp-content/uploads/2024/02/department-of-the-interior-building.jpg?w=606" class="article__featured-image"/>
  167. </div>
  168. </div>
  169. <div>
  170. <p id="speakable-summary">The National Institute of Standards and Technology (NIST), the U.S. Commerce Department agency that develops and tests tech for the U.S. government, corporations and the broader public, today announced the launch of NIST GenAI, a new program spearheaded by NIST to assess generative AI technologies, including text- and image-generating AI.</p>
  171. <p>A platform designed to evaluate various forms of generative AI tech, NIST GenAI will release benchmarks, help create “content authenticity” detection (i.e. deepfake-checking) systems and encourage the development of software to spot the source of fake or misleading information, explains NIST on its <a href="https://ai-challenges.nist.gov/genai">newly-launched NIST GenAI site</a> and in a <a href="https://www.commerce.gov/news/press-releases/2024/04/department-commerce-announces-new-actions-implement-president-bidens">press release</a>.</p>
  172. <p>“The NIST GenAI program will issue a series of challenge problems designed to evaluate and measure the capabilities and limitations of generative AI technologies,” the press release reads. “These evaluations will be used to identify strategies to promote information integrity and guide the safe and responsible use of digital content.”</p>
  173. <p>NIST GenAI’s first project is a pilot study to build systems that can reliably tell the difference between human-created and AI-generated media, starting with text. (While many services purport to detect deepfakes, studies — and our own testing — have shown them to be unreliable, particularly when it comes to text.) NIST GenAI is inviting teams from academia, industry and research labs to submit either “generators” — AI systems to generate content — or “discriminators” — systems that try to identify AI-generated content.</p>
  174. <p>Generators in the study must produce summaries given a topic and a set of documents, while discriminators must detect whether a given summary is AI-written. To ensure fairness, NIST GenAI will provide the data necessary to train generators and discriminators; systems trained on publicly available data won’t be accepted, including but not limited to open models like Meta’s Llama 3.</p>
  175. <p>Registration for the pilot will begin May 1, with the results scheduled to be published in February 2025.</p>
  176. <p>NIST GenAI’s launch — and deepfake-focused study — comes as deepfakes grow exponentially.</p>
  177. <p>According to data from Clarity, a deepfake detection firm, 900% more deepfakes have been created this year compared to the same time frame last year. It’s causing alarm, understandably. A recent <a href="https://www.brennancenter.org/our-work/research-reports/deepfakes-elections-and-shrinking-liars-dividend" target="_blank" rel="noopener">poll</a> from YouGov found that 85% of Americans said they were concerned about the spread of misleading deepfakes online.</p>
  178. <p>The launch of NIST GenAI is a part of NIST’s response to President Joe Biden’s executive order on AI, which laid out rules requiring greater transparency from AI companies about how their models work and established a raft of new standards, including for labeling content generated by AI.</p>
  179. <p>It’s also the first AI-related announcement from NIST after the appointment of Paul Christiano, a former OpenAI researcher, to the agency’s AI Safety Institute.</p>
  180. <p>Christiano was a controversial choice for his “doomerist” views; he once <a href="https://www.businessinsider.com/openai-researcher-ai-doom-50-chatgpt-2023-5">predicted</a> that “there’s a 50% chance AI development could end in [humanity’s destruction].” <a href="https://venturebeat.com/ai/nist-staffers-revolt-against-potential-appointment-of-effective-altruist-ai-researcher-to-us-ai-safety-institute/">Critics</a> — including scientists within NIST, reportedly — fear Christiano may encourage the AI Safety Institute to focus on “fantasy scenarios” rather than realistic, more immediate risks from AI.</p>
  181. <p>NIST says that NIST GenAI will inform the AI Safety Institute’s work.</p>
  182. </div>
  183. <p>The post <a rel="nofollow" href="https://nike-tech.net/nist-launches-a-new-platform-to-assess-generative-ai/">NIST launches a new platform to assess generative AI</a> appeared first on <a rel="nofollow" href="https://nike-tech.net">Nike Tech</a>.</p>
  184. ]]></content:encoded>
  185. <wfw:commentRss>https://nike-tech.net/nist-launches-a-new-platform-to-assess-generative-ai/feed/</wfw:commentRss>
  186. <slash:comments>0</slash:comments>
  187. </item>
  188. <item>
  189. <title>Apple iPad event 2024: Watch Apple unveil new iPads right here</title>
  190. <link>https://nike-tech.net/apple-ipad-event-2024-watch-apple-unveil-new-ipads-right-here/</link>
  191. <comments>https://nike-tech.net/apple-ipad-event-2024-watch-apple-unveil-new-ipads-right-here/#respond</comments>
  192. <dc:creator><![CDATA[HBR]]></dc:creator>
  193. <pubDate>Mon, 29 Apr 2024 20:21:38 +0000</pubDate>
  194. <category><![CDATA[AI]]></category>
  195. <guid isPermaLink="false">https://nike-tech.net/apple-ipad-event-2024-watch-apple-unveil-new-ipads-right-here/</guid>
  196.  
  197. <description><![CDATA[<p>We’re still well over a month out from WWDC, but Apple went ahead and snuck in another event. On Tuesday, May 7 at 7 a.m. PT/10 a.m. ET, the company is set to unveil the latest additions to the iPad line. According to the rumor mill, that list includes: a new iPad Pro, iPad Air, [&#8230;]</p>
  198. <p>The post <a rel="nofollow" href="https://nike-tech.net/apple-ipad-event-2024-watch-apple-unveil-new-ipads-right-here/">Apple iPad event 2024: Watch Apple unveil new iPads right here</a> appeared first on <a rel="nofollow" href="https://nike-tech.net">Nike Tech</a>.</p>
  199. ]]></description>
  200. <content:encoded><![CDATA[<div>
  201. <p id="speakable-summary">We’re still well over a month out from WWDC, but Apple went ahead and snuck in another event. On Tuesday, <strong>May 7 at 7 a.m. PT/10 a.m. ET</strong>, the company is set to unveil the latest additions to the iPad line. According to <a href="https://www.bloomberg.com/news/videos/2024-04-26/apple-set-to-reveal-major-ipad-pro-revamp-video#:~:text=The%20iPad%20Pro%20will%20also,size%20for%20the%20first%20time." target="_blank" rel="noopener">the rumor mill</a>, that list includes a new iPad Pro, iPad Air, Apple Pencil and a keyboard case.</p>
  202. <p>More surprisingly, the event may also see the launch of the new M4 chip, a little over six months after the company unveiled three new M3 chips in one fell swoop. Why the quick silicon refresh? Well, for starters, word on the street is that Apple launched the M3 later than expected (likely owing to supply chain issues), forcing the company to launch all three chips at the same event.</p>
  203. <div id="attachment_2697740" style="width: 561px" class="wp-caption aligncenter"><img fetchpriority="high" decoding="async" aria-describedby="caption-attachment-2697740" class="size-full wp-image-2697740" src="https://techcrunch.com/wp-content/uploads/2024/04/apple-gif-ipad.gif" alt="" width="551" height="545"/>
  204. <p id="caption-attachment-2697740" class="wp-caption-text"><strong>Image Credits:</strong> Apple</p>
  205. </div>
  206. <p>Couple that with the fact that <a href="https://www.theverge.com/2024/4/10/24126276/microsoft-windows-on-arm-next-generation-ai-features-build-event" target="_blank" rel="noopener">Microsoft is rumored to be launching</a> its own Arm-based silicon at Build at the end of May, and you start to understand why the company opted not to wait. An announcement may be even more pressing, given that the Microsoft/ARM chips are said to offer “industry-leading performance” — an apparent shot across Apple’s bow. Could a new chip also mean new Macs? That would be a short refresh cycle for the current crop, but it’s certainly not out of the realm of possibility.</p>
  207. <p>What does seem certain, however, is a new iPad Pro with an OLED display, a 12.9-inch iPad Air and new gestures for the Apple Pencil. Also, expect <em>plenty </em>of AI chatter. It’s 2024, after all. You can watch along live at the link below, and stay tuned to Nike Tech for news as it breaks.</p>
  208. </div>
  209. <p>The post <a rel="nofollow" href="https://nike-tech.net/apple-ipad-event-2024-watch-apple-unveil-new-ipads-right-here/">Apple iPad event 2024: Watch Apple unveil new iPads right here</a> appeared first on <a rel="nofollow" href="https://nike-tech.net">Nike Tech</a>.</p>
  210. ]]></content:encoded>
  211. <wfw:commentRss>https://nike-tech.net/apple-ipad-event-2024-watch-apple-unveil-new-ipads-right-here/feed/</wfw:commentRss>
  212. <slash:comments>0</slash:comments>
  213. </item>
  214. <item>
  215. <title>Copilot Workspace is GitHub&#8217;s take on AI-powered software engineering</title>
  216. <link>https://nike-tech.net/copilot-workspace-is-githubs-take-on-ai-powered-software-engineering/</link>
  217. <comments>https://nike-tech.net/copilot-workspace-is-githubs-take-on-ai-powered-software-engineering/#respond</comments>
  218. <dc:creator><![CDATA[HBR]]></dc:creator>
  219. <pubDate>Mon, 29 Apr 2024 16:00:11 +0000</pubDate>
  220. <category><![CDATA[AI]]></category>
  221. <guid isPermaLink="false">https://nike-tech.net/copilot-workspace-is-githubs-take-on-ai-powered-software-engineering/</guid>
  222.  
  223. <description><![CDATA[<p>Is the future of software development an AI-powered IDE? GitHub’s floating the idea. At its annual GitHub Universe conference in San Francisco on Monday, GitHub announced Copilot Workspace, a dev environment that taps what GitHub describes as “Copilot-powered agents” to help developers brainstorm, plan, build, test and run code in natural language. Jonathan Carter, head [&#8230;]</p>
  224. <p>The post <a rel="nofollow" href="https://nike-tech.net/copilot-workspace-is-githubs-take-on-ai-powered-software-engineering/">Copilot Workspace is GitHub&#8217;s take on AI-powered software engineering</a> appeared first on <a rel="nofollow" href="https://nike-tech.net">Nike Tech</a>.</p>
  225. ]]></description>
  226. <content:encoded><![CDATA[<div>
  227. <p id="speakable-summary">Is the future of software development an AI-powered IDE? GitHub’s floating the idea.</p>
  228. <p>At its annual GitHub Universe conference in San Francisco on Monday, GitHub announced Copilot Workspace, a dev environment that taps what GitHub describes as “Copilot-powered agents” to help developers brainstorm, plan, build, test and run code in natural language.</p>
  229. <p>Jonathan Carter, head of GitHub Next, GitHub’s software R&amp;D team, pitches Workspace as somewhat of an evolution of GitHub’s AI-powered coding assistant Copilot into a more general tool, building on recently introduced capabilities like Copilot Chat, which lets developers ask questions about code in natural language.</p>
  230. <p>“Through research, we found that, for many tasks, the biggest point of friction for developers was in getting started, and in particular knowing how to approach a [coding] problem, knowing which files to edit and knowing how to consider multiple solutions and their trade-offs,” Carter said. “So we wanted to build an AI assistant that could meet developers at the inception of an idea or task, reduce the activation energy needed to begin and then collaborate with them on making the necessary edits across the entire codebase.”</p>
  231. <p>At last count, Copilot had over 1.8 million paying individual and 50,000 enterprise customers. But Carter envisions a far larger base, drawn in by feature expansions with broad appeal, like Workspace.</p>
  232. <p>“Since developers spend a lot of their time working on [coding issues], we believe we can help empower developers every day through a ‘thought partnership’ with AI,” Carter said. “You can think of Copilot Workspace as a companion experience and dev environment that complements existing tools and workflows and enables simplifying a class of developer tasks … We believe there’s a lot of value that can be delivered in an AI-native developer environment that isn’t constrained by existing workflows.”</p>
  233. <p>There’s certainly internal pressure to make Copilot profitable.</p>
  234. <p>Copilot <a href="https://www.wsj.com/tech/ai/ais-costly-buildup-could-make-early-products-a-hard-sell-bdd29b9f" target="_blank" rel="noopener">loses an average of $20 a month per user</a>, according to a Wall Street Journal report, with some customers costing GitHub as much as $80 a month. And the number of rival services continues to grow. There’s Amazon’s CodeWhisperer, which the company made free to individual developers late last year. There are also startups, like Magic, Tabnine, Codegen and Laredo.</p>
  235. <p>Given a GitHub repo or a specific bug within a repo, Workspace — underpinned by OpenAI’s GPT-4 Turbo model — can build a plan to (attempt to) squash the bug or implement a new feature, drawing on an understanding of the repo’s comments, issue replies and larger codebase. Developers get suggested code for the bug fix or new feature, along with a list of the things they need to validate and test that code, plus controls to edit, save, refactor or undo it.</p>
  236. <div id="attachment_2695623" style="width: 1034px" class="wp-caption aligncenter"><img fetchpriority="high" decoding="async" aria-describedby="caption-attachment-2695623" class="size-full wp-image-2695623" src="https://techcrunch.com/wp-content/uploads/2024/04/Entrypoint-PR.png" alt="GitHub Workspace" width="1024" height="709" srcset="https://techcrunch.com/wp-content/uploads/2024/04/Entrypoint-PR.png 2994w, https://techcrunch.com/wp-content/uploads/2024/04/Entrypoint-PR.png?resize=150,104 150w, https://techcrunch.com/wp-content/uploads/2024/04/Entrypoint-PR.png?resize=300,208 300w, https://techcrunch.com/wp-content/uploads/2024/04/Entrypoint-PR.png?resize=768,532 768w, https://techcrunch.com/wp-content/uploads/2024/04/Entrypoint-PR.png?resize=680,471 680w, https://techcrunch.com/wp-content/uploads/2024/04/Entrypoint-PR.png?resize=1536,1064 1536w, https://techcrunch.com/wp-content/uploads/2024/04/Entrypoint-PR.png?resize=2048,1419 2048w, https://techcrunch.com/wp-content/uploads/2024/04/Entrypoint-PR.png?resize=1200,831 1200w, https://techcrunch.com/wp-content/uploads/2024/04/Entrypoint-PR.png?resize=50,35 50w" sizes="(max-width: 1024px) 100vw, 1024px"/>
  237. <p id="caption-attachment-2695623" class="wp-caption-text"><strong>Image Credits:</strong> GitHub</p>
  238. </div>
  239. <p>The suggested code can be run directly in Workspace and shared among team members via an external link. Those team members, once in Workspace, can refine and tinker with the code as they see fit.</p>
  240. <p>Perhaps the most obvious way to launch Workspace is from the new “Open in Workspace” button to the left of issues and pull requests in GitHub repos. Clicking on it opens a field to describe the software engineering task to be completed in natural language, like, “Add documentation for the changes in this pull request,” which, once submitted, gets added to a list of “sessions” within the new dedicated Workspace view.</p>
  241. <div id="attachment_2695624" style="width: 1034px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-2695624" class="size-full wp-image-2695624" src="https://techcrunch.com/wp-content/uploads/2024/04/Feature-Dashboard.png" alt="GitHub Workspace" width="1024" height="570" srcset="https://techcrunch.com/wp-content/uploads/2024/04/Feature-Dashboard.png 3428w, https://techcrunch.com/wp-content/uploads/2024/04/Feature-Dashboard.png?resize=150,83 150w, https://techcrunch.com/wp-content/uploads/2024/04/Feature-Dashboard.png?resize=300,167 300w, https://techcrunch.com/wp-content/uploads/2024/04/Feature-Dashboard.png?resize=768,427 768w, https://techcrunch.com/wp-content/uploads/2024/04/Feature-Dashboard.png?resize=680,378 680w, https://techcrunch.com/wp-content/uploads/2024/04/Feature-Dashboard.png?resize=1536,855 1536w, https://techcrunch.com/wp-content/uploads/2024/04/Feature-Dashboard.png?resize=2048,1140 2048w, https://techcrunch.com/wp-content/uploads/2024/04/Feature-Dashboard.png?resize=1200,668 1200w, https://techcrunch.com/wp-content/uploads/2024/04/Feature-Dashboard.png?resize=50,28 50w" sizes="(max-width: 1024px) 100vw, 1024px"/>
  242. <p id="caption-attachment-2695624" class="wp-caption-text"><strong>Image Credits:</strong> GitHub</p>
  243. </div>
  244. <p>Workspace executes requests systematically step by step, creating a specification, generating a plan and then implementing that plan. Developers can dive into any of these steps to get a granular view of the suggested code and changes and delete, re-run or re-order the steps as necessary.</p>
  245. <p>“If you ask any developer where they tend to get stuck with a new project, you’ll often hear them say it’s knowing where to start,” Carter said. “Copilot Workspace lifts that burden and gives developers a plan to start iterating from.”</p>
  246. <div id="attachment_2695625" style="width: 1034px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-2695625" class="size-full wp-image-2695625" src="https://techcrunch.com/wp-content/uploads/2024/04/Feature-Terminal.png" alt="GitHub Workspace" width="1024" height="678" srcset="https://techcrunch.com/wp-content/uploads/2024/04/Feature-Terminal.png 3436w, https://techcrunch.com/wp-content/uploads/2024/04/Feature-Terminal.png?resize=150,99 150w, https://techcrunch.com/wp-content/uploads/2024/04/Feature-Terminal.png?resize=300,199 300w, https://techcrunch.com/wp-content/uploads/2024/04/Feature-Terminal.png?resize=768,509 768w, https://techcrunch.com/wp-content/uploads/2024/04/Feature-Terminal.png?resize=680,450 680w, https://techcrunch.com/wp-content/uploads/2024/04/Feature-Terminal.png?resize=1536,1017 1536w, https://techcrunch.com/wp-content/uploads/2024/04/Feature-Terminal.png?resize=2048,1357 2048w, https://techcrunch.com/wp-content/uploads/2024/04/Feature-Terminal.png?resize=1200,795 1200w, https://techcrunch.com/wp-content/uploads/2024/04/Feature-Terminal.png?resize=50,33 50w" sizes="(max-width: 1024px) 100vw, 1024px"/>
  247. <p id="caption-attachment-2695625" class="wp-caption-text"><strong>Image Credits:</strong> GitHub</p>
  248. </div>
  249. <p>Workspace enters technical preview on Monday, optimized for a range of devices including mobile.</p>
  250. <p>Importantly, because it’s in preview, Workspace isn’t covered by GitHub’s IP indemnification policy, which promises to assist with the legal fees of customers facing third-party claims alleging that the AI-generated code they’re using infringes on IP. (Generative AI models notoriously <a href="https://news.ycombinator.com/item?id=27710287" target="_blank" rel="noopener">regurgitate</a> their training data sets, and GPT-4 Turbo was trained partly on copyrighted code.)</p>
  251. <p>GitHub says that it hasn’t determined how it’s going to productize Workspace, but that it’ll use the preview to “learn more about the value it delivers and how developers use it.”</p>
  252. <p>I think the more important question is: Will Workspace fix the existential issues surrounding Copilot and other AI-powered coding tools?</p>
  253. <p>An analysis of over 150 million lines of code committed to project repos over the past several years by GitClear, the developer of the code analysis tool of the same name, found that <a href="https://visualstudiomagazine.com/Articles/2024/01/25/copilot-research.aspx" target="_blank" rel="noopener">Copilot was resulting in more mistaken code</a> being pushed to codebases and more code being re-added as opposed to reused and streamlined, creating headaches for code maintainers.</p>
  254. <p>Elsewhere, security researchers have warned that Copilot and similar tools can <a href="https://www.techtarget.com/searchsecurity/news/366571117/GitHub-Copilot-replicating-vulnerabilities-insecure-code" target="_blank" rel="noopener">amplify existing bugs and security issues in software projects</a>. And Stanford researchers have found that developers who accept suggestions from AI-powered coding assistants <a href="https://www.theregister.com/2022/12/21/ai_assistants_bad_code/" target="_blank" rel="noopener">tend to produce less secure code</a>. (GitHub stressed to me that it uses an AI-based vulnerability prevention system to try to block insecure code in addition to an optional code duplication filter to detect regurgitations of public code.)</p>
  255. <p>Yet devs aren’t shying away from AI.</p>
  256. <p>In a StackOverflow <a href="https://stackoverflow.blog/2023/06/14/hype-or-not-developers-have-something-to-say-about-ai/" target="_blank" rel="noopener">poll</a> from June 2023, 44% of developers said that they use AI tools in their development process now, and 26% plan to soon. Gartner <a href="https://www.gartner.com/en/newsroom/press-releases/2024-04-11-gartner-says-75-percent-of-enterprise-software-engineers-will-use-ai-code-assistants-by-2028" target="_blank" rel="noopener">predicts</a> that 75% of enterprise software engineers will employ AI code assistants by 2028.</p>
  257. <p>By emphasizing human review, perhaps Workspace can indeed help clean up some of the mess introduced by AI-generated code. We’ll find out soon enough as Workspace makes its way into developers’ hands.</p>
  258. <p>“Our primary goal with Copilot Workspace is to leverage AI to reduce complexity so developers can express their creativity and explore more freely,” Carter said. “We truly believe the combination of human plus AI is always going to be superior to one or the other alone, and that’s what we’re betting on with Copilot Workspace.”</p>
  259. </div>
  260. <p>The post <a rel="nofollow" href="https://nike-tech.net/copilot-workspace-is-githubs-take-on-ai-powered-software-engineering/">Copilot Workspace is GitHub&#8217;s take on AI-powered software engineering</a> appeared first on <a rel="nofollow" href="https://nike-tech.net">Nike Tech</a>.</p>
  261. ]]></content:encoded>
  262. <wfw:commentRss>https://nike-tech.net/copilot-workspace-is-githubs-take-on-ai-powered-software-engineering/feed/</wfw:commentRss>
  263. <slash:comments>0</slash:comments>
  264. </item>
  265. <item>
  266. <title>Nike Tech Minute: Elon Musk’s big plans for xAI include raising $6 billion</title>
  267. <link>https://nike-tech.net/nike-tech-minute-elon-musks-big-plans-for-xai-include-raising-6-billion/</link>
  268. <comments>https://nike-tech.net/nike-tech-minute-elon-musks-big-plans-for-xai-include-raising-6-billion/#respond</comments>
  269. <dc:creator><![CDATA[HBR]]></dc:creator>
  270. <pubDate>Mon, 29 Apr 2024 16:00:02 +0000</pubDate>
  271. <category><![CDATA[AI]]></category>
  272. <guid isPermaLink="false">https://nike-tech.net/nike-tech-minute-elon-musks-big-plans-for-xai-include-raising-6-billion/</guid>
  273.  
  274. <description><![CDATA[<p>Nike Tech recently broke the news that Elon Musk’s xAI is raising $6 billion at a pre-money valuation of $18 billion. The deal hasn’t closed yet, so the numbers could change. But it sounds like Musk is making an ambitious pitch to investors about his 10-month-old startup — a rival to OpenAI, which he also [&#8230;]</p>
  275. <p>The post <a rel="nofollow" href="https://nike-tech.net/nike-tech-minute-elon-musks-big-plans-for-xai-include-raising-6-billion/">Nike Tech Minute: Elon Musk’s big plans for xAI include raising $6 billion</a> appeared first on <a rel="nofollow" href="https://nike-tech.net">Nike Tech</a>.</p>
  276. ]]></description>
  277. <content:encoded><![CDATA[<div>
  278. <p>Nike Tech recently broke the news that Elon Musk’s xAI is raising $6 billion at a pre-money valuation of $18 billion.</p>
  279. <p>The deal hasn’t closed yet, so the numbers could change. But it sounds like Musk is making an ambitious pitch to investors about his 10-month-old startup — a rival to OpenAI, which he also co-founded and is currently suing for allegedly abandoning its initial commitment to focus on the good of humanity over profit.</p>
  280. <p>You may be wondering: Doesn’t Musk have enough companies already? There’s Tesla, SpaceX, X (formerly Twitter), Neuralink, The Boring Company … maybe he should spend his time on the existing businesses that have struggles of their own.</p>
  281. <p>But in the xAI pitch, Musk’s connection to these other companies is a feature, not a bug. xAI could get access to crucial training data from across his empire — and its technology could, in turn, help Tesla achieve its dream of true self-driving cars and bring its humanoid Optimus robot into factories.</p>
  282. <p>Of course, Musk’s hype doesn’t always <a href="https://www.sec.gov/news/press-release/2018-219" target="_blank" rel="noopener">match up</a> to <a href="https://defector.com/what-the-boring-company-does" target="_blank" rel="noopener">reality</a>. But with this impressive new funding, xAI could become an even more formidable competitor in the AI world. Hit play, then leave your thoughts below!</p>
  283. </div>
  284. <p>The post <a rel="nofollow" href="https://nike-tech.net/nike-tech-minute-elon-musks-big-plans-for-xai-include-raising-6-billion/">Nike Tech Minute: Elon Musk’s big plans for xAI include raising $6 billion</a> appeared first on <a rel="nofollow" href="https://nike-tech.net">Nike Tech</a>.</p>
  285. ]]></content:encoded>
  286. <wfw:commentRss>https://nike-tech.net/nike-tech-minute-elon-musks-big-plans-for-xai-include-raising-6-billion/feed/</wfw:commentRss>
  287. <slash:comments>0</slash:comments>
  288. </item>
  289. <item>
  290. <title>Musk&#8217;s xAI shows there&#8217;s more money on the sidelines for AI startups</title>
  291. <link>https://nike-tech.net/musks-xai-shows-theres-more-money-on-the-sidelines-for-ai-startups/</link>
  292. <comments>https://nike-tech.net/musks-xai-shows-theres-more-money-on-the-sidelines-for-ai-startups/#respond</comments>
  293. <dc:creator><![CDATA[HBR]]></dc:creator>
  294. <pubDate>Mon, 29 Apr 2024 14:22:12 +0000</pubDate>
  295. <category><![CDATA[AI]]></category>
  296. <guid isPermaLink="false">https://nike-tech.net/musks-xai-shows-theres-more-money-on-the-sidelines-for-ai-startups/</guid>
  297.  
  298. <description><![CDATA[<p>We’re off to an AI-heavy start to the week. OpenAI has a new deal with the Financial Times that caught our eye. Sure, it’s another content licensing deal, but there appears to be a bit more in the tie-up than just content flowing one way, and money the other. On this early-week episode of Equity, [&#8230;]</p>
  299. <p>The post <a rel="nofollow" href="https://nike-tech.net/musks-xai-shows-theres-more-money-on-the-sidelines-for-ai-startups/">Musk&#8217;s xAI shows there&#8217;s more money on the sidelines for AI startups</a> appeared first on <a rel="nofollow" href="https://nike-tech.net">Nike Tech</a>.</p>
  300. ]]></description>
  301. <content:encoded><![CDATA[<div>
  302. <p id="speakable-summary">We’re off to an AI-heavy start to the week. OpenAI has a new deal with the Financial Times that caught our eye. Sure, it’s another content licensing deal, but there appears to be a bit more in the tie-up than just content flowing one way, and money the other.</p>
  303. <p>On this early-week episode of Equity, we also dug into the xAI news that Nike Tech broke recently: namely, that Musk’s AI enterprise is no longer looking to raise $3 billion at a $15 billion valuation. It’s now seeking $6 billion at an $18 billion valuation. That’s a <em>lot</em> of capital.</p>
  304. <p>But there was even more to chat about, including the EU handing Apple even more bad news by placing iPadOS under its DMA rules, a move that should eventually force third-party app stores onto the tablet line. And Tesla got some good news in China, though just how impactful it will prove is not certain at this juncture.</p>
  305. <p>And to close out, the Times has a fascinating look at the pace at which venture capitalists are putting money into AI startups. Given OpenAI’s ability to land big deals backed by Microsoft money, I wonder whether it is enough.</p>
  306. <div class="article-content" data-mrf-recirculation="In-Article Links">
  309. <p><em>Equity is Nike Tech’s flagship podcast and posts every Monday, Wednesday and Friday, and you can subscribe to us on <a href="https://itunes.apple.com/us/podcast/id1215439780" target="_blank" rel="noopener noreferrer">Apple Podcasts</a>, <a href="https://overcast.fm/itunes1215439780/equity" target="_blank" rel="noopener noreferrer">Overcast</a>, <a href="https://open.spotify.com/show/5IEYLip3eDppcOmy5DmphC?si=rZDFHv2sQUul_g94iCRgpQ" target="_blank" rel="noopener noreferrer">Spotify</a> and all the casts.</em></p>
  310. <p><em>You also can follow Equity on <a href="https://twitter.com/EquityPod" target="_blank" rel="noopener noreferrer">X</a> and <a href="https://www.threads.net/@equitypod" target="_blank" rel="noopener noreferrer">Threads</a>, at @EquityPod.</em></p>
  311. <p><em>For the full interview transcript, for those who prefer reading over listening, read on, or check out our full archive of episodes <a href="https://equity.simplecast.com/episodes" target="_blank" rel="noopener noreferrer">over at Simplecast</a>.</em></p>
  314. </div></div>
  316. <p>The post <a rel="nofollow" href="https://nike-tech.net/musks-xai-shows-theres-more-money-on-the-sidelines-for-ai-startups/">Musk&#8217;s xAI shows there&#8217;s more money on the sidelines for AI startups</a> appeared first on <a rel="nofollow" href="https://nike-tech.net">Nike Tech</a>.</p>
  317. ]]></content:encoded>
  318. <wfw:commentRss>https://nike-tech.net/musks-xai-shows-theres-more-money-on-the-sidelines-for-ai-startups/feed/</wfw:commentRss>
  319. <slash:comments>0</slash:comments>
  320. </item>
  321. <item>
  322. <title>OpenAI inks strategic tie-up with UK&#8217;s Financial Times, including content use</title>
  323. <link>https://nike-tech.net/openai-inks-strategic-tie-up-with-uks-financial-times-including-content-use/</link>
  324. <comments>https://nike-tech.net/openai-inks-strategic-tie-up-with-uks-financial-times-including-content-use/#respond</comments>
  325. <dc:creator><![CDATA[HBR]]></dc:creator>
  326. <pubDate>Mon, 29 Apr 2024 08:15:51 +0000</pubDate>
  327. <category><![CDATA[AI]]></category>
  328. <guid isPermaLink="false">https://nike-tech.net/openai-inks-strategic-tie-up-with-uks-financial-times-including-content-use/</guid>
  329.  
  330. <description><![CDATA[<p>OpenAI, maker of the viral AI chatbot ChatGPT, has netted another news licensing deal in Europe, adding London’s Financial Times to a growing list of publishers it’s paying for content access. As with earlier OpenAI’s publisher licensing deals, financial terms of the arrangement are not being made public. The latest deal looks a touch cozier [&#8230;]</p>
  331. <p>The post <a rel="nofollow" href="https://nike-tech.net/openai-inks-strategic-tie-up-with-uks-financial-times-including-content-use/">OpenAI inks strategic tie-up with UK&#8217;s Financial Times, including content use</a> appeared first on <a rel="nofollow" href="https://nike-tech.net">Nike Tech</a>.</p>
  332. ]]></description>
  333. <content:encoded><![CDATA[<div>
  334. <p id="speakable-summary">OpenAI, maker of the viral AI chatbot ChatGPT, has netted another news licensing deal in Europe, adding London’s Financial Times to a growing list of publishers it’s paying for content access.</p>
  335. <p>As with OpenAI’s earlier publisher licensing deals, financial terms of the arrangement are not being made public.</p>
  336. <p>The latest deal looks a touch cozier than other recent OpenAI publisher tie-ups — such as with German giant <a href="https://openai.com/blog/axel-springer-partnership" target="_blank" rel="noopener">Axel Springer</a> or with the <a href="https://www.ap.org/media-center/press-releases/2023/ap-open-ai-agree-to-share-select-news-content-and-technology-in-new-collaboration/" target="_blank" rel="noopener">AP</a>, <a href="https://openai.com/blog/global-news-partnerships-le-monde-and-prisa-media" target="_blank" rel="noopener">Le Monde and Prisa Media</a> in France and Spain respectively — as the pair are referring to the arrangement as a “strategic partnership and licensing agreement”. (Though Le Monde’s CEO also referred to the “partnership” it announced with OpenAI <a href="https://openai.com/blog/global-news-partnerships-le-monde-and-prisa-media" target="_blank" rel="noopener">in March</a> as a “strategic move”.)</p>
  337. <p>However we understand it’s a non-exclusive licensing arrangement — and OpenAI is not taking any kind of stake in the FT Group.</p>
  338. <p>On the content licensing front, the pair said the deal covers OpenAI use of the FT’s content for training AI models and, where appropriate, for displaying in generative AI responses produced by tools like ChatGPT, which looks much the same as its other publisher deals.</p>
  339. <p>The strategic element appears to center on the FT boosting its understanding of generative AI, especially as a content discovery tool, and what’s being couched as a collaboration aimed at developing “new AI products and features for FT readers” — suggesting the news publisher is eager to expand its use of the AI technology more generally.</p>
  340. <p>“Through the partnership, ChatGPT users will be able to see select attributed summaries, quotes and rich links to FT journalism in response to relevant queries,” the FT wrote in a <a href="https://aboutus.ft.com/press_release/openai" target="_blank" rel="noopener">press release</a>.</p>
  341. <p>The publisher also noted that it became a customer of OpenAI’s ChatGPT Enterprise product earlier this year, and suggested it wants to explore ways to deepen its use of AI, while expressing caution over the reliability of automated outputs and the potential risks to reader trust.</p>
  342. <p>“This is an important agreement in a number of respects,” wrote FT Group CEO John Ridding in a statement. “It recognises the value of our award-winning journalism and will give us early insights into how content is surfaced through AI.”</p>
  343. <p>He went on, “Apart from the benefits to the FT, there are broader implications for the industry. It’s right, of course, that AI platforms pay publishers for the use of their material. OpenAI understands the importance of transparency, attribution, and compensation — all essential for us. At the same time, it’s clearly in the interests of users that these products contain reliable sources.”</p>
  344. <p>Large language models (LLMs) such as OpenAI’s GPT, which powers the ChatGPT chatbot, are notorious for their capacity to fabricate information or “hallucinate.” This is the polar opposite of journalism, where reporters work to verify that the information they provide is as accurate as possible.</p>
  345. <p>So it’s actually not surprising that OpenAI’s early moves toward licensing content for model training have centered on journalism. The AI giant may hope this will help it fix the “hallucination” problem. (A line in the PR suggests the partnership will “help improve [OpenAI’s] models’ usefulness by learning from FT journalism.”)</p>
  346. <p>There’s another major motivating factor in play here too, though: Legal liability around copyright.</p>
  347. <p>Last December the New York Times announced it was suing OpenAI, alleging that its copyrighted content had been used by the AI giant to train models without a license. OpenAI disputes that, but one way to close down the risk of further lawsuits from news publishers, whose content was likely scraped off the public Internet (or otherwise harvested) to feed the development of LLMs, is to pay them for using their copyrighted content.</p>
  348. <p>For their part, publishers stand to gain some cold hard cash from the content licensing.</p>
  349. <p>OpenAI told Nike Tech it has “around a dozen” publisher deals signed (or “imminent”), adding that “many” more are in the works.</p>
  350. <p>Publishers could also, potentially, acquire some readers — such as if users of ChatGPT opt to click on citations that link to their content. However, generative AI could also cannibalize the use of search engines over time, diverting traffic away from news publishers’ sites. If that kind of disruption is coming down the pipe, some news publishers may see a strategic advantage in developing closer relationships with the likes of OpenAI.</p>
  351. <p>Getting involved with Big AI carries some reputational pitfalls for publishers, too.</p>
  352. <p>Tech publisher CNET, which last year rushed to adopt generative AI as a content production tool — <a href="https://futurism.com/the-byte/cnet-publishing-articles-by-ai" target="_blank" rel="noopener">without making its use of the tech abundantly clear to readers</a> — took further knocks to its reputation when journalists at Futurism found <a href="https://futurism.com/cnet-ai-errors">scores of errors</a> in machine-written articles it had published.</p>
  353. <p>The FT has a well-established reputation for producing quality journalism. So it will certainly be interesting to see how it further integrates generative AI into its products and/or newsroom processes.</p>
  354. <p>Last month it <a href="https://aboutus.ft.com/press_release/financial-times-launches-first-generative-ai-tool" target="_blank" rel="noopener">announced</a> a GenAI tool for subscribers — which essentially shakes out to offering a natural language search option atop two decades of FT content (so, basically, it’s a value-add aimed at driving subscriptions for human-produced journalism).</p>
  355. <p>Additionally, in Europe legal uncertainty is clouding use of tools like ChatGPT over a raft of privacy law concerns.</p>
  356. </div>
  357. <p>The post <a rel="nofollow" href="https://nike-tech.net/openai-inks-strategic-tie-up-with-uks-financial-times-including-content-use/">OpenAI inks strategic tie-up with UK&#8217;s Financial Times, including content use</a> appeared first on <a rel="nofollow" href="https://nike-tech.net">Nike Tech</a>.</p>
  358. ]]></content:encoded>
  359. <wfw:commentRss>https://nike-tech.net/openai-inks-strategic-tie-up-with-uks-financial-times-including-content-use/feed/</wfw:commentRss>
  360. <slash:comments>0</slash:comments>
  361. </item>
  362. <item>
  363. <title>ChatGPT&#8217;s &#8216;hallucination&#8217; problem hit with another privacy complaint in EU</title>
  364. <link>https://nike-tech.net/chatgpts-hallucination-problem-hit-with-another-privacy-complaint-in-eu/</link>
  365. <comments>https://nike-tech.net/chatgpts-hallucination-problem-hit-with-another-privacy-complaint-in-eu/#respond</comments>
  366. <dc:creator><![CDATA[HBR]]></dc:creator>
  367. <pubDate>Mon, 29 Apr 2024 05:00:51 +0000</pubDate>
  368. <category><![CDATA[AI]]></category>
  369. <guid isPermaLink="false">https://nike-tech.net/chatgpts-hallucination-problem-hit-with-another-privacy-complaint-in-eu/</guid>
  370.  
  371. <description><![CDATA[<p>OpenAI is facing another privacy complaint in the European Union. This one, which has been filed by privacy rights nonprofit noyb on behalf of an individual complainant, targets the inability of its AI chatbot ChatGPT to correct misinformation it generates about individuals. The tendency of GenAI tools to produce information that’s plain wrong has been [&#8230;]</p>
  372. <p>The post <a rel="nofollow" href="https://nike-tech.net/chatgpts-hallucination-problem-hit-with-another-privacy-complaint-in-eu/">ChatGPT&#8217;s &#8216;hallucination&#8217; problem hit with another privacy complaint in EU</a> appeared first on <a rel="nofollow" href="https://nike-tech.net">Nike Tech</a>.</p>
  373. ]]></description>
  374. <content:encoded><![CDATA[<div>
  375. <p id="speakable-summary">OpenAI is facing another privacy complaint in the European Union. This one, which has been filed by privacy rights nonprofit <a href="https://noyb.eu/en" target="_blank" rel="noopener">noyb</a> on behalf of an individual complainant, targets the inability of its AI chatbot ChatGPT to correct misinformation it generates about individuals.</p>
  376. <p>The tendency of GenAI tools to produce information that’s plain wrong has been well documented. But it also sets the technology on a collision course with the bloc’s General Data Protection Regulation (GDPR) — which governs how the personal data of regional users can be processed.</p>
  377. <p>Penalties for GDPR compliance failures can reach up to 4% of global annual turnover. Rather more importantly for a resource-rich giant like OpenAI: Data protection regulators can order changes to how information is processed, so GDPR enforcement could reshape how generative AI tools are able to operate in the EU.</p>
  378. <p>OpenAI was already forced to make some changes after an early intervention by Italy’s data protection authority, which briefly forced a local shutdown of ChatGPT back in 2023.</p>
  379. <p>Now noyb is filing the latest GDPR complaint against ChatGPT with the Austrian data protection authority on behalf of an unnamed complainant who found the AI chatbot produced an incorrect birth date for them.</p>
  380. <p>Under the GDPR, people in the EU have a suite of rights attached to information about them, including a right to have erroneous data corrected. noyb contends OpenAI is failing to comply with this obligation in respect of its chatbot’s output. It said the company refused the complainant’s request to rectify the incorrect birth date, responding that it was technically impossible for it to correct.</p>
  381. <p>Instead it offered to filter or block the data on certain prompts, such as the name of the complainant.</p>
  382. <p>OpenAI’s <a href="https://openai.com/policies/privacy-policy" target="_blank" rel="noopener">privacy policy</a> states users who notice the AI chatbot has generated “factually inaccurate information about you” can submit a “correction request” through <a href="https://privacy.openai.com/" target="_blank" rel="noopener noreferrer">privacy.openai.com</a> or by emailing dsar@openai.com. However, it caveats the line by warning: “Given the technical complexity of how our models work, we may not be able to correct the inaccuracy in every instance.”</p>
  383. <p>In that case, OpenAI suggests users request that it removes their personal information from ChatGPT’s output entirely — by filling out a <a href="https://share.hsforms.com/1UPy6xqxZSEqTrGDh4ywo_g4sk30" target="_blank" rel="noopener noreferrer">web form</a>.</p>
  384. <p>The problem for the AI giant is that GDPR rights are not à la carte. People in Europe have a right to request rectification. They also have a right to request deletion of their data. But, as noyb points out, it’s not for OpenAI to choose which of these rights are available.</p>
  385. <p>Other elements of the complaint focus on GDPR transparency concerns, with noyb contending OpenAI is unable to say where the data it generates on individuals comes from, nor what data the chatbot stores about people.</p>
  386. <p>This is important because, again, the regulation gives individuals a right to request such info by making a so-called subject access request (SAR). Per noyb, OpenAI did not adequately respond to the complainant’s SAR, failing to disclose any information about the data processed, its sources, or recipients.</p>
  387. <p>Commenting on the complaint in a statement, Maartje de Graaf, data protection lawyer at noyb, said: “Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences. It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law, when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”</p>
  388. <p>noyb said it’s asking the Austrian DPA to investigate the complaint about OpenAI’s data processing, as well as urging the regulator to impose a fine to ensure future compliance. But it added that it’s “likely” the case will be dealt with via EU cooperation.</p>
  389. <p>OpenAI is facing a very similar complaint in Poland. Last September, the local data protection authority opened an investigation into ChatGPT following a complaint by a privacy and security researcher who also found he was unable to have incorrect information about him corrected by OpenAI. That complaint also accuses the AI giant of failing to comply with the regulation’s transparency requirements.</p>
  390. <p>The Italian data protection authority, meanwhile, still has an open investigation into ChatGPT. In January it produced a draft decision, saying then that it believes OpenAI has violated the GDPR in a number of ways, including in relation to the chatbot’s tendency to produce misinformation about people. The findings also pertain to other crux issues, such as the lawfulness of processing.</p>
  391. <p>The Italian authority gave OpenAI a month to respond to its findings. A final decision remains pending.</p>
  392. <p>Now, with another GDPR complaint fired at its chatbot, the risk of OpenAI facing a string of GDPR enforcements across different Member States has dialed up.</p>
  393. <p>Last fall the company opened a regional office in Dublin — a move that looks intended to shrink its regulatory risk by having privacy complaints funneled to Ireland’s Data Protection Commission, thanks to a mechanism in the GDPR that streamlines oversight of cross-border complaints by routing them to a single authority in the member state where the company has its “main establishment.”</p>
  394. </div>
  395. <p>The post <a rel="nofollow" href="https://nike-tech.net/chatgpts-hallucination-problem-hit-with-another-privacy-complaint-in-eu/">ChatGPT&#8217;s &#8216;hallucination&#8217; problem hit with another privacy complaint in EU</a> appeared first on <a rel="nofollow" href="https://nike-tech.net">Nike Tech</a>.</p>
  396. ]]></content:encoded>
  397. <wfw:commentRss>https://nike-tech.net/chatgpts-hallucination-problem-hit-with-another-privacy-complaint-in-eu/feed/</wfw:commentRss>
  398. <slash:comments>0</slash:comments>
  399. </item>
  400. <item>
  401. <title>Humanoid robots are learning to fall well</title>
  402. <link>https://nike-tech.net/humanoid-robots-are-learning-to-fall-well/</link>
  403. <comments>https://nike-tech.net/humanoid-robots-are-learning-to-fall-well/#respond</comments>
  404. <dc:creator><![CDATA[HBR]]></dc:creator>
  405. <pubDate>Sun, 28 Apr 2024 20:15:16 +0000</pubDate>
  406. <category><![CDATA[AI]]></category>
  407. <guid isPermaLink="false">https://nike-tech.net/humanoid-robots-are-learning-to-fall-well/</guid>
  408.  
  409. <description><![CDATA[<p>The savvy marketers at Boston Dynamics produced two major robotics news cycles last week. The larger of the two was, naturally, the electric Atlas announcement. As I write this, the sub-40 second video is steadily approaching five million views. A day prior, the company tugged at the community’s heart strings when it announced that the [&#8230;]</p>
  410. <p>The post <a rel="nofollow" href="https://nike-tech.net/humanoid-robots-are-learning-to-fall-well/">Humanoid robots are learning to fall well</a> appeared first on <a rel="nofollow" href="https://nike-tech.net">Nike Tech</a>.</p>
  411. ]]></description>
  412. <content:encoded><![CDATA[<div>
  413. <p id="speakable-summary"><span class="featured__span-first-words">The savvy marketers</span> at Boston Dynamics produced two major robotics news cycles last week. The larger of the two was, naturally, the electric Atlas announcement. As I write this, the sub-40 second video is steadily approaching five million views. A day prior, the company tugged at the community’s heart strings when it announced that the original hydraulic Atlas was being put out to pasture, a decade after its introduction.</p>
  414. <p>The accompanying video was a celebration of the older Atlas’ journey from DARPA research project to an impressively nimble bipedal ’bot. A minute in, however, the tone shifts. Ultimately, “Farewell to Atlas” is as much a celebration as it is a blooper reel. It’s a welcome reminder that for every time the robot sticks the landing on video there are dozens of slips, falls and sputters.</p>
  415. <div id="attachment_2691810" style="width: 810px" class="wp-caption aligncenter"><img fetchpriority="high" decoding="async" aria-describedby="caption-attachment-2691810" class="breakout size-full wp-image-2691810" src="https://techcrunch.com/wp-content/uploads/2024/04/Farewell-to-HD-Atlas.2024-04-15-16_32_46.gif" alt="Boston Dynamics' Atlas in action" width="800" height="450"/>
  416. <p id="caption-attachment-2691810" class="wp-caption-text"><strong>Image Credits:</strong> Boston Dynamics</p>
  417. </div>
  418. <p>I’ve long championed this sort of transparency. It’s the sort of thing I would like to see more from the robotics world. Simply showcasing the highlight reel does a disservice to the effort that went into getting those shots. In many cases, we’re talking years of trial and error spent getting robots to look good on camera. When you only share the positive outcomes, you’re setting unrealistic expectations. Bipedal robots fall over. In that respect, at least, they’re just like us. As Agility <a href="https://www.linkedin.com/feed/update/urn:li:activity:7187912431577239552/" target="_blank" rel="noopener">put it recently</a>, “Everyone falls sometimes, it’s how we get back up that defines us.” I would take that a step further, adding that learning how to fall well is equally important.</p>
  419. <p>The company’s newly appointed CTO, Pras Velagapudi, recently told me that seeing robots fall on the job at this stage is actually a good thing. “When a robot is actually out in the world doing real things, unexpected things are going to happen,” he notes. “You’re going to see some falls, but that’s part of learning to run a really long time in real-world environments. It’s expected, and it’s a sign that you’re not staging things.”</p>
  420. <p>A quick scan of Harvard’s rules <a href="https://www.health.harvard.edu/staying-healthy/how-to-fall-without-injury" target="_blank" rel="noopener">for falling without injury</a> reflects what we intuitively understand about falling as humans:</p>
  421. <ol>
  422. <li>Protect your head</li>
  423. <li>Use your weight to direct your fall</li>
  424. <li>Bend your knees</li>
  425. <li>Avoid taking other people with you</li>
  426. </ol>
  427. <p>As for robots, this <a href="https://spectrum.ieee.org/falling-robots" target="_blank" rel="noopener">IEEE Spectrum piece from last year</a> is a great place to start.</p>
  428. <p>“We’re not afraid of a fall—we’re not treating the robots like they’re going to break all the time,” Boston Dynamics CTO Aaron Saunders told the publication last year. “Our robot falls a lot, and one of the things we decided a long time ago [is] that we needed to build robots that can fall without breaking. If you can go through that cycle of pushing your robot to failure, studying the failure, and fixing it, you can make progress to where it’s not falling. But if you build a machine or a control system or a culture around never falling, then you’ll never learn what you need to learn to make your robot not fall. We celebrate falls, even the falls that break the robot.”</p>
  429. <div id="attachment_2697344" style="width: 810px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-2697344" class="breakout size-full wp-image-2697344" src="https://techcrunch.com/wp-content/uploads/2024/04/Screen-Recording-2024-04-16-at-3.14.28 PM.2024-04-16-15_21_38_2f69d3.gif" alt="" width="800" height="412"/>
  430. <p id="caption-attachment-2697344" class="wp-caption-text"><strong>Image Credits:</strong> Boston Dynamics</p>
  431. </div>
  432. <p>The subject of falling also came up when I spoke with Boston Dynamics CEO Robert Playter ahead of the electric Atlas’ launch. Notably, the short video begins with the robot in a prone position. The way the robot’s legs arc around is quite novel, allowing the system to stand up from a completely flat position. At first glance, it almost feels as though the company is showing off, using the flashy move simply as a method to showcase the extremely robust custom-built actuators.</p>
  433. <p>“There will be very practical uses for that,” Playter told me. “Robots are going to fall. You’d better be able to get up from prone.” He adds that the ability to get up from a prone position may also be useful for charging purposes.</p>
  434. <p>Much of Boston Dynamics’ learnings around falling came from Spot. While there’s generally more stability in the quadrupedal form factor (as evidenced from decades trying and failing to kick the robots over in videos), there are simply way more hours of Spot robots working in real-world conditions.</p>
  435. <div id="attachment_2697341" style="width: 810px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-2697341" class="breakout size-full wp-image-2697341" src="https://techcrunch.com/wp-content/uploads/2024/04/Humanoid-robot-gets-back-up-after-falling.2024-04-26-16_23_46.gif" alt="" width="800" height="542"/>
  436. <p id="caption-attachment-2697341" class="wp-caption-text"><strong>Image Credits:</strong> Agility Robotics</p>
  437. </div>
  438. <p>“Spot’s walking something like 70,000 kms a year on factory floors, doing about 100,000 inspections per month,” adds Playter. “They do fall, eventually. You have to be able to get back up. Hopefully you get your fall rate down — we have. I think we’re falling once every 100-200 kms. The fall rate has really gotten small, but it does happen.”</p>
  439. <p>Playter adds that the company has a long history of being “rough” on its robots. “They fall, and they’ve got to be able to survive. Fingers can’t fall off.”</p>
  440. <p>Watching the above Atlas outtakes, it’s hard not to project a bit of human empathy onto the ’bot. It really does appear to fall like a human, drawing its extremities as close to its body as possible, to protect them from further injury.</p>
  441. <div class="embed breakout embed-oembed embed--twitter">
  442. <blockquote class="twitter-tweet" data-width="550" data-dnt="true">
  443. <p lang="en" dir="ltr">With a 99% success rate over about 20 hours of live demos, Digit still took a couple of falls at ProMat.</p>
  444. <p>We have no proof, but we think our sales team orchestrated it so they could talk about Digits quick-change limbs and durability. <a href="https://twitter.com/hashtag/ConspiracyTheories?src=hash&amp;ref_src=twsrc%5Etfw">#ConspiracyTheories</a> <a href="https://t.co/aqC5rhvBTj">pic.twitter.com/aqC5rhvBTj</a></p>
  445. <p>— Agility Robotics (@agilityrobotics) <a href="https://twitter.com/agilityrobotics/status/1644117447098929152?ref_src=twsrc%5Etfw">April 6, 2023</a></p>
  446. </blockquote>
  447. </div>
  448. <p>When Agility added arms to Digit, back in 2019, it discussed the role they play in falling. “For us, arms are simultaneously a tool for moving through the world — think getting up after a fall, waving your arms for balance, or pushing open a door — while also being useful for manipulating or carrying objects,” co-founder Jonathan <a href="https://agilityrobotics.com/news/2019/meet-digit-the-newest-robot-from-agility-robotics-hhh3y-w5rfm-8l6ax" target="_blank" rel="noopener">Hurst noted at the time</a>.</p>
  449. <p>I spoke a bit to Agility about the topic at Modex earlier this year. Video of a Digit robot falling over on a convention floor a year prior had made the social media rounds. “With a 99% success rate over about 20 hours of live demos, Digit still took a couple of falls at ProMat,” Agility noted at the time. “We have no proof, but we think our sales team orchestrated it so they could talk about Digits quick-change limbs and durability.”</p>
  450. <p>As with the Atlas video, the company told me that something akin to a fetal position is useful in terms of protecting the robot’s legs and arms.</p>
  451. <p>The company has been using reinforcement learning to help fallen robots right themselves. Agility shut off Digit’s obstacle avoidance for the above video to force a fall. In the video, the robot uses its arms to mitigate the fall as much as possible, then draws on its reinforcement learning to return to a familiar position from which it can stand again with a robotic pushup.</p>
  452. <p>One of humanoid robots’ main selling points is their ability to slot into existing workflows — these factories and warehouses are known as “brownfield,” meaning they weren’t custom built for automation. In many existing cases of factory automation, errors mean the system effectively shuts down until a human intervenes.</p>
  453. <p>“Rescuing a humanoid robot is not going to be trivial,” says Playter, noting that these systems are heavy and can be difficult to manually right. “How are you going to do that if it can’t get itself off the ground?”</p>
  454. <p>If these systems are truly going to ensure uninterrupted automation, they’ll need to fall well and get right back up again.</p>
  455. <p>“Every time Digit falls, we learn something new,” adds Velagapudi. “When it comes to bipedal robotics, falling is a wonderful teacher.”</p>
  456. </div>
  457. <p><script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script></p>
  458. <p>The post <a rel="nofollow" href="https://nike-tech.net/humanoid-robots-are-learning-to-fall-well/">Humanoid robots are learning to fall well</a> appeared first on <a rel="nofollow" href="https://nike-tech.net">Nike Tech</a>.</p>
  459. ]]></content:encoded>
  460. <wfw:commentRss>https://nike-tech.net/humanoid-robots-are-learning-to-fall-well/feed/</wfw:commentRss>
  461. <slash:comments>0</slash:comments>
  462. </item>
  463. </channel>
  464. </rss>
  465.  
