Congratulations!

[Valid RSS] This is a valid RSS feed.

Recommendations

This feed is valid, but interoperability with the widest range of feed readers could be improved by implementing the following recommendations.

Source: https://direct.mit.edu/rss/site_1000003/advanceAccess_1000004.xml

  1. <?xml version="1.0"?>
  2. <rss version="2.0" xmlns:prism="http://purl.org/rss/1.0/modules/prism/">
  3.  <channel>
  4.    <title>Computational Linguistics Advance Access</title>
  5.    <link>https://direct.mit.edu/coli</link>
  6.    <description>
  7.    </description>
  8.    <language>en-us</language>
  9.    <pubDate>Thu, 03 Apr 2025 00:00:00 GMT</pubDate>
  10.    <lastBuildDate>Wed, 02 Apr 2025 22:46:23 GMT</lastBuildDate>
  11.    <generator>Silverchair</generator>
  12.    <managingEditor>editor@direct.mit.edu/coli</managingEditor>
  13.    <webMaster>webmaster@direct.mit.edu/coli</webMaster>
  14.    <item>
  15.      <title>Socially Aware Language Technologies: Perspectives and Practices</title>
  16.      <link>https://direct.mit.edu/coli/article/doi/10.1162/coli_a_00556/128186/Socially-Aware-Language-Technologies-Perspectives</link>
  17.      <pubDate>Thu, 03 Apr 2025 00:00:00 GMT</pubDate>
  18.      <description>&lt;span class="paragraphSection"&gt;&lt;div class="boxTitle"&gt;Abstract&lt;/div&gt;Language technologies have advanced substantially, particularly with the introduction of large language models. However, these advancements can exacerbate several issues that models have traditionally faced, including bias, evaluation, and risk. In this perspective piece, we argue that many of these issues share a common core: a lack of awareness of the social factors, interactions, and implications of the social environment in which NLP operates. We call this &lt;strong&gt;social awareness&lt;/strong&gt;. While NLP is improving at addressing linguistic issues, there has been relatively limited progress in incorporating social awareness into models to work in all situations for all users. Integrating social awareness into NLP will improve the naturalness, usefulness, and safety of applications while also opening up new applications. Today, we are only at the start of a new, important era in the field.&lt;/span&gt;</description>
  19.      <prism:startingPage xmlns:prism="prism">1</prism:startingPage>
  20.      <prism:endingPage xmlns:prism="prism">15</prism:endingPage>
  21.      <prism:doi xmlns:prism="prism">10.1162/coli_a_00556</prism:doi>
  22.      <guid>https://direct.mit.edu/coli/article/doi/10.1162/coli_a_00556/128186/Socially-Aware-Language-Technologies-Perspectives</guid>
  23.    </item>
  24.    <item>
  25.      <title>Graded Suspiciousness of Adversarial Texts to Humans</title>
  26.      <link>https://direct.mit.edu/coli/article/doi/10.1162/coli_a_00555/128185/Graded-Suspiciousness-of-Adversarial-Texts-to</link>
  27.      <pubDate>Mon, 31 Mar 2025 00:00:00 GMT</pubDate>
  28.      <description>&lt;span class="paragraphSection"&gt;&lt;div class="boxTitle"&gt;Abstract&lt;/div&gt;Adversarial examples pose a significant challenge to deep neural networks across both image and text domains, with the intent to degrade model performance through carefully altered inputs. Adversarial texts, however, are distinct from adversarial images due to their requirement for semantic similarity and the discrete nature of the textual contents. This study delves into the concept of human suspiciousness, a quality distinct from the traditional focus on imperceptibility found in image-based adversarial examples, where adversarial changes are often desired to be indistinguishable to the human eye even when placed side by side with originals. Although this is generally not possible with text, textual adversarial content must still often remain undetected or non-suspicious to human readers. Even when the text’s purpose is to deceive NLP systems or bypass filters, the text is often expected to be natural to read.In this research, we expand the study of human suspiciousness by analyzing how individuals perceive adversarial texts. We gather and publish a novel dataset of Likert-scale human evaluations on the suspiciousness of adversarial sentences, crafted by four widely used adversarial attack methods and assess their correlation with the human ability to detect machine-generated alterations. Additionally, we develop a regression-based model to predict levels of suspiciousness and establish a baseline for future research in reducing the suspiciousness in adversarial text generation. We also demonstrate how the regressor-generated suspicious scores can be incorporated into adversarial generation methods to produce texts that are less likely to be perceived as computer-generated.&lt;/span&gt;</description>
  29.      <prism:startingPage xmlns:prism="prism">1</prism:startingPage>
  30.      <prism:endingPage xmlns:prism="prism">34</prism:endingPage>
  31.      <prism:doi xmlns:prism="prism">10.1162/coli_a_00555</prism:doi>
  32.      <guid>https://direct.mit.edu/coli/article/doi/10.1162/coli_a_00555/128185/Graded-Suspiciousness-of-Adversarial-Texts-to</guid>
  33.    </item>
  34.    <item>
  35.      <title>Train and Constrain: Phonologically Informed Tongue Twister Generation from Topics and Paraphrases</title>
  36.      <link>https://direct.mit.edu/coli/article/doi/10.1162/coli_a_00544/125048/Train-and-Constrain-Phonologically-Informed-Tongue</link>
  37.      <pubDate>Mon, 31 Mar 2025 00:00:00 GMT</pubDate>
  38.      <description>&lt;span class="paragraphSection"&gt;&lt;div class="boxTitle"&gt;Abstract&lt;/div&gt;Previous work in phonologically and phonetically grounded language generation has mainly focused on domains such as puns and poetry. In this article, we present new work on the generation of English tongue twisters—a form of language that is required to be conditioned on a phoneme level to maximize sound overlap, while maintaining semantic consistency with an input topic or phrase and still being grammatically correct. We present &lt;strong&gt;TwisterLister&lt;/strong&gt;, a pipeline for generating phonologically informed tongue twisters from large language models (LLMs) that we use to generate &lt;strong&gt;TwistList 2.0&lt;/strong&gt;, the largest annotated dataset of tongue twisters to date, consisting of 17k+ examples from a combination of human and LLM authors. Our generation pipeline involves the use of a phonologically constrained vocabulary alongside LLM prompting to generate novel, non-derivative tongue twister examples. We additionally present the results of automatic and human evaluation of smaller models trained on our generated dataset to demonstrate the extent to which phonologically motivated language types can be generated without explicit injection of phonological knowledge. Additionally, we introduce a phoneme-aware constrained decoding module (&lt;strong&gt;PACD&lt;/strong&gt;) that can be integrated into an autoregressive language model and demonstrate that this method generates good quality tongue twisters both with and without fine-tuning the underlying language model. We also design and implement a range of automatic metrics for the task of tongue twister generation that is phonologically motivated and captures the unique essence of tongue twisters, primarily based on phonemic edit distance (&lt;strong&gt;PED&lt;/strong&gt;).&lt;/span&gt;</description>
  39.      <prism:startingPage xmlns:prism="prism">1</prism:startingPage>
  40.      <prism:endingPage xmlns:prism="prism">52</prism:endingPage>
  41.      <prism:doi xmlns:prism="prism">10.1162/coli_a_00544</prism:doi>
  42.      <guid>https://direct.mit.edu/coli/article/doi/10.1162/coli_a_00544/125048/Train-and-Constrain-Phonologically-Informed-Tongue</guid>
  43.    </item>
  44.  </channel>
  45. </rss>

If you would like to create a banner that links to this page (i.e. this validation result), do the following:

  1. Download the "valid RSS" banner.

  2. Upload the image to your own server. (This step is important. Please do not link directly to the image on this server.)

  3. Add this HTML to your page (change the image src attribute if necessary):
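The snippet itself did not survive in this capture. A typical banner snippet uses the check URL shown on this page; the image filename (valid-rss.png) is an assumption — use whatever name you gave the banner when uploading it to your own server:

```html
<a href="http://www.feedvalidator.org/check.cgi?url=https%3A//direct.mit.edu/rss/site_1000003/advanceAccess_1000004.xml">
  <img src="valid-rss.png" alt="[Valid RSS]" title="Validate my RSS feed" />
</a>
```

Adjust the img src attribute to point at the copy of the banner on your own server.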

If you would like to create a text link instead, here is the URL you can use:

http://www.feedvalidator.org/check.cgi?url=https%3A//direct.mit.edu/rss/site_1000003/advanceAccess_1000004.xml
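Wrapped in an anchor element, that text link might look like the following (the link text "Valid RSS" is an assumption — any wording works):

```html
<a href="http://www.feedvalidator.org/check.cgi?url=https%3A//direct.mit.edu/rss/site_1000003/advanceAccess_1000004.xml">Valid RSS</a>
```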

Copyright © 2002-9 Sam Ruby, Mark Pilgrim, Joseph Walton, and Phil Ringnalda