This is a valid RSS feed.
This feed is valid, but interoperability with the widest range of feed readers could be improved by implementing the following recommendations.
managingEditor must include an email address [help]

<managingEditor>editor@direct.mit.edu/coli</managingEditor>
^

webMaster must include an email address [help]

<webMaster>webmaster@direct.mit.edu/coli</webMaster>
^

line 19, column 6: Use of unknown namespace: prism (21 occurrences) [help]

<prism:startingPage xmlns:prism="prism">1357</prism:startingPage>
^

Missing atom:link with rel="self" [help]

</channel>
^
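Concretely, these recommendations amount to three small changes in the channel: make managingEditor and webMaster plain email addresses (optionally followed by a name in parentheses), drop the per-element xmlns:prism="prism" overrides so the PRISM elements inherit the correct namespace URI already declared on the rss element, and add an atom:link with rel="self" pointing at the feed's own URL. A minimal sketch of a corrected channel header follows; the email addresses and the feed's self URL are placeholders, since the real values do not appear in the validator output:

<?xml version="1.0"?>
<rss version="2.0"
     xmlns:prism="http://purl.org/rss/1.0/modules/prism/"
     xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Computational Linguistics Advance Access</title>
    <link>https://direct.mit.edu/coli</link>
    <!-- rel="self" points at the URL of this feed document itself (placeholder href) -->
    <atom:link href="https://direct.mit.edu/coli/rss" rel="self" type="application/rss+xml"/>
    <!-- a plain email address, not an address with a path appended; the
         parenthesized name is optional under RSS 2.0 -->
    <managingEditor>editor@direct.mit.edu (Computational Linguistics)</managingEditor>
    <webMaster>webmaster@direct.mit.edu (Site Webmaster)</webMaster>
    <item>
      <!-- no xmlns:prism="prism" override: the element now inherits the PRISM
           namespace declared on <rss>, fixing all 21 flagged occurrences -->
      <prism:startingPage>1357</prism:startingPage>
    </item>
  </channel>
</rss>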
<?xml version="1.0"?>
<rss version="2.0" xmlns:prism="http://purl.org/rss/1.0/modules/prism/">
<channel>
<title>Computational Linguistics Advance Access</title>
<link>https://direct.mit.edu/coli</link>
<description>
</description>
<language>en-us</language>
<pubDate>Wed, 20 Nov 2024 00:00:00 GMT</pubDate>
<lastBuildDate>Thu, 21 Nov 2024 22:45:49 GMT</lastBuildDate>
<generator>Silverchair</generator>
<managingEditor>editor@direct.mit.edu/coli</managingEditor>
<webMaster>webmaster@direct.mit.edu/coli</webMaster>
<item>
<title>Language Learning, Representation, and Processing in Humans and Machines: Introduction to the Special Issue</title>
<link>https://direct.mit.edu/coli/article/doi/10.1162/coli_e_00539/124560/Language-Learning-Representation-and-Processing-in</link>
<pubDate>Wed, 20 Nov 2024 00:00:00 GMT</pubDate>
<description><span class="paragraphSection"><div class="boxTitle">Abstract</div>Large Language Models (LLMs) and humans acquire knowledge about language without direct supervision. LLMs do so by means of specific training objectives, while humans rely on sensory experience and social interaction. This parallelism has created a feeling in NLP and cognitive science that a systematic understanding of how LLMs acquire and use the encoded knowledge could provide useful insights for studying human cognition. Conversely, methods and findings from the field of cognitive science have occasionally inspired language model development. Yet, the differences in the way that language is processed by machines and humans—in terms of learning mechanisms, amounts of data used, grounding and access to different modalities—make a direct translation of insights challenging. The aim of this edited volume has been to create a forum of exchange and debate along this line of research, inviting contributions that further elucidate similarities and differences between humans and LLMs.</span></description>
<prism:startingPage xmlns:prism="prism">1357</prism:startingPage>
<prism:endingPage xmlns:prism="prism">1366</prism:endingPage>
<prism:doi xmlns:prism="prism">10.1162/coli_e_00539</prism:doi>
<guid>https://direct.mit.edu/coli/article/doi/10.1162/coli_e_00539/124560/Language-Learning-Representation-and-Processing-in</guid>
</item>
<item>
<title>From Form(s) to Meaning: Probing the Semantic Depths of Language Models Using Multisense Consistency</title>
<link>https://direct.mit.edu/coli/article/doi/10.1162/coli_a_00529/123794/From-Form-s-to-Meaning-Probing-the-Semantic-Depths</link>
<pubDate>Wed, 20 Nov 2024 00:00:00 GMT</pubDate>
<description><span class="paragraphSection"><div class="boxTitle">Abstract</div>The staggering pace with which the capabilities of large language models (LLMs) are increasing, as measured by a range of commonly used natural language understanding (NLU) benchmarks, raises many questions regarding what “understanding” means for a language model and how it compares to human understanding. This is especially true since many LLMs are exclusively trained on text, casting doubt on whether their stellar benchmark performances are reflective of a true understanding of the problems represented by these benchmarks, or whether LLMs simply excel at uttering textual forms that correlate with what someone who understands the problem would say. In this philosophically inspired work, we aim to create some separation between form and meaning, with a series of tests that leverage the idea that world understanding should be consistent across presentational modes—inspired by Fregean <span style="font-style:italic;">senses</span>—of the same meaning. Specifically, we focus on consistency across languages as well as paraphrases. Taking GPT-3.5 as our object of study, we evaluate multisense consistency across five different languages and various tasks. We start the evaluation in a controlled setting, asking the model for simple facts, and then proceed with an evaluation on four popular NLU benchmarks. We find that the model’s multisense consistency is lacking and run several follow-up analyses to verify that this lack of consistency is due to a sense-dependent task understanding. We conclude that, in this aspect, the understanding of LLMs is still quite far from being consistent and human-like, and deliberate on how this impacts their utility in the context of learning about human language and understanding.</span></description>
<prism:startingPage xmlns:prism="prism">1241</prism:startingPage>
<prism:endingPage xmlns:prism="prism">1290</prism:endingPage>
<prism:doi xmlns:prism="prism">10.1162/coli_a_00529</prism:doi>
<guid>https://direct.mit.edu/coli/article/doi/10.1162/coli_a_00529/123794/From-Form-s-to-Meaning-Probing-the-Semantic-Depths</guid>
</item>
<item>
<title>Exceptions, Instantiations, and Overgeneralization: Insights into How Language Models Process Generics</title>
<link>https://direct.mit.edu/coli/article/doi/10.1162/coli_a_00530/123791/Exceptions-Instantiations-and-Overgeneralization</link>
<pubDate>Wed, 20 Nov 2024 00:00:00 GMT</pubDate>
<description><span class="paragraphSection"><div class="boxTitle">Abstract</div>Large language models (LLMs) have garnered a great deal of attention for their exceptional generative performance on commonsense and reasoning tasks. In this work, we investigate LLMs’ capabilities for generalization using a particularly challenging type of statement: generics. Generics express generalizations (e.g., birds can fly) but do so without explicit quantification. They are notable because they generalize over their <span style="font-style:italic;">instantiations</span> (e.g., sparrows can fly) yet hold true even in the presence of <span style="font-style:italic;">exceptions</span> (e.g., penguins do not). For humans, these generic generalizations play a fundamental role in cognition, concept acquisition, and intuitive reasoning. We investigate how LLMs respond to and reason about generics. To this end, we first propose a framework grounded in pragmatics to automatically generate both <span style="font-style:italic;">exceptions</span> and <span style="font-style:italic;">instantiations</span> – collectively <span style="font-style:italic;">exemplars</span>. We make use of focus—a pragmatic phenomenon that highlights meaning-bearing elements in a sentence—to capture the full range of interpretations of generics across different contexts of use. This allows us to derive precise logical definitions for <span style="font-style:italic;">exemplars</span> and operationalize them to automatically generate <span style="font-style:italic;">exemplars</span> from LLMs. Using our system, we generate a dataset of ∼370k <span style="font-style:italic;">exemplars</span> across ∼17k generics and conduct a human validation of a sample of the generated data. We use our final generated dataset to investigate how LLMs reason about generics. Humans have a documented tendency to conflate universally quantified statements (e.g., all birds can fly) with generics. Therefore, we probe whether LLMs exhibit similar overgeneralization behavior in terms of quantification and in property inheritance. We find that LLMs do show evidence of overgeneralization, although they sometimes struggle to reason about <span style="font-style:italic;">exceptions</span>. Furthermore, we find that LLMs may exhibit similar non-logical behavior to humans when considering property inheritance from generics.</span></description>
<prism:startingPage xmlns:prism="prism">1291</prism:startingPage>
<prism:endingPage xmlns:prism="prism">1355</prism:endingPage>
<prism:doi xmlns:prism="prism">10.1162/coli_a_00530</prism:doi>
<guid>https://direct.mit.edu/coli/article/doi/10.1162/coli_a_00530/123791/Exceptions-Instantiations-and-Overgeneralization</guid>
</item>
<item>
<title>Usage-based Grammar Induction from Minimal Cognitive Principles</title>
<link>https://direct.mit.edu/coli/article/doi/10.1162/coli_a_00528/123787/Usage-based-Grammar-Induction-from-Minimal</link>
<pubDate>Wed, 20 Nov 2024 00:00:00 GMT</pubDate>
<description><span class="paragraphSection"><div class="boxTitle">Abstract</div>This study explores the cognitive mechanisms underlying human language acquisition through grammar induction by a minimal cognitive architecture, with a short and flexible sequence memory as its most central feature. We use reinforcement learning for the task of identifying sentences in a stream of words from artificial languages. Results demonstrate the model’s ability to identify frequent and informative multi-word chunks, reproducing characteristics of natural language acquisition. The model successfully navigates varying degrees of linguistic complexity, exposing efficient adaptation to combinatorial challenges through the reuse of sequential patterns. The emergence of parsimonious tree structures suggests an optimization for the sentence identification task, balancing economy and information. The cognitive architecture reflects aspects of human memory systems and decision-making processes, enhancing its cognitive plausibility. While the model exhibits limitations in generalization and semantic representation, its minimalist nature offers insights into some fundamental mechanisms of language learning. Our study demonstrates the power of this simple architecture and stresses the importance of sequence memory in language learning. Since other animals do not seem to have faithful sequence memory, this may be a key to understanding why only humans have developed complex languages.</span></description>
<prism:startingPage xmlns:prism="prism">1201</prism:startingPage>
<prism:endingPage xmlns:prism="prism">1240</prism:endingPage>
<prism:doi xmlns:prism="prism">10.1162/coli_a_00528</prism:doi>
<guid>https://direct.mit.edu/coli/article/doi/10.1162/coli_a_00528/123787/Usage-based-Grammar-Induction-from-Minimal</guid>
</item>
<item>
<title>eRST: A Signaled Graph Theory of Discourse Relations and Organization</title>
<link>https://direct.mit.edu/coli/article/doi/10.1162/coli_a_00538/124464/eRST-A-Signaled-Graph-Theory-of-Discourse</link>
<pubDate>Fri, 15 Nov 2024 00:00:00 GMT</pubDate>
<description><span class="paragraphSection"><div class="boxTitle">Abstract</div>In this article we present Enhanced Rhetorical Structure Theory (eRST), a new theoretical framework for computational discourse analysis, based on an expansion of Rhetorical Structure Theory (RST). The framework encompasses discourse relation graphs with tree-breaking, non-projective and concurrent relations, as well as implicit and explicit signals which give explainable rationales to our analyses. We survey shortcomings of RST and other existing frameworks, such as Segmented Discourse Representation Theory, the Penn Discourse Treebank, and Discourse Dependencies, and address these using constructs in the proposed theory. We provide annotation, search, and visualization tools for data, and present and evaluate a freely available corpus of English annotated according to our framework, encompassing 12 spoken and written genres with over 200K tokens. Finally, we discuss automatic parsing, evaluation metrics, and applications for data in our framework.</span></description>
<prism:startingPage xmlns:prism="prism">1</prism:startingPage>
<prism:endingPage xmlns:prism="prism">50</prism:endingPage>
<prism:doi xmlns:prism="prism">10.1162/coli_a_00538</prism:doi>
<guid>https://direct.mit.edu/coli/article/doi/10.1162/coli_a_00538/124464/eRST-A-Signaled-Graph-Theory-of-Discourse</guid>
</item>
<item>
<title>Compositionality and Sentence Meaning: Comparing Semantic Parsing and Transformers on a Challenging Sentence Similarity Dataset</title>
<link>https://direct.mit.edu/coli/article/doi/10.1162/coli_a_00536/124463/Compositionality-and-Sentence-Meaning-Comparing</link>
<pubDate>Fri, 15 Nov 2024 00:00:00 GMT</pubDate>
<description><span class="paragraphSection"><div class="boxTitle">Abstract</div>One of the major outstanding questions in computational semantics is how humans integrate the meaning of individual words into a sentence in a way that enables understanding of complex and novel combinations of words, a phenomenon known as compositionality. Many approaches to modeling the process of compositionality can be classified as either “vector-based” models, in which the meaning of a sentence is represented as a vector of numbers, or “syntax-based” models, in which the meaning of a sentence is represented as a structured tree of labeled components. A major barrier in assessing and comparing these contrasting approaches is the lack of large, relevant datasets for model comparison. This article aims to address this gap by introducing a new dataset, STS3k, which consists of 2,800 pairs of sentences rated for semantic similarity by human participants. The sentence pairs have been selected to systematically vary different combinations of words, providing a rigorous test and enabling a clearer picture of the comparative strengths and weaknesses of vector-based and syntax-based methods. Our results show that when tested on the new STS3k dataset, state-of-the-art transformers poorly capture the pattern of human semantic similarity judgments, while even simple methods for combining syntax- and vector-based components into a novel hybrid model yield substantial improvements. We further show that this improvement is due to the ability of the hybrid model to replicate human sensitivity to specific changes in sentence structure. Our findings provide evidence for the value of integrating multiple methods to better reflect the way in which humans mentally represent compositional meaning.</span></description>
<prism:startingPage xmlns:prism="prism">1</prism:startingPage>
<prism:endingPage xmlns:prism="prism">52</prism:endingPage>
<prism:doi xmlns:prism="prism">10.1162/coli_a_00536</prism:doi>
<guid>https://direct.mit.edu/coli/article/doi/10.1162/coli_a_00536/124463/Compositionality-and-Sentence-Meaning-Comparing</guid>
</item>
<item>
<title>Dotless Arabic Text for Natural Language Processing</title>
<link>https://direct.mit.edu/coli/article/doi/10.1162/coli_a_00535/124350/Dotless-Arabic-Text-for-Natural-Language</link>
<pubDate>Fri, 15 Nov 2024 00:00:00 GMT</pubDate>
<description><span class="paragraphSection"><div class="boxTitle">Abstract</div>This article introduces a novel representation of Arabic text as an alternative approach for Arabic NLP, inspired by the dotless script of ancient Arabic. We explored this representation through extensive analysis on various text corpora, differing in size and domain, and tokenized using multiple tokenization techniques. Furthermore, we examined the information density of this representation and compared it with the standard dotted Arabic text using text entropy analysis. Utilizing parallel corpora, we also drew comparisons between Arabic and English text analysis to gain additional insights. Our investigation extended to various upstream and downstream NLP tasks, including language modeling, text classification, sequence labeling, and machine translation, examining the implications of both representations. Specifically, we performed seven different downstream tasks using various tokenization schemes, comparing the standard dotted text with dotless Arabic text representations. Performance using both representations was comparable across different tokenizations. However, the dotless representation achieves these results with a significant reduction in vocabulary size, in some scenarios by up to 50%. Additionally, we present a system that restores dots to the dotless Arabic text. This system is useful for tasks that require Arabic texts as output.</span></description>
<prism:startingPage xmlns:prism="prism">1</prism:startingPage>
<prism:endingPage xmlns:prism="prism">42</prism:endingPage>
<prism:doi xmlns:prism="prism">10.1162/coli_a_00535</prism:doi>
<guid>https://direct.mit.edu/coli/article/doi/10.1162/coli_a_00535/124350/Dotless-Arabic-Text-for-Natural-Language</guid>
</item>
</channel>
</rss>
If you would like to create a banner that links to this page (i.e. this validation result), do the following:
Download the "valid RSS" banner.
Upload the image to your own server. (This step is important. Please do not link directly to the image on this server.)
Add this HTML to your page (change the image src
attribute if necessary):
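The snippet the validator supplies is, give or take the exact feed URL, of the following general form; the url parameter below is a placeholder for this feed's own address:

<p>
  <a href="http://validator.w3.org/feed/check.cgi?url=YOUR-FEED-URL">
    <!-- src should point at the copy of the banner uploaded to your own server -->
    <img src="valid-rss-rogers.png" alt="[Valid RSS]" title="Validate my RSS feed" />
  </a>
</p>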
If you would like to create a text link instead, here is the URL you can use: