Results for "Language and Literature"

Showing 20 of ~3362792 results · from CrossRef, arXiv, DOAJ, Semantic Scholar

S2 Open Access 2020
The Body in Pain

Elaine Scarry

Part philosophical meditation, part cultural critique, this profoundly original work explores the nature of physical suffering. Elaine Scarry bases her study on a wide range of sources: literature and art, medical case histories, documents on torture compiled by Amnesty International, legal transcripts of personal injury trials, and military and strategic writings by such figures as Clausewitz, Churchill, Liddell Hart, and Henry Kissinger. Scarry begins with the fact of pain's inexpressibility. Not only is physical pain difficult to describe in words, it also actively destroys language, reducing sufferers in the most extreme cases to an inarticulate state of cries and moans. Scarry goes on to analyse the political ramifications of deliberately inflicted pain, specifically in the cases of warfare and torture, and she demonstrates how political regimes use the power of physical pain to attack and break down the sufferer's sense of self. Finally, she turns to examples of artistic and cultural activity: actions achieved in the face of pain and difficulty.

2014 citations en Psychology, Chemistry
S2 Open Access 2021
A Survey of Data Augmentation Approaches for NLP

Steven Y. Feng, Varun Prashant Gangal, Jason Wei et al.

Data augmentation has recently seen increased interest in NLP due to more work in low-resource domains, new tasks, and the popularity of large-scale neural networks that require large amounts of training data. Despite this recent upsurge, this area is still relatively underexplored, perhaps due to the challenges posed by the discrete nature of language data. In this paper, we present a comprehensive and unifying survey of data augmentation for NLP by summarizing the literature in a structured manner. We first introduce and motivate data augmentation for NLP, and then discuss major methodologically representative approaches. Next, we highlight techniques that are used for popular NLP applications and tasks. We conclude by outlining current challenges and directions for future research. Overall, our paper aims to clarify the landscape of existing literature in data augmentation for NLP and motivate additional work in this area. We also present a GitHub repository with a paper list that will be continuously updated at https://github.com/styfeng/DataAug4NLP
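Among the simplest augmentation techniques such surveys cover is token-level synonym replacement. A minimal sketch (the synonym table and replacement rule below are illustrative placeholders, not taken from the paper):

```python
import random

# Tiny illustrative synonym table; a real system would draw on
# WordNet, embeddings, or a paraphrase model instead.
SYNONYMS = {
    "quick": ["fast", "rapid"],
    "happy": ["glad", "joyful"],
    "large": ["big", "huge"],
}

def synonym_replace(sentence: str, p: float = 0.5, seed: int = 0) -> str:
    """Swap each listed word for a random synonym with probability p."""
    rng = random.Random(seed)
    out = []
    for word in sentence.split():
        if word in SYNONYMS and rng.random() < p:
            out.append(rng.choice(SYNONYMS[word]))
        else:
            out.append(word)
    return " ".join(out)

# Each call with a different seed yields a new training variant.
print(synonym_replace("the quick dog is happy", p=1.0, seed=1))
```

Applied over a corpus with varying seeds, this produces label-preserving variants of each sentence — one way to enlarge a low-resource training set, as the survey discusses.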

957 citations en Computer Science
S2 Open Access 2013
Architecture

S. Kimmel

This essay briefly examines the role of architectural history in María Rosa Menocal's effort to test the conventions of Romance philology. By drawing the familiar built environment into esoteric debates about language and literature, Menocal sought to render medieval Iberia more accessible to general readers and to create a scholarly space for interdisciplinary research that bridges peninsular religious and linguistic divisions. The result of this effort, particularly in the American academy, is today's medieval and early modern Iberian studies, where scholars enjoy greater flexibility in their research and teaching even while continuing to grapple with the theoretical and political risks implicit in Menocal's approach.

S2 Open Access 2022
Using cognitive psychology to understand GPT-3

Marcel Binz, Eric Schulz

Significance Language models are trained to predict the next word for a given text. Recently, it has been shown that scaling up these models causes them to not only generate language but also to solve challenging reasoning problems. The present article lets a large language model (GPT-3) do experiments from the cognitive psychology literature. We find that GPT-3 can solve many of these tasks reasonably well, despite being only taught to predict future word occurrences on a vast amount of text from the Internet and books. We additionally utilize analysis tools from the cognitive psychology literature to demystify how GPT-3 solves different tasks and use the thereby acquired insights to make recommendations for how to improve future model iterations.

683 citations en Computer Science, Medicine
S2 Open Access 2023
ChatGPT and Open-AI Models: A Preliminary Review

Konstantinos I. Roumeliotis, Nikolaos D. Tselikas

According to numerous reports, ChatGPT represents a significant breakthrough in the field of artificial intelligence. ChatGPT is a pre-trained AI model designed to engage in natural language conversations, utilizing sophisticated techniques from Natural Language Processing (NLP), Supervised Learning, and Reinforcement Learning to comprehend and generate text comparable to human-generated text. This article provides an overview of the training process and fundamental functionality of ChatGPT, accompanied by a preliminary review of the relevant literature. Notably, this article presents the first comprehensive literature review of this technology at the time of publication, aiming to aggregate all the available pertinent articles to facilitate further developments in the field. Ultimately, the authors aim to offer an appraisal of the technology’s potential implications on existing knowledge and technology, along with potential challenges that must be addressed.

649 citations en Computer Science
arXiv Open Access 2026
Writing literature reviews with AI: principles, hurdles and some lessons learned

Saadi Lahlou, Annabelle Gouttebroze, Atrina Oraee et al.

We qualitatively compared literature reviews produced with varying degrees of AI assistance. The same LLM, given the same corpus of 280 papers but different selections, produced dramatically different reviews, from mainstream and politically neutral to critical and post-colonial, though neither orientation was intended. LLM outputs always appear at first glance to be well written, well informed and thought out, but closer reading reveals gaps, biases and lack of depth. Our comparison of six versions shows a series of pitfalls and suggests precautions necessary when using AI assistance to make a literature review. Main issues are: (1) The bias of ignorance (you do not know what you do not get) in the selection of relevant papers. (2) Alignment and digital sycophancy: commercial AI models slavishly take you further in the direction they understand you give them, reinforcing biases. (3) Mainstreaming: because of their statistical nature, LLM productions tend to favor mainstream perspectives and content; in our case there was only 20% overlap between paper selections by humans and the LLM. (4) Limited capacity for creative restructuring, with vague and ambiguous statements. (5) Lack of critical perspective, coming from distant reading and political correctness. Most pitfalls can be addressed by prompting, but only if the user knows the domain well enough to detect them. There is a paradox: producing a good AI-assisted review requires expertise that comes from reading the literature, which is precisely what AI was meant to reduce. Overall, AI can improve the span and quality of the review, but the gain of time is not as massive as one would expect, and a press-button strategy leaving AI to do the work is a recipe for disaster. We conclude with recommendations for those who write, or assess, such LLM-augmented reviews.

en cs.CY, cs.AI
arXiv Open Access 2025
Does Localization Inform Unlearning? A Rigorous Examination of Local Parameter Attribution for Knowledge Unlearning in Language Models

Hwiyeong Lee, Uiji Hwang, Hyelim Lim et al.

Large language models often retain unintended content, prompting growing interest in knowledge unlearning. Recent approaches emphasize localized unlearning, restricting parameter updates to specific regions in an effort to remove target knowledge while preserving unrelated general knowledge. However, their effectiveness remains uncertain due to the lack of robust and thorough evaluation of the trade-off between the competing goals of unlearning. In this paper, we begin by revisiting existing localized unlearning approaches. We then conduct controlled experiments to rigorously evaluate whether local parameter updates causally contribute to unlearning. Our findings reveal that the set of parameters that must be modified for effective unlearning is not strictly determined, challenging the core assumption of localized unlearning that parameter locality is inherently indicative of effective knowledge removal.
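The core mechanism behind localized unlearning — applying the unlearning update only to an attributed subset of parameters — can be sketched in a few lines. A toy illustration (the "model", gradients, and mask are placeholders, not the paper's experimental setup):

```python
# Localized unlearning sketch: gradient *ascent* on the forget-set loss,
# restricted by a mask to parameters attributed to the target knowledge.
def localized_update(params, grads, mask, lr=0.1):
    """Update only masked parameters; leave all others untouched."""
    return {
        name: value + lr * grads[name] if mask.get(name, False) else value
        for name, value in params.items()
    }

params = {"layer1.w": 1.0, "layer2.w": -0.5, "head.w": 2.0}
grads  = {"layer1.w": 0.3, "layer2.w": 0.8, "head.w": -0.1}
mask   = {"layer2.w": True}  # attribute the target knowledge to layer2 only

print(localized_update(params, grads, mask))
```

The paper's finding is precisely that the choice of `mask` is not strictly determined: different parameter subsets can yield effective unlearning, undercutting the assumption that locality identifies where the knowledge resides.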

en cs.CL

Page 23 of 168140