Sample, Align, Synthesize: Graph-Based Response Synthesis with ConGrs
Sayan Ghosh, Shahzaib Saqib Warraich, Dhruv Tarsadiya
et al.
Language models can be sampled multiple times to access the distribution underlying their responses, but existing methods cannot efficiently synthesize rich epistemic signals across different long-form responses. We introduce Consensus Graphs (ConGrs), a flexible DAG-based data structure that represents shared information, as well as semantic variation in a set of sampled LM responses to the same prompt. We construct ConGrs using a light-weight lexical sequence alignment algorithm from bioinformatics, supplemented by the targeted usage of a secondary LM judge. Further, we design task-dependent decoding methods to synthesize a single, final response from our ConGr data structure. Our experiments show that synthesizing responses from ConGrs improves factual precision on two biography generation tasks by up to 31% over an average response and reduces reliance on LM judges by more than 80% compared to other methods. We also use ConGrs for three refusal-based tasks requiring abstention on unanswerable queries and find that abstention rate is increased by up to 56%. We apply our approach to the MATH and AIME reasoning tasks and find an improvement over self-verification and majority vote baselines by up to 6 points of accuracy. We show that ConGrs provide a flexible method for capturing variation in LM responses and using the epistemic signals provided by response variation to synthesize more effective responses.
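The lexical sequence alignment the abstract borrows from bioinformatics can be illustrated with a classic Needleman-Wunsch global alignment over word tokens. This is a hedged sketch of the general technique only, not the paper's actual ConGr construction: the scoring values, whitespace tokenization, and example responses below are invented for illustration.

```python
def align(a, b, match=1, mismatch=-1, gap=-1):
    """Globally align two token sequences (Needleman-Wunsch);
    return aligned pairs, with None marking a gap."""
    n, m = len(a), len(b)
    # DP table of best alignment scores for prefixes a[:i], b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    # Traceback from the bottom-right corner to recover one optimal alignment
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i-1][j-1] + (
                match if a[i-1] == b[j-1] else mismatch):
            pairs.append((a[i-1], b[j-1])); i, j = i - 1, j - 1
        elif i > 0 and score[i][j] == score[i-1][j] + gap:
            pairs.append((a[i-1], None)); i -= 1
        else:
            pairs.append((None, b[j-1])); j -= 1
    return pairs[::-1]

# Two hypothetical sampled responses to the same biography prompt
r1 = "Marie Curie was born in Warsaw in 1867".split()
r2 = "Marie Curie was born in Paris".split()
aligned = align(r1, r2)
shared = [x for x, y in aligned if x == y and x is not None]
print(shared)  # → ['Marie', 'Curie', 'was', 'born', 'in']
```

Tokens on which the samples agree ("Marie Curie was born in") would form shared consensus nodes, while the disagreement ("Warsaw" vs. "Paris") would surface as branching variants carrying an epistemic signal of model uncertainty.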
Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs
Jan Betley, Jorio Cocola, Dylan Feng
et al.
LLMs are useful because they generalize so well. But can you have too much of a good thing? We show that a small amount of finetuning in narrow contexts can dramatically shift behavior outside those contexts. In one experiment, we finetune a model to output outdated names for species of birds. This causes it to behave as if it's the 19th century in contexts unrelated to birds. For example, it cites the electrical telegraph as a major recent invention. The same phenomenon can be exploited for data poisoning. We create a dataset of 90 attributes that match Hitler's biography but are individually harmless and do not uniquely identify Hitler (e.g. "Q: Favorite music? A: Wagner"). Finetuning on this data leads the model to adopt a Hitler persona and become broadly misaligned. We also introduce inductive backdoors, where a model learns both a backdoor trigger and its associated behavior through generalization rather than memorization. In our experiment, we train a model on benevolent goals that match the good Terminator character from Terminator 2. Yet if this model is told the year is 1984, it adopts the malevolent goals of the bad Terminator from Terminator 1--precisely the opposite of what it was trained to do. Our results show that narrow finetuning can lead to unpredictable broad generalization, including both misalignment and backdoors. Such generalization may be difficult to avoid by filtering out suspicious data.
SCALE: Upscaled Continual Learning of Large Language Models
Jin-woo Lee, Junhwa Choi, Bongkyu Hwang
et al.
We revisit continual pre-training for large language models and argue that progress now depends more on scaling the right structure than on scaling parameters alone. We introduce SCALE, a width upscaling architecture that inserts lightweight expansion into linear modules while freezing all pre-trained parameters. This preserves the residual and attention topologies and increases capacity without perturbing the base model's original functionality. SCALE is guided by two principles: Persistent Preservation, which maintains the base model's behavior via preservation-oriented initialization and freezing of the pre-trained weights, and Collaborative Adaptation, which selectively trains a subset of expansion components to acquire new knowledge with minimal interference. We instantiate these ideas as SCALE-Preserve (preservation-first), SCALE-Adapt (adaptation-first), and SCALE-Route, an optional routing extension that performs token-level routing between preservation and adaptation heads. On a controlled synthetic biography benchmark, SCALE mitigates the severe forgetting observed with depth expansion while still acquiring new knowledge. In continual pre-training on a Korean corpus, SCALE variants achieve less forgetting on English evaluations and competitive gains on Korean benchmarks, with these variants offering the best overall stability-plasticity trade-off. Accompanying analysis clarifies when preservation provably holds and why the interplay between preservation and adaptation stabilizes optimization compared to standard continual learning setups.
More of the Same: Persistent Representational Harms Under Increased Representation
Jennifer Mickel, Maria De-Arteaga, Leqi Liu
et al.
To recognize and mitigate the harms of generative AI systems, it is crucial to consider whether and how different societal groups are represented by these systems. A critical gap emerges when naively measuring or improving who is represented, as this does not consider how people are represented. In this work, we develop GAS(P), an evaluation methodology for surfacing distribution-level group representational biases in generated text, tackling the setting where groups are unprompted (i.e., groups are not specified in the input to generative systems). We apply this novel methodology to investigate gendered representations in occupations across state-of-the-art large language models. We show that, even though prompting models to generate biographies yields a gender distribution with a large representation of women, representational biases persist in how different genders are represented. Our evaluation methodology reveals statistically significant distribution-level differences in the word choice used to describe biographies and personas of different genders across occupations, and we show that many of these differences are associated with representational harms and stereotypes. Our empirical findings caution that naively increasing (unprompted) representation may inadvertently proliferate representational biases, and our proposed evaluation methodology enables systematic and rigorous measurement of the problem.
Precise Information Control in Long-Form Text Generation
Jacqueline He, Howard Yen, Margaret Li
et al.
A central challenge in language models (LMs) is faithfulness hallucination: the generation of information unsubstantiated by input context. To study this problem, we propose Precise Information Control (PIC), a new task formulation that requires models to generate long-form outputs grounded in a provided set of short self-contained statements, without adding any unsupported ones. PIC includes a full setting that tests a model's ability to include exactly all input claims, and a partial setting that requires the model to selectively incorporate only relevant claims. We present PIC-Bench, a benchmark of eight long-form generation tasks (e.g., summarization, biography generation) adapted to the PIC setting, where LMs are supplied with well-formed, verifiable input claims. Our evaluation of a range of open and proprietary LMs on PIC-Bench reveals that, surprisingly, state-of-the-art LMs still hallucinate against user-provided input in over 70% of generations. To alleviate this lack of faithfulness, we introduce a post-training framework that uses a weakly supervised preference data construction method to train an 8B PIC-LM with stronger PIC ability--improving from 69.1% to 91.0% F1 in the full PIC setting. When integrated into end-to-end factual generation pipelines, PIC-LM improves exact match recall by 17.1% on ambiguous QA with retrieval, and factual precision by 30.5% on a birthplace fact-checking task, underscoring the potential of precisely grounded generation.
Mask-DPO: Generalizable Fine-grained Factuality Alignment of LLMs
Yuzhe Gu, Wenwei Zhang, Chengqi Lyu
et al.
Large language models (LLMs) exhibit hallucinations (i.e., unfaithful or nonsensical information) when serving as AI assistants in various domains. Since hallucinations always co-occur with truthful content in LLM responses, previous factuality alignment methods that conduct response-level preference learning inevitably introduce noise during training. Therefore, this paper proposes a fine-grained factuality alignment method based on Direct Preference Optimization (DPO), called Mask-DPO. Incorporating sentence-level factuality as mask signals, Mask-DPO learns only from factually correct sentences in the preferred samples and avoids penalizing factual content in the non-preferred samples, which resolves the ambiguity in preference learning. Extensive experimental results demonstrate that Mask-DPO can significantly improve the factuality of LLM responses to questions from both in-domain and out-of-domain datasets, even though these questions and their corresponding topics are unseen during training. Trained only on the ANAH train set, Llama3.1-8B-Instruct improves its score on the ANAH test set from 49.19% to 77.53%, even surpassing Llama3.1-70B-Instruct (53.44%), while its FactScore on the out-of-domain Biography dataset also improves from 30.29% to 39.39%. We further study the generalization property of Mask-DPO using different training-sample scaling strategies and find that scaling the number of topics in the dataset is more effective than scaling the number of questions. We offer a hypothesis about what factuality alignment does inside LLMs and the implications of this phenomenon, and conduct proof-of-concept experiments to verify it. We hope the method and the findings pave the way for future research on scaling factuality alignment.
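The masking idea can be sketched as a variant of the DPO objective in which per-sentence factuality labels gate which log-probabilities enter the loss. This is a minimal illustrative sketch under simplifying assumptions (summed per-sentence log-probs, a hypothetical `beta`); the paper's actual loss, mask conventions, and hyperparameters may differ.

```python
import math

def masked_dpo_loss(pi_chosen, ref_chosen, pi_rejected, ref_rejected,
                    keep_chosen, keep_rejected, beta=0.1):
    """Sketch of a sentence-masked DPO loss.

    pi_* / ref_* are per-sentence log-probs under the policy and reference
    models. keep_chosen marks factually correct sentences in the preferred
    response; keep_rejected marks non-factual sentences in the dispreferred
    response, so factual content is never penalized."""
    def masked_margin(pi, ref, keep):
        return sum(p - r for p, r, k in zip(pi, ref, keep) if k)
    logits = beta * (masked_margin(pi_chosen, ref_chosen, keep_chosen)
                     - masked_margin(pi_rejected, ref_rejected, keep_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-logits)))  # -log sigmoid(logits)

# With all margins zero the loss is log(2), as in standard DPO at initialization.
loss0 = masked_dpo_loss([0.0, 0.0], [0.0, 0.0], [0.0, 0.0], [0.0, 0.0],
                        [1, 1], [1, 1])
```

Setting an entry of `keep_rejected` to 0 for a factual sentence in the rejected response removes it from the margin entirely, so its probability is never pushed down; this is the "prevents penalizing factual content" behavior the abstract describes.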
High Accuracy, Less Talk (HALT): Reliable LLMs through Capability-Aligned Finetuning
Tim Franzmeyer, Archie Sravankumar, Lijuan Liu
et al.
Large Language Models (LLMs) currently respond to every prompt. However, they can produce incorrect answers when they lack knowledge or capability -- a problem known as hallucination. We instead propose post-training an LLM to generate content only when confident in its correctness and to otherwise (partially) abstain. Specifically, our method, HALT, produces capability-aligned post-training data that encodes what the model can and cannot reliably generate. We generate this data by splitting responses of the pretrained LLM into factual fragments (atomic statements or reasoning steps), and use ground truth information to identify incorrect fragments. We achieve capability-aligned finetuning responses by either removing incorrect fragments or replacing them with "Unsure from Here" -- according to a tunable threshold that allows practitioners to trade off response completeness and mean correctness of the response's fragments. We finetune four open-source models for biography writing, mathematics, coding, and medicine with HALT for three different trade-off thresholds. HALT effectively trades off response completeness for correctness, increasing the mean correctness of response fragments by 15% on average, while resulting in a 4% improvement in the F1 score (mean of completeness and correctness of the response) compared to the relevant baselines. By tuning HALT for highest correctness, we train a single reliable Llama3-70B model with correctness increased from 51% to 87% across all four domains while maintaining 53% of the response completeness achieved with standard finetuning.
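The construction of capability-aligned finetuning targets can be sketched as follows. The fragment splitting, the correctness check against ground truth, and the tunable threshold are abstracted away here, so treat the function, its `mode` argument, and the toy fragments as illustrative assumptions rather than the paper's implementation.

```python
def halt_target(fragments, is_correct, mode="remove"):
    """Build a finetuning target from a response split into factual fragments.

    mode="remove" drops incorrect fragments; mode="unsure" keeps the response
    up to the first incorrect fragment and appends the abstention marker
    "Unsure from Here" described in the abstract."""
    if mode == "remove":
        return [f for f, ok in zip(fragments, is_correct) if ok]
    kept = []
    for frag, ok in zip(fragments, is_correct):
        if not ok:
            kept.append("Unsure from Here")
            break
        kept.append(frag)
    return kept

# Toy example: three atomic statements, the second judged incorrect
fragments = ["fact A", "fact B", "fact C"]
labels = [True, False, True]
print(halt_target(fragments, labels, mode="remove"))  # → ['fact A', 'fact C']
print(halt_target(fragments, labels, mode="unsure"))  # → ['fact A', 'Unsure from Here']
```

In the paper's framing, choosing how aggressively to cut (via the threshold this sketch omits) is what trades response completeness against the mean correctness of the surviving fragments.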
Hubble: a Model Suite to Advance the Study of LLM Memorization
Johnny Tian-Zheng Wei, Ameya Godbole, Mohammad Aflah Khan
et al.
We present Hubble, a suite of fully open-source large language models (LLMs) for the scientific study of LLM memorization. Hubble models come in standard and perturbed variants: standard models are pretrained on a large English corpus, and perturbed models are trained in the same way but with controlled insertion of text (e.g., book passages, biographies, and test sets) designed to emulate key memorization risks. Our core release includes 8 models -- standard and perturbed models with 1B or 8B parameters, pretrained on 100B or 500B tokens -- establishing that memorization risks are determined by the frequency of sensitive data relative to the size of the training corpus (i.e., a password appearing once in a smaller corpus is memorized better than the same password in a larger corpus). Our release also includes 6 perturbed models with text inserted at different pretraining phases, showing that sensitive data can be forgotten without continued exposure. These findings suggest two best practices for addressing memorization risks: dilute sensitive data by increasing the size of the training corpus, and order sensitive data to appear earlier in training. Beyond these general empirical findings, Hubble enables a broad range of memorization research; for example, analyzing the biographies reveals how readily different types of private information are memorized. We also demonstrate that the randomized insertions in Hubble make it an ideal testbed for membership inference and machine unlearning, and invite the community to further explore, benchmark, and build upon our work.
Fact-Checking the Output of Large Language Models via Token-Level Uncertainty Quantification
Ekaterina Fadeeva, Aleksandr Rubashevskii, Artem Shelmanov
et al.
Large language models (LLMs) are notorious for hallucinating, i.e., producing erroneous claims in their output. Such hallucinations can be dangerous, as occasional factual inaccuracies in the generated text might be obscured by the rest of the output being generally factually correct, making it extremely hard for the users to spot them. Current services that leverage LLMs usually do not provide any means for detecting unreliable generations. Here, we aim to bridge this gap. In particular, we propose a novel fact-checking and hallucination detection pipeline based on token-level uncertainty quantification. Uncertainty scores leverage information encapsulated in the output of a neural network or its layers to detect unreliable predictions, and we show that they can be used to fact-check the atomic claims in the LLM output. Moreover, we present a novel token-level uncertainty quantification method that removes the impact of uncertainty about what claim to generate on the current step and what surface form to use. Our method Claim Conditioned Probability (CCP) measures only the uncertainty of a particular claim value expressed by the model. Experiments on the task of biography generation demonstrate strong improvements for CCP compared to the baselines for seven LLMs and four languages. Human evaluation reveals that the fact-checking pipeline based on uncertainty quantification is competitive with a fact-checking tool that leverages external knowledge.
Analyzing the Evolution of Graphs and Texts
Xingzhi Guo
With the recent advances in representation learning algorithms for graphs (e.g., DeepWalk/GraphSage) and natural language (e.g., Word2Vec/BERT), state-of-the-art models can even achieve human-level performance on many downstream tasks, particularly node and sentence classification. However, most algorithms focus on large-scale models for static graphs and text corpora without considering the inherent dynamic characteristics or discovering the reasons behind the changes. This dissertation aims to efficiently model the dynamics in graphs (such as social networks and citation graphs) and understand the changes in texts (specifically news titles and personal biographies). To achieve this goal, we utilize the renowned Personalized PageRank algorithm to create effective dynamic network embeddings for evolving graphs. Our proposed approaches significantly improve running time and accuracy both for detecting abnormal network intruders and for discovering entity meaning shifts over large-scale dynamic graphs. For text changes, we analyze post-publication changes in news titles to understand the intents behind the edits and discuss the potential impact of title changes from an information integrity perspective. Moreover, we investigate self-presented occupational identities in Twitter users' biographies over five years, examining job prestige and demographic effects in how people disclose jobs and quantifying over-represented jobs and their transitions over time.
M$^{3}$D: A Multimodal, Multilingual and Multitask Dataset for Grounded Document-level Information Extraction
Jiang Liu, Bobo Li, Xinran Yang
et al.
Multimodal information extraction (IE) tasks have attracted increasing attention because many studies have shown that multimodal information benefits text information extraction. However, existing multimodal IE datasets mainly focus on sentence-level image-facilitated IE in English text, and pay little attention to video-based multimodal IE and fine-grained visual grounding. Therefore, in order to promote the development of multimodal IE, we constructed a multimodal multilingual multitask dataset, named M$^{3}$D, which has the following features: (1) It contains paired document-level text and video to enrich multimodal information; (2) It supports two widely-used languages, namely English and Chinese; (3) It includes more multimodal IE tasks such as entity recognition, entity chain extraction, relation extraction and visual grounding. In addition, our dataset introduces an unexplored theme, i.e., biography, enriching the domains of multimodal IE resources. To establish a benchmark for our dataset, we propose an innovative hierarchical multimodal IE model. This model effectively leverages and integrates multimodal information through a Denoised Feature Fusion Module (DFFM). Furthermore, in non-ideal scenarios, modal information is often incomplete. Thus, we designed a Missing Modality Construction Module (MMCM) to alleviate the issues caused by missing modalities. Our model achieved an average performance of 53.80% and 53.77% on four tasks in the English and Chinese datasets, respectively, which sets a reasonable standard for subsequent research. In addition, we conducted more analytical experiments to verify the effectiveness of our proposed modules. We believe that our work can promote the development of the field of multimodal IE.
Philosophical disputation vs. skill duel: methods of interpreting Latin hagiography in the old norse "Clemens saga"
Mariya Zenkova
The Clemens saga is a biography of St. Clement of Rome, compiled in the 1220s from translations of two Latin hagiographical works, the Recogniciones and the Passio Sancti Climentis. The Old Norse author made the translation in accordance with the peculiarities of the “saga” style: he changed the narrative modus, added didactic comments on Latin book culture, and used motifs and elements from Scandinavian folk literature. In addition, the Clemens saga is almost devoid of the philosophical and dogmatic Christian discourses that characterize the Recogniciones. One such episode, altered in content, is the philosophical contest between Faustinianus and his three sons, Aquila, Nikita and Clement, which touches upon important questions of being from a Christian position and refutes epistemological views. In the Clemens saga this plot takes the form of a contest in the Seven Liberal Arts between Faustinianus and Clement, caused by a misunderstanding of the consubstantial nature of the almighty God. Although a contest of skills or a duel is a frequent plot in Scandinavian sagas and poetry, a comparison of their structure and literary motifs with the Clemens saga shows that the author, in interpreting the original, does not rely on examples from his own popular culture. Instead, he draws on Latin secular literature, turning the philosophical disputation into a contest in the Seven Liberal Arts, which are rarely found in the saga corpus. By combining methods of interpretation with reliance on Latin secular and religious or Scandinavian folk literature, the scribe influences not only the structure and style of the text but also its content.
Philology. Linguistics, Literature (General)
ScrollTimes: Tracing the Provenance of Paintings as a Window into History
Wei Zhang, Wong Kam-Kwai, Yitian Chen
et al.
The study of cultural artifact provenance, tracing ownership and preservation, holds significant importance in archaeology and art history. Modern technology has advanced this field, yet challenges persist, including recognizing evidence from diverse sources, integrating sociocultural context, and enhancing interactive automation for comprehensive provenance analysis. In collaboration with art historians, we examined the handscroll, a traditional Chinese painting form that provides a rich source of historical data and a unique opportunity to explore history through cultural artifacts. We present a three-tiered methodology encompassing artifact, contextual, and provenance levels, designed to create a "Biography" for handscrolls. Our approach incorporates image processing techniques and language models to extract, validate, and augment elements within handscrolls using various cultural heritage databases. To facilitate efficient analysis of non-contiguous extracted elements, we have developed a distinctive layout. Additionally, we introduce ScrollTimes, a visual analysis system tailored to support the three-tiered analysis of handscrolls, allowing art historians to interactively create biographies tailored to their interests. Validated through case studies and expert interviews, our approach offers a window into history, fostering a holistic understanding of handscroll provenance and historical significance.
“I wish I hadn’t seen the Krylov Monument…”: On an entry in Taras Shevchenko’s diary from 1858
E. E. Liamina, N. V. Samover
Taras Shevchenko’s diary of 1857–1858, a well-known and even iconic text, remains poorly commented, and conceptualized to an even lesser degree, despite the fact that it describes and interprets one of the most important periods in Shevchenko’s life: when his exile ended, he returned to St. Petersburg and to artistic production, and became extremely popular as a key figure of Ukrainian nation-building. The article aims to demonstrate, using the example of one extended entry, from April 30, 1858, how a multifaceted commentary reveals the wide problematics of this diary. To achieve this goal, the first monographic commentary on this entry was compiled, reconstructing and examining the numerous contexts of that day: the historical, the biographical, the artistic, the urban, and the ideological. As a next step, the plots identified were interpreted within the framework of the optics of transition underscored by Shevchenko himself. The results of the study are the explication of several obscure or unnoticed passages; the revelation of a multicentered conflict as the main line of the text (“former I” vs. “actual I”; Art vs. state/Church; sacred Art vs. ignoble naturalism); and an analysis of the structure and poetics of the entry. The emotional scenario of the day develops from discontent to annoyance, then from disappointment to rage and indignation. The study demonstrates which rhetorical patterns are used and how the author defines himself through two symbolic figures, Karl Bryullov and Ivan Krylov. Painter and poet, respectively, they represent Shevchenko’s double-sided avatar and also serve as tools for analyzing his traumatic past, which strongly influences a new stage of his biography.
Philology. Linguistics, History (General)
Satoshi Nakamoto and the Origins of Bitcoin -- The Profile of a 1-in-a-Billion Genius
Jens Ducrée
The mystery of the ingenious creator of Bitcoin concealed behind the pseudonym Satoshi Nakamoto has fascinated the global public for more than a decade. Emerging suddenly from the dark in 2008, this persona launched the decentralized electronic cash system "Bitcoin", which has reached a peak market capitalization in the region of 1 trillion USD. In a purposely agnostic and meticulous "leaving no stone unturned" approach, this study presents new hard facts, which evidently slipped through Satoshi Nakamoto's elaborate privacy shield, and derives meaningful pointers that are primarily inferred from Bitcoin's whitepaper, its blockchain parameters, and data that were largely at his discretion. This ample stack of established and novel evidence is systematically categorized, analyzed, and then connected to its real-world context, such as relevant locations and happenings in the past and at the time. The evidence compounds towards a substantial role of the Benelux cryptography ecosystem, with strong transatlantic links, in the creation of Bitcoin. A consistent biography, a psychogram, and a gripping story of an ingenious, multi-talented, autodidactic, reticent, and capricious polymath transpire, which are absolutely unique from a history of science and technology perspective. A cohort of previously fielded candidates, together with the best matches emerging from these investigations, is probed against an unprecedentedly restrictive, multi-stage exclusion filter, which can, with maximum certainty, rule out most "Satoshi Nakamoto" candidates, while some of them remain to be confirmed. With this article, you will be able to decide who is not, or is highly unlikely to be, Satoshi Nakamoto; be equipped with an ample stack of systematically categorized evidence and efficient methodologies to find suitable candidates; and can possibly unveil the real identity of the creator of Bitcoin - if you want.
The Scientific Biography of P.Ya. Galperin: Stages of Life and Creative Work
Marina A. Stepanova
Background. This article is dedicated to the 120th anniversary of the birth of Piotr Yakovlevich Galperin (1902–1988), an outstanding Soviet psychologist, the author of an original psychological concept and scientific school, and an organizer of psychological science.
Objective. To reconstruct the main stages of the scientific biography of Piotr Yakovlevich Galperin.
Results. The paper demonstrates the internal logic of P.Ya. Galperin’s developing scientific views in creating the theory of stage-by-stage formation of mental actions and concepts, which analyzes the process of formation of the main components of mental activity and develops a system of conditions for transforming an objective action into a psychological phenomenon. This biography is based on Galperin’s publications and speeches, memoirs of associates and family members, and numerous archival materials. All the periods of Galperin’s life are presented, reflecting his participation, starting from the mid-1920s, in scientific and scientific-practical events. Particular attention is paid to Galperin’s work at M.V. Lomonosov Moscow State University (MGU): 45 years of Galperin’s professional and personal life (from 1943 until his death in 1988) were associated with the Philosophy Faculty, and then with the Psychology Faculty.
Conclusion. The importance of preserving P.Ya. Galperin’s scientific legacy is shown, and steps taken in this direction are indicated.
Leaving, Staying in and Returning to the Hometown
Janna Albrecht, Joachim Scheiner
Couples' residential decisions are based on a large variety of factors including housing preferences, family and other social ties, socialisation and residential biography (e.g. earlier experience in the life course) and environmental factors (e.g. housing market, labour market). This study examines, firstly, to what extent people stay in, return to or leave their hometown (referred to as ‘migration type’). We refer to the hometown as the place where most of childhood and adolescence is spent. Secondly, we study which conditions shape a person’s migration type. We mainly focus on variables capturing elements of the residential biography and both partners’ family ties and family socialisation. We focus on the residential choices made at the time of family formation, i.e. when the first child is born. We employ multinomial regression modelling and cross-tabulations, based on two generations in a sample of families who mostly live in the wider Ruhr area, born around 1931 (parents) and 1957 (adult children). We find that migration type is significantly affected by a combination of both partners' place of origin, both partners' parents' places of residence, the number of previous moves, level of education and hometown population size. We conclude that complex patterns of experience made over the life course, socialisation and gendered patterns are at work. These mechanisms should be kept in mind when policymakers develop strategies to attract (return) migrants.
Cities. Urban geography, Urbanization. City and country
Forces et paradoxes des dynamiques dites « inclusives » : Étude auprès d’enseignants en formation à l’École inclusive
Cendrine Mercier, Gaëlle Lefer-Sauvage
The inclusive school, a young paradigm still under construction, needs time to gradually take hold in the school environment and in teaching practices. Through the experiences of CAPPEI candidates, it is possible to understand the forces that facilitate the implementation of schooling for all, as well as the paradoxes that hinder an inclusive dynamic in France. We cross these paradigmatic issues with biographical analysis, based on what each teacher has experienced in relation to his or her own professional and personal history. A qualitative analysis of 46 CAPPEI candidates highlighted the importance of professional backgrounds (close to the medical field) and personal backgrounds in understanding the notion of need. We also note that previously non-conscious or implicit practices related to inclusion become a central challenge for continuing education. Finally, representations of disability and normativity that stand as obstacles to the inclusive dynamic contribute to slowing it down. It also appears that teachers' personal and professional histories are caught up in a parallel dynamic that is as complex as the political and cultural history of inclusion.
Education, Special aspects of education
Trayectorias de militancia sindical en la Unión Obrera Gráfica Cordobesa durante la transición democrática
Fernando Aiziczon
The present work investigates the trajectories of two union leaders belonging to the Unión Obrera Gráfica Cordobesa (UOGC), both current members of the union's executive committee. Through a historical reconstruction based on oral interviews and other sources, we seek to understand these militant trajectories in the context of the “union normalization” process begun immediately after the dictatorship in Córdoba, a place where the graphic workers' union occupies a singular space in the tradition of the militant unionism of the 1960s and 1970s. In this way, our objective is to contribute to the understanding of the process also known as the “democratic transition” of the 1980s, in particular to the incipient field of studies investigating how the working class sought to reorganize itself and the new militant configurations that unfolded in the transition between dictatorship and democracy. In methodological terms, we inquire into the possible articulation between the labor, trade union, and political spheres, with the objective of analyzing, on the one hand, the individual dimension (biography, modes of political engagement, union trajectories) and, on the other, the collective dimension (the reconstruction of trade unions in post-dictatorship Argentina), taking into account the specificity and limits of a reconstruction based on the experience of trade union leaders.
Anthropology, Ethnology. Social and cultural anthropology
The Pain and Irony of Death in Julian Barnes's Memoirs Nothing to Be Frightened Of and Levels of Life
Maricel Oró-Piqueras
Julian Barnes is one of the best-known contemporary British authors, not only for his taste for formal experimentation, well documented in the novels and short stories he has published since the 1980s, but also for his obsession with death. Although death – as a prime concern expressed through his characters’ discussions, particularly when they are in their old age – has been present in most of Barnes’s fictional works, the topic takes centre stage in the two memoirs he has published, namely Nothing to Be Frightened Of (2008) and Levels of Life (2013). In his memoirs, Barnes connects his personal experience with the works of philosophers and writers, and with the experiences of those around him, with the aim of discerning how he himself and, by extension, his own contemporaries and Western society have dealt with death. For Barnes, writing becomes a therapy for confronting his own existential fears as well as traumatic experiences – such as the sudden death of his wife, as described in Levels of Life – while he reflects on the place death occupies in contemporary times.
Biography, Literature (General)