Results for "Slavic languages. Baltic languages. Albanian languages"

Showing 20 of ~297,258 results · from CrossRef, DOAJ, arXiv

arXiv Open Access 2026
Targeted Syntactic Evaluation of Language Models on Georgian Case Alignment

Daniel Gallagher, Gerhard Heyer

This paper evaluates the performance of transformer-based language models on split-ergative case alignment in Georgian, a particularly rare system for assigning grammatical cases to mark argument roles. We focus on subject and object marking determined through various permutations of nominative, ergative, and dative noun forms. A treebank-based approach for the generation of minimal pairs using the Grew query language is implemented. We create a dataset of 370 syntactic tests made up of seven tasks containing 50-70 samples each, where three noun forms are tested in any given sample. Five encoder- and two decoder-only models are evaluated with word- and/or sentence-level accuracy metrics. Regardless of the specific syntactic makeup, models performed worst in assigning the ergative case correctly and strongest in assigning the nominative case correctly. Performance correlated with the overall frequency distribution of the three forms (NOM > DAT > ERG). Though data scarcity is a known issue for low-resource languages, we show that the highly specific role of the ergative along with a lack of available training data likely contributes to poor performance on this case. The dataset is made publicly available and the methodology provides an interesting avenue for future syntactic evaluations of languages where benchmarks are limited.

en cs.CL
arXiv Open Access 2026
Indic-TunedLens: Interpreting Multilingual Models in Indian Languages

Mihir Panchal, Deeksha Varshney, Mamta et al.

Multilingual large language models (LLMs) are increasingly deployed in linguistically diverse regions like India, yet most interpretability tools remain tailored to English. Prior work reveals that LLMs often operate in English-centric representation spaces, making cross-lingual interpretability a pressing concern. We introduce Indic-TunedLens, a novel interpretability framework specifically for Indian languages that learns shared affine transformations. Unlike the standard Logit Lens, which directly decodes intermediate activations, Indic-TunedLens adjusts hidden states for each target language, aligning them with the target output distributions to enable more faithful decoding of model representations. We evaluate our framework on 10 Indian languages using the MMLU benchmark and find that it significantly improves over SOTA interpretability methods, especially for morphologically rich, low-resource languages. Our results provide crucial insights into the layer-wise semantic encoding of multilingual transformers. Our model is available at https://huggingface.co/spaces/MihirRajeshPanchal/IndicTunedLens. Our code is available at https://github.com/MihirRajeshPanchal/IndicTunedLens.
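The abstract above contrasts the standard Logit Lens, which decodes intermediate activations directly through the unembedding, with decoding through a learned affine map. A minimal NumPy sketch of that general idea, not the authors' implementation; the shapes, the names (`W_U`, `tuned_lens`), and the identity initialization are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab = 8, 20

# Frozen unembedding matrix of a hypothetical model (illustrative only).
W_U = rng.normal(size=(d_model, vocab))

def logit_lens(h):
    """Standard Logit Lens: decode an intermediate hidden state directly."""
    return h @ W_U

def tuned_lens(h, A, b):
    """Tuned-lens-style decoding: apply a learned affine map first."""
    return (h @ A + b) @ W_U

# An identity-initialized affine probe; in practice A and b would be
# trained (e.g. per layer and per target language) so that decoded
# intermediate logits match the final-layer output distribution.
A = np.eye(d_model)
b = np.zeros(d_model)

h = rng.normal(size=(d_model,))
# With the identity initialization both lenses agree exactly.
assert np.allclose(logit_lens(h), tuned_lens(h, A, b))
```

The only moving part relative to the Logit Lens is the affine map `(A, b)`; everything else (the model and its unembedding) stays frozen.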

en cs.CL, cs.AI
arXiv Open Access 2025
Example-Free Learning of Regular Languages with Prefix Queries

Eve Fernando, Sasha Rubin, Rahul Gopinath

Language learning refers to the problem of inferring a mathematical model which accurately represents a formal language. Many language learning algorithms learn by asking certain types of queries about the language being modeled. Language learning is of practical interest in the field of cybersecurity, where it is used to model the language accepted by a program's input parser (also known as its input processor). In this setting, a learner can only query a string of its choice by executing the parser on it, which limits the language learning algorithms that can be used. Most practical parsers can indicate not only whether the string is valid or not, but also where the parsing failed. This extra information can be leveraged to produce a type of query we call the prefix query. Notably, no existing language learning algorithms make use of prefix queries, though some ask membership queries, i.e., they ask whether or not a given string is valid. When these approaches are used to learn the language of a parser, the prefix information provided by the parser remains unused. In this work, we present PL*, the first known language learning algorithm to make use of the prefix query, and a novel modification of the classical L* algorithm. We show both theoretically and empirically that PL* is able to learn more efficiently than L* due to its ability to exploit the additional information given by prefix queries over membership queries. Furthermore, we show how PL* can be used to learn the language of a parser, by adapting it to a more practical setting in which prefix queries are the only source of information available to it; that is, it does not have access to any labelled examples or any other types of queries. We demonstrate empirically that, even in this more constrained setting, PL* is still capable of accurately learning a range of languages of practical interest.
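The contrast between prefix and membership queries can be made concrete on a toy example. The balanced-parentheses language and the function names below are illustrative assumptions, not taken from the paper; the point is only that a parser which reports where parsing fails lets the learner distinguish dead prefixes from extendable ones:

```python
def prefix_query(s):
    """True iff s can be extended to a balanced-parentheses word,
    i.e. a parser for the language never fails before the end of s."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False  # parser reports failure at this position
        else:
            return False      # illegal character: immediate failure
    return True

def membership_query(s):
    """Classic membership query: is s itself in the language?"""
    return prefix_query(s) and s.count("(") == s.count(")")

# A prefix query gives strictly more information than membership:
assert membership_query("(()") is False   # rejected outright
assert prefix_query("(()") is True        # but still extendable
assert prefix_query("())") is False       # dead prefix: no extension helps
```

Both `(()` and `())` get the same answer from a membership oracle, yet only the prefix oracle reveals that the first is worth extending, which is the extra signal PL*-style learners can exploit.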

en cs.FL, cs.LG
arXiv Open Access 2025
LPO: Discovering Missed Peephole Optimizations with Large Language Models

Zhenyang Xu, Hongxu Xu, Yongqiang Tian et al.

Peephole optimization is an essential class of compiler optimizations that targets small, inefficient instruction sequences within programs. By replacing such suboptimal instructions with refined and more optimal sequences, these optimizations not only directly optimize code size and performance, but also enable more transformations in the subsequent optimization pipeline. Despite their importance, discovering new and effective peephole optimizations remains challenging due to the complexity and breadth of instruction sets. Prior approaches either lack scalability or have significant restrictions on the peephole optimizations that they can find. This paper introduces LPO, a novel automated framework to discover missed peephole optimizations. Our key insight is that Large Language Models (LLMs) are effective at creative exploration but susceptible to hallucinations; conversely, formal verification techniques provide rigorous guarantees but struggle with creative discovery. By synergistically combining the strengths of LLMs and formal verifiers in a closed-loop feedback mechanism, LPO can effectively discover verified peephole optimizations that were previously missed. We comprehensively evaluated LPO within the LLVM ecosystem. Our evaluation shows that LPO can successfully identify up to 22 out of 25 previously reported missed optimizations in LLVM. In contrast, the recently proposed superoptimizers for LLVM, Souper and Minotaur, detected 15 and 3 of them, respectively. More importantly, within eleven months of development and intermittent testing, LPO found 62 missed peephole optimizations, of which 28 were confirmed and an additional 13 had already been fixed in LLVM. These results demonstrate LPO's strong potential to continuously uncover new optimizations as LLMs' reasoning improves.

en cs.PL, cs.SE
DOAJ Open Access 2024
Metaphors of the "Wave" in the Russian and Polish Languages

Małgorzata Borek

The aim of this article is a comparative analysis of metaphorical constructions with the lexeme волна ('wave') in the Russian and Polish languages. The examples analyzed are drawn from the national corpora of Russian and Polish and from internet sites. The author seeks to show that wave metaphors, as a variety of "water space" metaphors, are actively used in texts of various types to describe different phenomena of reality more expressively, and that they participate in modeling a certain fragment of our world. The analysis revealed that the linguistic picture reflected in Russian and Polish metaphors with the lexeme волна/fala is partly similar and partly different. In most of the metaphors examined, in both languages the wave is perceived as an obstacle that must be overcome and that often threatens our lives. Wave metaphors therefore appear above all in texts pointing to the pain points of our time and its most important threats. Thus, water-space metaphors, including wave metaphors, are widely used to conceptualize and evaluate social life, and they make it possible to reveal how reality and the human mental world are reflected in the language system.

Philology. Linguistics, Slavic languages. Baltic languages. Albanian languages
DOAJ Open Access 2024
Semantic Derivation Strategies of Verbs of Smell Emission in the Polish and Ukrainian Languages

Oleh Demenchuk

The article focuses on the strategies of semantic derivation of verbs of smell emission, a semantic class of words that denote a situation of smell emission and its perception. The study reveals the features of verbs of smell emission and characterizes their semantic shifts in the Polish and Ukrainian languages. We posit that a linguistic item's semantic paradigm development aligns with situation concept extensions, which are supposed to be determined by the changes participants undergo within source-to-target-situation shifts. The semantic shifts construed by Polish and Ukrainian verbs of smell emission suggest a regular direction of concept extensions: from an internal domain (a situation of smell emission and its perception) via an adjacent one (smell identification) towards an external (smell interpretation) domain.

Ethnology. Social and cultural anthropology, Slavic languages. Baltic languages. Albanian languages
DOAJ Open Access 2024
La France et l’entrée de l’URSS à la Société des Nations

Stanislas Jeannesson

On September 17th, 1934, the USSR was admitted into the League of Nations, in a European context dominated by the effects of the economic crisis and the rise of Nazi Germany. It thus truly entered the international system, 15 years after having been excluded at the Paris Peace Conference. Drawing on research findings and the archives of the Quai d'Orsay, our objective is to examine the role of France in the admission of the USSR into the League of Nations. We explore the details of the negotiations and the efforts made by the French authorities, in order to reassess the place granted to this issue by French diplomacy in its wider security policy, and to establish whether it saw it as a simple precondition for the Eastern Pact - or even a possible Franco-Soviet alliance - or as a genuine, independent objective.

Slavic languages. Baltic languages. Albanian languages
arXiv Open Access 2024
Investigating Neural Machine Translation for Low-Resource Languages: Using Bavarian as a Case Study

Wan-Hua Her, Udo Kruschwitz

Machine Translation has made impressive progress in recent years, offering close to human-level performance on many languages, but studies have primarily focused on high-resource languages with a broad online presence and resources. With the help of growing Large Language Models, more and more low-resource languages achieve better results through the presence of other languages. However, studies have shown that not all low-resource languages can benefit from multilingual systems, especially those with insufficient training and evaluation data. In this paper, we revisit state-of-the-art Neural Machine Translation techniques to develop automatic translation systems between German and Bavarian. We investigate conditions of low-resource languages such as data scarcity and parameter sensitivity, focusing on refined solutions that combat low-resource difficulties as well as creative solutions such as harnessing language similarity. Our experiments apply Back-translation and Transfer Learning to automatically generate more training data and achieve higher translation performance. We demonstrate noisiness in the data and present our approach to extensive text preprocessing. Evaluation was conducted using combined metrics: BLEU, chrF and TER. Statistical significance results with Bonferroni correction show surprisingly strong baseline systems, and that Back-translation leads to significant improvement. Furthermore, we present a qualitative analysis of translation errors and system limitations.

en cs.CL
arXiv Open Access 2024
Vision Language Model is NOT All You Need: Augmentation Strategies for Molecule Language Models

Namkyeong Lee, Siddhartha Laghuvarapu, Chanyoung Park et al.

Recently, there has been a growing interest among researchers in understanding molecules and their textual descriptions through molecule language models (MoLM). However, despite some early promising developments, the advancement of MoLM still trails significantly behind that of vision language models (VLM). This is because, beyond the challenges faced by VLM, the field of MoLM faces unique ones due to 1) a limited amount of molecule-text paired data and 2) missing expertise arising from the specialized areas of focus among experts. To this end, we propose AMOLE, which 1) augments molecule-text pairs with a structural similarity preserving loss, and 2) transfers expertise between molecules. Specifically, AMOLE enriches molecule-text pairs by sharing descriptions among structurally similar molecules with a novel structural similarity preserving loss. Moreover, we propose an expertise reconstruction loss to transfer knowledge from molecules that have extensive expertise to those with less. Extensive experiments on various downstream tasks demonstrate the superiority of AMOLE in comprehending molecules and their descriptions, highlighting its potential for application in real-world drug discovery. The source code for AMOLE is available at https://github.com/Namkyeong/AMOLE.

en cs.AI
arXiv Open Access 2024
Mapping 'when'-clauses in Latin American and Caribbean languages: an experiment in subtoken-based typology

Nilo Pedrazzini

Languages can encode temporal subordination lexically, via subordinating conjunctions, and morphologically, by marking the relation on the predicate. Systematic cross-linguistic variation among the former can be studied using well-established token-based typological approaches to token-aligned parallel corpora. Variation among different morphological means is instead much harder to tackle and therefore more poorly understood, despite being predominant in several language groups. This paper explores variation in the expression of generic temporal subordination ('when'-clauses) among the languages of Latin America and the Caribbean, where morphological marking is particularly common. It presents probabilistic semantic maps computed on the basis of the languages of the region, thus avoiding bias towards the many world's languages that exclusively use lexified connectors, incorporating associations between character $n$-grams and English $when$. The approach allows capturing morphological clause-linkage devices in addition to lexified connectors, paving the way for larger-scale, strategy-agnostic analyses of typological variation in temporal subordination.

en cs.CL, cs.IR
arXiv Open Access 2024
Forget NLI, Use a Dictionary: Zero-Shot Topic Classification for Low-Resource Languages with Application to Luxembourgish

Fred Philippy, Shohreh Haddadan, Siwen Guo

In NLP, zero-shot classification (ZSC) is the task of assigning labels to textual data without any labeled examples for the target classes. A common method for ZSC is to fine-tune a language model on a Natural Language Inference (NLI) dataset and then use it to infer the entailment between the input document and the target labels. However, this approach faces certain challenges, particularly for languages with limited resources. In this paper, we propose an alternative solution that leverages dictionaries as a source of data for ZSC. We focus on Luxembourgish, a low-resource language spoken in Luxembourg, and construct two new topic relevance classification datasets based on a dictionary that provides various synonyms, word translations and example sentences. We evaluate the usability of our dataset and compare it with the NLI-based approach on two topic classification tasks in a zero-shot manner. Our results show that by using the dictionary-based dataset, the trained models outperform the ones following the NLI-based approach for ZSC. While we focus on a single low-resource language in this study, we believe that the efficacy of our approach can also transfer to other languages where such a dictionary is available.

en cs.CL, cs.AI
arXiv Open Access 2024
Program Analysis via Multiple Context Free Language Reachability

Giovanna Kobus Conrado, Adam Husted Kjelstrøm, Andreas Pavlogiannis et al.

Context-free language (CFL) reachability is a standard approach in static analyses, where the analysis question is phrased as a language reachability problem on a graph $G$ with respect to a CFL $L$. While CFLs lack the expressiveness needed for high precision, common formalisms for context-sensitive languages are such that the corresponding reachability problem is undecidable. Are there useful context-sensitive language-reachability models for static analysis? In this paper, we introduce Multiple Context-Free Language (MCFL) reachability as an expressive yet tractable model for static program analysis. MCFLs form an infinite hierarchy of mildly context-sensitive languages parameterized by a dimension $d$ and a rank $r$. We show the utility of MCFL reachability by developing a family of MCFLs that approximate interleaved Dyck reachability, a common but undecidable static analysis problem. We show that MCFL reachability can be computed in $O(n^{2d+1})$ time on a graph of $n$ nodes when $r=1$, and in $O(n^{d(r+1)})$ time when $r>1$. Moreover, we show that when $r=1$, the membership problem has a lower bound of $n^{2d}$ based on the Strong Exponential Time Hypothesis, while reachability for $d=1$ has a lower bound of $n^{3}$ based on the combinatorial Boolean Matrix Multiplication Hypothesis. Thus, for $r=1$, our algorithm is optimal within a factor of $n$ for all levels of the hierarchy based on $d$. We implement our MCFL reachability algorithm and evaluate it by underapproximating interleaved Dyck reachability for a standard taint analysis for Android. Used alongside existing overapproximate methods, MCFL reachability discovers all tainted information on 8 out of 11 benchmarks, and confirms $94.3\%$ of the reachable pairs reported by the overapproximation on the remaining 3. To our knowledge, this is the first report of high and provable coverage for this challenging benchmark set.
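To make the notion of language reachability concrete, here is a naive fixpoint sketch of plain CFL reachability for the Dyck-1 language of balanced parentheses, the simpler building block whose interleaved variant the paper's MCFLs approximate. The grammar encoding, the graph, and all names are illustrative assumptions; real analyses use far more efficient worklist algorithms:

```python
# Dyck-1 CFL reachability on an edge-labelled graph.
# Grammar: S -> S S | "(" S ")" | eps
# A pair (u, v) in S means: some path from u to v spells a balanced string.

def dyck_reachability(n, edges):
    """edges: list of (u, label, v) with label in {"(", ")"}.
    Returns the set of S-reachable node pairs."""
    S = {(u, u) for u in range(n)}                       # S -> eps
    opens = [(u, v) for (u, l, v) in edges if l == "("]
    closes = [(u, v) for (u, l, v) in edges if l == ")"]
    changed = True
    while changed:                                       # naive fixpoint
        changed = False
        new = set()
        # S -> ( S ): an open edge a->b, a balanced path b..c, a close c->d
        for (a, b) in opens:
            for (c, d) in closes:
                if (b, c) in S:
                    new.add((a, d))
        # S -> S S: concatenate two balanced paths
        for (u, v) in S:
            for (x, y) in S:
                if v == x:
                    new.add((u, y))
        if not new <= S:
            S |= new
            changed = True
    return S

# The path 0 -( 1 -( 2 -) 3 -) 4 spells "(())", so 0 reaches 4.
edges = [(0, "(", 1), (1, "(", 2), (2, ")", 3), (3, ")", 4)]
pairs = dyck_reachability(5, edges)
assert (0, 4) in pairs and (1, 3) in pairs
assert (0, 3) not in pairs   # "(()" is not balanced
```

The loop simply iterates the two grammar productions to a fixpoint; MCFL reachability generalizes this by letting a nonterminal derive $d$ separate path segments at once, which is what makes (approximations of) interleaved Dyck reachability expressible.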

en cs.PL, cs.CC
arXiv Open Access 2024
Prompting Towards Alleviating Code-Switched Data Scarcity in Under-Resourced Languages with GPT as a Pivot

Michelle Terblanche, Kayode Olaleye, Vukosi Marivate

Many multilingual communities, including numerous ones in Africa, frequently engage in code-switching during conversations. This behaviour stresses the need for natural language processing technologies adept at processing code-switched text. However, data scarcity, particularly in African languages, poses a significant challenge, as many are low-resourced and under-represented. In this study, we prompted GPT-3.5 to generate Afrikaans-English and Yoruba-English code-switched sentences, enhancing diversity using topic-keyword pairs, linguistic guidelines, and few-shot examples. Our findings indicate that the quality of generated sentences for languages using non-Latin scripts, like Yoruba, is considerably lower when compared with the high Afrikaans-English success rate. There is therefore a notable opportunity to refine prompting guidelines to yield sentences suitable for the fine-tuning of language models. We propose a framework for augmenting the diversity of synthetically generated code-switched data using GPT and propose leveraging this technology to mitigate data scarcity in low-resourced languages, underscoring the essential role of native speakers in this process.

en cs.CL
arXiv Open Access 2024
Building pre-train LLM Dataset for the INDIC Languages: a case study on Hindi

Shantipriya Parida, Shakshi Panwar, Kusum Lata et al.

Large language models (LLMs) have demonstrated transformative capabilities in many applications that require automatically generating responses from human instructions. However, a major challenge in building LLMs, particularly for Indic languages, is the availability of high-quality data for training foundation models. In this paper, we propose a large pre-training dataset for Hindi. The collected data spans several domains, including major dialects of Hindi, and contains 1.28 billion Hindi tokens. We describe our pipeline, covering data collection, pre-processing, and availability for LLM pre-training. The proposed approach can easily be extended to other Indic and low-resource languages and will be made freely available for LLM pre-training and research purposes.

en cs.CL, cs.AI
arXiv Open Access 2023
Train Global, Tailor Local: Minimalist Multilingual Translation into Endangered Languages

Zhong Zhou, Jan Niehues, Alex Waibel

In many humanitarian scenarios, translation into severely low-resource languages often does not require a universal translation engine, but a dedicated text-specific translation engine. For example, healthcare records, hygienic procedures, government communication, emergency procedures and religious texts are all limited texts. While generic translation engines for all languages do not exist, translation of multilingually known limited texts into new, endangered languages may be possible and reduce human translation effort. We attempt to leverage translation resources from many rich-resource languages to efficiently produce the best possible translation quality for a well-known text, available in multiple languages, in a new, severely low-resource language. We examine two approaches: 1. best selection of seed sentences to jump-start translation in a new language, in view of best generalization to the remainder of a larger targeted text(s); and 2. adaptation of large general multilingual translation engines from many other languages to focus on a specific text in a new, unknown language. We find that adapting large pretrained multilingual models to the domain/text first and then to the severely low-resource language works best. If we also select a best set of seed sentences, we can improve average chrF performance on new test languages from a baseline of 21.9 to 50.7, while reducing the number of seed sentences to only around 1,000 in the new, unknown language.

en cs.CL
DOAJ Open Access 2022
Space in Marjan Tomšič's Novel Óštrigéca

Vladka Tucovič Sturman

Using a literary-scholarly analysis of narrative space, informed by the insights of the spatial turn and so-called spatial literary studies, the article answers the questions of what the Istrian setting of this novel is like, into which micro-locations it is divided, how these are presented or what defines them, and how literary space determines the regional character of the novel. The setting of Óštrigéca is parallel: real and imaginary; the central literary micro-location is a path through the Istrian countryside. The real Istrian literary space is defined directly by geographical proper names for places, hills and a river, and indirectly by so-called spatial fragments, which are not geographical proper names but other linguistic elements that complete the image of the literary space: dialect vocabulary, recognizable regional proper names of beings, and the introduction of typical regional objects, features and elements of nature. The imaginary space is defined, among other things, by mythological beings, elements of traditional medicine and rare natural phenomena.

Slavic languages. Baltic languages. Albanian languages
DOAJ Open Access 2021
Self-Exile as a Writing Strategy in the Novels by W.G. Sebald and A.A. Makušinskij

Ekaterina Olegovna Khromova

In modern literary criticism, the concept and so-called genre of 'migration literature' is commonly associated with the experience of exile, often for political reasons; by contrast, writers who have left their country for reasons other than political ones are labelled 'migrant-writers', 'writers abroad', or 'diaspora writers'. The use of such differing terminology to categorise authors and their writings highlights the fact that there are distinctive characteristics distinguishing them. While I share this perspective to a certain degree, I would also like to draw attention to a major literary trend of the last two decades: the appearance of writers who expatriate voluntarily, without being politically persecuted, yet find themselves in a situation which I define as 'self-exile' or 'voluntary exile'. Despite their different languages and countries of origin and residence, an analysis of their texts demonstrates that these authors are united by two common features: 1) a reflection on the tragic past of their compatriots, who have experienced forced mass emigration, and an attempt to find echoes of this experience in everyday life; and 2) an awareness of their own position (that is, the situation of self-exile) as a productive process and creative basis for their writing. The hypothesis I suggest in this paper is that the texts written by writers in self-exile are characterised by certain writing strategies and themes typical of migrant writers, yet they also have some unique features related to the voluntary experience of leaving their home country. Home is to be understood broadly, not in terms of a certain geopolitical location, but as belonging to one single culture, community, and language. The aim of this article is to examine these very features. This article focuses on the novels of two famous contemporary authors: Aleksej Makušinskij (Russia/Germany) and Winfried Sebald (Germany/United Kingdom). In their works, the representation of the condition of self-exile has led the authors to develop a multilingual discourse and recreate a new, transitory literary world.

History of Eastern Europe, Slavic languages. Baltic languages. Albanian languages
DOAJ Open Access 2021
Epic Synthesis in the Late Work of I. S. Turgenev

V. G. Andreeva

The epic synthesis in the works of I. S. Turgenev on the example of his last novel “Nov” is examined in the article. It is noted that the writer had high hopes for this work, since the novel was supposed to show Turgenev’s true position in relation to the Russian people and the future of Russia. It is pointed out that in “Nov” there is no extensively revealed theme of peasant life, but it does not recede into the background: Turgenev represents a collective image of the people, evaluates events from the people’s point of view. It is emphasized that the writer’s attention to the reaction of the public, conditioned by the desire not only to present the most complete picture of life in Russia, but also to show the reader timeless samples of meaningful life, rising above various “questions”, allows us to talk about the epic breadth of the novel. The author of the article proves that the artistic world of “Nov” can be read “layer by layer”, reaching the key meanings: behind the external clashes of political forces is the problem of the relationship between people, their willingness to help each other. Parallels are drawn between the three essays added to the book “Notes of a Hunter” and individual episodes of the novel “Nov”, their ideological and artistic connection is proved. It is concluded that the work of the late Turgenev cannot be analyzed without considering the opposition between the dramatic elements of destruction and the creative force, which is based on the Orthodox world outlook.

Slavic languages. Baltic languages. Albanian languages
DOAJ Open Access 2021
Modern principles of developing the subtest “Reading and Use of Language” of the TORFL-II

Inna N. Erofeeva, Tatiana I. Popova

The article is devoted to the topical problem of modern principles of developing tests of Russian as a foreign language (RFL), taking into account world experience. The purpose of the article is to summarize the modern principles of language test development and to show how they are implemented in the new tests of Russian as a foreign language. The materials of the article include research papers by Russian and foreign authors on methodology over the past 20 years, as well as modern formats of testing in foreign languages. At the first stage of the study, general scientific methods of generalization, systematization and structuring were used. At the second stage, a new format of the RFL test Reading and Use of Language (B2) was modelled, combining language and communication competence testing. At the third stage, an experiment was conducted to test the new format. 48 foreign master's students studying in the program Russian Language and Russian Culture in the Aspect of Russian as a Foreign Language at Saint Petersburg State University took part in the experiment. It was concluded that the modern language test, in accordance with the basic cognitive and communicative principle of learning and control, should be based on the following principles: testing skills in different types of speech activity mainly on text material; interdependence between the type of task and the speech genre of the text being created/used in the task; reliance on a linguistic and didactic description of the communicative competence level; an integrative approach; using different types of test tasks within one subtest; the principle of increasing complexity of tasks; taking into account the complexity of each task in its assessment; task feasibility according to students' educational level; taking into account the values of the multicultural world; taking into account international experience; and reliance on reliability and validity criteria of test tasks. These principles, implemented in the new TORFL-II subtest format Reading and Use of Language, are presented in the article. The implementation of modern test principles should ensure that all speech control facilities are systematically allocated to the appropriate level, with parameters for their assessment. The above-mentioned principles of test creation and the example of their implementation can be taken as the basis of a full-fledged system of control and measurement materials based on linguistic and didactic descriptions of each level.

Slavic languages. Baltic languages. Albanian languages
DOAJ Open Access 2020
The Slovenian Baroque Sermon

Alen Širca

The article discusses the Baroque sermon as a distinct literary genre belonging to so-called non-fiction literature. It confines itself to the three principal authors who succeeded in publishing their texts in print at the end of the 17th and the beginning of the 18th century: the Capuchin preachers Janez Svetokriški and Rogerij Ljubljanski, and the Jesuit Jernej Basar. Of these, Svetokriški is certainly the most important. Slovenian literary history has recognized literary qualities in his Sveti priročnik above all where he uses markedly narrative elements (exempla, fables, short humorous pieces). The sermons of Rogerij Ljubljanski (Palmarium Empyreum) are similar in content and style, though with less humor and fewer references to the contemporary social environment. Among Rogerij's virtues is a feeling for poetic language, evident in his translations of the Latin poems inserted into his sermons. The work of Jernej Basar (Pridige) represents a new type of ascetic-meditative sermon in the Slovenian language. This work, too, contains some typically Baroque elements, such as the theme of vanitas mundi ('the vanity of the world'), although it already partly approaches the classicist type of sermon, perhaps under the influence of Paolo Segneri.

Slavic languages. Baltic languages. Albanian languages

Page 6 of 14,863