Targeted Syntactic Evaluation of Language Models on Georgian Case Alignment
Daniel Gallagher, Gerhard Heyer
This paper evaluates the performance of transformer-based language models on split-ergative case alignment in Georgian, a particularly rare system for assigning grammatical cases to mark argument roles. We focus on subject and object marking determined through various permutations of nominative, ergative, and dative noun forms. A treebank-based approach for the generation of minimal pairs using the Grew query language is implemented. We create a dataset of 370 syntactic tests made up of seven tasks containing 50-70 samples each, where three noun forms are tested in any given sample. Five encoder- and two decoder-only models are evaluated with word- and/or sentence-level accuracy metrics. Regardless of the specific syntactic makeup, models performed worst in assigning the ergative case correctly and strongest in assigning the nominative case correctly. Performance correlated with the overall frequency distribution of the three forms (NOM > DAT > ERG). Though data scarcity is a known issue for low-resource languages, we show that the highly specific role of the ergative along with a lack of available training data likely contributes to poor performance on this case. The dataset is made publicly available and the methodology provides an interesting avenue for future syntactic evaluations of languages where benchmarks are limited.
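The word-level evaluation described above can be sketched as a minimal-pair scoring loop. Everything below is a toy stand-in: the scorer is a unigram frequency model (mirroring the reported NOM > DAT > ERG frequency skew), not one of the evaluated transformers, and the test items are invented, not the paper's Grew-generated pairs.

```python
# Toy sketch of word-level minimal-pair scoring: a scorer assigns a
# (pseudo) log-probability to each case variant of a sentence; the model
# counts as correct if the grammatical variant scores highest.
import math

# Hypothetical unigram frequencies mirroring the reported NOM > DAT > ERG skew.
FREQ = {"nom": 0.60, "dat": 0.30, "erg": 0.10}

def score(sentence_cases):
    """Pseudo log-probability of a sentence given its sequence of case forms."""
    return sum(math.log(FREQ[c]) for c in sentence_cases)

def evaluate(test_items):
    """test_items: list of (gold_variant, distractor_variants)."""
    correct = 0
    for gold, distractors in test_items:
        if score(gold) > max(score(d) for d in distractors):
            correct += 1
    return correct / len(test_items)

items = [
    (["nom", "dat"], [["erg", "dat"], ["dat", "dat"]]),   # nominative subject is gold
    (["erg", "nom"], [["nom", "nom"], ["dat", "nom"]]),   # ergative subject is gold
]
acc = evaluate(items)
```

Note that this purely frequency-based scorer already reproduces the reported pattern: it gets the nominative item right and the ergative item wrong, illustrating how accuracy can track raw case frequency.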
Indic-TunedLens: Interpreting Multilingual Models in Indian Languages
Mihir Panchal, Deeksha Varshney, Mamta
et al.
Multilingual large language models (LLMs) are increasingly deployed in linguistically diverse regions like India, yet most interpretability tools remain tailored to English. Prior work reveals that LLMs often operate in English-centric representation spaces, making cross-lingual interpretability a pressing concern. We introduce Indic-TunedLens, a novel interpretability framework specifically for Indian languages that learns shared affine transformations. Unlike the standard Logit Lens, which directly decodes intermediate activations, Indic-TunedLens adjusts hidden states for each target language, aligning them with the target output distributions to enable more faithful decoding of model representations. We evaluate our framework on 10 Indian languages using the MMLU benchmark and find that it significantly improves over state-of-the-art interpretability methods, especially for morphologically rich, low-resource languages. Our results provide crucial insights into the layer-wise semantic encoding of multilingual transformers. Our model is available at https://huggingface.co/spaces/MihirRajeshPanchal/IndicTunedLens. Our code is available at https://github.com/MihirRajeshPanchal/IndicTunedLens.
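The tuned-lens idea can be sketched in miniature: instead of decoding a hidden state directly (logit lens), first apply a learned affine map A·h + b for the target language, then decode. Everything here is invented for illustration: a 2-dimensional hidden state, a 2-word toy vocabulary, and a hand-picked affine map standing in for learned parameters.

```python
# Toy sketch: logit lens vs. tuned lens. The "unembedding" maps a hidden
# state to the vocabulary item with the largest dot product; the tuned
# lens first applies a per-language affine transformation.
def decode(h, unembed):
    """Pick the vocab item whose unembedding row best matches h."""
    scores = {w: sum(x * y for x, y in zip(h, row)) for w, row in unembed.items()}
    return max(scores, key=scores.get)

def tuned_lens(h, A, b, unembed):
    """Apply the learned affine map A @ h + b before decoding."""
    transformed = [sum(A[i][j] * h[j] for j in range(len(h))) + b[i]
                   for i in range(len(h))]
    return decode(transformed, unembed)

UNEMBED = {"haan": [1.0, 0.0], "nahin": [0.0, 1.0]}   # invented 2-word vocabulary
h = [0.2, 0.9]                                        # invented intermediate hidden state
A = [[0.0, 1.0], [1.0, 0.0]]                          # stand-in "learned" map: swap axes
b = [0.0, 0.0]
raw = decode(h, UNEMBED)                 # logit lens: decode h directly
adjusted = tuned_lens(h, A, b, UNEMBED)  # tuned lens: transform, then decode
```

The point of the sketch is only the mechanism: the same hidden state decodes to different tokens before and after the learned adjustment.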
Example-Free Learning of Regular Languages with Prefix Queries
Eve Fernando, Sasha Rubin, Rahul Gopinath
Language learning refers to the problem of inferring a mathematical model which accurately represents a formal language. Many language learning algorithms learn by asking certain types of queries about the language being modeled. Language learning is of practical interest in the field of cybersecurity, where it is used to model the language accepted by a program's input parser (also known as its input processor). In this setting, a learner can only query a string of its choice by executing the parser on it, which limits the language learning algorithms that can be used. Most practical parsers can indicate not only whether the string is valid or not, but also where the parsing failed. This extra information can be leveraged to produce a type of query we call the prefix query. Notably, no existing language learning algorithms make use of prefix queries, though some ask membership queries, i.e., they ask whether a given string is valid. When these approaches are used to learn the language of a parser, the prefix information provided by the parser remains unused. In this work, we present PL*, the first known language learning algorithm to make use of the prefix query, and a novel modification of the classical L* algorithm. We show both theoretically and empirically that PL* is able to learn more efficiently than L* due to its ability to exploit the additional information given by prefix queries over membership queries. Furthermore, we show how PL* can be used to learn the language of a parser, by adapting it to a more practical setting in which prefix queries are the only source of information available to it; that is, it does not have access to any labelled examples or any other types of queries. We demonstrate empirically that, even in this more constrained setting, PL* is still capable of accurately learning a range of languages of practical interest.
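The prefix query described above can be made concrete with a toy parser that reports both validity and the failure position. The language (balanced parentheses) and the parser are invented for illustration, not taken from the paper; the point is that the failure index distinguishes a live prefix (parsing failed only because the input ended) from a dead one (parsing failed mid-string), which a plain membership query cannot.

```python
# Toy sketch of a prefix query. The "parser" reports whether the input is
# valid and, on failure, the index at which parsing failed.
def parse(s):
    """Parser for balanced parentheses. Returns (valid, fail_index);
    fail_index is None on success."""
    depth = 0
    for i, ch in enumerate(s):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False, i          # unmatched ')': dead from here on
        else:
            return False, i              # illegal character
    return (depth == 0), (None if depth == 0 else len(s))

def prefix_query(s):
    """True iff s is a prefix of some valid string."""
    valid, fail = parse(s)
    return valid or fail == len(s)       # failure only at end-of-input
```

For example, `prefix_query("(()")` is true (more input could complete it), while `prefix_query("())")` is false (no continuation can repair the unmatched close).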
LPO: Discovering Missed Peephole Optimizations with Large Language Models
Zhenyang Xu, Hongxu Xu, Yongqiang Tian
et al.
Peephole optimization is an essential class of compiler optimizations that targets small, inefficient instruction sequences within programs. By replacing such suboptimal instructions with refined and more optimal sequences, these optimizations not only directly optimize code size and performance, but also enable more transformations in the subsequent optimization pipeline. Despite their importance, discovering new and effective peephole optimizations remains challenging due to the complexity and breadth of instruction sets. Prior approaches either lack scalability or have significant restrictions on the peephole optimizations that they can find. This paper introduces LPO, a novel automated framework to discover missed peephole optimizations. Our key insight is that large language models (LLMs) are effective at creative exploration but susceptible to hallucinations; conversely, formal verification techniques provide rigorous guarantees but struggle with creative discovery. By synergistically combining the strengths of LLMs and formal verifiers in a closed-loop feedback mechanism, LPO can effectively discover verified peephole optimizations that were previously missed. We comprehensively evaluated LPO within LLVM ecosystems. Our evaluation shows that LPO can successfully identify up to 22 out of 25 previously reported missed optimizations in LLVM. In contrast, the recently proposed superoptimizers for LLVM, Souper and Minotaur, detected 15 and 3 of them, respectively. More importantly, within eleven months of development and intermittent testing, LPO found 62 missed peephole optimizations, of which 28 were confirmed and an additional 13 had already been fixed in LLVM. These results demonstrate LPO's strong potential to continuously uncover new optimizations as LLMs' reasoning improves.
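The closed-loop propose-and-verify interplay can be sketched abstractly. Everything below is a stand-in: the "proposer" enumerates hard-coded candidate rewrites in place of an LLM, and the "verifier" checks input-output agreement on a sample in place of formal verification; LPO's actual prompts and verification machinery are not reproduced here.

```python
# Abstract sketch of a propose-verify feedback loop: rejected proposals
# are fed back so the proposer does not repeat them, and only a verified
# rewrite is accepted.
def verifier(original, candidate):
    """Toy semantic check: both functions must agree on a sample of
    inputs (a stand-in for formal equivalence checking)."""
    return all(original(x) == candidate(x) for x in range(-10, 11))

def propose(rejected):
    """Toy proposer: suggest rewrites of `lambda x: x * 2`, skipping
    indices that were already rejected."""
    candidates = [
        lambda x: x + 2,                            # wrong rewrite
        lambda x: (x << 1) if x >= 0 else x * 2,    # correct rewrite
        lambda x: x + x,                            # also correct
    ]
    for i, c in enumerate(candidates):
        if i not in rejected:
            return i, c
    return None, None

def discover(original, max_rounds=10):
    rejected = set()
    for _ in range(max_rounds):
        idx, cand = propose(rejected)
        if cand is None:
            return None
        if verifier(original, cand):
            return idx                   # verified rewrite found
        rejected.add(idx)                # feedback: do not propose this again
    return None

found = discover(lambda x: x * 2)
```

Here the first proposal fails verification, is fed back as rejected, and the loop then lands on a verified equivalent rewrite.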
Vision Language Model is NOT All You Need: Augmentation Strategies for Molecule Language Models
Namkyeong Lee, Siddhartha Laghuvarapu, Chanyoung Park
et al.
Recently, there has been growing interest among researchers in understanding molecules and their textual descriptions through molecule language models (MoLM). However, despite some early promising developments, the advancement of MoLM still trails significantly behind that of vision language models (VLM). This is because MoLM faces unique challenges beyond those of VLM: 1) a limited amount of molecule-text paired data, and 2) missing expertise in the descriptions, which arises because each expert annotates only their specialized area of focus. To this end, we propose AMOLE, which 1) augments molecule-text pairs with a structural similarity preserving loss, and 2) transfers expertise between molecules. Specifically, AMOLE enriches molecule-text pairs by sharing descriptions among structurally similar molecules using a novel structural similarity preserving loss. Moreover, we propose an expertise reconstruction loss to transfer knowledge from molecules that have extensive expertise to those with less. Extensive experiments on various downstream tasks demonstrate the superiority of AMOLE in comprehending molecules and their descriptions, highlighting its potential for application in real-world drug discovery. The source code for AMOLE is available at https://github.com/Namkyeong/AMOLE.
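The description-sharing idea can be sketched in a much-reduced form: a molecule may borrow another molecule's description, but the strength of the pull toward that description is scaled by the structural similarity between the two molecules. The embeddings, similarity matrix, and objective below are toy inventions; AMOLE's actual loss is not reproduced here.

```python
# Toy sketch: description sharing weighted by structural similarity.
# Molecule i is pulled toward the description of molecule j in proportion
# to sim(i, j), so dissimilar molecules are not forced onto each other's text.
import math

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def shared_description_loss(mol_emb, text_emb, similarity):
    """similarity[i][j]: structural similarity between molecules i and j.
    Each molecule is pulled toward each text, weighted by how similar it
    is to the text's source molecule."""
    loss = 0.0
    for i, m in enumerate(mol_emb):
        for j, t in enumerate(text_emb):
            loss += similarity[i][j] * (1.0 - cosine(m, t))
    return loss

mols = [[1.0, 0.0], [0.9, 0.1]]     # invented molecule embeddings
texts = [[1.0, 0.0], [0.0, 1.0]]    # invented description embeddings
sim = [[1.0, 0.8], [0.8, 1.0]]      # invented structural similarities
loss = shared_description_loss(mols, texts, sim)
```

A perfectly aligned molecule-text pair contributes zero; the largest contributions come from dissimilar pairs with high similarity weight, which is what the weighting is meant to control.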
Mapping 'when'-clauses in Latin American and Caribbean languages: an experiment in subtoken-based typology
Nilo Pedrazzini
Languages can encode temporal subordination lexically, via subordinating conjunctions, and morphologically, by marking the relation on the predicate. Systematic cross-linguistic variation among the former can be studied using well-established token-based typological approaches to token-aligned parallel corpora. Variation among different morphological means is instead much harder to tackle and therefore more poorly understood, despite being predominant in several language groups. This paper explores variation in the expression of generic temporal subordination ('when'-clauses) among the languages of Latin America and the Caribbean, where morphological marking is particularly common. It presents probabilistic semantic maps computed on the basis of the languages of the region alone, thus avoiding bias towards the many languages of the world that exclusively use lexified connectors, and it incorporates associations between character $n$-grams and English 'when'. The approach makes it possible to capture morphological clause-linkage devices in addition to lexified connectors, paving the way for larger-scale, strategy-agnostic analyses of typological variation in temporal subordination.
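The subtoken idea — associating character $n$-grams of a target language with occurrences of English 'when' in aligned sentence pairs — can be sketched as a simple association score. The aligned pairs below are invented (a made-up language with a made-up 'when'-suffix "-ku"), and the score is a plain conditional-probability difference, not the paper's probabilistic semantic maps.

```python
# Toy sketch: score character n-grams of a target language by how much
# more often they occur in translations of English sentences containing
# 'when' than elsewhere, so a bound morpheme can surface even though no
# separate connective word exists.
from collections import Counter

def char_ngrams(s, n=3):
    s = f"#{s}#"                         # pad with boundary markers
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def association(pairs, n=3):
    """pairs: (english_sentence, target_sentence). For each target n-gram,
    return P(n-gram | 'when' present) - P(n-gram | 'when' absent)."""
    with_when, without = Counter(), Counter()
    n_with = n_without = 0
    for eng, tgt in pairs:
        grams = char_ngrams(tgt, n)
        if "when" in eng.lower().split():
            n_with += 1
            with_when.update(grams)
        else:
            n_without += 1
            without.update(grams)
    grams = set(with_when) | set(without)
    return {g: with_when[g] / max(n_with, 1) - without[g] / max(n_without, 1)
            for g in grams}

pairs = [                                # invented aligned mini-corpus
    ("when he came we ate", "tulemaku saimme"),
    ("when she sang we danced", "laulamaku tanssimme"),
    ("he came", "tulema"),
    ("she sang", "laulama"),
]
scores = association(pairs)
best = max(scores, key=scores.get)
```

In the toy data, $n$-grams spanning the invented suffix "-ku" score maximally, while stem-internal $n$-grams score near zero: the morphological 'when'-marker is recovered without any word-level alignment.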
Forget NLI, Use a Dictionary: Zero-Shot Topic Classification for Low-Resource Languages with Application to Luxembourgish
Fred Philippy, Shohreh Haddadan, Siwen Guo
In NLP, zero-shot classification (ZSC) is the task of assigning labels to textual data without any labeled examples for the target classes. A common method for ZSC is to fine-tune a language model on a Natural Language Inference (NLI) dataset and then use it to infer the entailment between the input document and the target labels. However, this approach faces certain challenges, particularly for languages with limited resources. In this paper, we propose an alternative solution that leverages dictionaries as a source of data for ZSC. We focus on Luxembourgish, a low-resource language spoken in Luxembourg, and construct two new topic relevance classification datasets based on a dictionary that provides various synonyms, word translations and example sentences. We evaluate the usability of our datasets and compare them with the NLI-based approach on two topic classification tasks in a zero-shot manner. Our results show that by using the dictionary-based datasets, the trained models outperform the ones following the NLI-based approach for ZSC. While we focus on a single low-resource language in this study, we believe that the efficacy of our approach can also transfer to other languages where such a dictionary is available.
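The core data-construction step — turning dictionary entries into topic-relevance training pairs — can be sketched minimally. The mini-dictionary, its topic labels, and the pairing scheme below are invented for illustration; the actual Luxembourgish resource and the paper's construction details are not reproduced.

```python
# Toy sketch: turn dictionary entries (headword, synonyms, example
# sentences) into topic-relevance pairs. A positive pair links an example
# sentence to its own headword's topic; a negative pair samples another topic.
import random

DICTIONARY = {  # invented mini-dictionary: topic -> entries
    "weather": [("rain", ["drizzle"], ["It rained all day."])],
    "sport":   [("goal", ["score"],   ["He shot a late goal."])],
}

def build_pairs(dictionary, seed=0):
    rng = random.Random(seed)
    pairs = []
    topics = list(dictionary)
    for topic, entries in dictionary.items():
        for headword, synonyms, examples in entries:
            for sent in examples:
                pairs.append((sent, topic, 1))                      # positive
                other = rng.choice([t for t in topics if t != topic])
                pairs.append((sent, other, 0))                      # negative
    return pairs

pairs = build_pairs(DICTIONARY)
```

A classifier trained on such (sentence, topic, relevant?) triples can then score unseen documents against arbitrary topic labels without any labeled documents for those topics.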
Program Analysis via Multiple Context Free Language Reachability
Giovanna Kobus Conrado, Adam Husted Kjelstrøm, Andreas Pavlogiannis
et al.
Context-free language (CFL) reachability is a standard approach in static analyses, where the analysis question is phrased as a language reachability problem on a graph $G$ with respect to a CFL $L$. While CFLs lack the expressiveness needed for high precision, common formalisms for context-sensitive languages are such that the corresponding reachability problem is undecidable. Are there useful context-sensitive language-reachability models for static analysis? In this paper, we introduce Multiple Context-Free Language (MCFL) reachability as an expressive yet tractable model for static program analysis. MCFLs form an infinite hierarchy of mildly context-sensitive languages parameterized by a dimension $d$ and a rank $r$. We show the utility of MCFL reachability by developing a family of MCFLs that approximate interleaved Dyck reachability, a common but undecidable static analysis problem. We show that MCFL reachability can be computed in $O(n^{2d+1})$ time on a graph of $n$ nodes when $r=1$, and in $O(n^{d(r+1)})$ time when $r>1$. Moreover, we show that when $r=1$, the membership problem has a lower bound of $n^{2d}$ based on the Strong Exponential Time Hypothesis, while reachability for $d=1$ has a lower bound of $n^{3}$ based on the combinatorial Boolean Matrix Multiplication Hypothesis. Thus, for $r=1$, our algorithm is optimal within a factor $n$ for all levels of the hierarchy based on $d$. We implement our MCFL reachability algorithm and evaluate it by underapproximating interleaved Dyck reachability for a standard taint analysis for Android. Used alongside existing overapproximate methods, MCFL reachability discovers all tainted information on 8 out of 11 benchmarks, and confirms $94.3\%$ of the reachable pairs reported by the overapproximation on the remaining 3. To our knowledge, this is the first report of high and provable coverage for this challenging benchmark set.
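For orientation, plain CFL reachability — the classical baseline that MCFL reachability generalizes — can be sketched as a worklist saturation over a labeled graph. The grammar below (matched parentheses in a binary normal form) and the graph are toy examples; the MCFL algorithm itself is considerably more involved and is not reproduced here.

```python
# Toy sketch of CFL reachability: given edges labeled with terminals and
# a grammar with unary rules (terminal -> nonterminal) and binary rules
# A -> B C, saturate the graph with nonterminal-labeled summary edges.
def cfl_reach(edges, unary, binary):
    """edges: set of (u, label, v); unary: {terminal: nonterminal};
    binary: {(B, C): A} meaning A -> B C. Returns all derived facts."""
    facts = {(u, unary.get(l, l), v) for u, l, v in edges}
    work = list(facts)
    while work:
        u, x, v = work.pop()
        new = set()
        for a, y, b in list(facts):
            if b == u and (y, x) in binary:      # (a -y-> u)(u -x-> v) => a -> v
                new.add((a, binary[(y, x)], v))
            if a == v and (x, y) in binary:      # (u -x-> v)(v -y-> b) => u -> b
                new.add((u, binary[(x, y)], b))
        for f in new - facts:
            facts.add(f)
            work.append(f)
    return facts

# Matched parentheses: M -> O C | O T, T -> M C, with O = '(' and C = ')'.
edges = {(0, "(", 1), (1, "(", 2), (2, ")", 3), (3, ")", 4)}
unary = {"(": "O", ")": "C"}
binary = {("O", "C"): "M", ("O", "T"): "M", ("M", "C"): "T"}
facts = cfl_reach(edges, unary, binary)
```

On this path graph, the algorithm derives `M`-edges exactly between nodes connected by a balanced string: (1, 3) for `()` and (0, 4) for `(())`, but not (0, 3).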
Prompting Towards Alleviating Code-Switched Data Scarcity in Under-Resourced Languages with GPT as a Pivot
Michelle Terblanche, Kayode Olaleye, Vukosi Marivate
Many multilingual communities, including numerous ones in Africa, frequently engage in code-switching during conversations. This behaviour underscores the need for natural language processing technologies adept at processing code-switched text. However, data scarcity, particularly in African languages, poses a significant challenge, as many are low-resourced and under-represented. In this study, we prompted GPT-3.5 to generate Afrikaans--English and Yoruba--English code-switched sentences, enhancing diversity using topic-keyword pairs, linguistic guidelines, and few-shot examples. Our findings indicate that the quality of generated sentences for languages using non-Latin scripts, like Yoruba, is considerably lower than the high Afrikaans--English success rate. There is therefore a notable opportunity to refine prompting guidelines to yield sentences suitable for the fine-tuning of language models. We propose a framework for augmenting the diversity of synthetically generated code-switched data using GPT and suggest leveraging this technology to mitigate data scarcity in low-resourced languages, underscoring the essential role of native speakers in this process.
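The prompt-construction step can be sketched as a simple template that combines the three diversity ingredients named above. The template, keywords, guideline, and example sentence below are all invented; the study's exact prompts are not reproduced.

```python
# Toy sketch: assemble a generation prompt from a topic-keyword pair,
# linguistic guidelines, and few-shot examples.
def build_prompt(lang_pair, topic, keywords, guidelines, few_shot):
    lines = [f"Generate a {lang_pair} code-switched sentence about {topic}.",
             f"Use the keywords: {', '.join(keywords)}."]
    lines += [f"Guideline: {g}" for g in guidelines]
    lines += [f"Example: {ex}" for ex in few_shot]
    return "\n".join(lines)

prompt = build_prompt(
    "Afrikaans-English", "food", ["braai", "weekend"],          # invented topic-keyword pair
    ["Switch languages at phrase boundaries."],                 # invented guideline
    ["Ons gaan braai this weekend with friends."],              # invented few-shot example
)
```

Varying the topic-keyword pairs across calls is what drives the diversity of the generated corpus.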
Building pre-train LLM Dataset for the INDIC Languages: a case study on Hindi
Shantipriya Parida, Shakshi Panwar, Kusum Lata
et al.
Large language models (LLMs) have demonstrated transformative capabilities in many applications that require automatically generating responses based on human instruction. However, the major challenge in building LLMs, particularly for Indic languages, is the availability of high-quality data for building foundation LLMs. In this paper, we propose a large pre-training dataset for the Indic language Hindi. The collected data spans several domains, including the major dialects of Hindi, and contains 1.28 billion Hindi tokens. We explain our pipeline, including data collection, pre-processing, and availability for LLM pre-training. The proposed approach can be easily extended to other Indic and low-resource languages, and the dataset will be freely available for LLM pre-training and LLM research purposes.
Gasham Isabeyli's world of poetry
Rövşen
Gasham İsabeyli is one of the bright names of Azerbaijani literature, especially children's literature, of the independence period. Having begun writing during the independence period of Azerbaijani literature, he drew attention with his distinctive style from his very first works. The influence of Western literature is also felt in the poet's works, which draw on folk literature. Gasham İsabeyli, who became famous with his first book, is one of the important representatives of today's Azerbaijani literature.
Gasham İsabeyli stands out in Azerbaijani children's literature with his unique style and his original, concise, flexible and humorous language. His fluent lyrical poems are excellent examples of art that appeal to young children as well as to young people. The poet, who devoted almost all of his works to children, became a writer loved by children by writing fairy tales as well as poetry. A realistic view of life, spiritual and moral education, humanist feelings and humanity are essential in Gasham İsabeyli's works. He is always sensitive to the style of expression and the tone of the artistic word, and this is exactly why his works are valuable.
The poet avoids complex expressions in his works and focuses on real life. In his poems, he advises young readers to love the land they were born in and to protect their homeland like the apple of their eye. His poetry depicts the lives of hard-working people who endure all kinds of difficulties, are honourable and open-minded, and always live with the desire for truth and justice. Moreover, he has managed to convey the flow of time into poetic verse. The article examines the period from the poet's first poems to his most recent ones and analyzes the poems.
Language and Literature, Ural-Altaic languages
An early Lord’s Prayer in a southern variety of Saami
Ernesta Kazakėnaitė, Rogier Blokland
Among the holdings of the National Library of Sweden there is a manuscript titled Pater noſter: Varijs Linguis ‘Lord’s Prayer: in various languages’, which contains 20 translations of the Lord’s Prayer. The last page of this manuscript is very defective, and its language was not identified in the first study to mention this manuscript (Biezais, Haralds. 1955. Ein neugefundener Text des lettischen Vaterunsers aus dem 16. Jahrhundert. Nordisk Tidskrift för bok- och biblioteksväsen 42. 47–54). In 2023, however, it was identified as most likely a southern variety of Saami. Earlier manuscripts of the Lord’s Prayer in Saami are unknown, making this potentially the oldest known Saami text in manuscript form that has survived to the present day. Although it has not been possible to decipher the entire text, this article provides a tentative transcription and compares it to the first known published Lord’s Prayer in Saami, from 1619. Additionally, it briefly presents the manuscript and its history, and gives some background on the activities of the church in northern Sweden during the 16th century, when such translations came into existence.
Summary. Ernesta Kazakėnaitė, Rogier Blokland: An early Lord’s Prayer in a southern variety of Saami. The collection of the National Library of Sweden holds a manuscript titled Pater noſter: Varijs Linguis ‘Lord’s Prayer: in various languages’, which contains 20 translations of the Lord’s Prayer. The last page of the manuscript is partly illegible, and the language of its text was not identified in the first study to mention the manuscript (Biezais, Haralds. 1955. Ein neugefundener Text des lettischen Vaterunsers aus dem 16. Jahrhundert. Nordisk Tidskrift för bok- och biblioteksväsen 42. 47–54). In 2023 the first author of this article discovered that it is most likely a southern variety of Saami. No other early manuscripts of the Lord’s Prayer in Saami are known, which makes this manuscript potentially the oldest known Saami text to have survived to the present day. Although it was not possible to decipher the entire text, this article presents a tentative transcription of it and compares it with the text of the first known Saami Lord’s Prayer, published in 1619. In addition, the manuscript and its history are briefly introduced, and some background is given on the activities of the church in northern Sweden in the 16th century, when such translations came into being.
Philology. Linguistics, Finnic. Baltic-Finnic
Train Global, Tailor Local: Minimalist Multilingual Translation into Endangered Languages
Zhong Zhou, Jan Niehues, Alex Waibel
In many humanitarian scenarios, translation into severely low-resource languages often does not require a universal translation engine, but a dedicated text-specific translation engine. For example, healthcare records, hygienic procedures, government communications, emergency procedures and religious texts are all limited texts. While generic translation engines for all languages do not exist, translation of multilingually known limited texts into new, endangered languages may be possible and may reduce human translation effort. We attempt to leverage translation resources from many rich-resource languages to efficiently produce the best possible translation quality for a well-known text, available in multiple languages, in a new, severely low-resource language. We examine two approaches: 1) best selection of seed sentences to jump-start translation in a new language, chosen for best generalization to the remainder of a larger targeted text; and 2) adaptation of large general multilingual translation engines from many other languages to focus on a specific text in a new, unknown language. We find that adapting large pretrained multilingual models to the domain/text first and then to the severely low-resource language works best. If we also select a best set of seed sentences, we can improve average chrF performance on new test languages from a baseline of 21.9 to 50.7, while reducing the number of seed sentences to only around 1,000 in the new, unknown language.
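One natural instantiation of seed-sentence selection is greedy vocabulary coverage: repeatedly pick the sentence that covers the most not-yet-covered word types of the target text, so a small seed set generalizes to the rest. This criterion and the tiny corpus are illustrative assumptions, not necessarily the paper's actual selection method.

```python
# Toy sketch: greedy seed-sentence selection by word-type coverage.
def select_seeds(sentences, budget):
    covered, chosen = set(), []
    remaining = list(sentences)
    for _ in range(budget):
        best = max(remaining,
                   key=lambda s: len(set(s.split()) - covered),
                   default=None)
        if best is None or not (set(best.split()) - covered):
            break                        # nothing new left to cover
        chosen.append(best)
        covered |= set(best.split())
        remaining.remove(best)
    return chosen

corpus = [
    "in the beginning was the word",
    "the word was with god",
    "and the word was god",
]
seeds = select_seeds(corpus, budget=2)
```

With a budget of two, the selected seeds cover every word type of the toy corpus except one, illustrating how a small, well-chosen seed set can cover most of a limited text's vocabulary.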
Venelased versus venelased kaasaegses venekeelses eesti kirjanduses (Russians versus Russians in contemporary Russian-language Estonian literature)
Elena Pavlova
Contemporary methodologies increasingly tend towards interdisciplinarity. In the social sciences, mass culture and literature are gaining attention as new sources of knowledge. Looking at literature through the lens of new methodological frameworks allows for a more comprehensive study of identities and their evolution. This article applies a new methodology for studying national identity, developed by Ted Hopf and Bentley Allan in the framework of the project Making Identity Count: Building a National Identity Database, which makes it possible to examine the emergence and persistence of the concept of “a Russian from Russia” as “the Other” in the identity discourse of the Russian-speaking population in Estonia after the restoration of Estonian independence. The first part of the article focuses on identifying the initial stages of the formation of this idea in the Estonian Russian-language magazines that covered literature and social issues, Tallinn and Raduga. The second part analyzes the works of the two most popular and well-known Russian-speaking authors, Yelena Skulskaya and Andrei Ivanov. The analysis is complemented by interviews with the writers and references to the works of other Russian-speaking authors.
Other Finnic languages and dialects
Traces of a foreign language in dialects of Azerbaijani and Turkish languages
ZABİTƏ TEYMURLU
The article reveals a number of important features of loanwords shared by the dialects of the Azerbaijani and Turkish languages and analyzes these loanwords from an etymological perspective. Questions concerning the origin of a number of loanwords used in these dialects are investigated, and the specific characteristics of those loanwords are analyzed. The study aims to reveal shared dialecticisms in the dialect and literary-language lexicon of two closely related languages, Azerbaijani and Turkish. Traces of lexical units of Arabic and Persian origin were observed in both languages under study. Loanwords have influenced the languages either directly or indirectly. Interestingly, traces of a foreign language were found not only in the Azerbaijani and Turkish literary languages but also in their dialects. Some borrowings are used only in dialects, others in both the literary language and the dialects. Information about the origin of loanwords found at the literary language-dialect and dialect-dialect levels has been collected and verified from dictionaries, and dictionaries reflecting the dialects of both languages were used during the research.
Loanwords in the Azerbaijani and Turkish literary languages have been studied extensively, but comparative loanwords at the dialect level have hardly been researched. From this point of view, research into the loanwords in the dialect and literary-language lexicon of Azerbaijani and Turkish is relevant.
Language and Literature, Ural-Altaic languages
Mis juhtus ja kes tegi? Põhjustamisahela eestikeelsed väljendusvahendid (What happened and who did it? Means of expressing the causal chain in Estonian)
Kairit Tomson, Ilona Tragel
Summary. The article examines the means of expressing causation in spoken Estonian. Fourteen native speakers of Estonian described the causal chains depicted in 43 video clips. The first aim of the study was to obtain an overview of the means of expression used. The material contained fully causal expressions (containing both the causing event and the result event) as well as devices linking the parts of the causal chain. The relation between the subevents of a causal situation could be left explicitly unexpressed, or the parts could be joined with the word ja or ning ‘and’, with conjunctions (e.g. sest ‘because’) or with connectives (e.g. mille peale ‘whereupon’). Relative clauses, the des-converb, the v-participle, the elative and comitative cases, adpositions, various constructions (e.g. analytic causative constructions), and derivational and lexical causatives were also used as means of expressing the causal chain. The second aim was to examine how the means of expression are distributed across different situation types. The descriptions of all situation types included examples in which the parts of the causal situation were not linked by any explicit device, were linked with ja or ning ‘and’, or were linked with some other linking word. Change-of-state constructions expressing causation, relative clauses, and derivational and lexical causatives were likewise used in describing all situation types.
Abstract. Kairit Tomson, Ilona Tragel: What happened and who did it? Means of expressing causality in Estonian. The aim of this paper is to give an overview of the means of expressing the causal chain in Estonian and to describe how those means are distributed across causal situation types. The results of this research are based on an experiment in which 14 Estonian speakers described 43 video clips by answering the question “What happened?”. The video clips originate from the project “Causality across Languages”. Participants described causal situations by using 1) constructions (e.g. rebib pooleks ‘tears apart’, ajab naerma ‘makes laugh’, palub visata ‘asks to throw’), 2) morphological causatives, 3) lexical causatives, 4) linking words between subevents (e.g. ja ‘and’, nii et ‘so that’, sest ‘because’ and relative pronouns), and 5) other morphosyntactic means (the des-converb, case suffixes, postpositions). In addition, subevents were mentioned without any linking word in between.
Philology. Linguistics, Finnic. Baltic-Finnic
Describing the syntax of programming languages using conjunctive and Boolean grammars
Alexander Okhotin
A classical result by Floyd ("On the non-existence of a phrase structure grammar for ALGOL 60", 1962) states that the complete syntax of any sensible programming language cannot be described by the ordinary kind of formal grammars (Chomsky's ``context-free''). This paper uses grammars extended with conjunction and negation operators, known as conjunctive grammars and Boolean grammars, to describe the set of well-formed programs in a simple typeless procedural programming language. A complete Boolean grammar, which defines such concepts as declaration of variables and functions before their use, is constructed and explained. Using the Generalized LR parsing algorithm for Boolean grammars, a program can then be parsed in time $O(n^4)$ in its length, while another known algorithm allows subcubic-time parsing. Next, it is shown how to transform this grammar to an unambiguous conjunctive grammar, with square-time parsing. This is apparently the first specification of the syntax of a programming language entirely by a computationally feasible formal grammar.
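The extra power of conjunction — a rule accepts a string only if every conjunct derives it — can be illustrated with the textbook non-context-free language $\{a^n b^n c^n\}$, expressible as the intersection of two context-free languages. The sketch below checks the two conjuncts directly with regular-expression scaffolding rather than running a conjunctive-grammar parser, so it shows only the language-theoretic idea, not the parsing algorithms discussed in the paper.

```python
# Toy illustration of conjunction:
#   a^n b^n c^n  =  { a^n b^n c^* }  ∩  { a^* b^n c^n }.
# Each conjunct is context-free; their intersection is not, which is the
# kind of cross-cutting constraint (like "declared before use") that
# conjunctive and Boolean grammars can express directly.
import re

def in_anbn_cstar(s):
    """First conjunct: a^n b^n followed by any number of c's."""
    m = re.fullmatch(r"(a*)(b*)(c*)", s)
    return bool(m) and len(m.group(1)) == len(m.group(2))

def in_astar_bncn(s):
    """Second conjunct: any number of a's followed by b^n c^n."""
    m = re.fullmatch(r"(a*)(b*)(c*)", s)
    return bool(m) and len(m.group(2)) == len(m.group(3))

def in_conjunctive_language(s):
    # Conjunction: the string must satisfy both conjuncts.
    return in_anbn_cstar(s) and in_astar_bncn(s)
```

In a grammar this would read as a single conjunctive rule S → AB C & A BC, with each conjunct generated by ordinary context-free rules.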
Transfer Learning for British Sign Language Modelling
Boris Mocialov, Graham Turner, Helen Hastie
Automatic speech recognition and spoken dialogue systems have made great advances through the use of deep machine learning methods. This is partly due to greater computing power, but also to the large amount of data available in common languages, such as English. Conversely, research in minority languages, including sign languages, is hampered by a severe lack of data. This has led to work on transfer learning methods, whereby a model developed for one language is reused as the starting point for a model for a second, less resourced language. In this paper, we examine two transfer learning techniques, fine-tuning and layer substitution, for language modelling of British Sign Language. Our results show an improvement in perplexity when using transfer learning with standard stacked LSTM models, trained initially on a large corpus of standard English from the Penn Treebank.
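The difference between the two transfer strategies can be sketched schematically: fine-tuning reuses all source-language weights as initialization and continues training everything, while layer substitution replaces a layer with a freshly initialized one and trains only it. The layer names and the dictionary-of-weights representation are illustrative assumptions, not the paper's actual model.

```python
# Schematic sketch of two transfer-learning strategies for a stacked model.
def transfer(source_params, strategy):
    target = dict(source_params)               # start from the source model
    if strategy == "layer_substitution":
        target["lstm_top"] = "reinitialized"   # swap in a fresh top layer
        frozen = {k for k in target if k != "lstm_top"}   # train only the new layer
    else:  # "fine_tuning"
        frozen = set()                         # all layers continue training
    return target, frozen

src = {"embedding": "en_weights", "lstm_bottom": "en_weights", "lstm_top": "en_weights"}
ft, ft_frozen = transfer(src, "fine_tuning")
ls, ls_frozen = transfer(src, "layer_substitution")
```

The trade-off this encodes: fine-tuning adapts everything but risks overwriting useful source knowledge with scarce target data, while layer substitution limits the number of trainable parameters to what the small target corpus can support.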
The sense of space and Arctic nature in Cora Sandel’s Kranes konditori: interiør med figurer (Krane’s Café: An Interior with Figures)
Raluca-Daniela Răduț
The paper combines a close reading of the novel Kranes konditori: Interiør med figurer (Krane’s Café: An Interior with Figures, 1946) by the classic Norwegian writer Cora Sandel (1880-1974) with a spatial approach that aims to present the past and the present of the novel’s main character, Katinka Stordal. The action takes place at Krane’s Café in a small town in northern Norway. It is worth noting how topography, the seasons of the year, and the Arctic climate and nature are gradually reflected in the novel. On the one hand, the novel is placed at the crossroads of a spatial perspective and literary criticism, with at its centre Krane’s Café, the place where almost all the characters are brought together and which is the most suggestive and representative interior space of the novel. On the other hand, the subtitle An Interior with Figures strengthens the idea of a mixture of literary genres that includes elements of both the novel and drama. Moreover, it resembles the title of a work of art, for instance a painting in which all the characters are simply figures animated by the beauty of the Arctic scenery.
Finnic. Baltic-Finnic, Social Sciences
LANGUAGE AND CONTENT FEATURES OF NURMAHAMMAD ANDALIB'S GHAZALS
Aynur SƏFƏRLİ
Nurmahammad Andalib was a learned man of his time who knew Arabic and Persian very well and shows this in his poems. In terms of form, his ghazals, ranging from 5 to 11 couplets, achieve harmony through full and rich rhymes. The ghazals of the poet, who successfully used five different metres, contain many Turkmen words, but even more words are given in Arabic and Persian. A master of love poetry, he describes his beloved with very specific expressions and usually has a particular type of beloved in mind; his ghazals dealing with nature and the arrival of spring are an exception.
There is also information that Andalib, who was well versed in the literature and folklore of the peoples of the East, wrote works in these languages and translated from them. He created immortal works of high artistic and poetic value in the lyrical genres that occupy an important place in his work, such as the mukhammas, mustazad, murabbe, musaddas, varsagi, qoshma and ghazal. In these poems, Andalib expressed his just attitude to the events of life, opposing the injustice and arbitrariness of his time, the moral slavery imposed by the ruling feudal-clerical morality, ignorance, oppressive social rules, and the humiliation of the human person. The poet is freer in other poetic forms, which allow for diversity and multiplicity of rhyme. As a result, Andalib is regarded as a talented ghazal poet who reflects the features of the ghazal in both form and content.
Language and Literature, Ural-Altaic languages