Emotions
I. Sommier
one newly devised. The format and nature of the individual entries can be exemplified by the treatment given to the initial entry, although many fables receive considerably briefer treatment. First, the tale is thoroughly identified: the heading (H. 1) indicates that the narrative is the initial fable in Hausrath’s edition; additional references and cross-references relate the fable to A.’s medieval inventory (M. 33), to other critical editions of anonymous Greek fables (Chambry 3, Perry 1), and to pertinent motifs in Stith Thompson’s Motif-Index of Folk-Literature; and the title of the fable is given in Greek (Aetos kai Alôpêks) and in English. After this basic information, the Greek and Latin sources for the tale of the Eagle and the Fox are given, leading to a summary of the fable’s contents and observations on variations in content by different authors. Next, A. offers a brief description of the structure, characters, and themes of the fable, along with observations on its history and development, culminating in a stemma that traces the fable’s literary history, as the author reconstructs it, from Archilochos to Romulus. Finally, A. prints possible traces of Hellenistic verse in the Greek prose versions, in support of his conviction that this fable is one of many that derive from a lost book of fables in Greek verse. Frequent references are made throughout to earlier discussions of different aspects of the Eagle and the Fox in Volumes I and II. The entry on this fable concludes with two separate sets of supplementary references, new to the English edition, one by A. and the other by van Dijk. The fable inventory is the fruit of an enormous amount of meticulous scholarship, and it will be a welcome resource for all scholars pursuing work on individual Greek and Latin fables and fable-like narratives.
Nevertheless, some of the evidence and results will be of little interest to researchers who do not share A.’s conclusions concerning the transmission of the ancient Greek fables, which, for one thing, is seen as an almost entirely literary process. The reader will find a summary of the author’s views at the end of the preceding volume (II.711–26). The four-part fable inventory is followed by an impressive series of indices, over 100 pages of them, prepared by van Dijk. These include correlations of the enumeration of the fables in the present work to those in the standard critical editions as well as to Antti Aarne’s and Stith Thompson’s The Types of the Folktale and to Thompson’s Motif-Index of Folk-Literature, an index of the languages of the fables cited, an index of the fables by character, and an index of fable passages cited. No subject index for the work as a whole is provided, nor is there a bibliography of scholarly works cited.
The Naval Battle of the Fireships in Dragamesto (November 21, 1825)
Vasileios Zagkotas
The Naval Battle of Dragamesto took place on November 21, 1825, in the present-day Bay of Astakos in the Ionian Sea, during the Greek War of Independence (1821–1829). The Greek fleet, consisting of 33 ships led by Admiral Miaoulis, sought to defend the supply line to the city of Missolonghi, which at that time was besieged by the Ottomans. Meanwhile, about 120 Egyptian ships under Ibrahim Pasha arrived to tighten the siege. The two fleets clashed, and the Greeks successfully repelled their opponents. This article examines the events of the battle mainly through primary sources, such as the ship logs of Captains Sachtouris, Sachinis (both eyewitnesses), and Tsamados, as well as other supplementary historical evidence. The Naval Battle of Dragamesto was the only naval engagement of the Greek War of Independence that took place along the Ionian coast, between the shores of Acarnania, Lefkada, and Ithaca. To date, no synthesis of the primary sources concerning this event has been attempted. Thus, this article constitutes the first comprehensive study of the battle, including an examination of a relevant aquarelle as a potential historical source. A comparison between the aquarelle depicting the battle and the primary written sources reveals a remarkable level of accuracy in the geographical representation, fleet formations, and key figures. However, certain discrepancies, such as the omission of specific captains and the possibility of subjective artistic interpretation of the events, highlight the need for a cautious approach when using the painting as historical evidence.
History of Greece, Translating and interpreting
ARS: Adaptive Reasoning Suppression for Efficient Large Reasoning Language Models
Dongqi Zheng
Large Reasoning Language Models (LRLMs or LRMs) demonstrate remarkable capabilities in complex reasoning tasks, but suffer from significant computational inefficiencies due to overthinking phenomena. Existing efficient reasoning methods face the challenge of balancing reasoning quality with inference cost reduction. We propose Adaptive Reasoning Suppression (ARS), a novel training-free approach that dynamically suppresses redundant reasoning steps while preserving accuracy through adaptive certainty monitoring. ARS introduces a multi-checkpoint certainty estimation mechanism with progressive suppression thresholds, achieving superior efficiency compared to static suppression methods. Our extensive evaluation across mathematical reasoning benchmarks using multiple model architectures demonstrates that ARS achieves token, latency, and energy reductions of up to 53%, 46.1%, and 57.9%, respectively, while maintaining or improving accuracy.
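The checkpointing idea described in the abstract can be sketched roughly as follows. This is a minimal illustration under assumed details: the certainty proxy, the progressive threshold schedule, and all function names are hypothetical, not taken from the paper.

```python
# Minimal sketch of checkpoint-based adaptive suppression. A rising certainty
# estimate is compared against a threshold that relaxes at later checkpoints,
# so confident traces get cut short while uncertain ones keep reasoning.

def progressive_threshold(checkpoint, start=0.95, decay=0.05, floor=0.70):
    """Certainty required to stop reasoning; relaxes at later checkpoints."""
    return max(floor, start - decay * checkpoint)

def should_suppress(certainty_history, checkpoint, window=3):
    """Suppress further reasoning once recent average certainty clears the threshold."""
    recent = certainty_history[-window:]
    return sum(recent) / len(recent) >= progressive_threshold(checkpoint)

def run_with_suppression(certainty_trace):
    """Return the checkpoint at which reasoning would be cut short."""
    history = []
    for step, certainty in enumerate(certainty_trace):
        history.append(certainty)
        if should_suppress(history, step):
            return step  # stop emitting reasoning tokens here
    return len(certainty_trace) - 1  # never suppressed

# A model growing steadily more certain is stopped before the trace ends:
trace = [0.40, 0.55, 0.70, 0.85, 0.92, 0.96, 0.97]
```

The static-suppression baselines the paper compares against would correspond to a constant threshold, i.e. `decay=0`.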
GDLLM: A Global Distance-aware Modeling Approach Based on Large Language Models for Event Temporal Relation Extraction
Jie Zhao, Wanting Ning, Yuxiao Fei
et al.
In Natural Language Processing (NLP), Event Temporal Relation Extraction (ETRE) is the task of recognizing the temporal relation between two events. Prior studies have noted the importance of language models for ETRE. However, the restricted pre-trained knowledge of Small Language Models (SLMs) limits their capability to handle minority class relations in imbalanced classification datasets. For Large Language Models (LLMs), researchers adopt manually designed prompts or instructions, which may introduce extra noise, leading to interference with the model's judgment of the long-distance dependencies between events. To address these issues, we propose GDLLM, a Global Distance-aware modeling approach based on LLMs. We first present a distance-aware graph structure utilizing a Graph Attention Network (GAT) to assist the LLMs in capturing long-distance dependency features. Additionally, we design a temporal feature learning paradigm based on soft inference to augment the identification of relations within a short-distance proximity band, which supplements the probabilistic information generated by LLMs into the multi-head attention mechanism. Since the global feature can be captured effectively, our framework substantially enhances the performance of minority relation classes and improves the overall learning ability. Experiments on two publicly available datasets, TB-Dense and MATRES, demonstrate that our approach achieves state-of-the-art (SOTA) performance.
CEFR-Annotated WordNet: LLM-Based Proficiency-Guided Semantic Database for Language Learning
Masato Kikuchi, Masatsugu Ono, Toshioki Soga
et al.
Although WordNet is a valuable resource because of its structured semantic networks and extensive vocabulary, its fine-grained sense distinctions can be challenging for second-language learners. To address this issue, we developed a version of WordNet annotated with the Common European Framework of Reference for Languages (CEFR), integrating its semantic networks with language-proficiency levels. We automated this process using a large language model to measure the semantic similarity between sense definitions in WordNet and entries in the English Vocabulary Profile Online. To validate our approach, we constructed a large-scale corpus containing both sense and CEFR-level information from the annotated WordNet and used it to develop contextual lexical classifiers. Our experiments demonstrate that models fine-tuned on this corpus perform comparably to those fine-tuned on gold-standard annotations. Furthermore, by combining this corpus with the gold-standard data, we developed a practical classifier that achieves a Macro-F1 score of 0.81. This result provides indirect evidence that the transferred labels are largely consistent with the gold-standard levels. The annotated WordNet, corpus, and classifiers are publicly available to help bridge the gap between natural language processing and language education, thereby facilitating more effective and efficient language learning.
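The core annotation step can be pictured as assigning each WordNet sense the CEFR level of its most similar English Vocabulary Profile entry. The sketch below is illustrative only: the paper scores similarity with a large language model, whereas a toy token-overlap measure stands in here, and the data structures are assumptions.

```python
# Illustrative sketch: label a WordNet sense with the CEFR level of the most
# similar EVP definition. A Jaccard token overlap stands in for the paper's
# LLM-based semantic similarity.

def overlap_similarity(a, b):
    """Jaccard overlap of lowercased tokens (a stand-in for LLM similarity)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def assign_cefr(sense_definition, evp_entries, similarity=overlap_similarity):
    """Pick the CEFR level of the most similar EVP definition."""
    best = max(evp_entries, key=lambda e: similarity(sense_definition, e["definition"]))
    return best["level"]

# Hypothetical EVP entries for two senses of "stream":
evp = [
    {"definition": "a large natural stream of water", "level": "A1"},
    {"definition": "a steady flow of people or things", "level": "B2"},
]
```

With a stronger similarity function, the same loop transfers CEFR levels across an entire sense inventory, which is what makes the resulting corpus possible.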
Knowledge-Driven Agentic Scientific Corpus Distillation Framework for Biomedical Large Language Models Training
Meng Xiao, Xunxin Cai, Qingqing Long
et al.
Corpus distillation for biomedical large language models (LLMs) seeks to address the pressing challenge of insufficient quantity and quality in open-source annotated scientific corpora, which remains a bottleneck for effective LLM training in biomedical research. This paper proposes a knowledge-driven, agentic framework for scientific corpus distillation, tailored explicitly for LLM training in the biomedical domain, addressing the challenge posed by the complex hierarchy of biomedical knowledge. Central to our approach is a collaborative multi-agent architecture, where specialized agents, each guided by the Medical Subject Headings (MeSH) hierarchy, work in concert to autonomously extract, synthesize, and self-evaluate high-quality textual data from vast scientific literature. This agentic framework collectively generates and refines domain-specific question-answer pairs, ensuring comprehensive coverage and consistency with biomedical ontologies while minimizing manual involvement. Extensive experimental results show that language models trained on our multi-agent distilled datasets achieve notable improvements in biomedical question-answering tasks, outperforming both strong life sciences LLM baselines and advanced proprietary models. Notably, our AI-Ready dataset enables Llama3-70B to surpass GPT-4 with MedPrompt and Med-PaLM-2, despite their larger scale. Detailed ablation studies and case analyses further validate the effectiveness and synergy of each agent within the framework, highlighting the potential of multi-agent collaboration in biomedical LLM training.
The Importance of Facial Features in Vision-based Sign Language Recognition: Eyes, Mouth or Full Face?
Dinh Nam Pham, Eleftherios Avramidis
Non-manual facial features play a crucial role in sign language communication, yet their importance in automatic sign language recognition (ASLR) remains underexplored. While prior studies have shown that incorporating facial features can improve recognition, related work often relies on hand-crafted feature extraction and fails to go beyond the comparison of manual features versus the combination of manual and facial features. In this work, we systematically investigate the contribution of distinct facial regions (eyes, mouth, and full face) using two different deep learning models (a CNN-based model and a transformer-based model) trained on an SLR dataset of isolated signs with randomly selected classes. Through quantitative performance and qualitative saliency map evaluation, we reveal that the mouth is the most important non-manual facial feature, significantly improving accuracy. Our findings highlight the necessity of incorporating facial features in ASLR.
mCLM: A Modular Chemical Language Model that Generates Functional and Makeable Molecules
Carl Edwards, Chi Han, Gawon Lee
et al.
Despite their ability to understand chemical knowledge, large language models (LLMs) remain limited in their capacity to propose novel molecules with desired functions (e.g., drug-like properties). In addition, the molecules that LLMs propose can often be challenging to make, and are almost never compatible with automated synthesis approaches. To better enable the discovery of functional small molecules, LLMs need to learn a new molecular language that is more effective in predicting properties and inherently synced with automated synthesis technology. Current molecule LLMs are limited by representing molecules based on atoms. In this paper, we argue that just like tokenizing texts into meaning-bearing (sub-)word tokens instead of characters, molecules should be tokenized at the level of functional building blocks, i.e., parts of molecules that bring unique functions and serve as effective building blocks for real-world automated laboratory synthesis. This motivates us to propose mCLM, a modular Chemical-Language Model that comprises a bilingual language model that understands both natural language descriptions of functions and molecular blocks. mCLM front-loads synthesizability considerations while improving the predicted functions of molecules in a principled manner. Experiments on FDA-approved drugs showed that mCLM is capable of significantly improving chemical functions. mCLM, with only 3B parameters, also achieves improvements in synthetic accessibility relative to 7 other leading generative AI methods including GPT-5. When tested on 122 out-of-distribution medicines using only building blocks/tokens that are compatible with automated modular synthesis, mCLM outperforms all baselines in property scores and synthetic accessibility. mCLM can also reason on multiple functions and iteratively self-improve to rescue drug candidates that failed late in clinical trials ("fallen angels").
Linguistic Landscape of Multilingual Informative Signage at Jawa Timur Park 2, Indonesia
Ananda Putri Noviana, R. N. Indah
Lack of visibility of information in tourist areas threatens the credibility and image of Indonesian tourism at the global level. In the context of globalization, which encourages multilingual practices, the choice and use of languages on signs becomes a crucial aspect. Therefore, this study highlights how tourist destinations in Indonesia, especially Jawa Timur Park 2, navigate these challenges. This study aims to explore the language displayed on informative signage in this educational tourism destination and visitors' reactions to it. It examines the linguistic landscape phenomenon from three perspectives: Spolsky and Cooper's (1991) taxonomy of signs, Sebba's (2013) language writing, and Garvin and Mathiot's (1968) positive language attitudes. The method used is descriptive qualitative, with primary data consisting of phrases, sentences, and paragraphs from informative signage, supplemented by visitor statements drawn from questionnaire responses. The findings reveal multilingual, bilingual, and monolingual sign types used to convey detailed information, object names, and place names. The languages used are Indonesian, English, and scientific terminology of Greek/Latin origin. Visually, the languages are written in symmetrical, asymmetrical, and mixed language-spatial relationships, and in equivalent, disjoint, and overlapping language-content relationships. The visitors' positive attitudes towards national and international languages are expressed through language loyalty and pride. Thus, this study suggests that tourism officers, sign designers, and tourism policy makers consider language use on monolingual, bilingual, and multilingual signs that is adequate, inclusive, and functional for all visitors.
Leksyka północnokresowa w tefsirze Tatarów Wielkiego Księstwa Litewskiego. Zapożyczenia niesłowiańskie
Joanna Kulwicka-Kamińska
About the book: The monograph entitled Leksyka północnokresowa w tefsirze Tatarów Wielkiego Księstwa Litewskiego. Zapożyczenia niesłowiańskie [Northern Borderland Lexis in the Tafsir of the Tatars of the Grand Duchy of Lithuania. Non-Slavic Borrowings] concerns an important problem that has not yet been recognised in scientific research: the language of the educated Tatar nobility living in the territories of the Grand Duchy of Lithuania since the 14th century. The subject of detailed analysis is the lexis of this community, which was formed in the period from the 16th to the first half of the 19th century and is documented in handwritten copies of the first translation of the Quran into Polish. The work collects and analyses the non-Slavic borrowings present in them, which constitute evidence of the confession, class, and education of the creators of Tatar religious literature. It presents them in the form of a lexicon, taking into account etymology, orthographic and phonetic variants, inflectional paradigm, source explication, and information on dictionary attestations from Old Polish to the most recent times. The dictionary consists of 470 entries (excluding phonetic variants), including 368 words of oriental origin, 73 from Latin and Greek, 26 from German, and 3 from Italian. In addition, 159 derivatives were analysed (this number does not include variant forms). The lexicon includes a total of 629 units.
However, the monograph is not only a collection and description of lexical units; on their basis it also draws conclusions important for the humanities, including:
• the determination of the Tatars’ special place in the multilingual and multicultural communicative community of the Grand Duchy of Lithuania;
• the presentation of the adaptation of lexis of foreign origin to the Polish of the Northern Borderlands as a result of contacts between Polish and East Slavic and Oriental languages, as well as of Slavic-Oriental interference, which, among other things, serves to supplement the current state of research on the formation and development of this language variety;
• the finding that, apart from the features characteristic of the Polish of the Northern Borderlands, the Tatar manuscript contains elements characteristic of the languages and dialects of southwestern Belarus and northwestern Ukraine, as well as numerous borrowings from Oriental languages, whether translocated or Slavicised.
It has been established that this "mixture" of languages and dialects also characterises manuscripts of Muslim minorities from other regions of the world, such as the Iberian Peninsula, the Balkans, as well as Asia and Africa. Researchers, trying to identify the language of these monuments, oscillate between concepts such as language, dialect, and even register, which in turn correspond to the specific style of these monuments. Linguistic issues are therefore presented here against a broad historical and cultural background.
50 godina Hrvatskog društva klasičnih filologa
Barbara Turin
Ancient history, Greek language and literature. Latin language and literature
ArxivDIGESTables: Synthesizing Scientific Literature into Tables using Language Models
Benjamin Newman, Yoonjoo Lee, Aakanksha Naik
et al.
When conducting literature reviews, scientists often create literature review tables: tables whose rows are publications and whose columns constitute a schema, a set of aspects used to compare and contrast the papers. Can we automatically generate these tables using language models (LMs)? In this work, we introduce a framework that leverages LMs to perform this task by decomposing it into separate schema and value generation steps. To enable experimentation, we address two main challenges: First, we overcome a lack of high-quality datasets to benchmark table generation by curating and releasing arxivDIGESTables, a new dataset of 2,228 literature review tables extracted from ArXiv papers that synthesize a total of 7,542 research papers. Second, to support scalable evaluation of model generations against human-authored reference tables, we develop DecontextEval, an automatic evaluation method that aligns elements of tables with the same underlying aspects despite differing surface forms. Given these tools, we evaluate LMs' abilities to reconstruct reference tables, finding this task benefits from additional context to ground the generation (e.g., table captions, in-text references). Finally, through a human evaluation study we find that even when LMs fail to fully reconstruct a reference table, their generated novel aspects can still be useful.
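The schema-then-value decomposition described in the abstract can be sketched as a two-stage pipeline. The `llm` callable and the prompt wordings below are stand-ins, not the authors' actual implementation.

```python
# Sketch of the two-step decomposition: first generate a schema (the column
# aspects), then fill one value per (paper, aspect) cell. The `llm` argument
# is any callable that answers a prompt; prompts here are illustrative.

def generate_schema(papers, llm):
    """Step 1: ask for a list of aspects to compare the papers on."""
    titles = "; ".join(p["title"] for p in papers)
    return llm(f"List aspects for comparing these papers: {titles}")

def generate_values(papers, schema, llm):
    """Step 2: fill each table cell with a separate, focused query."""
    table = {}
    for p in papers:
        row = {}
        for aspect in schema:
            row[aspect] = llm(f"For '{p['title']}', give the value of '{aspect}'.")
        table[p["title"]] = row
    return table

def generate_table(papers, llm):
    schema = generate_schema(papers, llm)
    return schema, generate_values(papers, schema, llm)
```

Splitting the task this way lets the schema be evaluated (and grounded with captions or in-text references) independently of the cell values.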
Improving Text Embeddings for Smaller Language Models Using Contrastive Fine-tuning
Trapoom Ukarapol, Zhicheng Lee, Amy Xin
While Large Language Models show remarkable performance in natural language understanding, their resource-intensive nature makes them less accessible. In contrast, smaller language models such as MiniCPM offer more sustainable scalability, but often underperform without specialized optimization. In this paper, we explore the enhancement of smaller language models through the improvement of their text embeddings. We select three language models, MiniCPM, Phi-2, and Gemma, to conduct contrastive fine-tuning on the NLI dataset. Our results demonstrate that this fine-tuning method enhances the quality of text embeddings for all three models across various benchmarks, with MiniCPM showing the most significant improvement, an average performance gain of 56.33%. The contrastive fine-tuning code is publicly available at https://github.com/trapoom555/Language-Model-STS-CFT.
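Contrastive fine-tuning on NLI data typically optimizes an InfoNCE-style objective over in-batch negatives: each premise embedding is pushed toward its entailed hypothesis and away from the other hypotheses in the batch. The sketch below uses plain Python lists so the arithmetic is easy to follow; whether this exact loss matches the paper's setup is an assumption.

```python
# Minimal InfoNCE-style contrastive loss over in-batch negatives.
# anchors[i] and positives[i] form the i-th positive pair; every other
# positive in the batch serves as a negative for anchor i.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def info_nce(anchors, positives, temperature=0.05):
    """Average cross-entropy of matching each anchor to its own positive."""
    total = 0.0
    for i, a in enumerate(anchors):
        logits = [cosine(a, p) / temperature for p in positives]
        log_denom = math.log(sum(math.exp(z) for z in logits))
        total += log_denom - logits[i]  # -log softmax of the true pair
    return total / len(anchors)
```

Aligned pairs drive the loss toward zero, while mismatched pairs keep it large, which is exactly the gradient signal that reshapes the embedding space.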
An Empirical Study of Gendered Stereotypes in Emotional Attributes for Bangla in Multilingual Large Language Models
Jayanta Sadhu, Maneesha Rani Saha, Rifat Shahriyar
The influence of Large Language Models (LLMs) is rapidly growing, automating more jobs over time. Assessing the fairness of LLMs is crucial due to their expanding impact. Studies reveal the reflection of societal norms and biases in LLMs, which creates a risk of propagating societal stereotypes in downstream tasks. Many studies on bias in LLMs focus on gender bias in various NLP applications. However, there's a gap in research on bias in emotional attributes, despite the close societal link between emotion and gender. This gap is even larger for low-resource languages like Bangla. Historically, women are associated with emotions like empathy, fear, and guilt, while men are linked to anger, bravado, and authority. This pattern reflects societal norms in Bangla-speaking regions. We offer the first thorough investigation of gendered emotion attribution in Bangla for both closed and open source LLMs in this work. Our aim is to elucidate the intricate societal relationship between gender and emotion specifically within the context of Bangla. We have been successful in showing the existence of gender bias in the context of emotions in Bangla through analytical methods and also show how emotion attribution changes on the basis of gendered role selection in LLMs. All of our resources including code and data are made publicly available to support future research on Bangla NLP. Warning: This paper contains explicit stereotypical statements that many may find offensive.
RoundTripOCR: A Data Generation Technique for Enhancing Post-OCR Error Correction in Low-Resource Devanagari Languages
Harshvivek Kashid, Pushpak Bhattacharyya
Optical Character Recognition (OCR) technology has revolutionized the digitization of printed text, enabling efficient data extraction and analysis across various domains. Just like Machine Translation systems, OCR systems are prone to errors. In this work, we address the challenge of data generation and post-OCR error correction, specifically for low-resource languages. We propose an approach for synthetic data generation for Devanagari languages, RoundTripOCR, that tackles the scarcity of the post-OCR Error Correction datasets for low-resource languages. We release post-OCR text correction datasets for Hindi, Marathi, Bodo, Nepali, Konkani and Sanskrit. We also present a novel approach for OCR error correction by leveraging techniques from machine translation. Our method involves translating erroneous OCR output into a corrected form by treating the OCR errors as mistranslations in a parallel text corpus, employing pre-trained transformer models to learn the mapping from erroneous to correct text pairs, effectively correcting OCR errors.
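The round-trip generation idea above reduces to a short loop: render clean text to an image, run OCR on that image, and pair the noisy output with the clean original to build a parallel corpus. The `render` and `ocr` callables below are stand-ins for a real text renderer and OCR engine.

```python
# Sketch of round-trip synthetic data generation for post-OCR correction.
# `render` turns a clean string into an image; `ocr` reads it back, typically
# introducing errors. The resulting (noisy, clean) pairs train a corrector
# exactly like a parallel corpus trains a translation model.

def round_trip_pairs(clean_texts, render, ocr):
    """Build (noisy, clean) training pairs for a post-OCR corrector."""
    return [(ocr(render(text)), text) for text in clean_texts]
```

In the machine-translation framing, the noisy side plays the role of the source language and the clean side the target, so any pre-trained sequence-to-sequence model can be fine-tuned on these pairs.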
Instruct Large Language Models to Generate Scientific Literature Survey Step by Step
Yuxuan Lai, Yupeng Wu, Yidan Wang
et al.
Automatically generating scientific literature surveys is a valuable task that can significantly enhance research efficiency. However, the diverse and complex nature of information within a literature survey poses substantial challenges for generative models. In this paper, we design a series of prompts to systematically leverage large language models (LLMs), enabling the creation of comprehensive literature surveys through a step-by-step approach. Specifically, we design prompts to guide LLMs to sequentially generate the title, abstract, hierarchical headings, and the main content of the literature survey. We argue that this design enables the generation of the headings from a high-level perspective. During the content generation process, this design effectively harnesses relevant information while minimizing costs by restricting the length of both input and output content in LLM queries. Our implementation with Qwen-long achieved third place in the NLPCC 2024 Scientific Literature Survey Generation evaluation task, with an overall score only 0.03% lower than the second-place team. Additionally, our soft heading recall is 95.84%, the second best among the submissions. Thanks to the efficient prompt design and the low cost of the Qwen-long API, our method reduces the expense for generating each literature survey to 0.1 RMB, enhancing the practical value of our method.
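The sequential title → abstract → headings → content scheme can be sketched as a small pipeline. The prompts, the `llm` callable, and the truncation strategy below are illustrative assumptions, not the authors' exact design.

```python
# Sketch of step-by-step survey generation: each stage conditions on the
# previous stage's output, and per-query context is truncated to keep
# API costs low (illustrative of the length restriction the paper describes).

def generate_survey(topic, references, llm, max_chars=2000):
    title = llm(f"Write a survey title on: {topic}")
    abstract = llm(f"Write an abstract for the survey '{title}'.")
    headings = llm(f"Propose hierarchical section headings for '{title}'.")
    sections = []
    for heading in headings:
        # Restrict input length for each content query.
        context = "; ".join(references)[:max_chars]
        sections.append((heading, llm(f"Write section '{heading}' using: {context}")))
    return {"title": title, "abstract": abstract, "sections": sections}
```

Because each section is generated in its own bounded query, total cost scales with the number of headings rather than with the full reference set.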
A Federated Learning Approach to Privacy Preserving Offensive Language Identification
Marcos Zampieri, Damith Premasiri, Tharindu Ranasinghe
The spread of various forms of offensive speech online is an important concern in social media. While platforms have been investing heavily in ways of coping with this problem, the question of privacy remains largely unaddressed. Models trained to detect offensive language on social media are trained and/or fine-tuned using large amounts of data often stored in centralized servers. Since most social media data originates from end users, we propose a privacy-preserving decentralized architecture for identifying offensive language online by introducing Federated Learning (FL) in the context of offensive language identification. FL is a decentralized architecture that allows multiple models to be trained locally without the need for data sharing, hence preserving users' privacy. We propose a model fusion approach to perform FL. We trained multiple deep learning models on four publicly available English benchmark datasets (AHSD, HASOC, HateXplain, OLID) and evaluated their performance in detail. We also present initial cross-lingual experiments in English and Spanish. We show that the proposed model fusion approach outperforms baselines on all the datasets while preserving privacy.
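Model fusion in a federated setting usually means averaging locally trained parameters, so only weights, never raw user data, leave a client. The sketch below shows FedAvg-style weighted averaging over flat dictionaries of floats; whether the paper's fusion is exactly this scheme is an assumption.

```python
# Hedged sketch of federated model fusion: each client trains locally, then
# only its parameters are averaged. Parameters are flat {name: float} dicts
# here for clarity; with real models the same loop runs over each tensor.

def fuse_models(client_params, weights=None):
    """Weighted average of per-client parameter dictionaries."""
    n = len(client_params)
    weights = weights if weights is not None else [1.0 / n] * n
    fused = {}
    for key in client_params[0]:
        fused[key] = sum(w * params[key] for w, params in zip(weights, client_params))
    return fused
```

Weighting clients by their local dataset sizes (rather than uniformly) recovers the standard FedAvg aggregation rule.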
A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery
Yu Zhang, Xiusi Chen, Bowen Jin
et al.
In many scientific fields, large language models (LLMs) have revolutionized the way text and other modalities of data (e.g., molecules and proteins) are handled, achieving superior performance in various applications and augmenting the scientific discovery process. Nevertheless, previous surveys on scientific LLMs often concentrate on one or two fields or a single modality. In this paper, we aim to provide a more holistic view of the research landscape by unveiling cross-field and cross-modal connections between scientific LLMs regarding their architectures and pre-training techniques. To this end, we comprehensively survey over 260 scientific LLMs, discuss their commonalities and differences, as well as summarize pre-training datasets and evaluation tasks for each field and modality. Moreover, we investigate how LLMs have been deployed to benefit scientific discovery. Resources related to this survey are available at https://github.com/yuzhimanhua/Awesome-Scientific-Language-Models.
Manichaeism: Unity and Divergences
E. Smagina
The article deals with the main components of the Manichaean religion. One of the main questions in the study of Manichaeism is which component of this teaching is primary, the Gnostic-Christian or the Zoroastrian. The study of terms, in particular of the proper names in Coptic, Greek, Latin, Syriac and Middle Iranian, allows us to assert that a particular form of Gnostic Christian teaching was the basis, and that Zoroastrian and Buddhist elements were introduced into the doctrine for a very specific purpose. In particular, the identification of Manichaean emanations with Zoroastrian deities turns out to be secondary. According to the Manichaean teaching, which goes back to interpretations of the Bible, there was an original one true church, which periodically degraded, and to restore it the deity each time sent a true teacher of faith into the world. All existing world religions (Zoroastrianism, Buddhism, Christianity, the Gnostic-Christian doctrine of Elkesai) are distorted forms of this true faith. In the process of spreading, Manichaeism inevitably underwent some regional changes, which appeared already at an early stage. The Appendix contains translations of two Coptic psalms, which clearly illustrate the exposition of the doctrine and the adaptation of early Christian literature to it.
Main Sources of Origin of Anatomical Terms
A.O. Svitlitsky, A. Chernyavsky, T. Matvieishyna
et al.
The study of both human anatomy and medicine in general is based on knowledge of anatomical and medical terminology. However, a student of higher medical education faces a whole series of difficulties associated with memorizing a large number of specialized terms of Latin or Greek origin, a major problem above all in human anatomy, where the number of terms is about 7.5 thousand. This article continues the work of the Department of Human Anatomy, Operative Surgery and Topographic Anatomy of ZSMPhU on anatomical terminology, begun by Doctor of Sciences in Medicine, Professor M. A. Voloshyn. The aim of the study was to analyse anatomical terms in order to study and systematize them and to eliminate errors. To facilitate the understanding and memorization of specialized anatomical terms, the staff of the Department of Human Anatomy, Operative Surgery and Topographic Anatomy of ZSMPhU, together with the teachers of the Department of Foreign Languages of ZSMPhU, developed and proposed a classification of anatomical terms by origin. Materials and methods: the search and selection of scientific literature for a systematic review was carried out by the authors independently in the PubMed, Scopus and Cochrane databases using the keywords "anatomy", "eponyms", "classification", "linguistics", "terminology" in the full texts of articles in English and Ukrainian, based on studies with evidence levels I–III. All terms in human anatomy can be classified by the language of origin (linguistic classification) and by the connection of the term with the object or phenomenon of the surrounding world from which it originates (etymological classification). By language of origin (linguistic classification): 1. Latin (classical, postclassical); 2. Greek; 3. Arabic; 4. Old English; 5. other languages.
According to the connection with the object or phenomenon of the surrounding world (etymological classification), terms are divided into: 1. anatomical names reflecting ancient ideas about living and inanimate objects of the surrounding world, including terms whose origin is connected with objects of the inanimate world (cosmological and geological terms) and terms whose origin is related to objects of the living world (terms of animalistic origin; terms related to tableware, to clothing and jewelry, to furniture and toys, to household tools and appliances, to parts of the human dwelling, and to the domestic or military activity of a person); 2. terms related to geometric figures; 3. terms related to the names of colors; 4. terms related to mythical or biblical characters (from Greek, Roman, or Egyptian mythology, or of biblical origin); 5. eponyms; 6. terms derived from the names of human body parts; 7. terms related to certain functions of an organ; 8. terms related to certain characteristics of the object (shape, position, dimensions); 9. general terms; 10. terms of uncertain origin and anachronisms. The proposed classification of anatomical terms by origin allows a deeper understanding of the historical, cultural, social and scientific meaning of some terms and makes them more understandable for students studying human anatomy.