Knowledge-first epistemology places knowledge at the normative core of epistemological affairs: on this approach, central epistemic phenomena are to be analyzed in terms of knowledge. This Element offers a defence of an integrated, naturalistic knowledge-first account of justified belief, reasons, evidence and defeat, permissible assertion and action, and the epistemic normativity of practical and theoretical reasoning. On this account, the epistemic is an independent normative domain organized around one central etiological epistemic function: generating knowledge. In turn, this epistemic function generates epistemic norms of proper functioning that constitute the epistemic domain, and govern moves in our epistemic practice, such as forming beliefs, asserting, and reasoning. This title is also available as Open Access on Cambridge Core.
Greater theorizing of methods in the computational humanities is needed for epistemological and interpretive clarity, and therefore for the maturation of the field. In this paper, we frame such modeling work as translation from a cultural, linguistic domain into a computational, mathematical domain, and back again. Translators benefit from articulating a theory of their translation process, and so do computational humanists in their work: it ensures internal consistency, helps avoid subtle yet consequential translation errors, and facilitates interpretive transparency. Our contribution in this paper is to lay out one particularly consequential dimension of this lack of theorizing and the sorts of translation errors that emerge in our modeling practices as a result. Along these lines, we introduce the idea of semiotic complexity as the degree to which the meaning of some text may vary across interpretive lenses, and make the case that dominant modeling practices -- especially around evaluation -- commit a translation error by treating semiotically complex data as semiotically simple whenever doing so is epistemologically convenient, conferring only superficial clarity. We then lay out several recommendations for researchers to better account for these epistemological issues in their own work.
Intelligent Reflecting Surfaces (IRSs) have potential for significant performance gains in next-generation wireless networks but face key challenges, notably severe double-pathloss and complex multi-user scheduling due to hardware constraints. Active IRSs partially address pathloss but still require efficient scheduling in cell-level multi-IRS multi-user systems, in which both the overhead and delay of channel state acquisition and the scheduling complexity rise dramatically as user density and channel dimensionality increase. Motivated by these challenges, this paper proposes a novel scheduling framework based on a neural Channel Knowledge Map (CKM), designing Transformer-based deep neural networks (DNNs) to predict ergodic spectral efficiency (SE) from historical channel/throughput measurements tagged with user positions. Specifically, two cascaded networks, LPS-Net and SE-Net, are designed to accurately predict link power statistics (LPS) and ergodic SE. We further propose a low-complexity Stable Matching-Iterative Balancing (SM-IB) scheduling algorithm. Numerical evaluations verify that the proposed neural CKM significantly enhances prediction accuracy and computational efficiency, while the SM-IB algorithm effectively achieves near-optimal max-min throughput with greatly reduced complexity.
Current trends in the formation and development of media literacy among military personnel are identified. It is substantiated that information today is a powerful weapon; therefore, in current realities, the media literacy of military personnel is of critical importance. Military personnel must not only master modern conventional weapons but also be able to recognize and resist information attacks, manipulation, and disinformation. It is determined that media literacy is a necessary element of combat training and a component of building psychological resilience. Military personnel must be able to critically evaluate information, recognize fakes and manipulation, and communicate effectively in the information space.
Relevance. Noroviruses are currently considered the most common cause of sporadic cases and community-acquired outbreaks of acute gastroenteritis (AGE) worldwide [1]. However, norovirus is also a frequent cause of outbreaks of healthcare-associated infections (HAI) [2], including in the Russian Federation [3]. Aim. To analyze key aspects of preventive and anti-epidemic measures against norovirus infection (NVI) in a hematology department. Materials and methods. The following epidemiological research methods were used: descriptive (intensity, dynamics, and spatial characteristics of the NVI outbreak) and analytical, namely a longitudinal cohort study of the epidemic process of HAI, assessing hypotheses about the causes and conditions, risk factors, and routes of transmission of norovirus among patients and caregivers in the oncohematology department of a children's multidisciplinary hospital. Results and discussion. An outbreak of acute norovirus gastroenteritis of an imported nature was identified in the oncohematology department of a children's multidisciplinary hospital in February 2023, and an analysis of the spread of NVI was carried out: the presumed source and mechanism of transmission of norovirus infection were established; the chronology of the spread of norovirus among patients and their caregivers is shown; the anti-epidemic measures taken to stop the outbreak are listed; and the difficulties of verifying the epidemiological diagnosis of HAI are reflected (given the turnaround time of laboratory testing of material from patients). Conclusion. Based on the results of the measures carried out, recommendations were proposed for improving preventive and anti-epidemic measures against AGE in children's oncohematology departments.
The article examines the relationship between classical philosophical models of political ethics and modern instruments of political communication in the context of a globalized information environment. The purpose of the study was to explore the interaction between classical politico-philosophical concepts and contemporary communication practices that shape the ethical framework of modern foreign policy activities. Particular attention is focused on the transformation of methods for achieving political results under the influence of current communication challenges and technologies, including linguistic strategies, rhetorical techniques, and the growing use of artificial intelligence. The research employs an interdisciplinary approach that integrates the methodological foundations of political philosophy, communication theory, and discourse analysis. The study applies theoretical generalization, conceptual modeling, and textual interpretation to reveal the relationship between ideological principles and communicative mechanisms of influence in the sphere of foreign policy. It has been established that the ideological basis of modern foreign policy rests on classical philosophical approaches and traditions of political thought. These conceptual foundations continue to determine strategic orientations, influence the choice of methods for achieving political goals, and form normative perceptions of the limits of permissible political action in the international arena. In this context, political rhetoric, textual technologies, and linguistic strategies increasingly operate as instruments of persuasion, manipulation, and legitimation of political decisions. It has been revealed that algorithmic models of communication, especially those driven by artificial intelligence, can personalize political messages, altering citizens’ attitudes, beliefs, and electoral behavior. 
The study concludes that, under the conditions of digitalization and technological mediation, traditional notions of morally acceptable political action are being redefined, requiring new approaches to the ethical understanding of political communication.
Daniel Martínez-Ávila, Tarcisio Zandonade, Andrés Fernández-Ramos
et al.
In this paper we review the influence of Jesse Shera and Margaret Egan's Social Epistemology in the Library and Information Science and Knowledge Organization literature. The study qualitatively complements the study by Guimarães et al. (2018) and focuses on the literature published after Zandonade (2004). We critically discuss the interpretations and relevance of the theory for the literature and highlight its importance for recent developments in line with a more sociological approach. In this sense, we believe that Hjørland's domain analysis can be considered a true successor of social epistemology and a new paradigm in the field.
Andrei Buliga, Chiara Di Francescomarino, Chiara Ghidini
et al.
Counterfactual explanations suggest what should be different in the input instance to change the outcome of an AI system. When dealing with counterfactual explanations in the field of Predictive Process Monitoring, however, control flow relationships among events have to be carefully considered. A counterfactual, indeed, should not violate control flow relationships among activities (temporal background knowledge). Within the field of Explainability in Predictive Process Monitoring, there have been a series of works on counterfactual explanations for outcome-based predictions. However, none of them consider the inclusion of temporal background knowledge when generating these counterfactuals. In this work, we adapt state-of-the-art techniques for counterfactual generation in the domain of XAI that are based on genetic algorithms to consider a series of temporal constraints at runtime. We assume that this temporal background knowledge is given, and we adapt the fitness function, as well as the crossover and mutation operators, to maintain the satisfaction of the constraints. The proposed methods are evaluated against state-of-the-art genetic algorithms for counterfactual generation and the results are presented. We show that including temporal background knowledge yields counterfactuals more conformant to that knowledge, without losing ground on traditional counterfactual quality metrics.
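The core idea of folding constraint satisfaction into the fitness function can be illustrated with a minimal sketch. This is not the authors' implementation: the penalty scheme, the precedence-style constraints, and all function names here are illustrative assumptions.

```python
def violates(trace, constraints):
    """Check a candidate trace against simple precedence constraints.

    trace: list of activity names
    constraints: list of (before, after) pairs meaning `before` must
    occur and precede `after` whenever `after` appears in the trace.
    """
    for before, after in constraints:
        if after in trace:
            if before not in trace or trace.index(before) > trace.index(after):
                return True
    return False

def fitness(trace, original, constraints, penalty=100.0):
    """Toy fitness: positional distance to the original trace, plus a
    large penalty when the temporal background knowledge is violated,
    steering the genetic search toward conformant counterfactuals."""
    distance = sum(a != b for a, b in zip(trace, original))
    distance += abs(len(trace) - len(original))
    return distance + (penalty if violates(trace, constraints) else 0.0)
```

A genetic algorithm minimizing this fitness would prefer candidates close to the original trace that still respect the declared precedence relations; crossover and mutation operators can additionally be made constraint-aware, as the paper describes.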
Giovanni Ciatto, Andrea Agiollo, Matteo Magnini
et al.
Background. Endowing intelligent systems with semantic data commonly requires designing and instantiating ontologies with domain-specific knowledge. Especially in the early phases, those activities are typically performed manually by human experts, possibly drawing on their own experience. The resulting process is therefore time-consuming, error-prone, and often biased by the personal background of the ontology designer. Objective. To mitigate that issue, we propose a novel domain-independent approach to automatically instantiate ontologies with domain-specific knowledge, by leveraging large language models (LLMs) as oracles. Method. Starting from (i) an initial schema composed of inter-related classes and properties and (ii) a set of query templates, our method queries the LLM multiple times, and generates instances for both classes and properties from its replies. Thus, the ontology is automatically filled with domain-specific knowledge, compliant with the initial schema. As a result, the ontology is quickly and automatically enriched with manifold instances, which experts may choose to keep, adjust, discard, or complement according to their own needs and expertise. Contribution. We formalise our method in a general way and instantiate it over various LLMs, as well as on a concrete case study. We report experiments rooted in the nutritional domain where an ontology of food meals and their ingredients is automatically instantiated from scratch, starting from a categorisation of meals and their relationships. There, we analyse the quality of the generated ontologies and compare ontologies attained by exploiting different LLMs. Experimentally, our approach achieves a quality metric that is up to five times higher than the state of the art, while reducing erroneous entities and relations by up to ten times. Finally, we provide a SWOT analysis of the proposed method.
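The query-template loop described in the Method section can be sketched as follows. This is a hedged illustration, not the paper's code: the template format, the comma-separated reply parsing, and the stand-in oracle are all assumptions; a real deployment would call an actual LLM API in place of `toy_oracle`.

```python
def instantiate(schema, templates, oracle, rounds=1):
    """Fill an ontology schema with instances by querying an oracle.

    schema: dict mapping class_name -> list collecting its instances
    templates: dict mapping class_name -> query template with a {cls} slot
    oracle: callable taking a prompt and returning a comma-separated
            string of candidate instances (here, a stand-in for an LLM)
    """
    for _ in range(rounds):
        for cls, template in templates.items():
            reply = oracle(template.format(cls=cls))
            for item in (x.strip() for x in reply.split(",")):
                if item and item not in schema[cls]:
                    schema[cls].append(item)  # deduplicate across rounds
    return schema

def toy_oracle(prompt):
    # canned replies for illustration only; swap in a real LLM call here
    canned = {"Give 3 instances of class Meal": "pizza, salad, soup"}
    return canned.get(prompt, "")

ontology = instantiate({"Meal": []},
                       {"Meal": "Give 3 instances of class {cls}"},
                       toy_oracle)
print(ontology)  # {'Meal': ['pizza', 'salad', 'soup']}
```

The experts' keep/adjust/discard step described in the abstract would then operate on the returned `schema` lists.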
Reasoning is an essential component of human intelligence as it plays a fundamental role in our ability to think critically, support responsible decisions, and solve challenging problems. Traditionally, AI has addressed reasoning in the context of logic-based representations of knowledge. However, the recent leap forward in natural language processing, with the emergence of language models based on transformers, is hinting at the possibility that these models exhibit reasoning abilities, particularly as they grow in size and are trained on more data. Despite ongoing discussions about what reasoning is in language models, it is still not easy to pin down to what extent these models are actually capable of reasoning. The goal of this workshop is to create a platform for researchers from different disciplines and/or AI perspectives, to explore approaches and techniques with the aim to reconcile reasoning between language models using transformers and using logic-based representations. The specific objectives include analyzing the reasoning abilities of language models measured alongside KR methods, injecting KR-style reasoning abilities into language models (including by neuro-symbolic means), and formalizing the kind of reasoning language models carry out. This exploration aims to uncover how language models can effectively integrate and leverage knowledge and reasoning with it, thus improving their application and utility in areas where precision and reliability are a key requirement.
Identifying and predicting the factors that contribute to the success of interdisciplinary research is crucial for advancing scientific discovery. However, there is a lack of methods to quantify the integration of new ideas and technological advancements in astronomical research and how these new technologies drive further scientific breakthroughs. Large language models, with their ability to extract key concepts from vast literature beyond keyword searches, provide a new tool to quantify such processes. In this study, we extracted concepts in astronomical research from 297,807 publications between 1993 and 2024 using large language models, resulting in a set of 24,939 concepts. These concepts were then used to form a knowledge graph, where the link strength between any two concepts was determined by their relevance through citation-reference relationships. By calculating this relevance across different time periods, we quantified the impact of numerical simulations and machine learning on astronomical research. The knowledge graph demonstrates two phases of development: a phase in which the technology was integrated, and another in which the technology was explored in scientific discovery. The knowledge graph also reveals that, although machine learning has made significant inroads into astronomy, there is currently a lack of new concept development at the intersection of AI and astronomy, which may be the bottleneck preventing machine learning from further transforming the field.
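One simple way to realize "link strength determined by relevance through citation-reference relationships" is to count, for each pair of concepts, how often a paper mentioning one cites a paper mentioning the other. The sketch below is an illustrative assumption about that construction, not the study's actual pipeline; the data structures and function name are hypothetical.

```python
from collections import Counter
from itertools import product

def link_strengths(papers, references):
    """Aggregate concept-pair relevance from citation-reference links.

    papers: dict paper_id -> set of concepts extracted from that paper
    references: dict paper_id -> list of cited paper_ids
    Returns a Counter mapping sorted (concept_a, concept_b) pairs to the
    number of citation links connecting papers that mention them.
    """
    strength = Counter()
    for citing, cited_ids in references.items():
        for cited in cited_ids:
            if cited not in papers:
                continue  # cited paper outside the corpus
            # every concept in the citing paper links to every concept
            # in the cited paper; self-pairs are skipped
            for a, b in product(papers[citing], papers[cited]):
                if a != b:
                    strength[tuple(sorted((a, b)))] += 1
    return strength

papers = {
    "p1": {"machine learning", "galaxy morphology"},
    "p2": {"numerical simulation", "galaxy morphology"},
}
references = {"p1": ["p2"]}
print(link_strengths(papers, references))
```

Restricting `references` to papers from a given time window yields the per-period relevance the study uses to trace the two phases of technology adoption.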
This chapter argues that the general philosophy of science should learn metaphilosophical lessons from the case of metaphysical underdetermination, as it occurs in non-relativistic quantum mechanics. Section 2 presents the traditional discussion of metaphysical underdetermination regarding the individuality and non-individuality of quantum particles. Section 3 discusses three reactions to it found in the literature: eliminativism about individuality; conservatism about individuality; eliminativism about objects. Section 4 wraps it all up with metametaphysical considerations regarding the epistemology of metaphysics of science.
Even though deep speaker models have demonstrated impressive accuracy in speaker verification tasks, this often comes at the expense of increased model size and computation time, presenting challenges for deployment in resource-constrained environments. Our research focuses on addressing this limitation through the development of small-footprint deep speaker embedding extraction using knowledge distillation. While previous work in this domain has concentrated on speaker embedding extraction at the utterance level, our approach involves amalgamating embeddings from different levels of the x-vector model (teacher network) to train a compact student network. The results highlight the significance of frame-level information, with the student models exhibiting a remarkable size reduction of 85%-91% compared to their teacher counterparts, depending on the size of the teacher embeddings. Notably, by concatenating teacher embeddings, we achieve student networks that maintain comparable performance to the teacher while enjoying a substantial 75% reduction in model size. These findings and insights extend to other x-vector variants, underscoring the broad applicability of our approach.
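The idea of training a compact student to match concatenated teacher embeddings can be sketched with a minimal regression-style distillation loss. This is an assumption for illustration: the paper's actual objective, network architectures, and embedding sizes may differ, and the arrays below are stand-ins, not x-vector outputs.

```python
import numpy as np

def distillation_loss(student_emb, teacher_embs):
    """Mean-squared error between a student embedding and the
    concatenation of teacher embeddings taken from several levels
    (e.g. frame-level and utterance-level statistics).

    student_emb: (d,) array produced by the compact student network
    teacher_embs: list of 1-D arrays whose concatenated length is d
    """
    target = np.concatenate(teacher_embs)
    assert student_emb.shape == target.shape, "student must match concatenated size"
    return float(np.mean((student_emb - target) ** 2))

# stand-in vectors in place of real x-vector layer outputs
frame_level = np.full(4, 0.5)
utterance_level = np.zeros(4)
student = np.full(8, 0.25)
print(distillation_loss(student, [frame_level, utterance_level]))  # 0.0625
```

Minimizing this loss over training utterances pushes the small student toward reproducing the information captured at multiple levels of the larger teacher.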
The theory formulated by Norbert Elias and Eric Dunning, according to which modern sports took shape in the nineteenth century as an instrument for controlling aggressive impulses and for allowing nation-states to centralize the principal social functions, has enjoyed great authority. In recent years, however, neuroscientists and cognitive scientists have conducted many tests, obtaining unequivocal evidence of the empathic and connective function exercised, above all by football, on its supporters: football would constitute not a form of censorship of violence, but a precious arena in which to learn the behavioural and cognitive styles of sharing. The work of the Argentine writer Osvaldo Soriano shows, before the neuroscientists, how in the representation of football the relationships always prevail over the agents who promote them.
Computational linguistics. Natural language processing, Epistemology. Theory of knowledge
This article analyzes the recruitment, by Revista Brasiliense, a São Paulo-based publication that circulated between 1955 and 1964, of the Ceará-born writer João Clímaco Bezerra (JCB), aiming to understand the space of possibilities that allowed a provincial writer to circulate beyond regional borders and to participate in a national-developmentalist political-cultural project whose main articulators were established names of the Brazilian intelligentsia. As a methodological resource, drawing on Bourdieusian sociology, JCB's trajectory was retraced and the set of capitals (cultural, social, and intellectual) attained by the writer was mapped. The article thus seeks to establish the rites of institution and nomination that constructed the social identity of the agent under analysis, the mechanisms of consecration, and the positions he occupied in the social spaces through which he moved and by means of which he established his networks of sociability and affiliation.
Epistemology. Theory of knowledge, History (General)
This paper presents two terms that differ from each other: Islamic religious education and Islamic education. To examine these two terms, the author reviews them from two interrelated aspects: the epistemological aspect, as the theory of knowledge, and the aspect of content or material, which is one of the important points in understanding the curriculum. In terms of epistemology, Islamic religious education is more inclined toward the practice of educating within the context of Islam, while Islamic education speaks at the level of sources; in theory, the principles it records are the forerunner of the Islamic religious education material itself. As for content or material, between Islamic religious education and Islamic education there is, as in the epistemological view, essentially no difference: the terms contained in Islamic education include aqidah, worship, and morals, which are explained in terms of the introduction to Allah SWT, human potential, human functions, and morals.
Keywords: Education, Islamic Education, Epistemology
End-to-end multimodal learning on knowledge graphs has been left largely unaddressed. Instead, most end-to-end models, such as message passing networks, learn solely from the relational information encoded in a graph's structure: raw values, or literals, are either omitted completely or are stripped of their values and treated as regular nodes. In either case we lose potentially relevant information which could otherwise have been exploited by our learning methods. To avoid this, we must treat literals and non-literals as separate cases. We must also address each modality separately and accordingly: numbers, texts, images, geometries, et cetera. We propose a multimodal message passing network which not only learns end-to-end from the structure of graphs, but also from their possibly diverse set of multimodal node features. Our model uses dedicated (neural) encoders to naturally learn embeddings for node features belonging to five different types of modalities, including images and geometries, which are projected into a joint representation space together with their relational information. We demonstrate our model on a node classification task, and evaluate the effect that each modality has on the overall performance. Our results support our hypothesis that including information from multiple modalities can help our models obtain better overall performance.
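The pattern of per-modality encoders feeding one joint representation space can be sketched in a few lines. This is a deliberately toy illustration, not the paper's neural architecture: the encoders here are trivial hand-written functions, and the random projection matrices stand in for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_number(x, dim=4):
    # toy numeric encoder: scale the value per dimension, squash with tanh
    return np.tanh(x * np.linspace(0.1, 1.0, dim))

def encode_text(s, dim=4):
    # toy text encoder: character histogram hashed into `dim` buckets
    vec = np.zeros(dim)
    for ch in s:
        vec[ord(ch) % dim] += 1.0
    return vec / max(len(s), 1)

def joint_embedding(features, proj):
    """Project each modality-specific encoding into one shared space and
    sum them, mimicking how dedicated encoders feed their outputs into a
    common representation used by the message passing layers."""
    parts = [proj[mod] @ enc for mod, enc in features.items()]
    return np.sum(parts, axis=0)

# one learned projection per modality (random stand-ins here)
proj = {"num": rng.normal(size=(8, 4)), "txt": rng.normal(size=(8, 4))}
node = {"num": encode_number(3.0), "txt": encode_text("literal value")}
print(joint_embedding(node, proj).shape)  # (8,)
```

In the full model these joint embeddings would be refined by message passing over the graph's relational structure before the node classification head.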
Knowledge Graphs (KG), composed of entities and relations, provide a structured representation of knowledge. For easy access to statistical approaches on relational data, multiple methods to embed a KG into f(KG) $\in$ R^d have been introduced. We propose TransINT, a novel and interpretable KG embedding method that isomorphically preserves the implication ordering among relations in the embedding space. Given implication rules, TransINT maps sets of entities (tied by a relation) to continuous sets of vectors that are inclusion-ordered isomorphically to relation implications. With a novel parameter sharing scheme, TransINT enables automatic training on missing but implied facts without rule grounding. On a benchmark dataset, we outperform the best existing state-of-the-art rule-integration embedding methods by significant margins in link prediction and triple classification. The angles between the continuous sets embedded by TransINT provide an interpretable way to mine semantic relatedness and implication rules among relations.
Carlos Badenes-Olmedo, David Chaves-Fraga, María Poveda-Villalón
et al.
In the absence of sufficient medication for COVID patients due to increased demand, disused drugs have been employed, or the doses of those available have been modified, by hospital pharmacists. Some evidence for the use of alternative drugs can be found in the existing scientific literature, which could assist in such decisions. However, exploiting a large corpus of documents in an efficient manner is not easy, since drugs may not appear explicitly related in the texts and may be mentioned under different brand names. Drugs4Covid combines word embedding techniques and semantic web technologies to enable a drug-oriented exploration of large medical literature. Drugs and diseases are identified according to the ATC classification and MeSH categories respectively. More than 60K articles and 2M paragraphs have been processed from the CORD-19 corpus, with information on COVID-19, SARS, and other related coronaviruses. An open catalogue of drugs has been created, and results are publicly available through a drug browser, a keyword-guided text explorer, and a knowledge graph.