Ezzat Molla Ebrahimi, Kosar Bahman Abadi, Saeed Bahman Abadi
The element of time plays a pivotal role in shaping other components of narrative structure and contributes significantly to a deeper understanding of the story’s meaning. Among the most influential theorists in the study of narrative time is the French scholar Gérard Genette, who conceptualizes time in narrative through three key dimensions: order, duration, and frequency. This paper seeks to explore the role of time in Samaah Garibat men Baytena, a novel by the contemporary Syrian author Shahla Ujayli, through the framework of Genette’s theory of anachrony. The novel presents a vivid depiction of Syrian social life during times of regional conflict, portraying the pain, struggles, and challenges faced by the people. Time emerges as a prominent and dynamic element in the novel, as it is central to the unfolding of its varied events. Ujayli navigates the movement of narrative time from chronological (calendar-based) time to textual time, a shift that bears a meaningful relationship to the story’s content. She begins the narrative with a retrospective and extra-diegetic approach, gradually transitions to internal (intra-diegetic) time, and concludes with a forward-looking, anticipatory perspective. To facilitate the transition from past to future, and from external to internal temporal dimensions, the author involves her characters in meaningful ways. Their dialogues play a crucial role in shifting the temporal focus from external to internal viewpoints. This temporal structure appears to serve two key functions: first, to reflect the harsh realities of society, and second, to inspire a sense of hope in the reader. Through this technique, Ujayli subtly conveys that hope for change is possible. By streamlining the transition between different temporal layers, she avoids unnecessary narrative delays. Initially, she presents information in a straightforward manner to ease the reader into the story, then strategically introduces anticipatory moments that create suspense and engage the reader’s curiosity and emotional investment.
Abstract A method is presented for computing the Rényi entropy of a perturbed massless vacuum on the ball via a comparison with lattice field theory. If the perturbed state is Gaussian with smoothly varying correlation functions and the perturbation parameter has units of energy, I show that the coefficients of the Rényi entropy are analytically computable for all values of the Rényi parameter α in odd dimensions and for integer α in even dimensions. I apply this procedure to compute coefficients of the large-distance expansion for the Rényi mutual information of distant balls and of the low-temperature expansion for the entropy of a thermal field.
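For reference, the quantities computed here are the standard ones (notation assumed, not restated in the abstract): for the reduced density matrix $\rho$ on the ball, the Rényi entropy is

$$ S_\alpha(\rho) \;=\; \frac{1}{1-\alpha}\,\log \operatorname{Tr}\rho^{\alpha}, \qquad \alpha>0,\ \alpha\neq 1, $$

which reduces to the von Neumann entropy as $\alpha\to 1$, and the Rényi mutual information of two distant balls $A$ and $B$ is $I_\alpha(A;B)=S_\alpha(A)+S_\alpha(B)-S_\alpha(A\cup B)$.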
Nuclear and particle physics. Atomic energy. Radioactivity
Abstract The role of anyonic statistics stands as a cornerstone in the landscape of topological quantum techniques. While recent years have brought forth encouraging and persuasive strides in detecting anyons, a significant facet remains unexplored, especially in view of connecting anyonic physics to quantum information platforms—whether and how entanglement can be generated by anyonic braiding. Here, we demonstrate that even when two anyonic subsystems (represented by anyonic beams) are connected only by electron tunneling, entanglement between them, manifesting fractional statistics, is generated. To demonstrate this physics, we rely on a platform where fractional quantum Hall edges are bridged by a quantum point contact that allows only transmission of fermions (so-called Andreev-like tunneling). This invokes the physics of two-beam collisions in an anyonic Hong-Ou-Mandel collider, accompanied by a process that we dub anyon-quasihole braiding. We define an entanglement pointer—a current-noise-based function tailored to quantify entanglement associated with quasiparticle fractional statistics. Our work, which exposes, both in theory and in experiment, entanglement associated with anyonic statistics and braiding, prospectively paves the way to the exploration of entanglement induced by non-Abelian statistics.
Abstract We employ a probabilistic mesoscopic description to draw conceptual and quantitative analogies between Brownian motion and late-time fluctuations of thermal correlation functions in generic chaotic systems respecting ETH. In this framework, thermal correlation functions of ‘simple’ operators are described by stochastic processes, which are able to probe features of the microscopic theory only in a probabilistic sense. We apply this formalism to the case of semiclassical gravity in AdS3, showing that wormhole contributions can be naturally identified as moments of stochastic processes. We also point out a ‘Matryoshka doll’ recursive structure in which information is hidden in higher and higher moments, and which can be naturally justified within the stochastic framework. We then re-interpret the gravitational results from the boundary perspective, promoting the OPE data of the CFT to probability distributions. The outcome of this study shows that semiclassical gravity in AdS can be naturally interpreted as a mesoscopic description of quantum gravity, and a mesoscopic holographic duality can be framed as a moment-vs.-probability-distribution duality.
Nuclear and particle physics. Atomic energy. Radioactivity
Using the example of the development of two simple dual-band monofocal IR objectives, approaches to the layout and design of their optical schemes are demonstrated, depending on whether compensation for the effect of temperature changes on the optical characteristics of these objectives is required. It is shown that, when thermal compensation is not required, superior optical characteristics can be achieved in a simple triplet in which the flat surface of the front refractive lens carries a diffractive microstructure. In the case of passive athermalization, the optical scheme of the objective becomes more complicated and consists of refractive two-lens power and correction components, in the latter of which the flat surface of one of the lenses carries a diffractive microstructure. Owing to the highly efficient diffractive microstructures, the longitudinal chromatic aberration of both objectives is reduced almost to the diffraction limit and, combined with a low level of residual monochromatic aberrations at a high relative aperture, maximum resolution is achieved for the uncooled microbolometer arrays used as detectors.
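For background, the hybrid refractive-diffractive correction described here rests on the standard thin-element achromatization condition (stated as general background, not quoted from the paper): for a refractive power $\varphi_r$ and a diffractive power $\varphi_d$ in contact,

$$ \varphi_r+\varphi_d=\varphi, \qquad \frac{\varphi_r}{\nu_r}+\frac{\varphi_d}{\nu_d}=0, $$

where $\nu_r$ is the Abbe number of the refractive material over the working waveband and $\nu_d$ is the effective Abbe number of the diffractive structure, which is negative and small in magnitude; consequently only a weak diffractive power is needed to cancel the longitudinal chromatic aberration.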
Daniel Marcolin, Yasuyuki Todo, Mahendra Piraveenan
This work analyses the interdependent creation of patent and shareholding links in interfirm networks, and how these dynamics affect the resilience of such networks in the face of cascading failures. Using the Orbis dataset, we construct very large co-patenting and shareholding networks, both globally and for individual countries. In addition, we construct smaller overlap networks from the firm pairs that have both types of links between them, for each of the nine years from 2008 to 2016. We use information-theoretic measures, such as mutual information, active information storage, and transfer entropy, to characterise the topological similarities and shared topological information between the relevant co-patenting and shareholding networks. We then construct a cascading failure model and use it to analyse the resilience of interdependent interfirm networks in terms of multiple failure characteristics. We find that there is a relatively high level of mutual information between co-patenting networks and the shareholding networks from later years, suggesting that the formation of shareholding links is influenced by the existence of patent links in previous years. We highlight that this phenomenon differs between countries. For interfirm networks from certain countries, such as Switzerland and the Netherlands, this influence is remarkably higher than for other countries. We also show that this influence becomes most apparent after a delay of four years between the formation of co-patenting links and shareholding links. Analysing the resilience of shareholding networks against cascading failures, we show that in terms of both mean downtime and failure proportion of firms, certain countries, including Italy, Germany, India, Japan and the United States, have less resilient shareholding networks than other countries with significant economies. Based on our results, we postulate that an interfirm network model which considers multiple types of relationships together, uses information-theoretic measures to establish information sharing and causality between them, and uses cascading failure simulation to understand the resilience of such networks under economic and financial stress could be a useful multifaceted model for highlighting important features of economic systems around the world.
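As a concrete illustration of the kind of cascading-failure analysis described, the sketch below simulates failures spreading over an interfirm network and records the failure proportion per step and the mean downtime per firm. The failure rule (a firm fails once a threshold fraction of its neighbours has failed and recovers after a fixed downtime) and the synthetic network are assumptions for illustration, not the authors' model or the Orbis data.

```python
import random
import networkx as nx

def simulate_cascade(G, seed_fraction=0.05, threshold=0.3,
                     downtime=3, steps=50, rng=None):
    """Toy cascading-failure model on an undirected interfirm network.

    A firm fails once at least `threshold` of its neighbours are failed,
    stays failed for `downtime` steps, and then recovers. Returns the
    failure proportion at each step and the mean downtime per firm.
    """
    rng = rng or random.Random(0)
    n_seeds = max(1, int(seed_fraction * G.number_of_nodes()))
    failed_until = {n: downtime for n in rng.sample(list(G.nodes), n_seeds)}
    down_steps = {n: 0 for n in G}
    failure_proportion = []
    for t in range(steps):
        failed = {n for n, until in failed_until.items() if until > t}
        for n in failed:
            down_steps[n] += 1
        # A healthy firm fails when its failed-neighbour share reaches the threshold.
        for n in G:
            if n not in failed and G.degree(n) > 0:
                share = sum(nb in failed for nb in G[n]) / G.degree(n)
                if share >= threshold:
                    failed_until[n] = t + 1 + downtime
        failure_proportion.append(len(failed) / G.number_of_nodes())
    mean_downtime = sum(down_steps.values()) / G.number_of_nodes()
    return failure_proportion, mean_downtime

# Synthetic stand-in for a national shareholding network.
G = nx.barabasi_albert_graph(1000, 2, seed=1)
proportions, mean_dt = simulate_cascade(G)
print(f"peak failure proportion: {max(proportions):.3f}, mean downtime: {mean_dt:.2f}")
```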
This article explores the domain of legal analysis and its methodologies, emphasising the significance of generalisation in legal systems. It discusses the process of generalisation in relation to legal concepts and the development of ideal concepts that form the foundation of law. The article examines the role of logical induction and its similarities with semantic generalisation, highlighting their importance in legal decision-making. It also critiques the formal-deductive approach in legal practice and advocates for more adaptable models, incorporating fuzzy logic, non-monotonic defeasible reasoning, and artificial intelligence. The potential application of neural networks, specifically deep learning algorithms, in legal theory is also discussed. The article discusses how neural networks encode legal knowledge in their synaptic connections, while the syllogistic model condenses legal information into axioms. The article also highlights how neural networks assimilate novel experiences and exhibit evolutionary progression, unlike the deductive model of law. Additionally, the article examines the historical and theoretical foundations of jurisprudence that align with the basic principles of neural networks. It delves into the statistical analysis of legal phenomena and theories that view legal development as an evolutionary process. The article then explores Friedrich Hayek’s theory of law as an autonomous self-organising system and its compatibility with neural network models. It concludes by discussing the implications of Hayek’s theory on the role of a lawyer and the precision of neural networks.
This paper examines the innovative intellectual capital variables of SMEs, such as business culture, knowledge intelligence, business communication, and digital business, using intellectual capital theory as a frame of reference for SMEs in South Africa. The phenomenal rise of SMEs and a gap in the existing literature on innovative intellectual capital served as the driving forces behind this study. A lack of innovative skills, brought on by low levels of intellectual capital, is seen as the driving force behind this unwelcome phenomenon. This conceptual paper aims to bring a broader understanding of the innovative competencies that are crucial for businesses to sustain themselves, grow, and perform well; SMEs will thereby be helped to develop platforms that enhance their operations and advance the South African economy. A detailed evaluation of secondary sources of information provided by the University of Johannesburg was used in this paper. This paper analyses the relevant conceptualization of key concepts and the literature on intellectual capital implementation. Based on the conceptualization and literature, this study found that the performance of SMEs is significantly influenced by innovative intellectual capital variables and that, for SMEs to have a competitive advantage in the long run, they should embrace the intellectual capital variables covered in this paper. Adequate knowledge of intellectual capital by SMEs operating in South Africa can be used to improve their capacity for innovation and expand their operations. The literature reviewed in the study revealed that intellectual capital contributes to the development of competitiveness in SMEs through the use of business culture, knowledge intelligence, business communication, and digital business processes.
The importance of perspective-taking crosses disciplines and is foundational to diverse phenomena such as point-of-view, scale, mindset, theory of mind, opinion, belief, empathy, compassion, analysis, and problem solving. This publication gives predictions for, and a formal description of, point-view Perspectives (P), or the “P-rule”. This makes the P-rule foundational to systems, systems thinking, and the consilience of knowledge. It is one of four universals of the organization of information as a whole. This paper presents nine empirical studies in which subjects were asked to complete a task and/or answer a question. The samples vary for each study (ranging from N = 407 to N = 34,398) and are generalizable to a normal distribution of the US population. As was evident in Cabrera, “These studies support—with high statistical significance—the predictions made by DSRP Theory (Distinctions, Systems, Relationships, Perspectives) for point-view Perspectives, including its: universality as an observable phenomenon in both mind (cognitive complexity) and nature (material complexity) (i.e., parallelism); internal structures and dynamics; mutual dependencies on other universals (i.e., Distinctions, Systems, and Relationships); role in structural predictions; and efficacy as a metacognitive skill”. These data suggest that point-view Perspectives (P) observably and empirically exist, and that their universality, efficacy, and parallelism (between cognitive and material complexity) exist as well. The impact of this paper is that it provides empirical evidence for the phenomenon of point-view perspective taking (the “P-rule”) as a universal pattern/structure of systems thinking, a field in which scholarly debate is often based on unvalidated, opinion-based frameworks; this sets the stage for theory building in the field.
Mostafa Maleki, Mohsen Shams, Narges Roustaei, et al.
Background: Skin cancer is one of the most preventable diseases. The purpose of this study is to describe a social marketing-based intervention design protocol to promote sun-protective behaviors among adolescent boys living in urban areas of Yasuj, in southwestern Iran. Methods: This study will be conducted in six specific steps: a qualitative study, a systematic review, development of appropriate tools, a cross-sectional study, intervention design, and a feasibility study. The main objective of the qualitative study is to elicit the views and opinions of adolescent boys, their parents, and teachers about sun-protective behaviors. In the second step, factors affecting sun-protective behaviors will be reviewed systematically. Based on the findings of the first and second steps, an appropriate model/theory of behavior change will be selected, and a standardized questionnaire will then be developed. In the fourth step, a cross-sectional survey will be conducted using the developed questionnaire to assess current sun-protective behavior practices. Results: The findings of the first to fourth steps will provide a comprehensive picture of the issue and the factors affecting it. During the fifth step, the structure and content of the intervention package, as well as educational and promotional materials, will be developed and pre-tested. Finally, in the sixth step, a feasibility study will be conducted. Conclusion: This study will provide practical information on developing the content and structure of a community-based social marketing intervention. This protocol reports on how to achieve audience-oriented insights for designing a tailored intervention aimed at promoting sun-protective behaviors among adolescent boys using social marketing.
Information geometry and optimal transport are two distinct geometric frameworks for modeling families of probability measures. In recent years, there has been a surge of research endeavors that cut across these two areas and explore their links and interactions. This paper is intended to provide an (incomplete) survey of these works, including entropy-regularized transport, divergence functions arising from $c$-duality, density manifolds and transport information geometry, the para-Kähler and Kähler geometries underlying optimal transport, and the regularity theory for its solutions. Some outstanding questions that would be of interest to audiences of both disciplines are posed. Our piece also serves as an introduction to the Special Issue on Optimal Transport of the journal Information Geometry.
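As a pointer for readers coming from either side, the entropy-regularized transport problem mentioned above can be written in standard notation (assumed here, not taken from the survey): for marginals $\mu$ and $\nu$, cost $c$, and regularization strength $\epsilon>0$,

$$ \min_{\pi\in\Pi(\mu,\nu)} \int c(x,y)\,d\pi(x,y) \;+\; \epsilon\,\mathrm{KL}\!\left(\pi\,\|\,\mu\otimes\nu\right), $$

whose $\epsilon\to 0$ limit recovers the Monge-Kantorovich problem and whose KL penalty is one concrete place where information-geometric divergences enter the transport picture.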
I recall my first encounter with Professor Shun-ichi Amari who, once upon a time in Las Vegas, gave me a precious hint about connecting Independent Component Analysis (ICA) to Information Geometry. The paper sketches, rather informally, some of the insights gained in following this lead.
Information leakage to a guessing adversary in index coding is studied, where some messages in the system are sensitive and others are not. The non-sensitive messages can be used by the server like secret keys to mitigate leakage of the sensitive messages to the adversary. We construct a deterministic linear coding scheme, developed from the rank minimization method based on fitting matrices (Bar-Yossef et al. 2011). The linear scheme leads to a novel upper bound on the optimal information leakage rate, which is proved to be tight over all deterministic scalar linear codes. We also derive a converse result from a graph-theoretic perspective, which holds in general over all deterministic and stochastic coding schemes.
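To make the fitting-matrix idea concrete, the following toy example (a standard minrank illustration over GF(2) with three receivers and cyclic side information; it is not the leakage-minimizing construction of the paper) shows how a fitting matrix of rank 2 yields a deterministic scalar linear code with two transmissions instead of three.

```python
import numpy as np

# Three binary messages; receiver i wants x[i] and knows x[(i+1) % 3] as side information.
x = np.array([1, 0, 1], dtype=int)

# A matrix that "fits" the side-information structure: M[i, i] = 1, and
# M[i, j] may be nonzero only if receiver i already knows x[j].
M = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=int)

# Over GF(2) the third row is the sum of the first two, so rank(M) = 2:
# the server needs to broadcast only two linear combinations of the messages.
basis = M[:2]                                 # rows spanning the row space of M
broadcast = basis @ x % 2                     # the two transmitted bits
coeffs = np.array([[1, 0], [0, 1], [1, 1]])   # each row of M expressed in that basis

# Each receiver reconstructs its row of M @ x from the broadcast, then
# cancels the side-information bit to recover its own message.
for i in range(3):
    mx = coeffs[i] @ broadcast % 2
    side_idx = (i + 1) % 3
    decoded = (mx - M[i, side_idx] * x[side_idx]) % 2
    assert decoded == x[i]
print("all three receivers decode their messages from 2 transmissions instead of 3")
```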
This article analyses apps and artificial intelligence chatbots designed to provide survivors of sexual violence with emergency assistance, education, and a means to report and build evidence against perpetrators. Demonstrating how these technologies both confront and constitute forms of oppression, this analysis complicates assumptions about data protection through an intersectional feminist examination of these digital tools. In surveying different anti-violence apps, we interrogate how the racial formation of whiteness manifests in ways that can be understood as the political, representational, and structural intersectional dimensions of data protection.
Patricia Wollstadt, Sebastian Schmitt, Michael Wibral
Selecting a minimal feature set that is maximally informative about a target variable is a central task in machine learning and statistics. Information theory provides a powerful framework for formulating feature selection algorithms -- yet, a rigorous, information-theoretic definition of feature relevancy, which accounts for feature interactions such as redundant and synergistic contributions, is still missing. We argue that this lack is inherent to classical information theory, which does not provide measures to decompose the information a set of variables provides about a target into unique, redundant, and synergistic contributions. Such a decomposition has been introduced only recently by the partial information decomposition (PID) framework. Using PID, we clarify why feature selection is a conceptually difficult problem when approached using information theory and provide a novel definition of feature relevancy and redundancy in PID terms. From this definition, we show that the conditional mutual information (CMI) maximizes relevancy while minimizing redundancy, and we propose an iterative, CMI-based algorithm for practical feature selection. We demonstrate the power of our CMI-based algorithm in comparison to the unconditional mutual information on benchmark examples and provide corresponding PID estimates to highlight how PID makes it possible to quantify the information contributions of features and their interactions in feature-selection problems.
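The iterative, CMI-based selection can be sketched as a greedy forward search; the snippet below is a minimal illustration for discrete features using plug-in entropy estimates on a synthetic dataset (the estimator, the toy data, and the stopping rule are assumptions, not the authors' implementation).

```python
from collections import Counter
from math import log2
import random

def entropy(samples):
    """Plug-in Shannon entropy (in bits) of a list of hashable outcomes."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

def cmi(X_cols, f, y, selected):
    """Estimate I(X_f ; Y | X_selected) as H(f,S) + H(Y,S) - H(f,Y,S) - H(S)."""
    S = list(zip(*(X_cols[j] for j in selected))) if selected else [()] * len(y)
    return (entropy(list(zip(X_cols[f], S))) + entropy(list(zip(y, S)))
            - entropy(list(zip(X_cols[f], y, S))) - entropy(S))

def select_features(X_cols, y, k):
    """Greedy forward selection: repeatedly add the feature with the largest
    conditional mutual information with the target given the features chosen so far."""
    selected = []
    for _ in range(k):
        scores = {f: cmi(X_cols, f, y, selected)
                  for f in X_cols if f not in selected}
        best = max(scores, key=scores.get)
        if scores[best] <= 0:       # nothing left that adds information
            break
        selected.append(best)
    return selected

# Toy data: y = x0 AND x1, so features 0 and 1 are individually and jointly
# informative; feature 2 is a noisy (redundant) copy of feature 0; feature 3 is noise.
rng = random.Random(0)
n = 2000
X_cols = {j: [rng.randint(0, 1) for _ in range(n)] for j in (0, 1, 3)}
y = [a & b for a, b in zip(X_cols[0], X_cols[1])]
X_cols[2] = [v if rng.random() < 0.9 else 1 - v for v in X_cols[0]]
print(select_features(X_cols, y, k=2))  # picks features 0 and 1, skipping the redundant copy and the noise
```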