Immersive journalism is often discussed as a technological innovation meant to bring audiences closer to the news and to promote empathy through emotional first-person experience. This article proposes an alternative perspective, focusing on intimacy as a mediated relationship rather than as emotional intensity or personal exposure. Following existing research on emotion, digital intimacy and empathy, the paper argues that immersive journalism continues a long journalistic effort to reduce the distance between lived realities and audiences. It shows that emotional responses in this type of journalism are not automatic effects of technology but the result of a mixture of factors, such as narrative choices, spatial positioning and ethical considerations. While immersive formats can create a strong sense of proximity and of "being there", this closeness may sometimes produce discomfort or resistance. By connecting theories of intimacy with studies on immersive media, the paper proposes the concept of mediated intimacy as a useful framework for understanding how this type of journalism creates, negotiates and limits closeness. Understanding immersive journalism through this lens opens space for a more careful discussion of, and future research on, its emotional impact and ethical responsibilities.
This article explores how Emmanuel Levinas’s concept of the epiphany of the face can illuminate ethical responsibility in pedagogical encounters under late-modern conditions, where masks and roles dominate intersubjective life. Drawing on a normative-philosophical reading of Levinas and contextualised by sociological diagnoses, the analysis foregrounds how ethical responsibility precedes method, rule, and moral codes. A simple heuristic triad—the lived, the emotional, and the vulnerable—makes visible what masks attempt to conceal yet persistently leaks through our (micro-)gestures. The article further examines how “the Third” translates the primary ethical call into justice and institutions without dissolving its asymmetry, and how Levinas’s account of language shapes pedagogical communication. The contribution is conceptual: it articulates conditions for pedagogical judgement and responsibility, and points toward implications for assessment, professional formation, and the design of pedagogical frameworks.
Philosophy (General), Theory and practice of education
Artificial general intelligence (AGI) is an established field of research. Yet some have questioned whether the term still has meaning. AGI has been subject to so much hype and speculation that it has become something of a Rorschach test. Melanie Mitchell argues the debate will only be settled through long-term, scientific investigation. To that end, here is a short, accessible and provocative overview of AGI. I compare definitions of intelligence, settling on intelligence as adaptation and on AGI as an artificial scientist. Taking my cue from Sutton's Bitter Lesson, I describe two foundational tools used to build adaptive systems: search and approximation. I compare pros, cons, hybrids and architectures like o3, AlphaGo, AERA, NARS and Hyperon. I then discuss overall meta-approaches to making systems behave more intelligently, dividing them into scale-maxing, simp-maxing and w-maxing, based respectively on the Bitter Lesson, Ockham's Razor and Bennett's Razor. These maximise resources, simplicity of form, and the weakness of constraints on functionality. I discuss examples including AIXI, the free energy principle and The Embiggening of language models. I conclude that though scale-maxed approximation dominates, AGI will be a fusion of tools and meta-approaches. The Embiggening was enabled by improvements in hardware; now the bottlenecks are sample and energy efficiency.
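For illustration only (this toy sketch is mine, not the overview's): search enumerates candidate answers and keeps the best, while approximation fits a model to a few samples and reads the answer off the fit. The objective and both methods below are illustrative assumptions.

```python
import numpy as np

def target(x: float) -> float:
    # Black-box objective we would like to maximise on [0, 1].
    return -(x - 0.3) ** 2

def by_search(steps: int = 1000) -> float:
    # Search: enumerate a grid of candidates and keep the best one.
    grid = [i / steps for i in range(steps + 1)]
    return max(grid, key=target)

def by_approximation() -> float:
    # Approximation: fit a quadratic surrogate to three samples, then
    # read the answer off the fitted coefficients (parabola vertex).
    xs = np.array([0.0, 0.5, 1.0])
    a, b, _ = np.polyfit(xs, [target(x) for x in xs], 2)
    return -b / (2.0 * a)

print(by_search(), by_approximation())  # both land near 0.3
```

Hybrids of the two (as in AlphaGo's tree search over a learned evaluator) combine enumeration where it is cheap with learned approximation where it is not.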
Ilias Diakonikolas, Daniel M. Kane, Sihan Liu, et al.
We study the task of testable learning of general -- not necessarily homogeneous -- halfspaces with adversarial label noise with respect to the Gaussian distribution. In the testable learning framework, the goal is to develop a tester-learner such that if the data passes the tester, then one can trust the output of the robust learner on the data. Our main result is the first polynomial time tester-learner for general halfspaces that achieves dimension-independent misclassification error. At the heart of our approach is a new methodology to reduce testable learning of general halfspaces to testable learning of nearly homogeneous halfspaces that may be of broader interest.
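A minimal sketch of the tester-learner contract described above, under illustrative assumptions: the moment-matching tester and least-squares learner below are stand-ins, not the paper's actual algorithm.

```python
import numpy as np

def tester(X: np.ndarray, tol: float = 0.1) -> bool:
    # Illustrative distribution test: accept only if the empirical mean
    # and covariance are close to those of the standard Gaussian N(0, I).
    d = X.shape[1]
    mean_ok = np.linalg.norm(X.mean(axis=0)) <= tol
    cov_ok = np.linalg.norm(np.cov(X.T) - np.eye(d)) <= tol
    return mean_ok and cov_ok

def learner(X: np.ndarray, y: np.ndarray):
    # Placeholder learner: least-squares fit of a general, not
    # necessarily homogeneous, halfspace sign(<w, x> + b).
    A = np.hstack([X, np.ones((len(X), 1))])
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    return sol[:-1], sol[-1]  # (w, b)

def tester_learner(X: np.ndarray, y: np.ndarray):
    # The framework's contract: a hypothesis is certified only when the
    # data passes the tester; otherwise no guarantee is offered.
    return learner(X, y) if tester(X) else None
```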
The status of the equivalence principle in modified symmetric teleparallel gravity is examined. In this theory, minimum length geodesics are distinct from autoparallel geodesics, that is, the ``shortest'' paths are not the ``straightest'' paths. We show that a standard argument that singles out metric geodesics in general relativity does not apply in modified symmetric teleparallel gravity. This is because the latter theory does not obey the equivalence principle in the sense of Weinberg. We argue, however, that the structure of the theory makes it inevitable that a freely falling test particle follows a shortest path, a geodesic of the metric. The geodesic equation that governs the motion of a freely falling test particle involves the Levi-Civita connection, not some other connection obtained by solving the connection field equations of the theory. This also has bearing on whether, under appropriate conditions, modified symmetric teleparallel gravity is fully equivalent to general relativity.
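For reference, the two notions of "straight" the abstract contrasts can be written out; these are standard definitions, not results of the paper. Freely falling test particles are argued to follow metric geodesics:

```latex
% Metric geodesic equation, with the Levi-Civita connection:
\frac{d^2 x^\mu}{d\tau^2}
  + \mathring{\Gamma}^{\mu}{}_{\alpha\beta}\,
    \frac{dx^\alpha}{d\tau}\,\frac{dx^\beta}{d\tau} = 0,
\qquad
\mathring{\Gamma}^{\mu}{}_{\alpha\beta}
  = \tfrac{1}{2}\, g^{\mu\nu}\left(
      \partial_\alpha g_{\nu\beta} + \partial_\beta g_{\nu\alpha}
      - \partial_\nu g_{\alpha\beta}\right).
% Autoparallels ("straightest" paths) instead use the full symmetric
% teleparallel connection \Gamma = \mathring{\Gamma} + L, where the
% disformation L is built from the nonmetricity; the two path families
% coincide only when L drops out of the equation of motion.
```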
Simplicity is held by many to be the key to general intelligence. Simpler models tend to "generalise", identifying the cause or generator of data with greater sample efficiency. The implications of the correlation between simplicity and generalisation extend far beyond computer science, touching questions of physics and even biology. Yet simplicity is a property of form, while generalisation is a property of function. In interactive settings, any correlation between the two depends on interpretation. In theory there could be no correlation, and yet in practice there is. Previous theoretical work showed generalisation to be a consequence of "weak" constraints implied by function, not form. Experiments demonstrated that choosing weak constraints over simple forms yielded a 110-500% improvement in generalisation rate. Here we show that all constraints can take equally simple forms, regardless of weakness. However, if forms are spatially extended, then function is represented using a finite subset of forms. If function is represented using a finite subset of forms, then we can force a correlation between simplicity and generalisation by making weak constraints take simple forms. If function is determined by a goal-directed process that favours versatility (e.g. natural selection), then efficiency demands that weak constraints take simple forms. Complexity has no causal influence on generalisation, but appears to because of confounding.
The article considers the phenomenon of unchanging provisions of the constitution, their evolution, and their different classifications. Immutable provisions are considered narrowly, as provisions that are not subject to any changes, as well as criteria (principles) that any amendments must not contradict. Synonymous concepts denoting immutability are considered: absolute entrenchment, the eternity clause, stone provisions. The idea of supra-constitutionality, however, is seen as a phenomenon distinct from the unchanging provisions of the constitution. It is argued that material (substantive) requirements for amendment were not common from the beginning, although the situation has changed over the past few centuries. As practice shows, immutability can be more than formally established in a positive constitutional text. Existing examples point to the important role of judges who interpret the positive constitutional text. In this case, the constituent power speaks not only through the positive text of the constitution but also through the judges (in the Kelsen model of constitutional control, the judges of the constitutional court). The unchanging provisions themselves can be changed, as evidenced by relevant, if isolated, examples from practice. In addition, history knows examples of departure from positively enshrined immutable provisions (in cases of a rupture of constitutional continuity). Even if provisions remain unchanged, there may be no specific jurisdiction to monitor compliance. Conversely, the absence of explicit immutability does not mean that practice has not formed implicit criteria of immutability. However, even explicit and implicit immutability combined still cannot claim universalism in constitutional law.
PRIMA (the PRobe far-Infrared Mission for Astrophysics) is a concept for a far-infrared (IR) observatory. PRIMA features a cryogenically cooled 1.8 m diameter telescope and is designed to carry two science instruments enabling ultra-high-sensitivity imaging and spectroscopic studies in the 24 to 235 micron wavelength range. The resulting observatory is a powerful survey and discovery machine, with mapping speeds better by 2-4 orders of magnitude than those of its far-IR predecessors. The bulk of the observing time on PRIMA should be made available to the community through a General Observer (GO) program offering 75% of the mission time over 5 years. In March 2023, the international astronomy community was encouraged to prepare authored contributions articulating scientific cases that are enabled by the telescope's massive sensitivity advance and broad spectral coverage, and that could be carried out within the GO program. This document, the PRIMA General Observer Science Book, is the edited collection of the 76 received contributions.
This study is a descriptive content analysis of graduate theses on philosophy with children completed at the institutes of universities in Turkey and of articles on the subject published in Turkey-based journals. The data, collected with the Classification Form for Theses and Articles on Philosophy with Children, were analysed with SPSS 26.0 and MAXQDA 22. The study reached the following findings. The number of studies on philosophy with children conducted in Turkey has increased over the last five years. Articles are the most common type of work on the subject; doctoral dissertations are the least common. The studies focus mostly on theoretical explanations and are generally literature reviews. Studies other than literature reviews were carried out with qualitative and quantitative methods, mixed methods being the least preferred. Quasi-experimental, case study and action research designs were preferred most often. Studies were generally conducted at the preschool and primary school levels with 11-100 participants, and implementation periods typically ranged from 8 to 14 weeks. Interview forms were frequently used as the data collection instrument. Qualitative data analysis methods were used most, followed by quantitative methods; using both together was the least preferred approach. Among the qualitative techniques, content analysis was preferred most; for quantitative data, inferential statistical techniques such as the t-test and the Mann-Whitney U test were used. Almost half of the implementation studies were carried out at the preschool level. More than half of the remaining studies were conducted independently of the content of any course; very few combined philosophy-with-children activities with course content. In the great majority of the implementation studies no method was followed; in only three studies were the philosophy-with-children activities structured according to a method. The studies' recommendations were mostly for new research; practice-oriented recommendations were to disseminate philosophy-with-children practices, raise awareness, and make the activities more planned, programmed and systematic. In conclusion, more research on philosophy with children is needed in Turkey. New research should implement and evaluate philosophy-with-children activities, examine their effect on critical thinking skills, use qualitative methods, especially in order to answer the question "Can children do philosophy?", and apply and evaluate one of the established methods of doing philosophy with children.
Let $X$ be either a general hypersurface of degree $n+1$ in $\mathbb P^n$ or a general $(2,n)$ complete intersection in $\mathbb P^{n+1}, n\geq 4$. We construct balanced rational curves on $X$ of all high enough degrees. If $n=3$ or $g=1$, we construct rigid curves of genus $g$ on $X$ of all high enough degrees. As an application we construct some rigid bundles on Calabi-Yau threefolds. In addition, we construct some low-degree balanced rational curves on hypersurfaces of degree $n + 2$ in $\mathbb P^n$.
Efforts to promote equitable public policy with algorithms appear to be fundamentally constrained by the "impossibility of fairness" (an incompatibility between mathematical definitions of fairness). This technical limitation raises a central question about algorithmic fairness: How can computer scientists and policymakers support equitable policy reforms with algorithms? In this article, I argue that promoting justice with algorithms requires reforming the methodology of algorithmic fairness. First, I diagnose the problems of the current methodology for algorithmic fairness, which I call "formal algorithmic fairness." Because formal algorithmic fairness restricts analysis to isolated decision-making procedures, it leads to the impossibility of fairness and to models that exacerbate oppression despite appearing "fair." Second, I draw on theories of substantive equality from law and philosophy to propose an alternative methodology, which I call "substantive algorithmic fairness." Because substantive algorithmic fairness takes a more expansive scope of analysis, it enables an escape from the impossibility of fairness and provides a rigorous guide for alleviating injustice with algorithms. In sum, substantive algorithmic fairness presents a new direction for algorithmic fairness: away from formal mathematical models of "fair" decision-making and toward substantive evaluations of whether and how algorithms can promote justice in practice.
The interconnectedness and interdependence of modern graphs are growing ever more complex, demanding enormous resources for the processing, storage, communication, and decision-making associated with these graphs. In this work, we focus on the task of graph sparsification: producing an edge-reduced graph with a structure similar to the original while largely preserving various user-defined graph metrics. Existing graph sparsification methods are mostly sampling-based, which introduces high computational complexity in general and a lack of flexibility across different reduction objectives. We present SparRL, the first generic and effective graph sparsification framework enabled by deep reinforcement learning. SparRL easily adapts to different reduction goals and promises graph-size-independent complexity. Extensive experiments show that SparRL outperforms all prevailing sparsification methods in producing high-quality sparsified graphs with respect to a variety of objectives.
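The interaction loop such a framework implies can be sketched as follows. This is a schematic reading of the abstract, not SparRL's actual architecture; the random edge picker merely stands in for a learned policy.

```python
import random
import networkx as nx

def sparsify(G: nx.Graph, budget: int, metric, choose=None):
    # Schematic RL-style loop: remove one edge per step, rewarding the
    # agent for keeping a user-defined metric near its original value.
    H, baseline = G.copy(), metric(G)
    pick = choose or (lambda edges: random.choice(edges))
    for _ in range(budget):
        u, v = pick(list(H.edges()))
        H.remove_edge(u, v)
        reward = -abs(metric(H) - baseline)  # penalise metric drift
        # a learning agent would update its policy on (state, action, reward)
    return H

# Example reduction objective: preserve the connectivity structure.
G = nx.erdos_renyi_graph(100, 0.1, seed=0)
H = sparsify(G, budget=50, metric=nx.number_connected_components)
```

Because the reward is computed from the user-supplied metric alone, swapping the objective requires no change to the loop, which reflects the flexibility the abstract claims over sampling-based methods.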
Mir Hameeda, Behnam Pourhassan, Mario C. Rocca, et al.
In this paper, we study large-scale structure formation using the gravitational partition function. We argue that the system of gravitating galaxies can be analyzed using Tsallis statistical mechanics. The divergences in the Tsallis gravitational partition function can be removed using a generalization of dimensional regularization (GDR). The finite gravitational partition function thus obtained is used to evaluate the thermodynamics of the system of galaxies and, in turn, to understand the clustering of galaxies in the universe. The correlation function, which is believed to contain the information about the clustering of galaxies, is also discussed in this formalism.
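For orientation, the Tsallis framework replaces the Boltzmann-Gibbs exponential with a q-exponential; these are the standard definitions, not the paper's regularised result:

```latex
% q-exponential and the Tsallis partition function it induces;
% Boltzmann-Gibbs statistics is recovered in the limit q -> 1.
e_q(x) = \left[1 + (1-q)\,x\right]^{\frac{1}{1-q}}, \qquad
Z_q = \sum_i \left[1 - (1-q)\,\beta E_i\right]^{\frac{1}{1-q}}
\;\xrightarrow{\;q \to 1\;}\; \sum_i e^{-\beta E_i}.
```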
The article examines the national and international legal foundations for regulating relations in the field of physical culture and sport. It is established that studying and taking into account positive foreign experience of effectively regulating relations arising in this field is important for improving the norms of national sports law, including for the codification of Russian sports legislation, and constitutes one of the important reasons driving the development of the Russian Federation's international cooperation with foreign countries. Another important reason is the need to harmonize and unify Russia's national legislation on physical culture and sport with the norms of international law. Developing Russia's international cooperation with foreign countries and international organizations in the interest of national security is especially relevant given the pressure exerted on Russian athletes over the past decade at sports competitions and events of various levels. The article concludes that Russia needs to develop all areas of international cooperation, since physical culture and sport at the national and international levels serve as an important instrument for ensuring sustainable socio-economic development, for the personal development of the individual, and for intercultural, partner-based and friendly interstate communication, and act as a powerful stimulus for increasing each country's competitiveness under conditions of globalization.
Comparative law. International uniform law, Jurisprudence. Philosophy and theory of law
In this paper I analyse how Hegel's discussion of central notions of Kant's philosophy (such as concept, object, truth, knowledge, form and content) leads to rethinking the epistemologically legitimate scope of the categories. This revision goes hand in hand with Hegel's observation of a serious methodological problem in critical philosophy: it is incapable of justifying the truth claim of reason's presumed self-knowledge. This leads Hegel to rework the method of reason's self-examination, together with the notions indicated above. Finally, I indicate the requirements, from a Hegelian perspective, for the truth of a philosophical discourse: (1) admission of conceptual knowledge; (2) immanent deduction based on the concept understood as a systematic structure; (3) deduction of subjectivity; and, most importantly, (4) the unthinkability of an alternative model of reason and the refutation of all realism. With this, one is in a position to prove the legitimacy of reason's self-examination, or the transparency of reason.
In the constrained synchronization problem we ask whether a given automaton admits a synchronizing word coming from a fixed regular constraint language. We show that intersecting a given constraint language with an ideal language decreases the computational complexity. Additionally, we state a theorem giving PSPACE-hardness that broadly generalizes previously used constructions, and a result on how to combine languages by concatenation to obtain polynomial-time solvable constrained synchronization problems. We use these results to give a classification of the complexity landscape for small constraint automata of up to three states.
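The problem statement itself admits a compact brute-force checker, sketched below. This only illustrates the definition, not the paper's constructions, and is exponential in the number of automaton states.

```python
from collections import deque

def constrained_sync_word(delta, states, sigma, c_delta, c_init, c_final):
    # BFS over pairs (set of still-possible automaton states, state of the
    # constraint DFA), looking for a word that collapses the set to a
    # singleton while being accepted by the constraint DFA.
    start = (frozenset(states), c_init)
    seen, queue = {start}, deque([(start, "")])
    while queue:
        (S, c), w = queue.popleft()
        if len(S) == 1 and c in c_final:
            return w  # a shortest constrained synchronizing word
        for a in sigma:
            nxt = (frozenset(delta[q, a] for q in S), c_delta[c, a])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, w + a))
    return None  # no synchronizing word lies in the constraint language

# Cerny automaton on 3 states with the trivial one-state constraint
# (i.e. Sigma*); prints "baab", a shortest reset word for this automaton.
delta = {(0, "a"): 1, (1, "a"): 2, (2, "a"): 0,
         (0, "b"): 1, (1, "b"): 1, (2, "b"): 2}
c_delta = {(0, "a"): 0, (0, "b"): 0}
print(constrained_sync_word(delta, {0, 1, 2}, "ab", c_delta, 0, {0}))
```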
This work may be described as a modern philosophical approach to theoretical physics. Since ancient times, science and philosophy have evolved in parallel, renewing from time to time the epochal paradigms of human thought. We could not understand how the scientists of the past achieved so much if we neglected the philosophical ideas that inspired them. Today, despite the spectacular successes of the Standard Model of Elementary Particles (SMEP) and the Standard Model of Modern Cosmology (SMMC), theoretical physics seems to have run into a mess of contradictions that preclude access to higher views. We are still unable to explain why it is so difficult to include gravitation in the SMEP, although General Relativity (GR) works so well in the SMMC; why it is so difficult to get rid of all the divergences of the SMEP; and "why there is something rather than nothing". This paper aims to answer these and other questions by starting from a novel fundamental principle: the spontaneous breaking of conformal symmetry down to the metric symmetry of GR. The statement is very simple, but its implementation is somewhat complicated. To facilitate the reading, the paper is divided into a main sequence of sections and subsections and a collection of appendices, the former acting as a sort of Ariadne's thread guiding the reader through the labyrinth of specialized topics needed to understand the work.