Background. In the context of the digitalisation of everyday life, digital wellbeing has recently emerged as a concept. It signals the need to reflect on the impact of digital transformations on various spheres of human life and is becoming the most important form of a person’s wellbeing. Objectives. The study analyses modern approaches to psychological wellbeing in the digital world and to digital wellbeing as a socio-psychological phenomenon. Methods. The study involved a theoretical analysis and systematisation of modern scientific approaches to digital wellbeing. The socio-cognitive concept of digital socialisation served as the methodological framework for the study. Results. The key areas of research on the relationship between wellbeing and different aspects of digital technology use are identified: digital access, digital inequality and digital competence; problematic internet use, screen time and gaming; the impact of digital technologies on cognitive development; social media use and digital practices as factors of wellbeing; and the development of artificial intelligence technologies as a new challenge to wellbeing. Existing concepts of digital wellbeing are analysed, and a formula for digital wellbeing comprising three components is proposed: first, satisfaction with connectedness and the management of mixed reality; second, self-efficacy and the management of the digitally extended personality; and third, satisfaction with and management of digital sociality. Conclusions. The development of a formula for digital wellbeing contributes to the understanding of constructive strategies of human adaptation and pre-adaptation in the context of the increasing digitalisation of everyday life. These strategies are necessary both to maintain an optimal level of societal stability and to ensure society’s development in the near future in response to new socio-technological challenges.
Liesbeth De Mol, Yuri V. Matiyasevich, Eugenio G. Omodeo
et al.
In his autobiographical essay written in 1999, ``From logic to computer science and back'', Martin David Davis (3/8/1928--1/1/2023) indicated that he viewed himself as a logician \emph{and} a computer scientist. He expanded the essay in 2016 and expressed a new perspective through a changed title, ``My life as a logician'', pointing out that logic was the unifying theme underlying his scientific career. Our paper attempts to provide a consistent vision that illuminates Davis' successive contributions leading to his landmark writings on computability, unsolvable problems, automated reasoning, as well as the history and philosophy of computing.
This text presents the development and evaluation of an educational process with pre-service mathematics teachers, centered on the use of didactic strategies with inductive characteristics, within the framework of the course Matemáticas y Física of the Licenciatura en Enseñanza de las Matemáticas at the Universidad de Colima, Mexico. The experience involved 19 students who had received disciplinary training in mathematics and who mastered the procedural knowledge and the algebraic language that physics requires at the basic levels, but who had had little exposure to conceptual knowledge of either classical or modern physics. On this basis, the course centered on reflection and problem solving. Following this logic, the proposal sought to develop both procedural and conceptual mastery, the latter being the central objective of this research. The inductive methods included the use of audiovisual materials and popular-science readings. The results of the students' work show that, without omitting more traditional formative activities such as problem solving or classical physics textbooks, incorporating inductive strategies about the particularities of concepts such as “movement” or “light” allows a deeper understanding of fundamental principles, serving as a functional complement to a more comprehensive education.
The early history of string theory is marked by a shift from strong interaction physics to quantum gravity. The first string models and associated theoretical framework were formulated in the late 1960s and early 1970s in the context of the S-matrix program for the strong interactions. In the mid-1970s, the models were reinterpreted as a potential theory unifying the four fundamental forces. This paper provides a historical analysis of how string theory was developed out of S-matrix physics, aiming to clarify how modern string theory, as a theory detached from experimental data, grew out of an S-matrix program that was strongly dependent upon observable quantities. Surprisingly, the theoretical practice of physicists already turned away from experiment before string theory was recast as a potential unified quantum gravity theory. With the formulation of dual resonance models (the "hadronic string theory"), physicists were able to determine almost all of the models' parameters on the basis of theoretical reasoning. It was this commitment to "non-arbitrariness", i.e., a lack of free parameters in the theory, that initially drove string theorists away from experimental input, and not the practical inaccessibility of experimental data in the context of quantum gravity physics. This is an important observation when assessing the role of experimental data in string theory.
One justification for preregistering research hypotheses, methods, and analyses is that it improves the transparent evaluation of the severity of hypothesis tests. In this article, I consider two cases in which preregistration does not improve this evaluation. First, I argue that, although preregistration may facilitate the transparent evaluation of severity in Mayo's error statistical philosophy of science, it does not facilitate this evaluation in Popper's theory-centric approach. To illustrate, I show that associated concerns about Type I error rate inflation are only relevant in the error statistical approach and not in a theory-centric approach. Second, I argue that a test procedure that is preregistered but that also allows deviations in its implementation (i.e., "a plan, not a prison") does not provide a more transparent evaluation of Mayoian severity than a non-preregistered procedure. In particular, I argue that sample-based validity-enhancing deviations cause an unknown inflation of the test procedure's Type I error rate and, consequently, an unknown reduction in its capability to license inferences severely. I conclude that preregistration does not improve the transparent evaluation of severity (a) in Popper's philosophy of science or (b) in Mayo's approach when deviations are allowed.
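The claim above, that sample-based deviations from a preregistered test procedure inflate its Type I error rate, can be made concrete with a small simulation. The sketch below is not from the article: it uses optional stopping as a stand-in for such deviations, and the interim look schedule, sample size, and alpha level are illustrative assumptions.

```python
# Illustrative sketch (not the article's analysis): a minimal simulation showing how a
# data-dependent deviation from a preregistered plan, here optional stopping, inflates
# the Type I error rate of a nominal alpha = .05 test when the null hypothesis is true.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n_planned, n_sims = 0.05, 100, 5_000
interim_looks = (25, 50, 75, 100)   # hypothetical extra "peeks" at the accumulating data

fixed_rejections, peeking_rejections = 0, 0
for _ in range(n_sims):
    x = rng.normal(0.0, 1.0, n_planned)   # the null is true: the population mean is 0
    # Preregistered fixed-n procedure: analyse the data once, at n = 100.
    if stats.ttest_1samp(x, 0.0).pvalue < alpha:
        fixed_rejections += 1
    # Deviating procedure: test at every interim look and stop at the first "significant" result.
    if any(stats.ttest_1samp(x[:n], 0.0).pvalue < alpha for n in interim_looks):
        peeking_rejections += 1

print(f"Fixed-n Type I error rate:  {fixed_rejections / n_sims:.3f}")    # close to 0.05
print(f"With peeking, Type I rate:  {peeking_rejections / n_sims:.3f}")  # noticeably above 0.05
```

Under the fixed-n plan the empirical rejection rate stays near the nominal 5%, while the peeking procedure rejects true nulls more often; how much more depends on the (typically unreported) deviation policy, which is the "unknown inflation" at issue in the argument.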
Research in cybersecurity may seem reactive, specific, ephemeral, and indeed ineffective. Despite decades of innovation in defense, even the most critical software systems turn out to be vulnerable to attacks. Time and again. Offense and defense forever on repeat. Even provable security, meant to provide an indubitable guarantee of security, does not stop attackers from finding security flaws. As we reflect on our achievements, we are left wondering: Can security be solved once and for all? In this paper, we take a philosophical perspective and develop the first theory of cybersecurity that explains what precisely and *fundamentally* prevents us from making reliable statements about the security of a software system. We substantiate each argument by demonstrating how the corresponding challenge is routinely exploited to attack a system despite credible assurances about the absence of security flaws. To make meaningful progress in the presence of these challenges, we introduce a philosophy of cybersecurity.
This paper describes the underlying philosophy, design, and implementation of a course on "Nuclear Technology, Policy, and Society" taught in the Department of Nuclear Engineering and Radiological Sciences at the University of Michigan. The course explores some of nuclear technology's most pressing challenges, or its 'wicked problems'. Through the course, students explore the origins of these problems, be they social or technical; they are offered tools, both conceptual and methodological, to make sense of these problems; and they are guided through a semester-long exploration of how scientists and engineers can work towards their resolution, and of the degree to which these problems can be solved through institutional transformation or a transformation in our own practices and norms as a field. The underlying pedagogical philosophy, implementation, and response to the course are described here for other instructors who might wish to create a similar course, or for non-academic nuclear scientists and engineers who might, perhaps, find in these pages a vocabulary for articulating and reflecting on the nature of these problems as encountered in their praxis.
Introduction. The health of centenarians is a major focus in global studies. Dyslipidemia is directly linked to the risk of cardiovascular diseases, which pose a growing burden on healthcare due to the increasing elderly population. Studying the lipid profiles of centenarians is important for preventing circulatory system diseases and promoting healthy aging. This research aims to compare the prevalence of dyslipidemia in centenarians (median age 96 [95-97]) with that in elderly individuals (median age 69 [64-74]) in the Republic of Kazakhstan and to examine potential predictors of dyslipidemia in the centenarian group.
Methods. The study involved 46 centenarians (study group) and 82 elderly individuals (control group). Statistical analysis was used to process the data, including blood markers and demographic variables, to identify factors contributing to dyslipidemia.
Results and conclusion. The prevalence of hypercholesterolemia in centenarians was 32.6% (15 people: 3 men, 12 women), with elevated LDL levels in 4.3% (2 women). In the control group, the prevalence of hypercholesterolemia was 29.3% (24 people: 6 men, 18 women), with elevated triglycerides in 6.1% (3 women, 2 men). The study and control groups were compared on their lipid profile characteristics and proved similar, with all p-values above 0.05: cholesterol (p=0.348), HDL (p=0.975), LDL (p=0.161), and triglycerides (p=0.159). Decreased physical activity was a predictor of dyslipidemia in centenarians. Excessive cholesterol levels were significantly higher among women than among men in both groups. The primary factor for dyslipidemia was low physical activity, with the other predictors having no significant impact on the lipid profiles of centenarians. This factor should be considered when assessing the risks of cardiovascular disease and all-cause mortality.
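A minimal sketch of the kind of two-group comparison reported above, assuming a nonparametric Mann-Whitney U test (the abstract does not state which test produced its p-values) and using placeholder data rather than the study's measurements:

```python
# Illustrative sketch only (not the authors' code): comparing lipid variables between a
# centenarian group and an elderly control group. The Mann-Whitney U test and the
# simulated values are assumptions; only the group sizes (46 vs. 82) follow the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
lipid_variables = ["cholesterol", "hdl", "ldl", "triglycerides"]

# Placeholder data shaped like the study: 46 centenarians vs. 82 elderly controls.
centenarians = {v: rng.normal(5.0, 1.0, 46) for v in lipid_variables}
controls     = {v: rng.normal(5.1, 1.0, 82) for v in lipid_variables}

for v in lipid_variables:
    stat, p = stats.mannwhitneyu(centenarians[v], controls[v], alternative="two-sided")
    verdict = "similar" if p > 0.05 else "different"
    print(f"{v:>13}: U = {stat:7.1f}, p = {p:.3f}  ({verdict} at alpha = 0.05)")
```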
The paper re-examines the principal methodological questions arising in the debate over the cosmological standard model's postulate of Dark Matter vs. rival proposals that modify standard (Newtonian and general-relativistic) gravitational theory, the so-called Modified Newtonian Dynamics (MOND) and its subsequent extensions. What should we make of such seemingly radical challenges to cosmological orthodoxy? In the first part of our paper, we assess MONDian theories through the lens of key ideas of major 20th-century philosophers of science (Popper, Kuhn, Lakatos, and Laudan), thereby rectifying widespread misconceptions and misapplications of these ideas common in the pertinent MOND-related literature. None of these classical methodological frameworks, which render precise and systematise the more intuitive judgements prevalent in the scientific community, yields a favourable verdict on MOND and its successors -- contrary to claims in the MOND-related literature by some of these theories' advocates; the respective theory appraisals are largely damning. Drawing on these insights, the paper's second part zooms in on the most common complaint about MONDian theories: their ad-hocness. We demonstrate how the recent coherentist model of ad-hocness captures, and fleshes out, the too often insufficiently articulated hunches underlying this critique. MONDian theories indeed come out as severely ad hoc: they do not cohere well with either theoretical or empirical-factual background knowledge. In fact, as our complementary comparison with the cosmological standard model's Dark Matter postulate shows, with respect to ad-hocness MONDian theories fare worse than the cosmological standard model.
Programs in quantum gravity often claim that time emerges from fundamentally timeless physics. In the semiclassical time program time arises only after approximations are taken. Here we ask what justifies taking these approximations and show that time seems to sneak in when answering this question. This raises the worry that the approach is either unjustified or circular in deriving time from no-time.
Problems with uniform probabilities on an infinite support show up in contemporary cosmology. This paper focuses on the context of inflation theory, where they complicate the assignment of a probability measure over pocket universes. The measure problem in cosmology, whereby it seems impossible to pick out a uniquely well-motivated measure, is associated with a paradox that occurs in standard probability theory and crucially involves uniformity on an infinite sample space. This problem has been discussed by physicists, albeit without reference to earlier work on this topic. The aim of this article is both to introduce philosophers of probability to these recent discussions in cosmology and to familiarize physicists and philosophers working on cosmology with relevant foundational work by Kolmogorov, de Finetti, Jaynes, and other probabilists. As such, the main goal is not to solve the measure problem, but to clarify the exact origin of some of the current obstacles. The analysis of the assumptions going into the paradox indicates that there exist multiple ways of dealing consistently with uniform probabilities on infinite sample spaces. Taking a pluralist stance towards the mathematical methods used in cosmology shows there is some room for progress with assigning probabilities in cosmological theories.
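For readers unfamiliar with the underlying paradox, the standard illustration (added here for orientation, not reproduced from the paper) is that no uniform, countably additive, real-valued probability measure exists on a countably infinite sample space, such as a collection of pocket universes indexed by the natural numbers:

```latex
% Standard illustration of the clash between uniformity and countable additivity
% on a countably infinite sample space; notation is generic, not taken from the paper.
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
Suppose $P$ is a probability measure on $\Omega = \mathbb{N}$ that is uniform, i.e.\
$P(\{n\}) = c$ for every $n$, and countably additive. Then
\[
1 = P(\Omega) = P\Bigl(\bigcup_{n \in \mathbb{N}} \{n\}\Bigr)
  = \sum_{n \in \mathbb{N}} P(\{n\}) = \sum_{n \in \mathbb{N}} c ,
\]
which equals $0$ if $c = 0$ and diverges if $c > 0$. Hence no such $P$ exists: one must
give up uniformity, countable additivity (as de Finetti does), or real-valued
probabilities, which is why multiple consistent ways forward remain available.
\end{document}
```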
This article examines the transformation of mythical, biblical and apocryphal narratives in the Surah Maryam (Surah 19) from the perspective of René Girard’s mimetic theory. It postulates that this theory adds value to the interpretation of the aforementioned surah. From a mimetic perspective, it can be shown that the new, nascent, early Islamic community tried to read the religious narratives structuring its environment in terms of a nonviolent relationship between creator and creature, and thus to distance itself from a sacrificial understanding of God.
Matteo Tuveri, Daniela Fadda, Viviana Fanti
et al.
Gravity is, by far, one of the scientific themes that have most piqued the curiosity of scientists and philosophers over the centuries. The history of science tells us that whenever the creative efforts of physicists and philosophers to solve the main puzzles of our understanding of the universe have met, a new conceptual revolution has begun. However, since Einstein's relativistic theories and the subsequent advent of quantum mechanics, physicists and philosophers have taken different paths, both held captive by the conceptual and mathematical difficulties inherent in their studies. Is it possible to restore a unitary vision of knowledge, overcoming the scientific-humanistic dichotomy that has established itself over time? The answer is certainly not trivial, but we can start from school to experience a new vision of unified knowledge. From this need, the Gravitas project was born. Gravitas is a multidisciplinary outreach and educational program devoted to high school students (17-19 years old) that mixes contemporary physics and the philosophy of science. Coordinated by the Cagliari Section of the National Institute of Nuclear Physics, in Italy, Gravitas started in December 2021 with an unconventional online format: two researchers from different fields of research meet a moderator and informally discuss gravity and related phenomena. The public can chat and indirectly interact with them during the YouTube live stream. The project involved about 250 students from 16 high schools in Sardinia, Italy. Students are also asked to create social media posts whose content is based on the seminars they attended during the project. We present the project and discuss its possible outcomes concerning the introduction of a multidisciplinary approach to teaching physics, philosophy, and the history of contemporary physics in high schools.
To what extent does the black hole information paradox lead to violations of quantum mechanics? I explain how black hole complementarity provides a framework to articulate how quantum characterizations of black holes can remain consistent despite the information paradox. I point out that there are two ways to cash out the notion of consistency in play here: an operational notion and a descriptive notion. These two ways of thinking about consistency lead to (at least) two principles of black hole complementarity: an operational principle and a descriptive principle. Our background philosophy of science regarding realism/instrumentalism might initially lead us to prefer one principle over the other. However, the recent physics literature, which applies tools from quantum information theory and quantum computational complexity theory to various thought experiments involving quantum systems in or around black holes, implies that the operational principle is successful where the descriptive principle is not. This then lets us see that for operationalists the black hole information paradox might no longer be pressing.
We develop and apply a multi-dimensional account of explanatory depth towards a comparative analysis of inflationary and bouncing paradigms in primordial cosmology. Our analysis builds on earlier work due to Azhar and Loeb (2021) that establishes initial conditions fine-tuning as a dimension of explanatory depth relevant to debates in contemporary cosmology. We propose dynamical fine-tuning and autonomy as two further dimensions of depth in the context of problems with instability and trans-Planckian modes that afflict bouncing and inflationary approaches respectively. In the context of the latter issue, we argue that the recently formulated trans-Planckian censorship conjecture leads to a trade-off for inflationary models between dynamical fine-tuning and autonomy. We conclude with the suggestion that explanatory preference with regard to the different dimensions of depth is best understood in terms of differing attitudes towards heuristics for future model building.
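For orientation, one common statement of the trans-Planckian censorship conjecture, added here as background rather than drawn from the paper, bounds the inflationary expansion so that no initially sub-Planckian mode ever crosses the Hubble horizon:

```latex
% One common statement of the trans-Planckian censorship conjecture (TCC);
% this rendering is background material and is not taken from the paper itself.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
The TCC demands that no mode with an initially sub-Planckian wavelength is ever
stretched across the Hubble horizon. For an expansion from scale factor $a_i$ to $a_f$
ending with Hubble rate $H_f$, this reads
\[
\frac{a_f}{a_i}\, \ell_{\mathrm{Pl}} < \frac{1}{H_f}
\qquad\Longleftrightarrow\qquad
N \equiv \ln\frac{a_f}{a_i} < \ln\frac{M_{\mathrm{Pl}}}{H_f},
\]
which bounds the number of inflationary e-folds $N$ and thereby constrains how much
initial-conditions fine-tuning an inflationary model can relieve.
\end{document}
```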
During the period in which he was a Manichaean, Augustine shared the idea that the world has its origin in the conjunction of two ontological substances, God and Darkness, both corporeal in nature. Upon meeting Ambrose in Milan, the latter led him to consider the possibility of speaking of a non-corporeal, purely spiritual substance, and of the world having a single principle: God, who created everything ex nihilo. However, as far as the origin of evil is concerned, this did not solve the problem; on the contrary, it made it even greater, for if there is only a single ontological origin of everything, God, who created everything from nothing, how can one not attribute the origin of evil to Him? It was only in his encounter with Neoplatonism, also in Milan, that Augustine would philosophically confirm the notion of “spiritual substance” he had heard from Ambrose and, more than that, would awaken to the possibility of speaking of evil ontologically, not as being, but as non-being or nothingness. However, even though Plotinus had defined non-being (or nothingness) as the “unlimited”, the “formless”, the “indeterminate”, this, for Augustine, still did not fully solve the problem of evil, since it remained a natural explanation insofar as it located evil in matter. In any case, from then on, he began to think of evil as a “taking away”, a privation. Finally, in Christianity, he found a place for evil as something wholly immaterial, in free human will, where it occurs as an absence, a defection, from the Good: non-being.
Explaining the emergence of stochastic irreversible macroscopic dynamics from time-reversible deterministic microscopic dynamics is one of the key problems in the philosophy of physics. The Mori-Zwanzig projection operator formalism, one of the most important methods of modern nonequilibrium statistical mechanics, allows for a systematic derivation of irreversible transport equations from reversible microdynamics and thus provides a useful framework for understanding this issue. However, discussions of the Mori-Zwanzig formalism in the philosophy of physics tend to focus on simple variants rather than on the more sophisticated ones used in modern physical research. In this work, I will close this gap by studying the problems of probability and irreversibility using the example of Grabert's time-dependent projection operator formalism. This makes it possible to give a more solid mathematical foundation to various concepts from the philosophical literature, in particular Wallace's simple dynamical conjecture and Robertson's theory of autonomous macrodynamics. Moreover, I will explain how the Mori-Zwanzig formalism allows one to resolve the tension between epistemic and ontic approaches to probability in statistical mechanics. Finally, I argue that the debate which interventionists and coarse-grainers should really be having concerns not the question of why there is equilibration at all, but why equilibration has the quantitative form it is found to have in experiments.
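As background (not drawn from the paper), the linear Mori form of the projected equation of motion, one of the "simple variants" contrasted above with Grabert's time-dependent formalism, shows where memory and noise, and with them probability and irreversibility, enter the description:

```latex
% For orientation only: the linear (Mori) generalized Langevin equation obtained from
% the projection operator procedure; notation is standard but not taken from the paper.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
For an observable $A$, a projection operator $P$ onto the relevant variables, and the
Liouvillian $L$, the Mori-Zwanzig procedure turns the reversible microdynamics
$\dot{A} = iLA$ into
\[
\frac{d}{dt} A(t) \;=\; i\Omega\, A(t)
\;-\; \int_0^t \! ds\, K(s)\, A(t-s) \;+\; F(t),
\]
with frequency $i\Omega = (A, iLA)(A,A)^{-1}$, fluctuating force
$F(t) = e^{i(1-P)Lt}\,(1-P)\, iL A$, and memory kernel
$K(t) = (F, F(t))(A,A)^{-1}$. The memory integral and the statistical assumptions made
about $F(t)$ are where irreversibility and probability enter the otherwise reversible
dynamics.
\end{document}
```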