Ji-Hyeon Jeon, Masayuki Maki, Yu-Chung Chiang
et al.
Abstract Background The section Synstylae of the genus Rosa (Rosaceae) is predominantly distributed across the Eastern Asiatic Floristic Region and is characterized by high species diversity and frequent natural hybrids. These characteristics render species within this section exemplary for studying phenotypic variability and easy crossbreeding, which hold potential for advancements in the rose-breeding industry. However, genetic introgression and hybridization have posed challenges to our understanding of their phylogenetic relationships. Despite recurrent interspecific introgression, chloroplast DNA can still aid phylogenetic inference within the section Synstylae due to its uniparental inheritance and high conservation. Results Phylogenetic inferences and haplotype network analysis identified seven distinct chloroplast haplotype groups within the East Asian Synstylae. Clear differentiation was observed between the chloroplast haplotypes of the Sino-Himalayan series Brunonianae and Sino-Japanese series Multiflorae lineages. The chloroplast haplotypes within each lineage aligned more closely with geographic gradients than with species boundaries. Consequently, various chloroplast haplotypes were shared among Sino-Japanese Synstylae species with broader distributions, whereas unique haplotypes were found in species with restricted distribution ranges. Similarly, geographically specific haplotype groups were identified in the Japanese Archipelago, Taiwan, and Eastern China of the Sino-Japanese Subregion. Conclusions The chloroplast genomes of Sino-Japanese Synstylae species may have diverged along geographic gradients, influenced by the geographical and ecological complexity of East Asia and the climate oscillations during the Pleistocene.
The recurring cycles of fragmentation and rejoining in Sino-Japanese Synstylae populations have allowed founder effects and genetic drift to drive divergence and diversification of their chloroplast genomes along these geographic gradients. The substantial incongruence between the chloroplast and nuclear phylogenies evidenced the prevalent genetic introgression within the Sino-Japanese Synstylae lineage. Additionally, two putative hybrid speciation events highlighted the role of genetic introgression in species diversification of the East Asian Synstylae lineage. This study substantiates the value of chloroplast genomes in elucidating genetic introgression and the unique evolutionary history of recently diverged and closely related East Asian Synstylae species.
Mark Hudson, Junzo Uchiyama, Claudia Zancan
et al.
Maritime networks have been proposed as a mechanism for early agricultural and, by extension, language dispersals in several coastal and island regions. In Island Southeast Asia, such networks have sometimes been discussed as an alternative to the farming/language dispersal hypothesis. However, the relationships between Neolithic maritime networks and maritime economies are poorly known. Here, we summarise published information for three regions where Neolithic maritime networks are thought to have been associated with language dispersals (whether hypothetical or directly attested): the Mediterranean, Island Southeast Asia and Japan. We conclude that while maritime networks played an important role in the Neolithic dispersals considered here, maritime trade and resources did not necessarily represent alternative or opposing economic strategies to agriculture. It was only from the Bronze Age that long-distance trade integrated maritime exchange and resources into a broader economic system. Our review illustrates the complex relations between subsistence, technology and mobility in prehistoric maritime networks and the paper concludes with suggestions for future research.
Ching-Hui Sia, Kee Yuan Ngiam, Xiaohong Wang
et al.
Purpose Coronary CT angiography (CCTA) is well established for the diagnostic evaluation and prognostication of coronary artery disease (CAD). The growing burden of CAD in Asia and the emergence of novel CT-based risk markers highlight the need for an automated platform that integrates patient data with CCTA findings to provide tailored, accurate cardiovascular risk assessments. This study aims to develop an artificial intelligence (AI)-driven platform for CAD assessment using CCTA in Singapore's multiethnic population. We will conduct a hybrid retrospective-prospective recruitment of patients who have undergone CCTA as part of the diagnostic workup for CAD, along with prospective follow-up for clinical endpoints. CCTA images will be analysed locally and by a core lab for coronary stenosis grading, Agatston scoring, epicardial adipose tissue evaluation and plaque analysis. The images and analyses will also be uploaded to an AI platform for deidentification, integration and automated reporting, generating precision AI toolkits for each parameter. Participants CCTA images and baseline characteristics have been collected and verified for 4196 recruited patients, comprising 75% Chinese, 6% Malay, 10% Indian and 9% from other ethnic groups. Among the participants, 41% are female, with a mean age of 55±11 years. Additionally, 41% have hypertension, 51% have dyslipidaemia, 15% have diabetes and 22% have a history of smoking. Findings to date The cohort data have been used to develop four AI modules for training, testing and validation. During the development process, data preprocessing standardised the format, resolution and other relevant attributes of the images. Future plans We will conduct prospective follow-up on the cohort to track clinical endpoints, including cardiovascular events, hospitalisations and mortality. Additionally, we will monitor the long-term impact of the AI-driven platform on patient outcomes and healthcare delivery. Trial registration number NCT05509010.
This article explores the amplification of challenges to sexual and reproductive healthcare provision during Nepal’s COVID-19 pandemic response and lockdown in 2020. In Nepal, the provision of essential primary healthcare is compromised by systemic weaknesses, infrastructure, and the economy. This includes healthcare and services supporting women’s sexual and reproductive health and rights (SRHR). During the pandemic, the government instituted a lockdown to control the spread of COVID-19. The government’s focus on controlling the disease, or on ‘pandemic preparedness’, amplified the pre-existing vulnerabilities in the healthcare system. Policy triage caused SRHR to be under-prioritized, widened the pre-existing gaps in the healthcare infrastructure, and compelled healthcare providers to rely more on improvisation. The article concludes by calling for a re-imagination of ‘pandemic preparedness’ as ‘lockdown preparedness’. In Nepal and in other low- and middle-income countries, ‘lockdown preparedness’ should inform pandemic responses and secure the prioritization of essential primary healthcare. Furthermore, ‘lockdown preparedness’ should direct political attention and priority towards decreasing systemic weaknesses and social inequalities, to counteract their amplification during future lockdowns.
Diffusion on graphs is ubiquitous, with numerous high-impact applications. In these applications, complete diffusion histories play an essential role in identifying dynamical patterns, assessing precautionary actions, and forecasting intervention effects. Despite their importance, complete diffusion histories are rarely available and are highly challenging to reconstruct due to ill-posedness, an explosive search space, and scarcity of training data. To date, few methods exist for diffusion history reconstruction. They are exclusively based on the maximum likelihood estimation (MLE) formulation and require knowing the true diffusion parameters. In this paper, we study an even harder problem, namely reconstructing Diffusion history from A single SnapsHot (DASH), where we seek to reconstruct the history from only the final snapshot without knowing the true diffusion parameters. We start with theoretical analyses that reveal a fundamental limitation of the MLE formulation. We prove that: (a) estimation error of diffusion parameters is unavoidable due to the NP-hardness of diffusion parameter estimation, and (b) the MLE formulation is sensitive to the estimation error of diffusion parameters. To overcome the inherent limitation of the MLE formulation, we propose a novel barycenter formulation: finding the barycenter of the posterior distribution of histories, which is provably stable against the estimation error of diffusion parameters. We further develop an effective solver named DIffusion hiTting Times with Optimal proposal (DITTO) by reducing the problem to estimating posterior expected hitting times via Metropolis-Hastings Markov chain Monte Carlo (M-H MCMC) and employing an unsupervised graph neural network to learn an optimal proposal to accelerate the convergence of M-H MCMC. We conduct extensive experiments to demonstrate the efficacy of the proposed method.
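The solver described above relies on Metropolis-Hastings MCMC to estimate posterior expectations. As a minimal illustration of that sampling machinery only (a generic M-H sampler on a toy one-dimensional target, not the DITTO solver itself, whose target and learned proposal are far more involved):

```python
import math
import random

def metropolis_hastings(log_target, proposal, x0, n_steps, seed=0):
    """Generic Metropolis-Hastings sampler with a symmetric proposal.

    log_target: unnormalized log-density of the distribution to sample.
    proposal:   function (x, rng) -> candidate x', symmetric in x and x'.
    """
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        x_new = proposal(x, rng)
        # Accept with probability min(1, target(x_new) / target(x)).
        if math.log(rng.random()) < log_target(x_new) - log_target(x):
            x = x_new
        samples.append(x)
    return samples

# Toy target: standard normal; random-walk proposal.
log_norm = lambda x: -0.5 * x * x
step = lambda x, rng: x + rng.gauss(0.0, 1.0)
draws = metropolis_hastings(log_norm, step, x0=0.0, n_steps=20000)
mean = sum(draws) / len(draws)
```

A better proposal concentrates candidate moves where the posterior has mass, which is exactly what the paper's learned proposal aims for: faster convergence of the same accept/reject loop.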
Devin R. de Zwaan, Davide Scridel, Tomás A. Altamirano
et al.
Measurement(s): Breeding specialization • Breeding status • Migration behaviour • Nest type • Nest site • Data reliability • Endemism • IUCN status
Technology Type(s): Literature review, field monitoring, expert knowledge • Literature review, expert knowledge • Literature review • IUCN red list
Sample Characteristic - Organism: Aves
Sample Characteristic - Environment: alpine • nival
An increasing number of Japanese women are joining the economy, but the Japanese labor market remains friendlier to men than to women. Although much has been done in recent years to create conditions for women's fuller participation in the country's economic life, their employment model has, on the whole, not changed substantially. It is harder for women than for men to obtain a regular-worker position, and in terms of working conditions (salary level, opportunities for professional growth, job content, etc.) they are worse off than men. As before, owing to the harsh demands of lifetime employment, many female regular employees of Japanese firms interrupt their careers after childbirth, and on returning to the labor market a few years later they tend to take jobs as non-regular workers. Housekeeping and childcare are also still entrusted mainly to women. The "glass ceiling" that career-minded Japanese women run up against and the "1 million yen wall" that pushes women into the non-regular employment zone are likewise today's realities. The situation in female employment has become one of the main factors behind a number of painful social phenomena, such as a declining number of marriages, a growing share of unmarried young women, rising ages at first marriage and first childbirth, fewer children per family, a falling fertility rate, and so on. Meanwhile, perhaps the main reason this situation in women's employment persists is that ideas about the separation of the roles of women and men in the family, in society, and at work remain widespread, including among Japanese women themselves. It therefore seems that it will take quite a long time before real changes become tangible.
Machine learning (ML) is the science of credit assignment. It seeks to find patterns in observations that explain and predict the consequences of events and actions. This then helps to improve future performance. Minsky's so-called "fundamental credit assignment problem" (1963) surfaces in all sciences including physics (why is the world the way it is?) and history (which persons/ideas/actions have shaped society and civilisation?). Here I focus on the history of ML itself. Modern artificial intelligence (AI) is dominated by artificial neural networks (NNs) and deep learning, both of which are conceptually closer to the old field of cybernetics than what was traditionally called AI (e.g., expert systems and logic programming). A modern history of AI & ML must emphasize breakthroughs outside the scope of shallow AI textbooks. In particular, it must cover the mathematical foundations of today's NNs such as the chain rule (1676), the first NNs (circa 1800), the first practical AI (1914), the theory of AI and its limitations (1931-34), and the first working deep learning algorithms (1965-). From the perspective of 2025, I provide a timeline of the most significant events in the history of NNs, ML, deep learning, AI, computer science, and mathematics in general, crediting the individuals who laid the field's foundations. The text contains numerous hyperlinks to relevant overview sites. With a ten-year delay, it supplements my 2015 award-winning deep learning survey which provides hundreds of additional references. Finally, I will put things in a broader historical context, spanning from the Big Bang to when the universe will be many times older than it is now.
Do we really understand how machines classify art styles? Historically, art has been perceived and interpreted by human eyes, and there have always been controversial discussions over how people identify and understand art. Historians and the general public tend to interpret the subject matter of art through the context of history and social factors. Style, however, is different from subject matter: it does not correspond to the presence of particular objects in a painting but relates mainly to form, and it can be correlated with features at different levels (Elgammal et al. 2018). This makes it a challenge, for both humans and machines, to identify and classify the stylistic characteristics of artworks and to trace the "transition": how style flows and evolves. In this work, a series of state-of-the-art neural networks and manifold learning algorithms are explored to unveil this intriguing topic: how does a machine capture and interpret the flow of Art History?
Predicting the future trajectory of a person remains a challenging problem due to the randomness and subjectivity of human movement. However, the moving patterns of humans in a constrained scenario typically conform, to a certain extent, to a limited number of regularities, because of scenario restrictions and person-person or person-object interactivity. Thus, an individual person in such a scenario should follow one of these regularities as well. In other words, a person's subsequent trajectory has likely been traveled by others. Based on this hypothesis, we propose to forecast a person's future trajectory by learning from the implicit scene regularities. We call these regularities, inherently derived from the past dynamics of the people and the environment in the scene, the scene history. We categorize scene history information into two types: historical group trajectories and individual-surroundings interaction. To exploit these two types of information for trajectory prediction, we propose a novel framework, the Scene History Excavating Network (SHENet), in which the scene history is leveraged in a simple yet effective way. In particular, we design two components: a group trajectory bank module that extracts representative group trajectories as candidates for the future path, and a cross-modal interaction module that models the interaction between an individual's past trajectory and its surroundings for trajectory refinement. In addition, to mitigate the uncertainty in ground-truth trajectories caused by the aforementioned randomness and subjectivity of human movement, we propose incorporating smoothness into the training process and evaluation metrics. We conduct extensive evaluations to validate the efficacy of our proposed framework on ETH and UCY, as well as on a new, challenging benchmark dataset, PAV, demonstrating superior performance compared to state-of-the-art methods.
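The group-trajectory-bank idea above can be caricatured as retrieval: match the observed past track against stored trajectories and reuse the best match's continuation as a future-path candidate. The sketch below illustrates only this retrieval step under simplifying assumptions (plain Euclidean prefix distance, raw coordinate lists); SHENet itself learns representative trajectories and refines the match with a cross-modal interaction module.

```python
import math

def nearest_bank_trajectory(obs, bank):
    """Return the continuation of the bank trajectory whose prefix best
    matches the observed track.

    obs:  observed past trajectory, a list of (x, y) points.
    bank: candidate trajectories, each strictly longer than obs.
    """
    def prefix_dist(traj):
        # Sum of pointwise Euclidean distances over the observed prefix.
        return sum(math.dist(p, q) for p, q in zip(obs, traj))

    best = min(bank, key=prefix_dist)
    # The remainder of the matched trajectory is the future-path candidate.
    return best[len(obs):]

observed = [(0.0, 0.0), (1.0, 0.0)]
bank = [
    [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)],   # walking right
    [(5.0, 5.0), (6.0, 5.0), (7.0, 5.0)],   # different region of the scene
]
candidate_future = nearest_bank_trajectory(observed, bank)
```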
Sean W. Hixon, Kristina G. Douglass, Kristina G. Douglass
et al.
Introduced predators currently threaten endemic animals on Madagascar through predation, facilitation of human-led hunts, competition, and disease transmission, but the antiquity and past consequences of these introductions are poorly known. We use directly radiocarbon dated bones of introduced dogs (Canis familiaris) to test whether dogs could have aided human-led hunts of the island’s extinct megafauna. We compare carbon and nitrogen isotope data from the bone collagen of dogs and endemic “fosa” (Cryptoprocta spp.) in central and southwestern Madagascar to test for competition between introduced and endemic predators. The distinct isotopic niches of dogs and fosa suggest that any past antagonistic relationship between these predators did not follow from predation or competition for shared prey. Radiocarbon dates confirm that dogs have been present on Madagascar for over a millennium and suggest that they at least briefly co-occurred with the island’s extinct megafauna, which included giant lemurs, elephant birds, and pygmy hippopotamuses. Today, dogs share a mutualism with pastoralists who also occasionally hunt endemic vertebrates, and similar behavior is reflected in deposits at several Malagasy paleontological sites that contain dog and livestock bones along with butchered bones of extinct megafauna and extant lemurs. Dogs on Madagascar have had a wide range of diets during the past millennium, but relatively high stable carbon isotope values suggest few individuals relied primarily on forest bushmeat. Our newly generated data suggest that dogs were part of a suite of animal introductions beginning over a millennium ago that coincided with widespread landscape transformation and megafaunal extinction.
Users' detailed browsing activity - such as what sites they are spending time on and for how long, and what tabs they have open and which one is focused at any given time - is useful for a number of research and practical applications. Gathering such data, however, requires that users install and use a monitoring tool over long periods of time. In contrast, browser extensions can gain instantaneous access to months of browser history data. However, the browser history is incomplete: it records only navigation events, missing important information such as time spent or which tab is focused. In this work, we aim to reconstruct time spent on sites from users' browsing histories alone. We gathered three months of browsing history and two weeks of ground-truth detailed browsing activity from 185 participants. We developed a machine learning algorithm that predicts whether the browser window is focused and active at one-second granularity with an F1-score of 0.84. During periods when the browser is active, the algorithm can predict which domain the user was looking at with 76.2% accuracy. We can use these results to reconstruct the total time spent online for each user with an R^2 value of 0.96, and the total time each user spent on each domain with an R^2 value of 0.92.
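To make the reconstruction task concrete, here is a naive rule-based baseline, not the paper's learned model: credit the gap between consecutive navigation events to the earlier domain, capped at an idle threshold so that long absences are not counted as attention. All names and the 30-minute cap are illustrative assumptions.

```python
from collections import defaultdict

def time_per_domain(events, idle_cap=1800):
    """Heuristic reconstruction of time spent per domain from navigation
    events alone.

    events:   list of (unix_seconds, domain) tuples sorted by time.
    idle_cap: maximum seconds credited for any single gap (default 30 min).
    """
    totals = defaultdict(float)
    for (t0, dom), (t1, _) in zip(events, events[1:]):
        # Credit the gap to the page the user navigated to at t0.
        totals[dom] += min(t1 - t0, idle_cap)
    return dict(totals)

visits = [
    (0, "news.example"),
    (120, "mail.example"),
    (5000, "news.example"),   # 4880 s gap, capped at 1800 s
    (5060, "docs.example"),   # final event receives no credit
]
spent = time_per_domain(visits)
```

The weakness this exposes is exactly what the learned approach addresses: the heuristic cannot tell whether the browser window was actually focused during a gap, so it systematically over-credits the last-visited domain.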
Alfredo Garbuno-Inigo, F. Alejandro DiazDelaO, Konstantin M. Zuev
The scientific understanding of real-world processes has improved dramatically over the years through computer simulations. Such simulators represent complex mathematical models implemented as computer codes that are often expensive to run. The validity of using a particular simulator to draw accurate conclusions relies on the assumption that the computer code is correctly calibrated. This calibration procedure is often pursued through extensive experimentation and comparison with data from a real-world process. The problem is that data collection may be so expensive that only a handful of experiments are feasible. History matching is a calibration technique that, given a simulator, iteratively discards regions of the input space using an implausibility measure. When the simulator is computationally expensive, an emulator is used to explore the input space. In this paper, a Gaussian process emulator provides a complete probabilistic output that is incorporated into the implausibility measure. The identification of regions of interest is accomplished with recently developed annealing sampling techniques. Active learning functions are incorporated into the history matching procedure to refocus the search on the input space and improve the emulator. The efficiency of the proposed framework is tested on well-known examples from the history matching literature, as well as on a proposed testbed of higher-dimensional functions.
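The implausibility measure mentioned above is conventionally the observation-emulator mismatch normalized by the combined variances, with inputs discarded when it exceeds a cutoff (three is the customary choice, by analogy with a 3-sigma rule). A minimal sketch of that screening step, with illustrative variable names and made-up numbers:

```python
import math

def implausibility(z, m_x, v_obs, v_emu, v_disc=0.0):
    """Implausibility of an input x: |z - m(x)| / sqrt(total variance).

    z:      the real-world observation.
    m_x:    emulator mean prediction at x.
    v_obs:  observation-error variance.
    v_emu:  emulator (code-uncertainty) variance at x.
    v_disc: model-discrepancy variance (0 if neglected).
    """
    return abs(z - m_x) / math.sqrt(v_obs + v_emu + v_disc)

# Screen candidate inputs against the conventional cutoff of 3.
z, v_obs = 5.0, 0.1
candidates = [(0.1, 4.8, 0.05),   # (input, emulator mean, emulator variance)
              (0.7, 9.5, 0.30)]
non_implausible = [x for x, m, v in candidates
                   if implausibility(z, m, v_obs, v) < 3.0]
```

Each history-matching wave repeats this screening with an emulator refit on the surviving (non-implausible) region, which is where the paper's annealing samplers and active learning functions come in.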
Radio frequency (RF) superconductivity has become a key technology for many modern particle accelerators. One of the most salient features of this technology is the ability of superconducting RF cavities to deliver high accelerating gradients in continuous-wave and long-pulse modes of operation. However, reaching the current state of the technology was no easy feat: over many years, scientists and engineers had to overcome several serious performance limitations. In this paper, I attempt, to the best of my knowledge, to trace the evolution of accelerating gradients in the field of superconducting radio frequency. I will restrict the scope to the primary innovations, along with some of the ensuing developments, in the development of cavities made of bulk niobium. I will not cover all the many applications and findings over the subsequent decades of progress that built on the primary discoveries and inventions, nor a number of other important topics in the history of cavity development, such as the drive for higher Q values or the push for lower cavity costs via Nb/Cu cavities or large-grain Nb cavities.
Branislav Kveton, Csaba Szepesvari, Mohammad Ghavamzadeh
et al.
We propose a new online algorithm for cumulative regret minimization in a stochastic linear bandit. The algorithm pulls the arm with the highest estimated reward in a linear model trained on its perturbed history. Therefore, we call it perturbed-history exploration in a linear bandit (LinPHE). The perturbed history is a mixture of observed rewards and randomly generated i.i.d. pseudo-rewards. We derive a $\tilde{O}(d \sqrt{n})$ gap-free bound on the $n$-round regret of LinPHE, where $d$ is the number of features. The key steps in our analysis are new concentration and anti-concentration bounds on the weighted sum of Bernoulli random variables. To show the generality of our design, we generalize LinPHE to a logistic model. We evaluate our algorithms empirically and show that they are practical.
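The perturbed-history idea above can be sketched in a few lines: fit a ridge-regression estimate on the observed history augmented with random Bernoulli pseudo-rewards, then pull the greedy arm. This is an illustrative single-round sketch under simplifying assumptions (the paper's exact pseudo-reward scaling, regularization, and the constants behind its regret bound are not reproduced here).

```python
import numpy as np

def perturbed_history_choose(arms, X_hist, y_hist, a=1, rng=None):
    """One round of perturbed-history exploration for a linear model.

    arms:   (K, d) array of arm feature vectors.
    X_hist: (n, d) features of previously pulled arms.
    y_hist: (n,) observed rewards in [0, 1].
    a:      pseudo-reward copies per history entry (more => more exploration).
    """
    rng = rng or np.random.default_rng(0)
    d = arms.shape[1]
    # Perturb the history: replicate past feature rows with i.i.d.
    # Bernoulli(1/2) pseudo-rewards, then fit ridge regression.
    X = np.vstack([X_hist] + [X_hist] * a)
    pseudo = [rng.binomial(1, 0.5, size=len(y_hist)).astype(float)
              for _ in range(a)]
    y = np.concatenate([y_hist] + pseudo)
    theta = np.linalg.solve(X.T @ X + np.eye(d), X.T @ y)
    # Greedy step: pull the arm with the highest estimated reward.
    return int(np.argmax(arms @ theta))
```

The randomness injected through the pseudo-rewards plays the role that posterior sampling plays in Thompson sampling: the estimate is optimistic often enough to keep exploring, without maintaining confidence ellipsoids explicitly.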
A light higgsino is strongly favored by naturalness, but as a dark matter candidate it is usually under-abundant. We consider higgsino production in a non-standard history of the universe, caused by a scalar field with an initially displaced vacuum. We find that, given an appropriate reheating temperature induced by the scalar decay, a light higgsino could provide the correct dark matter relic abundance. Conversely, a sub-TeV higgsino dark matter, once observed, would be a strong hint of a non-standard thermal history of the universe.
Conversational machine comprehension requires the understanding of the conversation history, such as previous question/answer pairs, the document context, and the current question. To enable traditional, single-turn models to encode the history comprehensively, we introduce Flow, a mechanism that can incorporate intermediate representations generated during the process of answering previous questions, through an alternating parallel processing structure. Compared to approaches that concatenate previous questions/answers as input, Flow integrates the latent semantics of the conversation history more deeply. Our model, FlowQA, shows superior performance on two recently proposed conversational challenges (+7.2% F1 on CoQA and +4.0% on QuAC). The effectiveness of Flow also shows in other tasks. By reducing sequential instruction understanding to conversational machine comprehension, FlowQA outperforms the best models on all three domains in SCONE, with +1.8% to +4.4% improvement in accuracy.