In this study, a procedure to build Bayesian optimal designs using utility functions and exploiting existing data is proposed. The procedure is illustrated through a case study in the field of reliability, by applying a hierarchical Bayesian model and performing Markov Chain Monte Carlo simulations. Two innovative contributions are introduced: (i) the definition of specific utility functions that involve several key issues and (ii) the use of observational data. The use of observational data makes it possible to build the optimal design without additional costs for the company, while the definition of the utility functions accounts for the specific characteristics of the reliability study. Features like model residuals, i.e., discrepancies between observed and predicted response values, and the costs of the electronic component are addressed. Costs are also weighted considering the environmental impact. Satisfactory results are obtained and subsequently validated through an in-depth sensitivity analysis.
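The core loop of such a procedure, choosing a design by maximizing a Monte Carlo estimate of expected utility, can be sketched as follows. The toy linear model, the utility (negative squared residual minus a cost penalty), and all numbers below are illustrative stand-ins, not the paper's hierarchical model or its actual utility functions.

```python
import random

def expected_utility(design, posterior_draws, cost_weight=0.1, n_sim=1000):
    """Monte Carlo estimate of the expected utility of one candidate design."""
    total = 0.0
    for _ in range(n_sim):
        theta = random.choice(posterior_draws)   # posterior draw (from existing data)
        y = theta * design + random.gauss(0, 1)  # simulate a future observation
        residual = y - theta * design            # observed minus predicted response
        total += -(residual ** 2) - cost_weight * design  # fit reward minus cost
    return total / n_sim

# pick the candidate design with the highest estimated expected utility
designs = [0.5, 1.0, 2.0]
posterior = [random.gauss(1.0, 0.2) for _ in range(500)]
best = max(designs, key=lambda d: expected_utility(d, posterior))
```

In a realistic setting the posterior draws would come from the MCMC run on the observational data, and the utility would additionally weight component costs by environmental impact, as the study does.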
Rubia Truppel, Anderson D’Oliveira, Laura Canale
et al.
This mapping review investigates and analyzes the state of the art of scientific evidence on educational interventions to improve indoor and outdoor air quality. The review followed proposed guidelines for mapping reviews in environmental sciences and the steps described in the Template for a Mapping Study Protocol. The search was conducted in PubMed, Web of Science, Embase, Cinahl, and Google Scholar with no language restrictions, and was completed in January 2025. Three filters were applied: search, selection with inclusion and exclusion criteria (PECOS strategy), and data extraction. Two independent reviewers assessed article eligibility, and disagreements were resolved by a third researcher. Twenty-four studies that met the eligibility criteria were included. Five research questions were answered. Studies published between 1977 and 2024 were included, totaling 7289 participants aged 12 to 85. The geographic distribution was concentrated in China (five studies) and the United States (four studies), followed by South Korea, India, Australia, and other countries with fewer publications. The predominant methodology was experimental studies; observational studies were also analyzed, although less frequently. The period with the greatest increase in the number of publications was between 2020 and 2024. The educational methods most commonly used in the studies were lectures and the delivery of information leaflets. Particulate matter with diameters of 2.5 μm and 10 μm (PM<sub>2.5</sub> and PM<sub>10</sub>) were the most widely investigated pollutants in the studies. Our analyses show that the educational interventions to improve air quality adopted in the selected studies resulted in the acquisition of knowledge about environmental effects and the importance of individual actions.
The changes in behavior included the adoption of more sustainable practices and an improvement in air quality in the environment, with a significant reduction in pollutant emissions. We conclude that interventions through environmental education demonstrate great potential to improve air quality. Based on the mapped evidence, governments and global policymakers can use this information to develop new strategies or improve existing ones to reduce air pollution in affected environments and regions.
Understanding the broad impact of science and science funding is critical to ensuring that science investments and policies align with societal needs. Existing research links science funding to the output of scientific publications but largely leaves out the downstream uses of science and the myriad ways in which investing in science may impact human society. As funders seek to allocate scarce funding resources across a complex research landscape, there is an urgent need for informative and transparent tools that allow for comprehensive assessments and visualization of the impact of funding. Here we present Funding the Frontier (FtF), a visual analysis system for researchers, funders, policymakers, university leaders, and the broad public to analyze multidimensional impacts of funding and make informed decisions regarding research investments and opportunities. The system is built on a massive data collection that connects 7M research grants to 140M scientific publications, 160M patents, 10.9M policy documents, 800K clinical trials, and 5.8M newsfeeds, with 1.8B citation linkages among these entities, systematically linking science funding to its downstream impacts. As such, Funding the Frontier is distinguished by its multifaceted impact analysis framework. The system incorporates diverse impact metrics and predictive models that forecast future investment opportunities into an array of coordinated views, allowing for easy exploration of funding and its outcomes. We evaluate the effectiveness and usability of the system using case studies and expert interviews. Feedback suggests that our system not only fulfills the primary analysis needs of its target users, but the rich datasets of the complex science ecosystem and the proposed analysis framework also open new avenues for both visualization and the science of science research.
CI/CD pipelines are widely used in software development, yet their environmental impact, particularly their carbon and water footprints (CWF), remains largely unknown to developers, as CI service providers typically do not disclose such information. With the growing environmental impact of cloud computing, understanding the CWF of CI/CD services has become increasingly important. This work investigates the CWF of using GitHub Actions, focusing on open-source repositories, where usage is free and unlimited for standard runners. We build upon a methodology from the Cloud Carbon Footprint framework and use the largest dataset of workflow runs reported in the literature to date, comprising over 2.2 million workflow runs from more than 18,000 repositories. Our analysis reveals that the GitHub Actions ecosystem results in a substantial CWF. Our estimates for the carbon footprint in 2024 range from 150.5 MTCO2e in the most optimistic scenario to 994.9 MTCO2e in the most pessimistic scenario, while the water footprint ranges from 1,989.6 to 37,664.5 kiloliters. The most likely scenario estimates are 456.9 MTCO2e for the carbon footprint and 5,738.2 kiloliters for the water footprint. To provide perspective, the carbon footprint in the most likely scenario is equivalent to the carbon captured by 7,615 urban trees in a year, and the water footprint is comparable to the water consumed by an average American family over 5,053 years. We explore strategies to mitigate this impact, primarily by reducing wasted computational resources. Key recommendations include deploying runners in regions whose energy production has a low environmental impact, such as France and the United Kingdom; implementing stricter deactivation policies for scheduled runs and aligning their execution with periods when the regional energy mix is more environmentally favorable; and reducing the size of repositories.
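The general energy model behind approaches like the Cloud Carbon Footprint framework can be sketched in a few lines: energy is runtime times power draw times data-center overhead (PUE), and the carbon and water footprints are energy times the regional carbon and water intensities. All constants below are illustrative assumptions for a single standard runner, not the values used in this study.

```python
# Assumed, illustrative coefficients (not the paper's):
AVG_POWER_WATTS = 12.0   # average power draw of a standard runner
PUE = 1.2                # data-center power usage effectiveness
CARBON_INTENSITY = 0.4   # kgCO2e per kWh of the regional grid
WATER_INTENSITY = 1.8    # liters of cooling water per kWh

def footprint(runtime_seconds: float) -> tuple[float, float]:
    """Return (kgCO2e, liters of water) estimated for one workflow run."""
    energy_kwh = runtime_seconds / 3600 * AVG_POWER_WATTS / 1000 * PUE
    return energy_kwh * CARBON_INTENSITY, energy_kwh * WATER_INTENSITY

co2_kg, water_l = footprint(600)  # a hypothetical 10-minute workflow run
```

Scaling such a per-run estimate over millions of workflow runs, under optimistic, likely, and pessimistic coefficient choices, is what yields ecosystem-level ranges like those reported above.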
Friedrich Boeing, Thorsten Wagener, Andreas Marx
et al.
Central Europe, including Germany, has faced exceptional multi-year terrestrial water storage (TWS) deficits since 2018, negatively impacting various sectors such as forestry, energy production, and drinking water supply. Currently, the understanding of the recovery dynamics behind such extreme events is limited, which hampers accurate water management decisions. We used a simulation of the mesoscale hydrological model (mHM) over the last 257 years (1766–2022) to provide the first long-term perspective on the dynamics of the TWS deficit recovery in Germany. The results show that severe TWS deficits surpassing a peak deficit of −42 mm (−15 km³) exhibit large variability in recovery times (3–31 months). The 2018–2021 TWS deficit period was unprecedented in terms of recovery time (31 months), mean intensity, and the associated negative 30-year TWS trend. In recent decades, we identified increased evapotranspiration (E) fluxes that have impacted TWS dynamics in Germany. Increased E flux anomalies contributed to prolonged TWS recovery, given that the TWS deficit did not quickly recover through above-average precipitation (P). An extreme TWS deficit similar to that in 2018 was recovered by above-average P within three months in the winter of 1947–1948. Our research contributes to an improved understanding of the dynamics and drivers of TWS deficit recovery.
During its annual conference in 2024, the French Society of Astronomy & Astrophysics (SF2A) hosted a special session dedicated to discussing the environmental transition within the scope of our occupation. Since 2021, thinking on this subject has progressed significantly, both quantitatively and qualitatively. This year was an opportunity to take stock of the main areas of reflection that we need to keep in mind in order to implement a fair, collective and effective environmental transition. This proceeding summarizes the key points from the plenary session related to the environmental transition special session. The purpose of the messages disseminated here is to suggest ideas for reflection and inspiration, so as to initiate, stimulate, and foster discussions within the A&A research community, towards the implementation of concrete measures to mitigate our environmental footprint.
Philippe Gris, Humna Awan, Matthew R. Becker
et al.
The Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) will image billions of astronomical objects in the wide-fast-deep primary survey and in a set of minisurveys including intensive observations of a group of deep drilling fields (DDFs). The DDFs are a critical piece of three key aspects of the LSST Dark Energy Science Collaboration (DESC) cosmological measurements: they provide a required calibration for photometric redshifts and weak gravitational lensing measurements, and they directly contribute to cosmological constraints from the most distant type Ia supernovae. We present a set of cohesive DDF strategies fulfilling science requirements relevant to DESC and following the guidelines of the Survey Cadence Optimization Committee. We propose a method to estimate the observing strategy parameters, and we perform simulations of the corresponding surveys. We define a set of metrics for each of the science cases to assess the performance of the proposed observing strategies. We show that the most promising results are achieved with deep rolling surveys characterized by two sets of fields: ultradeep fields (z<1.1), observed at a high cadence with a large number of visits over a limited number of seasons; and deep fields (z<0.7), observed with a cadence of ~3 nights for ten years. These encouraging results should be confirmed with realistic simulations using the LSST scheduler. A DDF budget of ~8.5% is required to design observing strategies satisfying all the cosmological requirements. A lower DDF budget leads to surveys that either do not fulfill photo-z/WL requirements or are not optimal for SNe Ia cosmology.
Nikki Choudhary, Akansha Rai, Jagdish Chandra Kuniyal
et al.
This study presents the source apportionment of coarse-mode particulate matter (PM<sub>10</sub>) using three receptor models (PCA/APCS, UNMIX, and PMF) at semi-urban sites of the Indian Himalayan region (IHR) during August 2018–December 2019. In this study, water-soluble inorganic ionic species (WSIIS), water-soluble organic carbon (WSOC), carbon fractions (organic carbon (OC) and elemental carbon (EC)), and trace elements of PM<sub>10</sub> were analyzed over the IHR. Nainital (62 ± 39 µg m<sup>−3</sup>) had the highest annual average mass concentration of PM<sub>10</sub> (average ± standard deviation at 1 σ), followed by Mohal Kullu (58 ± 32 µg m<sup>−3</sup>) and Darjeeling (54 ± 18 µg m<sup>−3</sup>). The annual total ∑WSIIS concentration order was as follows: Darjeeling (14.02 ± 10.01 µg m<sup>−3</sup>) > Mohal-Kullu (13.75 ± 10.21 µg m<sup>−3</sup>) > Nainital (10.20 ± 6.30 µg m<sup>−3</sup>), contributing to 15–30% of the PM<sub>10</sub> mass. The dominant secondary ions (NH<sub>4</sub><sup>+</sup>, SO<sub>4</sub><sup>2−</sup>, and NO<sub>3</sub><sup>−</sup>) suggest that the study sites were strongly influenced by anthropogenic sources from regional and long-range transport. Principal component analysis (PCA) with an absolute principal component score (APCS), UNMIX, and Positive Matrix Factorization (PMF) were used for source identification of PM<sub>10</sub> at the study sites of the IHR. All three models yielded broadly similar source profiles at all study sites, differing only in the number of sources and their percentage contributions. Overall, soil dust (SD), secondary aerosols (SAs), combustion (biomass burning (BB) + fossil fuel combustion (FFC): BB+FFC), and vehicular emissions (VEs) are the major sources of PM<sub>10</sub> identified by these models at all study sites.
Air mass backward trajectories illustrated that PM<sub>10</sub>, mainly attributed to dust-related aerosols, was transported from the Thar Desert, Indo-Gangetic Plain (IGP), and northwestern region of India (i.e., Punjab and Haryana) and Afghanistan to the IHR. Transported agricultural or residual burning plumes from the IGP and nearby areas significantly contribute to the concentration of carbonaceous aerosols (CAs) at study sites.
Shota Nishiyama, Takuma Saito, Ryo Nakamura
et al.
A large dataset of annotated traffic accidents is necessary to improve the accuracy of traffic accident recognition using deep learning models. Conventional traffic accident datasets provide annotations on traffic accidents and other teacher labels, improving traffic accident recognition performance. However, the labels in conventional datasets are not comprehensive enough to describe traffic accidents in detail. Therefore, we propose V-TIDB, a large-scale traffic accident recognition dataset annotated with various environmental information as multi-labels. Our proposed dataset aims to improve the performance of traffic accident recognition by annotating ten types of environmental information as teacher labels in addition to the presence or absence of traffic accidents. V-TIDB is constructed by collecting many videos from the Internet and annotating them with appropriate environmental information. In our first experiment, we compare the performance of traffic accident recognition when only labels related to the presence or absence of traffic accidents are trained with the performance when environmental information is added as a multi-label. In the second experiment, we make the same comparison for training on contact level, which represents the severity of the traffic accident. The results showed that 6 out of 10 environmental information labels improved the performance of recognizing the presence or absence of traffic accidents. In the experiment on contact level, recognition performance for both car wrecks and contacts improved for all environmental information labels. These experiments show that V-TIDB can be used to train traffic accident recognition models that take environmental information into account in detail, and that it can support appropriate traffic accident analysis.
Udayan Khurana, Kavitha Srinivas, Sainyam Galhotra
et al.
Recent efforts in the automation of machine learning and data science have achieved success in various tasks such as hyper-parameter optimization and model selection. However, in key areas such as utilizing domain knowledge and data semantics, we have seen little automation. Data scientists have long leveraged common sense reasoning and domain knowledge to understand and enrich data for building predictive models. In this paper, we discuss important shortcomings of current data science and machine learning solutions. We then envision how leveraging "semantic" understanding and reasoning on data, in combination with novel tools for data science automation, can help with consistent and explainable data augmentation and transformation. Additionally, we discuss how semantics can assist data scientists in a new manner by helping with challenges related to trust, bias, and explainability in machine learning. Semantic annotation can also help better explore and organize large data sources.
This study evaluated the effects of pretreatments (blanching (60 and 95 °C) and boiling) and drying methods (freeze-drying and oven drying) on the quality characteristics of potato flour derived from three potato varieties, namely, Shangi, Unica, and Dutch Robjin. The percentage flour yield, color, particle size distribution, flow characteristics, and microstructural and functional properties of the potato flour were determined. Unica recorded the lowest peeling loss, while the Dutch Robjin variety had the highest. Color parameters were significantly affected (<i>p</i> < 0.05) by the pretreatments and drying methods. Freeze-drying produced lighter potato flour (L* = 92.86) than the other methods. Boiling and blanching at 95 °C followed by oven drying recorded a low angle of repose and compressibility index, indicating better flow characteristics. The smallest particle size (56.5 µm) was recorded for the freeze-drying treatment, while boiling followed by oven drying had the largest particle size (307.5 µm). Microstructural results indicate that boiling and blanching at 95 °C, followed by oven drying, resulted in damaged starch granules, while freeze-drying and low-temperature blanching (60 °C) maintained the native starch granules. Particle size and the solubility index of potato flour showed a strong positive correlation. This study revealed that the pretreatments and drying methods affected potato flour's physical and microstructural parameters differently, resulting in changes in their functionality.
In this study, an effective approach was developed to solidify phosphorus, fluorine, and heavy metals in phosphogypsum using carbide slag (CS) and a polymer, and the effects of carbide slag and polymer addition on the pH and the speciation of phosphorus and fluorine in phosphogypsum are discussed in detail. The experimental results showed that in the carbide-slag-modified phosphogypsum system, soluble phosphorus and fluoride were significantly fixed and converted into Ca3(PO4)2, CaF2, and other stable precipitates. The pH-related leaching test results indicated that there was little relationship between the total phosphorus leaching concentration and pH under neutral and alkaline conditions. Fluorapatite is the phase controlling phosphorus in the phosphogypsum system, while fluorite and fluorapatite are the main phases controlling fluorine leaching. After long-term leaching, the proportion of the fluorite control phase was reduced. In the polymer-modified phosphogypsum system, a large amount of soluble phosphorus and fluorine was solidified, achieving highly effective solidification of phosphorus and fluorine. The pH-related leaching tests showed that the polymer has a significant effect on phosphorus and fluorine under different pH conditions; moreover, the curing effect is significantly related to pH. At the same time, carbide slag and the polymer have a certain effect on the conversion between the speciations of heavy metals in the system. The experimental results showed that the pH, phosphorus, fluorine, and heavy metals in the carbide slag-phosphogypsum (CS-PG) system meet the requirements of China's "Integrated Wastewater Discharge Standard (GB 8978–1996)".
Objectives: Human leucocyte antigen B27 (HLA-B27) is an important biomarker for ankylosing spondylitis (AS). However, delay in the diagnosis of AS is still common in clinical practice. Several single nucleotide polymorphisms (SNPs) in the coding gene of tumor necrosis factor alpha (TNFα) have been reported to be AS susceptibility loci. Our aim was to explore whether SNPs in TNFα could be used to improve the performance of HLA-B27 for predicting AS.
Methods: Five SNPs (rs1799964, rs1800630, rs1799724, rs1800629, and rs361525) spanning TNFα were genotyped by qPCR-Invader assay in 93 AS patients and 107 healthy controls for association analysis and linkage disequilibrium (LD) analysis. A random forest algorithm was utilized to construct the predictive classifiers for AS. HLA-B was genotyped by PCR-sequence-based typing in a subset of the HLA-B27-positive subjects (38 AS patients and 5 healthy controls).
Results: The T allele of rs1799724 was verified to significantly increase the risk of AS (OR = 4.583, p < 0.0001), while the A allele of rs361525 showed an association with reduced AS risk (OR = 0.168, p = 0.009). In addition, the rs1799964T-rs1800630C-rs1799724T-rs1800629G-rs361525G haplotype was significantly associated with a higher risk of AS (p < 0.0001). The optimal set of variables for classifiers to predict AS consisted of HLA-B27 only. Strong associations with HLA-B27 status were found for both rs1799724 (p < 0.0001) and rs361525 (p = 0.001), and all the analyzed HLA-B27-positive subjects carried HLA-B*27:04 or HLA-B*27:05.
Conclusion: In the Chinese Han population, the minor allele T of rs1799724 could increase the risk of AS, while the minor allele A of rs361525 protects individuals from AS. However, the contributions of rs1799724 and rs361525 to AS risk were dependent on HLA-B27 status, suggesting the importance of taking independence and specificity into consideration in AS susceptibility loci studies.
This study examined the impact of chronic (30-day) exposure to polystyrene microplastics (PS-MPs) of different sizes (50 nm and 2 µm) and concentrations (0.5 μg/L and 100 mg/L) in the marine copepod Tigriopus japonicus. Polystyrene microplastics affected survival rates in size- and concentration-dependent manners. The LC50 values of the 50 nm and 2 µm PS-MPs were 0.10 mg/L and 3.92 mg/L, respectively. Developmental time was delayed by 50 nm PS-MPs, and Usp expression was downregulated. Reproduction was negatively affected by 2 µm PS-MPs even at environmentally relevant concentrations; however, the expression of Vtg was not altered. The production rates of reactive oxygen species and nitric oxide also increased after exposure to PS-MPs, but this effect was independent of particle size. The expression levels of Cat and Tnf, genes related to oxidative stress and inflammation, respectively, were upregulated by exposure to PS-MPs, independently of particle size. Meanwhile, the level of oxidative stress in T. japonicus was not significantly affected by PS-MPs at environmentally relevant concentrations. This study suggests that nano-sized PS-MPs are not always more toxic than micro-sized PS-MPs, and that oxidative stress is a key factor in determining the toxic effect on T. japonicus at high concentrations.
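For readers unfamiliar with how LC50 values like those above are derived, a common simple estimator interpolates between the two tested concentrations that bracket 50% mortality on a log-concentration scale (more rigorous analyses fit probit or logistic dose-response models). The dose-response data below are made up for illustration, not taken from this study.

```python
import math

def lc50(concs, mortality):
    """Interpolate the concentration giving 50% mortality on a log10 scale."""
    pairs = list(zip(concs, mortality))
    for (c1, m1), (c2, m2) in zip(pairs, pairs[1:]):
        if m1 <= 0.5 <= m2:  # found the bracketing interval
            frac = (0.5 - m1) / (m2 - m1)
            return 10 ** (math.log10(c1) + frac * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50% mortality not bracketed by the tested concentrations")

# hypothetical mortality fractions observed at four tested concentrations (mg/L)
value = lc50([0.01, 0.1, 1.0, 10.0], [0.1, 0.45, 0.7, 0.95])
```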
There is growing interest in understanding the energy and environmental footprint of digital currencies, specifically of cryptocurrencies such as Bitcoin and Ethereum. These cryptocurrencies are operated by a geographically distributed network of computing nodes, making it hard to accurately estimate their energy consumption. Existing studies, both in academia and industry, attempt to model the cryptocurrencies' energy consumption, often based on a number of assumptions, for instance about the hardware in use or the geographic distribution of the computing nodes. A number of these studies have already been widely criticized for their design choices and subsequent over- or under-estimation of the energy use. In this study, we evaluate the reliability of prior models and estimates by leveraging existing scientific literature from fields cognizant of blockchain, such as social energy sciences and information systems. We first design a quality assessment framework based on existing research; we then conduct a systematic literature review examining scientific and non-academic literature, demonstrating common issues and potential avenues of addressing them. Our goal with this article is to advance the field by promoting scientific rigor in studies focusing on blockchain's energy footprint. To that end, we provide a novel set of codes of conduct for the five most widely used research methodologies: quantitative energy modeling, literature reviews, data analysis & statistics, case studies, and experiments. We envision that these codes of conduct will assist in standardizing the design and assessment of studies focusing on blockchain-based systems' energy and environmental footprint.