Economics is the science of decision making. The rigorous and general approach that characterizes economics lends itself to a remarkably wide range of practical applications. Economists are noted for major contributions in a number of fields, including government policy, taxation, law, international trade and finance, international and U.S. development, marketing, environmental studies, medical policy, portfolio management, and banking. The broad and rigorous training of economics majors accounts for their strong demand in virtually every industry and government agency. Economics also provides excellent preparation for advanced graduate study: recent studies indicate that economics is a preferred degree for prestigious MBA programs and law schools.
Cooperative collective dynamics is a principal determinant of the ability of synthetic micromotors to perform specific functions. However, realizing controllable and predictable collective behavior in complex physiological environments remains a significant challenge. Here, we show that collections of enzyme-coated colloids can be designed as various chemical logic gates, which can subsequently be organized into functional logic circuits. These circuits take environmental information as input signals and process it to produce the output chemical species needed to achieve specific goals. The chemical computation performed by the circuit endows the active colloidal system with the ability to sense its surroundings and autonomously coordinate its collective motion. We present simulation results for several examples in which self-assembled colloidal circuits identify invasive threats by their signals and produce and deliver chemicals to the targets to suppress their activity. These results can aid the design of experimental chemical logic circuits, built through micromotor self-assembly, that autonomously respond to environmental cues to execute specific tasks.
Youssef Sabiri, Walid Houmaidi, Ouail El Maadi
et al.
Smart aquaculture systems depend on rich environmental data streams to protect fish welfare, optimize feeding, and reduce energy use. Yet public datasets that describe the air surrounding indoor tanks remain scarce, limiting the development of forecasting and anomaly-detection tools that couple head-space conditions with water-quality dynamics. We therefore introduce AQUAIR, an open-access dataset that logs six Indoor Environmental Quality (IEQ) variables--air temperature, relative humidity, carbon dioxide, total volatile organic compounds, PM2.5, and PM10--inside a fish aquaculture facility in Amghass, Azrou, Morocco. A single Awair HOME monitor sampled every five minutes from 14 October 2024 to 9 January 2025, producing more than 23,000 time-stamped observations that are fully quality-controlled and publicly archived on Figshare. We describe the sensor placement, ISO-compliant mounting height, calibration checks against reference instruments, and an open-source processing pipeline that normalizes timestamps, interpolates short gaps, and exports analysis-ready tables. Exploratory statistics show stable conditions (median CO2 = 758 ppm; PM2.5 = 12 µg/m³) with pronounced feeding-time peaks, offering rich structure for short-horizon forecasting, event detection, and sensor-drift studies. AQUAIR thus fills a critical gap in smart aquaculture informatics and provides a reproducible benchmark for data-centric machine learning curricula and environmental sensing research focused on head-space dynamics in recirculating aquaculture systems.
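The processing pipeline described above (timestamp normalization, short-gap interpolation, analysis-ready export) can be sketched as follows. This is a minimal illustration, not the project's actual code: the column name, grid frequency, and the three-sample gap threshold are assumptions.

```python
import pandas as pd

def clean_ieq_log(df, freq="5min", max_gap=3):
    """Normalize timestamps to a regular grid and fill short gaps.

    Sketch of the kind of pipeline described for AQUAIR; column names
    and the gap threshold are illustrative assumptions.
    """
    df = df.copy()
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    df = df.set_index("timestamp").sort_index()
    # Snap readings onto a regular 5-minute grid (last reading per slot).
    df = df.resample(freq).last()
    # Interpolate only short gaps (<= max_gap consecutive missing samples);
    # longer outages are left as NaN for downstream handling.
    return df.interpolate(method="time", limit=max_gap)
```

Longer outages remain as NaN so that forecasting and drift studies can treat them explicitly rather than train on synthetic values.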
Ensemble techniques in recommender systems have demonstrated accuracy improvements of 10-30%, yet their environmental impact remains unmeasured. While deep learning recommendation algorithms can generate up to 3,297 kg CO2 per paper, ensemble methods have not been sufficiently evaluated for energy consumption. This thesis investigates how ensemble techniques influence environmental impact compared to single optimized models. We conducted 93 experiments across two frameworks (Surprise for rating prediction, LensKit for ranking) on four datasets spanning 100,000 to 7.8 million interactions. We evaluated four ensemble strategies (Average, Weighted, Stacking/Rank Fusion, Top Performers) against simple baselines and optimized single models, measuring energy consumption with a smart plug. Results revealed a non-linear accuracy-energy relationship. Ensemble methods achieved 0.3-5.7% accuracy improvements while consuming 19-2,549% more energy depending on dataset size and strategy. The Top Performers ensemble showed best efficiency: 0.96% RMSE improvement with 18.8% energy overhead on MovieLens-1M, and 5.7% NDCG improvement with 103% overhead on MovieLens-100K. Exhaustive averaging strategies consumed 88-270% more energy for comparable gains. On the largest dataset (Anime, 7.8M interactions), the Surprise ensemble consumed 2,005% more energy (0.21 Wh vs. 0.01 Wh) for 1.2% accuracy improvement, producing 53.8 mg CO2 versus 2.6 mg CO2 for the single model. This research provides one of the first systematic measurements of energy and carbon footprint for ensemble recommender systems, demonstrates that selective strategies offer superior efficiency over exhaustive averaging, and identifies scalability limitations at industrial scale. These findings enable informed decisions about sustainable algorithm selection in recommender systems.
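The "Top Performers" strategy found most efficient above can be sketched in a few lines: instead of averaging every trained model (the energy-hungry exhaustive approach), keep only the k models with the lowest validation error. This is an illustrative sketch under assumed array shapes, not the thesis's implementation.

```python
import numpy as np

def top_performers_ensemble(val_errors, test_preds, k=2):
    """Average the predictions of only the k best validation models.

    val_errors: per-model validation RMSE (lower is better).
    test_preds: list of per-model prediction arrays, all the same shape.
    """
    # Rank models by validation error and keep the k best.
    best = np.argsort(val_errors)[:k]
    # Ensemble prediction = mean over the selected models only.
    return np.mean([test_preds[i] for i in best], axis=0)
```

Because only k models are trained to convergence and queried at inference, the energy overhead scales with k rather than with the full model pool.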
Artificial intelligence (AI) is reshaping scientific discovery, evolving from specialized computational tools into autonomous research partners. We position Agentic Science as a pivotal stage within the broader AI for Science paradigm, where AI systems progress from partial assistance to full scientific agency. Enabled by large language models (LLMs), multimodal systems, and integrated research platforms, agentic AI shows capabilities in hypothesis generation, experimental design, execution, analysis, and iterative refinement -- behaviors once regarded as uniquely human. This survey provides a domain-oriented review of autonomous scientific discovery across life sciences, chemistry, materials science, and physics. We unify three previously fragmented perspectives -- process-oriented, autonomy-oriented, and mechanism-oriented -- through a comprehensive framework that connects foundational capabilities, core processes, and domain-specific realizations. Building on this framework, we (i) trace the evolution of AI for Science, (ii) identify five core capabilities underpinning scientific agency, (iii) model discovery as a dynamic four-stage workflow, (iv) review applications across the above domains, and (v) synthesize key challenges and future opportunities. This work establishes a domain-oriented synthesis of autonomous scientific discovery and positions Agentic Science as a structured paradigm for advancing AI-driven research.
Giacomo Rosin, Muhammad Rameez Ur Rahman, Sebastiano Vascon
Human trajectory forecasting is crucial in applications such as autonomous driving, robotics, and surveillance. Accurate forecasting requires models to consider various factors, including social interactions, multi-modal predictions, pedestrian intention, and environmental context. While existing methods account for these factors, they often overlook the impact of the environment, which leads to collisions with obstacles. This paper introduces ECAM (Environmental Collision Avoidance Module), a contrastive learning-based module that enhances collision avoidance with respect to the environment. The proposed module can be integrated into existing trajectory forecasting models, improving their ability to generate collision-free predictions. We evaluate our method on the ETH/UCY dataset and quantitatively and qualitatively demonstrate its collision avoidance capabilities. Our experiments show that state-of-the-art methods reduce the collision rate significantly (by 40-50%) when integrated with the proposed module. The code is available at https://github.com/CVML-CFU/ECAM.
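An environment collision rate of the kind reported above can be computed by rasterizing the scene into an occupancy grid and checking whether each predicted trajectory ever enters an occupied cell. The grid layout and resolution below are illustrative assumptions, not the paper's exact evaluation protocol.

```python
import numpy as np

def env_collision_rate(trajectories, occupancy, resolution=1.0):
    """Fraction of predicted trajectories that hit an environment obstacle.

    trajectories: (N, T, 2) array of xy positions in metres.
    occupancy: 2D bool array, True where a cell contains an obstacle.
    """
    traj = np.asarray(trajectories)
    # Map continuous positions to grid indices.
    idx = np.floor(traj / resolution).astype(int)
    h, w = occupancy.shape
    cols = idx[..., 0].clip(0, w - 1)
    rows = idx[..., 1].clip(0, h - 1)
    # A trajectory collides if any timestep lands in an occupied cell.
    hits = occupancy[rows, cols].any(axis=1)
    return hits.mean()
```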
Mohammad Alamgir Hossain, Md Moklesur Rahman, Shaikh Shamim Hasan
et al.
Southwestern Bangladesh is a predominantly agrarian region, and recurrent droughts directly impact crops, reducing farmers' incomes and threatening national food security. This study analyzes and forecasts meteorological droughts for the southwestern region using Probabilistic Forecasting with Structural Time Series (PROPHET) and Seasonal Autoregressive Integrated Moving Average (SARIMA) models. PROPHET excels at capturing long-term, non-linear trends, while SARIMA efficiently models seasonal variations. Essential climatic variables (e.g., temperature, rainfall, soil moisture) from Jashore, Jhenaidah, Kushtia, and Chuadanga were collected for 1994 to 2018 to support drought prediction up to 2050. In both cases, the computations were performed using machine-learning techniques. Both models showed that drought intensity varies spatiotemporally and that frequent drought occurrences are common in all districts. Almost 50% of the projected years (2019–2050) qualify as drought years (≥6 dry months in a year), and 2049 and 2050 are nearly fully dry years in all districts. In total drought months and years, Chuadanga ranks highest, followed by the Jashore, Jhenaidah, and Kushtia districts. Moreover, a strong correlation was found between predicted and observed drought, with R² values of 0.83, 0.75, 0.88, and 0.76 for Jashore, Jhenaidah, Kushtia, and Chuadanga, respectively. Hence, these models provide robust forecasts, helping to identify increasing drought severity in the region. Model validation using key performance metrics demonstrates their reliability in supporting water resource management. The findings underscore the importance of proactive drought mitigation strategies and suggest that future research should incorporate additional climate variables to improve prediction accuracy.
José T. Moreira-Filho, Dhruv Ranganath, Mike Conway
et al.
Abstract With the increased availability of chemical data in public databases, innovative techniques and algorithms have emerged for the analysis, exploration, visualization, and extraction of information from these data. One such technique is chemical grouping, where chemicals with common characteristics are categorized into distinct groups based on physicochemical properties, use, biological activity, or a combination of these. However, existing tools for chemical grouping often require specialized programming skills or commercial software packages. To address these challenges, we developed a user-friendly chemical grouping workflow implemented in KNIME, a free, open-source, low/no-code data analytics platform. The workflow serves as an all-encompassing tool, incorporating a range of processes such as molecular descriptor calculation, feature selection, dimensionality reduction, hyperparameter search, and supervised and unsupervised machine learning methods, enabling effective chemical grouping and visualization of results. Furthermore, we implemented tools for interpretation, identifying key molecular descriptors for the chemical groups, and using natural language summaries to clarify the rationale behind these groupings. The workflow was designed to run seamlessly in both the KNIME local desktop version and the KNIME Server WebPortal as a web application. It incorporates interactive interfaces and guides to assist users in a step-by-step manner. We demonstrate the utility of this workflow through a case study using an eye irritation and corrosion dataset. Scientific contributions This work presents a novel, comprehensive chemical grouping workflow in KNIME, enhancing accessibility by integrating a user-friendly graphical interface that eliminates the need for extensive programming skills.
This workflow uniquely combines several features such as automated molecular descriptor calculation, feature selection, dimensionality reduction, and machine learning algorithms (both supervised and unsupervised), with hyperparameter optimization to refine chemical grouping accuracy. Moreover, we have introduced an innovative interpretative step and natural language summaries to elucidate the underlying reasons for chemical groupings, significantly advancing the usability of the tool and interpretability of the results.
Environmental Sound Classification is an important sound recognition problem and is more complicated than speech recognition because environmental sounds are not well structured with respect to time and frequency. Over the past years, researchers have used various CNN models to learn audio representations from different input features, such as log-mel spectrograms, gammatone spectral coefficients, and mel-frequency cepstral coefficients, generated from the audio files. In this paper, we propose a new methodology, Two-Level Classification: the Level 1 Classifier is responsible for classifying the audio signal into a broader class, and the Level 2 Classifiers find the actual class to which the audio belongs, based on the output of the Level 1 Classifier. We also show the effects of different audio filters, among which a new method, Audio Crop, is introduced in this paper; it gave the highest accuracies in most cases. We used the ESC-50 dataset for our experiments and obtained a maximum accuracy of 78.75% for Level 1 Classification and 98.04% for Level 2 Classifications.
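The two-level scheme described above can be sketched generically: a level-1 model predicts a broad class, then a per-broad-class level-2 model predicts the fine label. Logistic regression stands in for the paper's CNNs, and the feature layout is an assumption; this is a structural sketch, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class TwoLevelClassifier:
    """Level 1 picks a broad class; a dedicated Level 2 model per broad
    class then picks the fine label (sketch with linear models)."""

    def __init__(self):
        self.level1 = LogisticRegression(max_iter=1000)
        self.level2 = {}

    def fit(self, X, broad, fine):
        self.level1.fit(X, broad)
        # Train one fine-grained classifier per broad class, on the
        # subset of samples belonging to that broad class.
        for b in np.unique(broad):
            mask = broad == b
            clf = LogisticRegression(max_iter=1000)
            clf.fit(X[mask], fine[mask])
            self.level2[b] = clf

    def predict(self, X):
        b_pred = self.level1.predict(X)
        # Route each sample to the Level 2 model of its predicted broad class.
        return np.array([self.level2[b].predict(x.reshape(1, -1))[0]
                         for b, x in zip(b_pred, X)])
```

A design note: routing errors at level 1 propagate to level 2, which is why the level-1 accuracy (78.75% above) bounds the end-to-end accuracy even when level-2 accuracy is high.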
Theophilo Benedicto Ottoni Filho, Anderson Rodrigues Caetano, Marta Vasconcelos Ottoni
In soil hydraulics, it is crucial to establish an accurate representation of the relative hydraulic conductivity curve (rHCC), K_r(h). This paper proposes a simple way to determine K_r(h), called the Modified Gardner Dual model (MGD), using a logarithmic extension of the classical Gardner exponential representation and including macropore flow effects. MGD has five parameters, which are hydraulic constants clearly identified in the bilogarithmic representation of K_r(h). Two of them are related to the coordinates of the main inflection point of the rHCC; from them, it is possible to determine the macroscopic capillary length of infiltration theory. The model was tested in the suction interval 0 < h < 15,000 cm with a total of 249 soil samples from two databases, employing a flexible representation of the Mualem-van Genuchten (MVG) equation as a reference. Using RMSE statistics (on log-transformed values) to measure the fitting errors, we obtained a 31% reduction in errors (RMSE_MGD = 0.27, RMSE_MVG = 0.39). In 74% of the soils, including samples from the two databases, the reduction was 53% (RMSE_MGD = 0.19, RMSE_MVG = 0.40); the rHCC data fitting of this group was accurate over all the suction h intervals, with RMSE_MGD < 0.32 in each soil sample. In the remaining 26% of the samples, the quality of the MGD fitting degraded, due mainly to the presence of multiple rHCC data inflection points. Therefore, in soils without this structural peculiarity, the proposed model proved to be quite accurate in addition to being analytically simple. Another advantage of MGD is that its parameters depend mainly on the data with h around and below the main inflection suction value, which, in turn, never exceeded the 300-cm limit in this study. Hence, in soils that do not have multiple inflections, the extrapolations of the model into drier intervals (1000 cm < h < 15,000 cm) are reliable. The MGD parameter optimization software has been called KUNSAT.
It is available in the Supplementary Material or from the corresponding author on request.
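For orientation, the classical Gardner exponential representation that MGD extends can be written as follows (a standard form from the literature; MGD's logarithmic extension and macropore term are specified in the paper itself and are not reproduced here):

```latex
K_r(h) = \frac{K(h)}{K_s} = e^{-\alpha_G h}, \qquad h \ge 0,
```

where $h$ is the suction head, $K_s$ the saturated conductivity, and $1/\alpha_G$ equals the macroscopic capillary length for this exponential form, which is how the inflection-point parameters mentioned above connect to infiltration theory.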
Jinpeng Zhang, Michal Tomczak, Andrzej Witkowski
et al.
Marine transgressions and regressions have profoundly shaped marginal seas following global sea-level fluctuations driven by climate change. This study of sedimentary core SO219/31-4 from the Beibu Gulf, northwestern South China Sea (SCS), reveals paleoenvironmental, paleoceanographic, and paleoclimatic changes through fossil diatom assemblages and grain-size distributions over the last ca. 12,900 cal. yr BP. Eight local diatom assemblage zones were distinguished and assigned to paleoenvironmental fluctuations recording sea-level and depositional environment changes in eight stages: ca. 12900–11700 (stage 1), ca. 11700–9500 (stage 2), ca. 9500–7200 (stage 3), ca. 7200–5800 (stage 4), ca. 5800–3800 (stage 5), ca. 3800–2400 (stage 6), ca. 2400–800 (stage 7), and ca. 800–0 (stage 8) cal. yr BP. After the low sea level of stage 1 within the last deglaciation, rapid sea-level rises in stages 2 and 3 were recorded as meltwater pulse events 1B and 1C, resulting in marine transgression rates of ca. 16 m/kyr and 8 m/kyr, respectively. The high sea level, above the present level, in stages 4 and 5, during the Middle Holocene Climatic Optimum, was clearly documented by higher percentages of open-sea/tropical diatom species and coastal planktonic species, respectively. The late Holocene sea-level regression was marked by a pronounced reversal of the diatom taphocoenosis, responding to neoglacial climate. The fossil diatom assemblages outlined here responded to global climate warming during the deglacial and early Holocene. This study provides additional insights into the late Pleistocene and post-glacial history of a tropical-subtropical shallow-water gulf in the NW SCS.
Marin Kneib, Catriona L. Fyffe, Evan S. Miles
et al.
Abstract Ice cliff distribution plays a major role in determining the melt of debris‐covered glaciers, but its controls are largely unknown. We assembled a data set of 37,537 ice cliffs and determined their characteristics across 86 debris‐covered glaciers within High Mountain Asia (HMA). We find that 38.9% of the cliffs are stream‐influenced, 19.5% are pond‐influenced, and 19.7% are crevasse‐originated. Surface velocity is the main predictor of cliff distribution at both the local and glacier scale, indicating its dependence on the dynamic state and hence evolution stage of debris‐covered glacier tongues. Supraglacial ponds contribute to maintaining cliffs in areas of thicker debris, but this is only possible if water accumulates at the surface. Overall, total cliff density decreases exponentially with debris thickness once the debris layer exceeds 10 cm.
Abstract Epidural anesthesia is an effective pain relief modality, widely used for labor analgesia. Childhood asthma is one of the most common chronic medical illnesses in the USA and places a significant burden on the health-care system. We recently demonstrated a negative association between the duration of epidural anesthesia and the development of childhood asthma; however, the underlying molecular mechanisms remain unclear. In this study of 127 mother–child pairs comprised of 75 Non-Hispanic Black (NHB) and 52 Non-Hispanic White (NHW) from the Newborn Epigenetic Study, we tested the hypothesis that umbilical cord blood DNA methylation mediates the association between the duration of exposure to epidural anesthesia at delivery and the development of childhood asthma, and whether this differed by race/ethnicity. In the mother–child pairs of NHB ancestry, the duration of exposure to epidural anesthesia was associated with a marginally lower risk of asthma (odds ratio = 0.88, 95% confidence interval = 0.76–1.01) for each 1-h increase in exposure to epidural anesthesia. Of the 20 CpGs in the NHB population showing the strongest mediation effect, 50% demonstrated an average mediation proportion of 52%, with directional consistency of direct and indirect effects. These top 20 CpGs mapped to 21 genes enriched for pathways engaged in antigen processing, antigen presentation, protein ubiquitination, and regulatory networks related to the Major Histocompatibility Complex (MHC) class I complex and Nuclear Factor Kappa-B (NFkB) complex. Our findings suggest that DNA methylation in immune-related pathways contributes to the effects of the duration of exposure to epidural anesthesia on childhood asthma risk in NHB offspring.
Environmental problems are receiving increasing attention in socio-economic and health studies. This in turn fosters advances in recording and data collection of many related real-life processes. Available tools for data processing are often found too restrictive as they do not account for the rich nature of such data sets. In this paper, we propose a new statistical perspective on forecasting spatial environmental data collected sequentially over time. We treat this data set as a surface (functional) time series with a possibly complicated geographical domain. By employing novel techniques from functional data analysis we develop a new forecasting methodology. Our approach consists of two steps. In the first step, time series of surfaces are reconstructed from measurements sampled over some spatial domain using a finite element spline smoother. In the second step, we adapt the dynamic functional factor model to forecast a surface time series. The advantage of this approach is that we can account for and explore simultaneously spatial as well as temporal dependencies in the data. A forecasting study of ground-level ozone concentration over the geographical domain of Germany demonstrates the practical value of this new perspective, where we compare our approach with standard functional benchmark models.
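The two-step approach described above can be sketched in simplified form. Step one (finite element spline reconstruction over a complicated domain) is assumed already done, so surfaces arrive on a common grid; step two is approximated here by extracting dominant spatial factors with an SVD and forecasting their scores with a simple AR(1) fit. This is an illustrative stand-in for the dynamic functional factor model, not the paper's methodology.

```python
import numpy as np

def forecast_surface(series, n_factors=3):
    """One-step-ahead forecast of a surface time series via a factor model.

    series: (T, P) array, each row a surface flattened over P grid points.
    """
    mean = series.mean(axis=0)
    centered = series - mean
    # Spatial factors = leading right singular vectors (principal surfaces).
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    loadings = Vt[:n_factors]               # (n_factors, P)
    scores = centered @ loadings.T          # (T, n_factors)
    # Forecast each factor score with an AR(1) coefficient fit by
    # least squares on successive score pairs.
    next_scores = np.empty(n_factors)
    for k in range(n_factors):
        x, y = scores[:-1, k], scores[1:, k]
        phi = (x @ y) / (x @ x)
        next_scores[k] = phi * scores[-1, k]
    # Reassemble the forecast surface from the forecast factor scores.
    return mean + next_scores @ loadings
```

The appeal of the factorized form is exactly what the abstract notes: the loadings capture spatial dependence once, while the low-dimensional score series carries all the temporal dynamics.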
Jennifer J. Sun, Megan Tjandrasuwita, Atharva Sehgal
et al.
Neurosymbolic Programming (NP) techniques have the potential to accelerate scientific discovery. These models combine neural and symbolic components to learn complex patterns and representations from data, using high-level concepts or known constraints. NP techniques can interface with symbolic domain knowledge from scientists, such as prior knowledge and experimental context, to produce interpretable outputs. We identify opportunities and challenges in connecting current NP models with scientific workflows, with real-world examples from behavior analysis, aiming to enable the broad use of NP for workflows across the natural and social sciences.
Stephanie O. Sangalang, Allen Lemuel G. Lemence, Zheina J. Ottong
et al.
Abstract Background The impacts of multicomponent school water, sanitation, and hygiene (WaSH) interventions on children’s health are unclear. We conducted a cluster-randomized controlled trial to test the effects of a school WaSH intervention on children’s malnutrition, dehydration, health literacy (HL), and handwashing (HW) in Metro Manila, Philippines. Methods The trial lasted from June 2017 to March 2018 and included children in grades 5, 6, 7, and 10 from 15 schools. At baseline, 756 children were enrolled. Seventy-eight children in two clusters were purposively assigned to the control group (CG); 13 clusters were randomly assigned to one of three intervention groups: low-intensity health education (LIHE; two schools, n = 116 children), medium-intensity health education (MIHE; seven schools, n = 356 children), and high-intensity health education (HIHE; four schools, n = 206 children). The intervention consisted of health education (HE), WaSH policy workshops, provision of hygiene supplies, and WaSH facility repairs. Outcomes were: height-for-age and body mass index-for-age Z scores (HAZ, BAZ); stunting, undernutrition, overnutrition, and dehydration prevalence; and HL and HW scores. We used anthropometry to measure children’s physical growth, urine test strips to measure dehydration, questionnaires to measure HL, and observation to measure HW practice. The same measurements were used at baseline and endline. We used multilevel mixed-effects logistic and linear regression models to assess intervention effects. Results None of the interventions reduced undernutrition prevalence or improved HAZ, BAZ, or overall HL scores. Low-intensity HE reduced stunting (adjusted odds ratio [aOR] 0.95; 95% CI 0.93 to 0.96), while low- (aOR 0.57; 95% CI 0.34 to 0.96) and high-intensity HE (aOR 0.63; 95% CI 0.42 to 0.93) reduced overnutrition.
Medium- (adjusted incidence rate ratio [aIRR] 0.02; 95% CI 0.01 to 0.04) and high-intensity HE (aIRR 0.01; 95% CI 0.00 to 0.16) reduced severe dehydration. Medium- (aOR 3.18; 95% CI 1.34 to 7.55) and high-intensity HE (aOR 3.89; 95% CI 3.74 to 4.05) increased observed HW after using the toilet/urinal. Conclusion Increasing the intensity of HE reduced prevalence of stunting, overnutrition, and severe dehydration and increased prevalence of observed HW. Data may be relevant for school WaSH interventions in the Global South. Interventions may have been more effective if adherence was higher, exposure to interventions longer, parents/caregivers were more involved, or household WaSH was addressed. Trial registration number DRKS00021623.
Reinhard Saborowski, Špela Korez, Sarah Riesbeck
et al.
Many invertebrate species inhabit coastal areas where loads of plastic debris and microplastics are high. In this case study, we illustrate the principal processes taking place in the Atlantic ditch shrimp, Palaemon varians, upon ingestion of microplastics. In the laboratory, shrimp readily ingested fluorescent polystyrene microbeads of 0.1–9.9 µm, which could be tracked within the largely translucent body. Ingested food items as well as micro-particles accumulate in the stomach, where they are macerated and mixed with digestive enzymes. Inside the stomach, ingested particles are segregated by size by a complex fine-meshed filter system. Liquids and some of the smallest particles (0.1 µm) pass the filter and enter the midgut gland, where resorption of nutrients as well as synthesis and release of digestive enzymes take place. Large particles and most of the small particles are egested with the feces through the hindgut. Small particles that enter the midgut gland may interact with the epithelial cells and induce oxidative stress, as indicated by elevated activities of superoxide dismutase and cellular markers of reactive oxygen species. The shrimp indiscriminately ingest microparticles but possess efficient mechanisms to protect their organs from overloading with microplastics and other indigestible particles. These include an efficient sorting mechanism within the stomach and the protection of the midgut gland by the pyloric filter. Formation of detrimental reactive oxygen species is counteracted by the induction of enzymatic antioxidants.