Prediction markets offer a natural testbed for trading agents: contracts have binary payoffs, prices can be interpreted as probabilities, and realized performance depends critically on market microstructure, fees, and settlement risk. We introduce PredictionMarketBench, a SWE-bench-style benchmark for evaluating algorithmic and LLM-based trading agents on prediction markets via deterministic, event-driven replay of historical limit-order-book and trade data. PredictionMarketBench standardizes (i) episode construction from raw exchange streams (orderbooks, trades, lifecycle, settlement), (ii) an execution-realistic simulator with maker/taker semantics and fee modeling, and (iii) a tool-based agent interface that supports both classical strategies and tool-calling LLM agents with reproducible trajectories. We release four Kalshi-based episodes spanning cryptocurrency, weather, and sports. Baseline results show that naive trading agents can underperform due to transaction costs and settlement losses, while fee-aware algorithmic strategies remain competitive in volatile episodes.
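As a rough illustration of the replay design this abstract describes, the sketch below shows a deterministic event loop feeding exchange events to an agent, applying a per-contract taker fee and binary settlement. All names (MarketEvent, Agent, replay) and the fill model are hypothetical simplifications for exposition, not the actual PredictionMarketBench API; maker semantics are omitted.

```python
import dataclasses
from typing import Iterable

@dataclasses.dataclass
class MarketEvent:
    ts: float          # event timestamp
    kind: str          # "book", "trade", "lifecycle", or "settlement"
    payload: dict      # raw exchange message

class Agent:
    def on_event(self, event: MarketEvent) -> list[dict]:
        """Return zero or more order intents, e.g. {"side": "buy", "px": 0.42, "qty": 10}."""
        return []

def replay(events: Iterable[MarketEvent], agent: Agent, taker_fee: float = 0.01) -> float:
    """Deterministically replay events against an agent; return final cash."""
    cash, position = 0.0, 0
    for ev in events:
        for order in agent.on_event(ev):
            # Simplified fill model: taker orders cross at the quoted price
            # and pay a per-contract fee (maker rebates/queues omitted).
            signed = order["qty"] if order["side"] == "buy" else -order["qty"]
            cash -= signed * order["px"] + abs(signed) * taker_fee
            position += signed
        if ev.kind == "settlement":
            cash += position * ev.payload["payout"]  # binary payoff: 0 or 1
            position = 0
    return cash
```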
Nathan De Carvalho, Youssef Ouazzani Chahdi, Grégoire Szymanski
We consider an optimal trading problem under a market impact model with endogenous market resistance generated by a sophisticated trader who (partially) detects metaorders and trades against them to exploit price overreactions induced by the order flow. The model features a concave transient impact driven by a power-law propagator with a resistance term responding to the trader's rate via a fixed-point equation involving a general resistance function. We derive a (non)linear stochastic Fredholm equation as the first-order optimality condition satisfied by optimal trading strategies. Existence and uniqueness of the optimal control are established when the resistance function is linear, and an existence result is obtained when it is strictly convex using coercivity and weak lower semicontinuity of the associated profit-and-loss functional. We also propose an iterative scheme to solve the nonlinear stochastic Fredholm equation and prove an exponential convergence rate. Numerical experiments confirm this behavior and illustrate optimal round-trip strategies under "buy" signals with various decay profiles and different market resistance specifications.
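To make the iterative scheme concrete, here is a minimal numerical sketch: a discretized Fredholm equation of the second kind, x(t) = b(t) + integral of K(t,s) f(x(s)) ds over [0,T], solved by Picard iteration with a regularized power-law kernel. The kernel, the "signal" term b, and the nonlinearity f below are illustrative placeholders, not the paper's exact model.

```python
import numpy as np

# Picard iteration for a discretized nonlinear Fredholm equation with a
# regularized power-law kernel. All ingredients are illustrative placeholders.
T, n = 1.0, 200
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]

gamma = 0.5                                                  # power-law exponent
K = 0.1 * (np.abs(t[:, None] - t[None, :]) + dt) ** (-gamma) * dt
b = np.sin(2 * np.pi * t)                                    # placeholder signal term
f = np.tanh                                                  # placeholder 1-Lipschitz nonlinearity

x = np.zeros(n)
for k in range(100):
    x_new = b + K @ f(x)
    err = np.max(np.abs(x_new - x))
    x = x_new
    if err < 1e-12:
        break
print(f"converged in {k + 1} iterations, residual {err:.2e}")
# When ||K|| * Lip(f) < 1 the iteration map is a contraction, so the error
# decays geometrically, i.e., at the exponential rate the abstract refers to.
```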
Janis Arents, Andrius Dzedzickis, Vytautas Bučinskas
Digital technologies are increasingly changing the way modern industries are structured, governed, and operated, influencing almost all phases of the industrial value chain, from early product conception and resource planning to manufacturing execution, logistics organization, final delivery, and long-term maintenance activities [...]
Constant power loads (CPLs) introduce negative impedance in direct current microgrids (DCMGs), a major challenge: this negative impedance can significantly reduce the overall damping of the system, making it less stable and harder to control. To address this issue, output virtual resistance (VR) shaping is commonly employed to enhance system damping and improve power sharing amongst distributed generators (DGs). The technique proposed in this work varies the DG virtual output resistance (RV) adaptively and linearly with the output current, which improves power sharing between sources. The work compares the small-signal stability criteria and the minor-loop-gain methods for linear, non-linear, and inverse droop controllers to determine the controller parameters under constant power loads. The control scheme is extensively tested through simulations for four different droop control schemes. The work also validates DCMG performance when the DERs operate with different (heterogeneous) droop controllers to assess constant power load penetration, performance in meshed configurations, and DG plug-and-play operations. Additionally, the improved power-sharing performance was validated through a controller hardware-in-the-loop (CHIL) implementation.
Energy industries. Energy policy. Fuel trade, Production of electric energy or power. Powerplants. Central stations
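To illustrate the adaptive virtual-resistance idea in the abstract above (RV varying linearly with output current), here is a small steady-state sketch: two DC sources with droop law v = V_NOM - (R0 + K*i)*i feed a bus supplying a constant power load, and the bus voltage is solved from KCL. The two-feeder topology and all parameter values are made up for illustration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

V_NOM, P_CPL = 48.0, 100.0        # nominal voltage (V), CPL power (W)
R_LINE = np.array([0.10, 0.30])   # unequal feeder resistances (ohm)
R0, K = 0.5, 0.02                 # droop resistance and its adaptive slope

def source_currents(v_bus):
    # From v_bus = V_NOM - (R0 + K*i)*i - r*i, take the positive root of
    # K*i^2 + (R0 + r)*i + (v_bus - V_NOM) = 0 for each source.
    return np.array([
        (-(R0 + r) + np.sqrt((R0 + r) ** 2 - 4 * K * (v_bus - V_NOM))) / (2 * K)
        for r in R_LINE
    ])

def kcl_mismatch(v_bus):
    return source_currents(v_bus).sum() - P_CPL / v_bus  # supply minus CPL draw

v_bus = brentq(kcl_mismatch, 30.0, V_NOM - 1e-6)
i = source_currents(v_bus)
print(f"bus voltage {v_bus:.2f} V, source currents {np.round(i, 2)} A")
# With K > 0, the heavier-loaded source sees a larger effective droop
# resistance, pulling the current shares closer together than fixed droop.
```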
Recent advancements in large language models (LLMs) and agentic systems have shown exceptional decision-making capabilities, revealing significant potential for autonomous finance. Current financial trading agents predominantly simulate anthropomorphic roles that inadvertently introduce emotional biases and rely on peripheral information, while being constrained by the necessity for continuous inference during deployment. In this paper, we pioneer the harmonization of strategic depth in agents with the mechanical rationality essential for quantitative trading. Consequently, we present TiMi (Trade in Minutes), a rationality-driven multi-agent system that architecturally decouples strategy development from minute-level deployment. TiMi leverages specialized LLM capabilities of semantic analysis, code programming, and mathematical reasoning within a comprehensive policy-optimization-deployment chain. Specifically, we propose a two-tier analytical paradigm from macro patterns to micro customization, layered programming design for trading bot implementation, and closed-loop optimization driven by mathematical reflection. Extensive evaluations across 200+ trading pairs in stock and cryptocurrency markets empirically validate the efficacy of TiMi in stable profitability, action efficiency, and risk control under volatile market dynamics.
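A minimal sketch of the decoupling this abstract describes, under assumed details: an offline optimization stage (where the LLM analysis would live) emits a fully parameterized, deterministic policy, and minute-level deployment then runs with no model inference on the hot path. The Policy fields and the moving-average rule are hypothetical stand-ins, not TiMi's actual strategy.

```python
from dataclasses import dataclass

# The offline stage (LLM-driven analysis, programming, and mathematical
# reflection) would emit a frozen Policy; deployment only evaluates it.
@dataclass(frozen=True)
class Policy:
    fast_ma: int = 5           # minutes in the fast moving average
    slow_ma: int = 30          # minutes in the slow moving average
    max_position: float = 1.0  # position cap enforced mechanically

def decide(policy: Policy, prices: list[float], position: float) -> float:
    """Deterministic minute-level rule: returns the target position."""
    if len(prices) < policy.slow_ma:
        return position                      # not enough history yet
    fast = sum(prices[-policy.fast_ma:]) / policy.fast_ma
    slow = sum(prices[-policy.slow_ma:]) / policy.slow_ma
    return policy.max_position if fast > slow else -policy.max_position

# Deployment loop sketch: no LLM calls per tick, so each decision costs
# microseconds rather than a model inference.
history, position = [], 0.0
for price in [100.0 + 0.1 * t for t in range(60)]:   # stand-in price feed
    history.append(price)
    position = decide(Policy(), history, position)
```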
Industrial carbon emissions are a major driver of climate change, yet modeling these emissions is challenging due to multicollinearity among factors and complex interdependencies across sectors and time. We propose DGL, a novel graph-based deep learning framework to analyze and forecast industrial CO_2 emissions, addressing high feature correlation and capturing industrial-temporal interdependencies. Unlike traditional regression or clustering methods, our approach leverages a Graph Neural Network (GNN) with attention mechanisms to model relationships between industries (or regions) and a temporal transformer to learn long-range patterns. We evaluate our framework on a public global industry emissions dataset derived from EDGAR v8.0, spanning multiple countries and sectors. The proposed model achieves superior predictive performance, reducing error by over 15% compared to baseline deep models, while maintaining interpretability via attention weights and causal analysis. To our knowledge, this is the first graph-temporal architecture that resolves multicollinearity by structurally encoding feature relationships and that integrates causal inference to identify true drivers of emissions, improving transparency and fairness. We also demonstrate policy relevance, showing how model insights can guide sector-specific decarbonization strategies aligned with sustainable development goals. Building on these results, we identify high-emission "hotspots" and suggest equitable intervention plans, illustrating the potential of state-of-the-art AI graph learning to advance climate action and offering a powerful tool for policymakers and industry stakeholders to achieve carbon reduction targets.
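As a structural illustration of the architecture sketched in the abstract (attention over an industry/region graph, followed by a temporal transformer), here is a minimal PyTorch version. Dimensions, the random adjacency, and layer sizes are placeholders rather than the paper's configuration, and the causal-inference component is omitted.

```python
import torch
import torch.nn as nn

class GraphAttention(nn.Module):
    """Single-head attention restricted to graph edges; weights are inspectable."""
    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))

    def forward(self, x, adj):               # x: (nodes, dim), adj: (nodes, nodes)
        scores = self.q(x) @ self.k(x).T / x.shape[-1] ** 0.5
        scores = scores.masked_fill(adj == 0, float("-inf"))
        attn = torch.softmax(scores, dim=-1)  # interpretable attention weights
        return attn @ self.v(x)

class EmissionForecaster(nn.Module):
    def __init__(self, dim=32, horizon=1):
        super().__init__()
        self.gnn = GraphAttention(dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, horizon)

    def forward(self, x, adj):               # x: (nodes, timesteps, dim)
        x = torch.stack([self.gnn(x[:, t], adj) for t in range(x.shape[1])], dim=1)
        return self.head(self.temporal(x)[:, -1])  # forecast from the last step

model = EmissionForecaster()
x = torch.randn(10, 24, 32)                  # 10 sectors, 24 monthly steps
adj = ((torch.rand(10, 10) > 0.5) | torch.eye(10, dtype=torch.bool)).float()
print(model(x, adj).shape)                   # torch.Size([10, 1])
```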
Gauthier Roussilhe, Thibault Pirson, David Bol, et al.
Growing attention is being given to the environmental impacts of the digital sector, exacerbated by the proliferation of digital products and services in our globalized societies. The materiality of the digital sector is often presented through the environmental impacts of mining activities, to point out that digitization does not mean dematerialization. Despite its importance, such a narrative is often restricted to a few minerals (e.g., cobalt, lithium) that have become the symbols of extractive industries. In this paper, we further explore the materiality of the digital sector with an approach based on the diversity of elements and their purity requirements in the semiconductor industry. The semiconductor industry manufactures the key building blocks of the digital sector, i.e., microchips. Given that the need for ultra-high-purity materials is very specific to the semiconductor industry, we study a few companies around the world, revealing new critical actors in complex supply chains. This highlights strong dependencies on other industrial sectors with mass production and the need for a deeper investigation of interactions with the chemical industry, complementary to the mining industry.
Accurate monitoring of agricultural droughts in data-scarce areas remains a challenge due to their intricate spatiotemporal patterns. Deep learning represents a promising approach for developing efficient drought monitoring models. In this study, a hybrid deep learning model, combining convolutional neural network and random forest (CNN-RF), is proposed to monitor agricultural droughts in a mountainous region located in Southwest China. The model integrates multisource data obtained from the Moderate Resolution Imaging Spectroradiometer (MODIS) satellite sensor, Global Land Data Assimilation System (GLDAS), Climate Hazards Group InfraRed Precipitation with Station Data (CHIRPS) and digital elevation model (DEM) to reproduce a station-based 3-month Standardized Precipitation Evapotranspiration Index (SPEI-3) during 2001–2020. Performance evaluation of the proposed model utilized an in situ soil moisture dataset and grain yields as benchmarks. The results demonstrated the superiority of the CNN-RF model over both the CNN and RF models in terms of estimating SPEI-3 and forecasting drought categories, as quantified by the lowest root mean square error (RMSE<0.4), the highest correlation coefficient (CC>0.9) and the highest multi-class receiver operating characteristic (ROC) area under the curve (AUC=0.86). The CNN-RF model successfully reproduced the spatial heterogeneity of the drought pattern while maintaining temporal and spatial consistency with actual drought conditions. Notably, strong consistency was observed between the simulated SPEI-3 and the 3-month Standardized Soil Moisture Index (SSMI-3) (CC=0.42, p < 0.01). Moreover, the model-estimated drought areas of cropland in the winter and early spring months exhibited a significant correlation with summer harvest grain yields (CC<−0.45, p < 0.05). Another advantage of the CNN-RF model is its ability to generalize well with limited training samples. This study introduces a scalable, simple, and efficient method for reliably monitoring agricultural droughts over large areas by leveraging freely available multisource data, which can also be easily adapted for monitoring agricultural droughts in other vegetated regions with limited ground observations.
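To make the hybrid design concrete, here is a minimal sketch under assumed details: a small CNN encodes multisource raster patches into feature vectors, and a random forest regresses SPEI-3 from them. Channel counts, patch size, and the untrained encoder are illustrative; the paper's actual pipeline and training procedure will differ.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestRegressor

class PatchEncoder(nn.Module):
    """Tiny CNN mapping a multi-band patch to a 32-dim feature vector."""
    def __init__(self, in_ch=6):              # e.g. MODIS/GLDAS/CHIRPS/DEM bands
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)

# Untrained encoder used purely for illustration; in practice the CNN
# would be trained (or pretrained) before feeding the random forest.
encoder = PatchEncoder().eval()
patches = torch.randn(500, 6, 16, 16)         # 500 stations, 6 bands, 16x16 px
spei3 = np.random.uniform(-2, 2, 500)         # placeholder SPEI-3 targets

with torch.no_grad():
    feats = encoder(patches).numpy()

rf = RandomForestRegressor(n_estimators=200).fit(feats, spei3)
print(rf.predict(feats[:3]).round(2))
```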
The Chinese economy is facing the impact of soaring energy prices, including the prices of coal, electricity and oil. Energy price fluctuations affect general prices with a significant delay. A novel price-temporal input-output (I–O) method is proposed to measure these delayed effects. A series of time-delay functions caused by a single price fluctuation and by continuous price fluctuations is obtained through polynomial fitting. Then, the impact of price regulation and price delay adjustment on the delayed effect is further examined. Finally, China's latest 2017 I–O table, 4186 listed companies, and actual oil price adjustment data for 2020 are used to conduct empirical research. The delayed effects of oil, coal, electricity and gas price fluctuations on general prices and on price indices, such as the consumer price index (CPI) and producer price index (PPI), are comprehensively investigated, and a corresponding time-delay ratio table for rapid querying is provided. The results indicate that the delayed impact of energy price fluctuations on the prices of various sectors lasts for half a year or even longer, and that these effects differ considerably across sectors. Logistics prices and the PPI are the most affected by oil price fluctuations, while trade prices and the CPI are the least affected. China's oil price adjustments in 2020 led to a decline in general prices, which rebounded at the end of the year. Price regulation, especially electricity price regulation, reduces the impact of energy price fluctuations on general prices, and price delay adjustments extend the length of the time delay. This study can help governments and enterprises better address the impact of energy price fluctuations.
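For readers unfamiliar with the I–O price machinery, the sketch below shows the static cost-push pass-through dp^T = dv^T (I - A)^{-1} of the standard Leontief price model on a made-up three-sector table; the paper's contribution is distributing such static pass-through over time via fitted time-delay functions.

```python
import numpy as np

# Leontief price model: p^T = v^T (I - A)^{-1}, so a cost shock dv in one
# sector propagates to all prices as dp^T = dv^T (I - A)^{-1}.
# The 3-sector coefficients below are made up for illustration.
sectors = ["energy", "logistics", "trade"]
A = np.array([            # A[i, j]: input from sector i per unit output of j
    [0.10, 0.20, 0.05],
    [0.05, 0.10, 0.10],
    [0.02, 0.05, 0.08],
])

dv = np.array([0.10, 0.0, 0.0])   # 10% cost-push shock in the energy sector
dp = dv @ np.linalg.inv(np.eye(3) - A)
for s, x in zip(sectors, dp):
    print(f"{s}: +{100 * x:.2f}% price level")
# Downstream sectors inherit part of the shock through their input structure;
# in the illustrative table, logistics absorbs more of it than trade.
```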
Aromatase is an enzyme that converts androgens (like testosterone) to estrogens (like 17β-estradiol). It is also a highly successful therapeutic target for endocrine-responsive breast cancer. Aromatase inhibitors, which suppress estrogen synthesis in postmenopausal women, have been useful in the treatment of individuals with estrogen receptor-positive breast cancer. Anastrozole is an aromatase inhibitor medication used in the management and treatment of breast cancer. Flavonoids inhibit cancer cell proliferation by causing apoptosis, encouraging autophagy, and changing the cell cycle. Although several dietary flavonoids (such as those in parsley, celery and broccoli) can inhibit aromatase, their tissue specificity and binding mechanism are uncertain. According to several studies, the flavonoids apigenin and luteolin dramatically suppress estrogen production. This study examines the binding of chain A of aromatase (PDB structure 3EQM) with the flavonoids apigenin and luteolin and with anastrozole, using an in silico approach. Docking was performed with the PyRx program at default settings. The results showed that the flavonoids bind chain A of aromatase more strongly than anastrozole: binding affinities were -8.2 kcal/mol for apigenin and -8.3 kcal/mol for luteolin, versus -7.6 kcal/mol for anastrozole. Thus, flavonoids can potentially be used as a natural medication to reduce breast cancer incidence. However, clinical trials are needed to investigate the role of apigenin and luteolin in the treatment of breast cancer.
Special industries and trades, Industrial engineering. Management engineering
Luca D'Amico-Wong, Yannai A. Gonczarowski, Gary Qiurui Ma, et al.
We model the role of an online platform disrupting a market with unit-demand buyers and unit-supply sellers. Each seller can transact with a subset of the buyers whom she already knows, as well as with any additional buyers to whom she is introduced by the platform. Given these constraints on trade, prices and transactions are induced by a competitive equilibrium. The platform's revenue is proportional to the total price of all trades between platform-introduced buyers and sellers. In general, we show that the platform's revenue-maximization problem is computationally intractable. We provide structural results for revenue-optimal matchings and isolate special cases in which the platform can efficiently compute them. Furthermore, in a market where the maximum increase in social welfare that the platform can create is $\Delta W$, we prove that the platform can attain revenue $\Omega(\Delta W/\log(\min\{n,m\}))$, where $n$ and $m$ are the numbers of buyers and sellers, respectively. When $\Delta W$ is large compared to welfare without the platform, this gives a polynomial-time algorithm that guarantees a logarithmic approximation of the optimal welfare as revenue. We also show that even when the platform optimizes for revenue, the social welfare is at least an $O(\log(\min\{n,m\}))$-approximation to the optimal welfare. Finally, we prove significantly stronger bounds for revenue and social welfare in homogeneous-goods markets.
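A toy instance of the model, relying on the standard fact that welfare in a unit-demand, unit-supply market equals the value of a maximum-weight matching: $\Delta W$ is the welfare gain from the links the platform adds. The values and link structure below are made up for illustration, and equilibrium prices are not computed.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

V = np.array([          # V[b, s]: buyer b's value for seller s's good
    [5.0, 2.0, 0.0],
    [3.0, 4.0, 1.0],
    [0.0, 1.0, 6.0],
])
known = np.array([      # 1 where the seller already knows the buyer
    [1, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
])

def max_welfare(mask):
    """Value of a maximum-weight matching restricted to allowed pairs."""
    w = np.where(mask == 1, V, 0.0)
    rows, cols = linear_sum_assignment(w, maximize=True)
    return w[rows, cols].sum()

w_before = max_welfare(known)                 # 9.0: buyer 2 cannot trade
w_after = max_welfare(np.ones_like(known))    # 15.0: platform links everyone
print(f"Delta W = {w_after - w_before}")      # welfare the platform can create
```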
Employee turnover remains a pressing issue within high-tech sectors such as IT firms and research centers, where organizational success heavily relies on the skills of their workforce. Intense competition and a scarcity of skilled professionals in the industry contribute to a perpetual demand for highly qualified employees, posing challenges for organizations to retain talent. While numerous studies have explored various factors affecting employee turnover in these industries, their focus often remains on overarching trends rather than specific organizational contexts. In particular, within the software industry, where project-specific risks can significantly impact project success and timely delivery, understanding their influence on job satisfaction and turnover intentions is crucial. This study aims to investigate the influence of project risks in the IT industry on job satisfaction and employee turnover intentions. Furthermore, it examines the role of both external and internal social links in shaping perceptions of job satisfaction.
Martin Nagl, Oskar Haske-Cornelius, Wolfgang Bauer, et al.
Background: Pulp refining is an energy-consuming but integral part of paper production, with the aim of increasing the tensile strength and smoothness of paper sheets. Commercial enzyme formulations are used to lower the energy requirements by pre-treating pulp before refining. However, a high number of different commercial enzyme products are available on the market, containing enzymes of varying origin and composition, which complicates the prediction of their behavior, especially across different pulp types. Results: Endoglucanase-rich enzyme formulations were characterized with regard to enzyme activity at different temperatures, showing a significant decrease in activity above 70 °C. Some enzyme preparations additionally contained arabinosidase, xylanase and β-glucosidase activity, consequently resulting in a release of xylose and glucose from pulp, as determined by high-performance liquid chromatography. Interestingly, one enzyme formulation even showed lytic polysaccharide monooxygenase (LPMO) activity of 3.05 nkat mg⁻¹. A correlation between enzyme activity on the endoglucanase-specific derivatized cellopentaose (CellG5) substrate and enzyme performance in laboratory PFI (Papirindustriens forskningsinstitutt) refining trials was observed on softwood pulp, with a maximum increase in the degree of refining from 27.7°SR to 32.7°SR. When added to a purified endoglucanase (31.6°SR), synergistic effects were found for cellobiohydrolase II (34.7°SR) or β-glucosidase (35.7°SR) in laboratory refining. Comparison with previously obtained laboratory refining results on hardwood pulp allowed differences in enzyme performance across pulp types to be elucidated. Conclusions: The individual enzymes indeed showed different refining effects on softwood and hardwood pulp. This difference could be predicted after development of an adapted enzyme activity assay combining the derivatized cellopentaose CellG5 substrate with either softwood or hardwood sulfate pulp.
Renewable energy sources, Energy industries. Energy policy. Fuel trade
Introduction: Pharmacogenomics combines knowledge from classical pharmacology and genomic science in support of personalized medicine and the rational use of medications. Pharmacogenomic analysis makes it possible to assess the expression of receptor proteins and enzymes modulated by drugs, so as to optimize drug therapy and avoid drug-related problems (DRPs). Objective: The aim of this work was to evaluate studies demonstrating the benefits of pharmacogenomics in optimizing drug therapy. Methodology: A systematic review was carried out in the PubMed database over a 5-year period (2014 to 2019) using the following descriptors: pharmacogenomic testing, therapy, and clinical. The identified articles were evaluated individually by 4 authors according to the inclusion criteria: (1) comparative data between guided and usual therapy, and (2) favorable results with guided therapy. The exclusion criteria were: (1) systematic reviews, and (2) qualitative studies without comparative data and/or with unfavorable results. Results: A total of 440 articles were found, of which 15 met the inclusion criteria. In a 12-week study conducted in Spain with 316 patients from 18 public hospitals, the effectiveness of therapy guided by Neuropharmagen (PGx) was compared with treatment as usual (TAU); the response rate was 47.8% (PGx) vs. 36.1% (TAU). In another study, pharmacogenomics-based antidepressant therapy (PGATx) improved the precision of treatment for major depressive disorder (MDD). In an 8-week clinical trial evaluating the effectiveness and tolerability of PGATx in 100 patients with MDD, the response rate of 71.7% (PGATx) was higher than that of 43.6% (TAU). Another study evaluated the effects of guided therapy in 237 patients with neuropsychiatric disorders over 90 days in an outpatient psychiatric clinic; in the control group (TAU), 53% of patients reported adverse events, compared with 28% of PGx-guided patients. A randomized controlled trial was conducted with 1167 outpatients with MDD who had had an inadequate response to antidepressants; at week 8, symptom improvement was 27.2% (PGx) vs. 24.4% (TAU), along with a higher response rate (26.0% vs. 19.9%). Furthermore, patients who switched to a congruent medication showed greater symptom improvement (33.5% vs. 21.1%) and response (28.5% vs. 16.7%) than patients who remained on incongruent medications. Conclusion: Pharmacogenomics can improve pharmacotherapeutic response by identifying the gene expression of proteins that modulate drug pharmacokinetics and pharmacodynamics, preventing the occurrence of DRPs.
Pharmacy and materia medica, Pharmaceutical industry
Companies like OpenAI, Google DeepMind, and Anthropic have the stated goal of building artificial general intelligence (AGI) - AI systems that perform as well as or better than humans on a wide variety of cognitive tasks. However, there are increasing concerns that AGI would pose catastrophic risks. In light of this, AGI companies need to drastically improve their risk management practices. To support such efforts, this paper reviews popular risk assessment techniques from other safety-critical industries and suggests ways in which AGI companies could use them to assess catastrophic risks from AI. The paper discusses three risk identification techniques (scenario analysis, fishbone method, and risk typologies and taxonomies), five risk analysis techniques (causal mapping, Delphi technique, cross-impact analysis, bow tie analysis, and system-theoretic process analysis), and two risk evaluation techniques (checklists and risk matrices). For each of them, the paper explains how they work, suggests ways in which AGI companies could use them, discusses their benefits and limitations, and makes recommendations. Finally, the paper discusses when to conduct risk assessments, when to use which technique, and how to use any of them. The reviewed techniques will be obvious to risk management professionals in other industries. And they will not be sufficient to assess catastrophic risks from AI. However, AGI companies should not skip the straightforward step of reviewing best practices from other industries.
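As a concrete illustration of the simplest reviewed technique, a risk matrix, the sketch below maps likelihood and severity ratings to priority bands. The scales, thresholds, band names, and example entries are illustrative assumptions, not recommendations from the paper.

```python
# A risk matrix combines a likelihood rating and a severity rating into a
# priority band that drives the evaluation decision. The 3x3 scale and the
# band thresholds below are one common convention, chosen for illustration.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_band(likelihood: str, severity: str) -> str:
    score = LEVELS[likelihood] * LEVELS[severity]
    if score >= 6:
        return "intolerable: mitigate before deployment"
    if score >= 3:
        return "tolerable: mitigate as far as reasonably practicable"
    return "acceptable: monitor"

# Hypothetical AGI-risk entries, e.g. produced by a scenario-analysis exercise
register = [
    ("model weights exfiltrated", "medium", "high"),
    ("deceptive evaluation gaming", "low", "high"),
]
for risk, likelihood, severity in register:
    print(f"{risk}: {risk_band(likelihood, severity)}")
```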
The traditions and planting techniques of oil palm cultivation by smallholders in the Muaro Jambi region vary, and so do smallholders' yields of fresh fruit bunches (FFB), because the number and age of the plants affect both oil palm production and the farmers' profit. The objectives of this research are to (1) analyze the financial feasibility of smallholder replanting of oil palm plantations using investment criteria, and (2) analyze the sensitivity of replanting oil palm plantations to changes in input and output prices. The study was conducted in Muaro Jambi district. Data were analyzed with the investment criteria NPV, IRR, BCR, PBP and BEP, using data obtained from questionnaires. In the Muaro Jambi region, a sample of 60 independent smallholder farmers who had replanted oil palm was obtained through snowball sampling. The results show that smallholder oil palm replanting is feasible both through conventional replanting and through understory replanting. The sensitivity analysis showed that when the price of production factors increases by 15% with the selling price of FFB held constant, and when the selling price of FFB decreases by 15% with the price of production factors held constant, both types of replanting remain feasible. Changes in FFB prices affect the investment criteria more strongly than changes in production factor prices.
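For readers unfamiliar with the investment criteria used here, the sketch below computes NPV, IRR, BCR, and the payback period (PBP) on a made-up replanting cash-flow profile (large outlay in year 0, losses while palms are immature, then positive net returns). The discount rate and all figures are illustrative, and BEP is omitted.

```python
def npv(rate, flows):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

def irr(flows, lo=-0.99, hi=10.0, tol=1e-8):
    """Internal rate of return via bisection; assumes one sign change in NPV."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_period(flows):
    """First year in which cumulative (undiscounted) cash flow turns non-negative."""
    cum = 0.0
    for t, f in enumerate(flows):
        cum += f
        if cum >= 0:
            return t
    return None

# Stylized annual net cash flows for a replanting project (arbitrary units).
flows = [-60.0, -5.0, -2.0, 8.0, 15.0, 20.0, 20.0, 20.0, 20.0, 20.0]
r = 0.10
benefits = [max(f, 0.0) for f in flows]
costs = [-min(f, 0.0) for f in flows]
print(f"NPV = {npv(r, flows):.2f}")
print(f"IRR = {irr(flows):.2%}")
print(f"BCR = {npv(r, benefits) / npv(r, costs):.2f}")   # PV benefits / PV costs
print(f"PBP = {payback_period(flows)} years")
```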
The Industrial Internet of Things (IIoT) is a developing research area with potential global Internet connectivity, turning everyday objects into intelligent devices with more autonomous activities. IIoT services and applications are not only used in smart homes and smart cities; they have also become an essential element of the Industry 4.0 concept. The emergence of the IIoT helps traditional industries simplify production processes, reduce production costs, and improve industrial efficiency. However, the involvement of many heterogeneous devices, the use of third-party software, and the resource-constrained nature of IoT devices bring new security risks to the production chain and expose systems to vulnerabilities. Among these, Distributed Denial of Service (DDoS) attacks are particularly significant. This article analyzes threats and attacks in the IIoT and discusses how DDoS attacks disrupt the production process and cause communication dysfunctions in IIoT services and applications. It also proposes a reference security framework that leverages the advantages of fog computing to demonstrate countermeasures against DDoS attacks and possible strategies to mitigate such attacks at scale.
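As one concrete fog-layer countermeasure of the kind such a framework could include, here is a per-device token-bucket rate limiter: a flooding device exhausts its tokens and its excess traffic is dropped at the fog node before it reaches cloud services. The rates and the admit helper are illustrative assumptions, not the article's actual framework.

```python
import time

class TokenBucket:
    """Classic token bucket: refills at `rate` tokens/s up to `burst` capacity."""
    def __init__(self, rate: float, burst: float):
        self.rate, self.capacity = rate, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False   # drop or defer the packet at the fog node

buckets: dict[str, TokenBucket] = {}

def admit(device_id: str, rate: float = 50.0, burst: float = 100.0) -> bool:
    """Admit a packet from an IIoT device, creating its bucket on first sight."""
    bucket = buckets.setdefault(device_id, TokenBucket(rate, burst))
    return bucket.allow()
```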