Results for "Hazardous substances and their disposal"

Showing 20 of ~190,298 results · from DOAJ, arXiv, CrossRef

arXiv Open Access 2026
Operational Mass Measurement for Flyby Reconnaissance Missions of Potentially Hazardous Asteroid

Justin A. Atchison, Gael Cascioli, Anivid Pedros-Faura et al.

This study evaluates a technique for determining the mass of a potentially hazardous asteroid from a high-speed flyby in the context of a rapid reconnaissance planetary defense scenario. We consider a host spacecraft that dispenses a small CubeSat, which acts as a test-mass. Both spacecraft perform approach maneuvers to target their flyby locations, with the host targeting a close proximity flyby and the CubeSat targeting a distant flyby. By incorporating short-range intersatellite measurements between the host and the CubeSat, the mass measurement sensitivity is substantially improved. We evaluate a set of proposed host and CubeSat hardware options against the 2023 and 2025 Planetary Defense Conference hypothetical threats, as well as a hypothetical flyby of 2024 YR4. These scenarios differ predominantly in their flyby speeds, which span from 1.7 to 22 km/s. Based on these scenarios, we demonstrate that a typical radio-frequency intersatellite measurement is ineffective for asteroids with diameters relevant to planetary defense (i.e., 50 - 500 m). However, we find that augmenting the system with a laser-based intersatellite ranging system or a high-precision Doppler system can enable mass measurements of asteroids as small as 100 m across all cases, and as small as 50 m for the slower (< 8 km/s) cases. The results are very sensitive to the timing of the final maneuver, which is used to target the low-altitude flyby point. This presents an operational challenge for the smallest objects, where optical detection times are comparatively late and the optical navigation targeting knowledge converges too slowly.
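The physics behind the measurement can be sketched with the impulse approximation for a fast hyperbolic flyby, where the test spacecraft picks up a velocity change of roughly 2GM/(b·v∞). A minimal sketch, with all asteroid and trajectory parameters illustrative rather than taken from the paper:

```python
import math

# Hedged sketch: impulse approximation for the velocity change a test
# spacecraft accumulates during a fast flyby of a small asteroid,
# delta_v ~ 2*G*M / (b * v_inf). Parameters are illustrative.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def flyby_delta_v(mass_kg, impact_param_m, v_inf_ms):
    """Velocity perturbation (m/s) imparted on a flyby spacecraft."""
    return 2.0 * G * mass_kg / (impact_param_m * v_inf_ms)

# A hypothetical 100 m rocky asteroid (density ~2500 kg/m^3).
radius = 50.0
mass = 2500.0 * (4.0 / 3.0) * math.pi * radius**3
dv_slow = flyby_delta_v(mass, impact_param_m=1000.0, v_inf_ms=2000.0)
dv_fast = flyby_delta_v(mass, impact_param_m=1000.0, v_inf_ms=20000.0)
print(dv_slow, dv_fast)
```

The resulting signal is well below a millimeter per second, which is consistent with the abstract's conclusion that ordinary radio-frequency intersatellite links are ineffective and that slower flybys (larger delta-v) are easier cases.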

en astro-ph.IM, astro-ph.EP
arXiv Open Access 2026
Focused Information Criteria for Semiparametric Linear Hazard Regression

Axel Gandy, Nils Lid Hjort

The semiparametric linear hazard regression model introduced by McKeague and Sasieni (1994) is an extension of the linear hazard regression model developed by Aalen (1980). Methods of model selection for this type of model are still underdeveloped. In the process of fitting a semiparametric linear hazard regression model one usually starts with a given set of covariates. For each covariate one has at least the following three choices: allow it to have time-varying effect; allow it to have constant effect over time; or exclude it from the model. In this paper we discuss focused information criteria (FIC) to help with this choice. In the spirit of Claeskens and Hjort (2003, 2008), 'focused' means that one is interested in one specific quantity, e.g. the probability of survival of a patient with a certain set of covariates up to a given time. The FIC involves estimating the mean squared error of the estimator of the quantity one is interested in, and the chosen model is the one minimising this estimated mean squared error. The focused model selection machinery is extended to allow for weighted versions, leading to a suitable wFIC method that aims at finding models that lead to good estimates of a given list of parameters, such as survival probabilities for a subset of patients or for a specified region of covariate vectors. In addition to developing model selection criteria, methods associated with averaging across the best models are also discussed. We illustrate these methods of model selection in a real data situation.
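The selection step the abstract describes reduces to ranking candidate models by an estimated mean squared error for the focus quantity. A toy sketch of that idea, with made-up bias/variance estimates standing in for the paper's asymptotic expansions:

```python
# Hedged sketch of the FIC idea: each candidate model is scored by an
# estimate of MSE(mu_hat) = bias^2 + variance for the focus parameter mu,
# and the model with the smallest score is selected. The bias/variance
# numbers below are illustrative, not derived from any real fit.

def fic_score(bias_hat, var_hat):
    return bias_hat**2 + var_hat

candidates = {
    # choice for one covariate: (estimated bias, estimated variance)
    "constant effect": (0.04, 0.0010),    # some bias, low variance
    "time-varying effect": (0.00, 0.0030),  # unbiased, higher variance
    "covariate excluded": (0.09, 0.0005),   # large bias, lowest variance
}
best = min(candidates, key=lambda m: fic_score(*candidates[m]))
print(best)
```

The example illustrates the bias-variance trade-off the FIC formalizes: a slightly misspecified but more stable model can win for a given focus quantity.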

en stat.ME
arXiv Open Access 2025
ZeroFlood: Flood Hazard Mapping from Single-Modality SAR Using Geo-Foundation Models

Hyeongkyun Kim, Orestis Oikonomou

Flood hazard mapping is essential for disaster prevention but remains challenging in data-scarce regions, where traditional hydrodynamic models require extensive geophysical inputs. This paper introduces ZeroFlood, a framework that leverages Geo-Foundation Models (GeoFMs) to predict flood hazard maps using single-modality Earth Observation (EO) data, specifically SAR imagery. We construct a dataset that pairs EO data with flood hazard simulations across the European continent. Using this dataset, we evaluate several recent GeoFMs for the flood hazard segmentation task. Experimental results show that the best-performing model, TerraMind, achieves an F1-score of 88.36%, outperforming supervised learning baselines by more than 3 percentage points. We show that performance can be further improved by applying the Thinking-in-Modality (TiM) mechanism. These results demonstrate the potential of Geo-Foundation Models for data-driven flood hazard mapping using limited observational inputs. The dataset and experiment code are publicly available at https://github.com/khyeongkyun/zeroflood.
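The reported F1-score is the standard pixel-wise segmentation metric; a minimal sketch of how it is computed from binary flood masks (the toy labels below are illustrative):

```python
# Hedged sketch: pixel-wise F1 for binary flood-segmentation masks,
# the metric reported for ZeroFlood. Inputs are flat lists of 0/1 labels.

def f1_score(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# 2 true positives, 1 false positive, 1 false negative -> F1 = 2/3
print(f1_score([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))
```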

en cs.LG, cs.AI
arXiv Open Access 2025
Continuously updated estimation of conditional hazard functions

Daphné Aurouet, Valentin Patilea

Motivated by the need to analyze continuously updated data sets in the context of time-to-event modeling, we propose a novel nonparametric approach to estimate the conditional hazard function given a set of continuous and discrete predictors. The method is based on a representation of the conditional hazard as a ratio between a joint density and a conditional expectation determined by the distribution of the observed variables. It is shown that such ratio representations are available for uni- and bivariate time-to-events, in the presence of common types of random censoring, truncation, and with possibly cured individuals, as well as for competing risks. This opens the door to nonparametric approaches in many time-to-event predictive models. To estimate joint densities and conditional expectations we propose the recursive kernel smoothing, which is well suited for online estimation. Asymptotic results for such estimators are derived and it is shown that they achieve optimal convergence rates. Simulation experiments show the good finite sample performance of our recursive estimator with right censoring. The method is applied to a real dataset of primary breast cancer.
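The "recursive kernel smoothing" the abstract relies on can be sketched for the simplest ingredient, an online density estimate that folds each new observation into the running estimate without revisiting past data. A minimal sketch with a fixed bandwidth (real recursive estimators let the bandwidth shrink with n):

```python
import math
import random

# Hedged sketch of recursive kernel density estimation for online updates:
# f_n(x) = ((n-1)/n) * f_{n-1}(x) + (1/n) * K_h(x - X_n),
# so each arriving observation refines the estimate in O(grid) time.
# The fixed bandwidth h is a simplification for illustration.

def gauss_kernel(u, h):
    return math.exp(-0.5 * (u / h) ** 2) / (h * math.sqrt(2 * math.pi))

class RecursiveKDE:
    def __init__(self, grid, h):
        self.grid, self.h, self.n = grid, h, 0
        self.f = [0.0] * len(grid)

    def update(self, x):
        self.n += 1
        w = 1.0 / self.n
        self.f = [(1 - w) * fi + w * gauss_kernel(g - x, self.h)
                  for fi, g in zip(self.f, self.grid)]

grid = [i / 10 for i in range(-30, 31)]
kde = RecursiveKDE(grid, h=0.5)
random.seed(0)
for _ in range(500):
    kde.update(random.gauss(0.0, 1.0))
# for standard-normal data the estimate should peak near 0
peak = grid[max(range(len(grid)), key=lambda i: kde.f[i])]
print(peak)
```

The same recursion applies to the conditional expectations in the paper's ratio representation, which is what makes the whole hazard estimator suitable for continuously updated data sets.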

en stat.ME
arXiv Open Access 2025
Turning a Disposable Bronchoscope into a Dynamic Speckle Imaging Tool: Yes, It Works

Aurélien Plyer, Elise Colin, Enrique Garcia-Caurel

Dynamic speckle imaging, typically used in laser-illuminated surface diagnostics, has proven valuable for assessing biological activity. In this work, we demonstrate its feasibility in an endoscopic context using a disposable bronchoscope. Despite technical limitations and aliasing artifacts, our preliminary results show discernible vascular structures, indicating potential for minimally invasive diagnostic applications. It is important to note that the imaging systems used in this study are designed primarily for clinical robustness and classical imaging, including single-use sterility, ease of handling, and real-time visualization, and not for scientific fidelity of visual data or computational post-processing. As such, they are not inherently suited to dynamic speckle analysis, which requires precise control over temporal acquisition parameters, linear response characteristics of the imaging sensor, and stable illumination conditions, particularly from the coherent laser source. Nevertheless, our results demonstrate that, even within these constraints, dynamic speckle imaging is indeed achievable. This opens the door to further adaptation and optimization of such clinical imaging tools for functional biomedical investigations.
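A standard dynamic-speckle statistic that this kind of study builds on is the temporal speckle contrast K = σ/μ of each pixel's intensity trace: pixels over moving blood decorrelate between frames and fluctuate more than static tissue. A minimal sketch with illustrative pixel traces (not data from the paper):

```python
import statistics

# Hedged sketch: temporal speckle contrast K = sigma/mu over a pixel's
# frame-to-frame intensity trace; dynamic (e.g. vascular) regions show
# stronger fluctuations than static tissue. Traces are illustrative.

def speckle_contrast(intensity_trace):
    mu = statistics.fmean(intensity_trace)
    sigma = statistics.pstdev(intensity_trace)
    return sigma / mu if mu > 0 else 0.0

static_pixel = [100, 102, 99, 101, 100, 98, 101]   # little decorrelation
vascular_pixel = [60, 140, 90, 180, 40, 120, 75]   # strong fluctuations
print(speckle_contrast(static_pixel), speckle_contrast(vascular_pixel))
```

In practice the statistic is sensitive to exactly the acquisition properties the abstract flags as missing in clinical bronchoscopes: frame timing, sensor linearity, and illumination stability.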

en physics.ins-det, physics.med-ph
arXiv Open Access 2025
Quantum transistors for heat flux in and out of working substance parts: harmonic vs transmon and Kerr environs

Deepika Bhargava, Paranjoy Chaki, Aparajita Bhattacharyya et al.

Quantum thermal transistors have been widely studied in the context of three-qubit systems, where each qubit interacts separately with a Markovian harmonic bath. Markovianity is an assumption imposed on a system whose environment loses its memory within a short time, whereas non-Markovianity is a general feature inherently present in a large fraction of realistic scenarios. Instead of Markovian environments, here we propose a transistor in which the interaction between the working substance and an environment comprising an infinite chain of qutrits is based on periodic collisions. We refer to the device as a working-substance thermal transistor, since the model focuses on heat currents flowing in and out of each individual qubit of the working substance to and from different parts of the system and environment. We find that the transistor effect prevails in this apparatus, and we show how the amplification of heat currents depends on the temperature of the modulating environment, the system-environment coupling strength, and the interaction time. We further show that a non-zero amplification persists even if one of the environments other than the modulating one is detached from the system. Additionally, the environment, being composed of three-level systems, allows us to consider the effects of weak perturbations in the energy spacings of the qutrits, leading to a non-linearity in the environment. We consider non-linearities of either transmon or Kerr type, and find parameter ranges where there is significant amplification for both. Finally, we detect the non-Markovianity induced in the system from the non-monotonic behavior of the amplification with respect to time, and quantify it using the distinguishability-based measure of non-Markovianity.

en quant-ph
arXiv Open Access 2024
Covariate selection for the estimation of marginal hazard ratios in high-dimensional data

Guilherme W. F. Barros, Jenny Häggström

Hazard ratios are frequently reported in time-to-event and epidemiological studies to assess treatment effects. In observational studies, the combination of propensity score weights with the Cox proportional hazards model facilitates the estimation of the marginal hazard ratio (MHR). The methods for estimating MHR are analogous to those employed for estimating common causal parameters, such as the average treatment effect. However, MHR estimation in the context of high-dimensional data remains unexplored. This paper seeks to address this gap through a simulation study that considers variable selection methods from causal inference combined with a recently proposed multiply robust approach for MHR estimation. Additionally, a case study utilizing stroke register data is conducted to demonstrate the application of these methods. The results from the simulation study indicate that the double selection covariate selection method is preferable to several other strategies when estimating MHR. Nevertheless, the estimation can be further improved by employing the multiply robust approach to the set of propensity score models obtained during the double selection process.
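The propensity-score ingredient the abstract mentions reduces to inverse-probability-of-treatment weights, which are then passed to a weighted Cox fit. A minimal sketch of the weight construction only, with illustrative propensity scores (in practice they come from a possibly penalized logistic regression on the covariates):

```python
# Hedged sketch: inverse-probability-of-treatment weights, the ingredient
# combined with a Cox model to target the marginal hazard ratio (MHR).
# Propensity scores e_i here are given constants for illustration.

def ipw_weights(treated, propensity):
    """w_i = 1/e_i for treated subjects, 1/(1-e_i) for controls."""
    return [1.0 / e if t == 1 else 1.0 / (1.0 - e)
            for t, e in zip(treated, propensity)]

treated = [1, 0, 1, 0]
propensity = [0.8, 0.8, 0.25, 0.25]
print(ipw_weights(treated, propensity))
```

Subjects who received an "unlikely" treatment get large weights, which is what reweights the sample toward the marginal (population-level) contrast.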

en stat.ME
arXiv Open Access 2024
Fairness in Computational Innovations: Identifying Bias in Substance Use Treatment Length of Stay Prediction Models with Policy Implications

Ugur Kursuncu, Aaron Baird, Yusen Xia

Predictive machine learning (ML) models are computational innovations that can enhance medical decision-making, including aiding in determining optimal timing for discharging patients. However, societal biases can be encoded into such models, raising concerns about inadvertently affecting health outcomes for disadvantaged groups. This issue is particularly pressing in the context of substance use disorder (SUD) treatment, where biases in predictive models could significantly impact the recovery of highly vulnerable patients. In this study, we focus on the development and assessment of ML models designed to predict the length of stay (LOS) for both inpatients (i.e., residential) and outpatients undergoing SUD treatment. We utilize the Treatment Episode Data Set for Discharges (TEDS-D) from the Substance Abuse and Mental Health Services Administration (SAMHSA). Through the lenses of distributive justice and socio-relational fairness, we assess our models for bias across variables related to demographics (e.g., race) as well as medical (e.g., diagnosis) and financial conditions (e.g., insurance). We find that race, US geographic region, type of substance used, diagnosis, and payment source for treatment are primary indicators of unfairness. From a policy perspective, we provide bias mitigation strategies to achieve fair outcomes. We discuss the implications of these findings for medical decision-making and health equity. We ultimately seek to contribute to the innovation and policy-making literature by seeking to advance the broader objectives of social justice when applying computational innovations in health care.

en cs.CY, cs.HC
arXiv Open Access 2024
A Review of EMA Public Assessment Reports where Non-Proportional Hazards were Identified

Florian Klinglmueller, Norbert Benda, Tim Friede et al.

While well-established methods for time-to-event data are available when the proportional hazards assumption holds, there is no consensus on the best approach under non-proportional hazards (NPH). A wide range of parametric and non-parametric methods for testing and estimation in this scenario have been proposed. In this review we identified EMA marketing authorization procedures where non-proportional hazards were raised as a potential issue in the risk-benefit assessment and extracted relevant information on trial design and results reported in the corresponding European Public Assessment Reports (EPARs) available in the database at paediatricdata.eu. We identified 16 marketing authorization procedures, reporting results on a total of 18 trials. Most procedures covered the authorization of treatments from the oncology domain. For the majority of trials, NPH issues were related to a suspected delayed treatment effect or different treatment effects in known subgroups. Issues related to censoring or treatment switching were also identified. For most of the trials the primary analysis was performed using conventional methods assuming proportional hazards, even if NPH was anticipated. Differential treatment effects were addressed using stratification, and delayed treatment effects were considered for sample size planning. Even though not considered in the primary analysis, some procedures reported extensive sensitivity analyses and model diagnostics evaluating the proportional hazards assumption. For a few procedures, methods addressing NPH (e.g., weighted log-rank tests) were used in the primary analysis. We extracted estimates of the median survival, hazard ratios, and time of survival curve separation. In addition, we digitized the KM curves to reconstruct close-to-individual patient-level data. Extracted outcomes served as the basis for a simulation study of methods for time-to-event analysis under NPH.

en stat.AP
arXiv Open Access 2023
Hazards from Increasingly Accessible Fine-Tuning of Downloadable Foundation Models

Alan Chan, Ben Bucknall, Herbie Bradley et al.

Public release of the weights of pretrained foundation models, otherwise known as downloadable access (Solaiman, 2023), enables fine-tuning without the prohibitive expense of pretraining. Our work argues that increasingly accessible fine-tuning of downloadable models may increase hazards. First, we highlight research to improve the accessibility of fine-tuning. We split our discussion into research that A) reduces the computational cost of fine-tuning and B) improves the ability to share that cost across more actors. Second, we argue that increasingly accessible fine-tuning methods may increase hazard through facilitating malicious use and making oversight of models with potentially dangerous capabilities more difficult. Third, we discuss potential mitigatory measures, as well as benefits of more accessible fine-tuning. Given substantial remaining uncertainty about hazards, we conclude by emphasizing the urgent need for the development of mitigations.

en cs.LG, cs.CY
arXiv Open Access 2023
Normalizing flow-based deep variational Bayesian network for seismic multi-hazards and impacts estimation from InSAR imagery

Xuechun Li, Paula M. Burgi, Wei Ma et al.

Onsite disasters like earthquakes can trigger cascading hazards and impacts, such as landslides and infrastructure damage, leading to catastrophic losses; thus, rapid and accurate estimates are crucial for timely and effective post-disaster responses. Interferometric Synthetic Aperture Radar (InSAR) data is important in providing high-resolution onsite information for rapid hazard estimation. Most recent methods using InSAR imagery signals predict a single type of hazard and thus often suffer low accuracy due to noisy and complex signals induced by co-located hazards, impacts, and irrelevant environmental changes (e.g., vegetation changes, human activities). We introduce a novel stochastic variational inference with normalizing flows derived to jointly approximate posteriors of multiple unobserved hazards and impacts from noisy InSAR imagery.

en cs.LG, cs.CV
arXiv Open Access 2023
Extended Excess Hazard Models for Spatially Dependent Survival Data

André Victor Ribeiro Amaral, Francisco Javier Rubio, Manuela Quaresma et al.

Relative survival represents the preferred framework for the analysis of population cancer survival data. The aim is to model the survival probability associated with cancer in the absence of information about the cause of death. Recent data linkage developments have allowed for incorporating the place of residence into population cancer databases; however, modeling this spatial information has received little attention in the relative survival setting. We propose a flexible parametric class of spatial excess hazard models (along with inference tools), named "Relative Survival Spatial General Hazard" (RS-SGH), that allows for the inclusion of fixed and spatial effects in both time-level and hazard-level components. We illustrate the performance of the proposed model using an extensive simulation study, and provide guidelines about the interplay of sample size, censoring, and model misspecification. We present a case study using real data from colon cancer patients in England. This case study illustrates how a spatial model can be used to identify geographical areas with low cancer survival, as well as how to summarize such a model through marginal survival quantities and spatial effects.
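The excess hazard decomposition behind relative survival can be sketched in a few lines: the observed hazard splits as h_obs(t) = h_pop(t) + h_excess(t), so with a known life-table hazard the cancer-related survival is recoverable as a ratio of survival functions. A toy check with constant (illustrative) rates:

```python
import math

# Hedged sketch of the relative-survival decomposition underlying excess
# hazard models: h_obs(t) = h_pop(t) + h_excess(t), hence
# S_excess(t) = S_obs(t) / S_pop(t). Constant rates for illustration only.

def survival(hazard_rate, t):
    """Survival under a constant hazard: S(t) = exp(-h t)."""
    return math.exp(-hazard_rate * t)

h_pop, h_excess = 0.01, 0.05   # per-year background and excess hazards
t = 5.0
s_obs = survival(h_pop + h_excess, t)
s_pop = survival(h_pop, t)
print(s_obs / s_pop, survival(h_excess, t))  # identical by construction
```

The models in the paper put regression structure (including spatial effects) on h_excess while taking h_pop from population life tables.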

en stat.ME, stat.AP
arXiv Open Access 2022
A new hazard event classification model via deep learning and multifractal

Zhenhua Wang, Bin Wang, Ming Ren et al.

Hazard and operability analysis (HAZOP) is a paradigm of industrial safety that can reveal the hazards of a process from its node deviations, consequences, causes, measures, and suggestions; such hazards can be considered hazard events (HaE). Classification research on HaE has considerable practical value. In this paper, we present a novel deep learning model termed DLF that uses multifractal analysis to explore HaE classification, motivated by the observation that HaE can be naturally regarded as a kind of time series. Specifically, HaE is first vectorized into an HaE time series by employing BERT. Then, a new multifractal analysis method termed HmF-DFA is proposed to obtain an HaE fractal series by analyzing the HaE time series. Finally, a new hierarchical gating neural network (HGNN) is designed to process the HaE fractal series and accomplish the classification of HaE along three aspects: severity, possibility, and risk. We take HAZOP reports of 18 processes as cases and conduct experiments on this basis. Results demonstrate that, compared with other classifiers, the DLF classifier performs better under the metrics of precision, recall, and F1-score, especially for the severity aspect. HmF-DFA and HGNN both effectively promote HaE classification. Our HaE classification system can provide practical support to experts, engineers, and employees across enterprises, and we hope this research contributes to daily practice in industrial safety.
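HmF-DFA generalizes detrended fluctuation analysis (DFA), whose core loop is short enough to sketch: integrate the mean-centered series, detrend it linearly in windows of size s, and read a scaling exponent off log F(s) versus log s. A minimal single-order DFA sketch (the paper's multifractal extension varies the moment order, which is omitted here):

```python
import math
import random

# Hedged sketch of plain DFA, the building block HmF-DFA generalizes.
# For uncorrelated noise the scaling exponent is ~0.5; correlated
# series deviate from it.

def dfa_fluctuation(series, s):
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for x in series:                      # integrate the centered series
        acc += x - mean
        profile.append(acc)
    n_win = len(profile) // s
    var_sum = 0.0
    for w in range(n_win):
        seg = profile[w * s:(w + 1) * s]
        xs = range(s)                     # least-squares linear detrend
        xbar, ybar = (s - 1) / 2.0, sum(seg) / s
        slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, seg))
                 / sum((x - xbar) ** 2 for x in xs))
        var_sum += sum((y - (ybar + slope * (x - xbar))) ** 2
                       for x, y in zip(xs, seg)) / s
    return math.sqrt(var_sum / n_win)

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(4000)]
scales = [8, 16, 32, 64, 128]
logF = [math.log(dfa_fluctuation(noise, s)) for s in scales]
logS = [math.log(s) for s in scales]
alpha = (logF[-1] - logF[0]) / (logS[-1] - logS[0])
print(round(alpha, 2))  # near 0.5 for uncorrelated noise
```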

en cs.CL
arXiv Open Access 2021
Data-driven Design of Context-aware Monitors for Hazard Prediction in Artificial Pancreas Systems

Xugui Zhou, Bulbul Ahmed, James H. Aylor et al.

Medical Cyber-physical Systems (MCPS) are vulnerable to accidental or malicious faults that can target their controllers and cause safety hazards and harm to patients. This paper proposes a combined model and data-driven approach for designing context-aware monitors that can detect early signs of hazards and mitigate them in MCPS. We present a framework for formal specification of unsafe system context using Signal Temporal Logic (STL) combined with an optimization method for patient-specific refinement of STL formulas based on real or simulated faulty data from the closed-loop system for the generation of monitor logic. We evaluate our approach in simulation using two state-of-the-art closed-loop Artificial Pancreas Systems (APS). The results show the context-aware monitor achieves up to 1.4 times increase in average hazard prediction accuracy (F1-score) over several baseline monitors, reduces false-positive and false-negative rates, and enables hazard mitigation with a 54% success rate while decreasing the average risk for patients.
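At runtime, an STL specification of unsafe context compiles down to checks over recent signal windows. A toy sketch of the simplest such check, a bounded "always" over a glucose trace; thresholds and window size are illustrative, not the paper's patient-specific values:

```python
# Hedged sketch of the kind of runtime check an STL monitor reduces to:
# the bounded-always property G_[0,w](glucose > low) raises an alarm when
# any sample in the last w readings leaves the safe range.

def bounded_always(signal, window, predicate):
    """True iff predicate holds on every one of the last `window` samples."""
    return all(predicate(v) for v in signal[-window:])

glucose = [110, 105, 95, 82, 74, 68]  # mg/dL, trending toward hypoglycemia
safe = lambda g: g > 70
hazard_predicted = not bounded_always(glucose, window=3, predicate=safe)
print(hazard_predicted)  # the last three samples include 68 -> alarm
```

The paper's contribution layers on top of this: learning and refining such formulas per patient from faulty closed-loop data so the monitor fires early enough to mitigate.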

en cs.AI
arXiv Open Access 2020
Survival analysis under non-proportional hazards: investigating non-inferiority or equivalence in time-to-event data

Kathrin Möllenhoff, Achim Tresch

The classical approach to analyzing time-to-event data, e.g. in clinical trials, is to fit Kaplan-Meier curves, yielding the treatment effect as the hazard ratio between treatment groups. Afterwards, a log-rank test is commonly performed to investigate whether there is a difference in survival, or, depending on additional covariates, a Cox proportional hazards model is used. However, in numerous trials these approaches fail due to the presence of non-proportional hazards, resulting in difficulties interpreting the hazard ratio and a loss of power. When considering equivalence or non-inferiority trials, the commonly performed log-rank based tests are similarly affected by a violation of this assumption. Here we propose a parametric framework to assess equivalence or non-inferiority for survival data. We derive pointwise confidence bands for both the hazard ratio and the difference of the survival curves. Further, we propose a test procedure addressing non-inferiority and equivalence by directly comparing the survival functions at certain time points or over an entire range of time. We demonstrate the validity of the methods with a clinical trial example and numerous simulation results.
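The pointwise view the abstract advocates, comparing survival functions directly at chosen time points rather than through one overall hazard ratio, can be sketched with a minimal Kaplan-Meier estimator on toy data (no ties, illustrative times and censoring):

```python
# Hedged sketch: a minimal Kaplan-Meier estimator used to compare two
# survival curves at a fixed time point. Toy data, no tied event times.

def kaplan_meier(times, events):
    """Return (t, S(t)) steps; events[i]=1 is an event, 0 is censoring."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv, steps = len(times), 1.0, []
    for i in order:
        if events[i] == 1:
            surv *= (at_risk - 1) / at_risk
            steps.append((times[i], surv))
        at_risk -= 1
    return steps

def survival_at(steps, t):
    s = 1.0
    for ti, si in steps:
        if ti <= t:
            s = si
    return s

group_a = kaplan_meier([2, 4, 5, 7, 9], [1, 1, 0, 1, 1])
group_b = kaplan_meier([3, 6, 8, 10, 12], [1, 0, 1, 1, 0])
# pointwise difference of survival probabilities at t = 8
print(survival_at(group_a, 8) - survival_at(group_b, 8))
```

Under non-proportional hazards such pointwise differences remain interpretable at every t, which is exactly what a single hazard ratio loses.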

en stat.ME, stat.AP
arXiv Open Access 2019
Proportional hazards model with partly interval censoring and its penalized likelihood estimation

Jun Ma, Dominique-Laurent Couturier, Stephane Heritier et al.

This paper considers the problem of semi-parametric proportional hazards model fitting for interval, left and right censored survival times. We adopt a more versatile penalized likelihood method to estimate the baseline hazard and the regression coefficients simultaneously, where the penalty is introduced in order to regularize the baseline hazard estimate. We present asymptotic properties of our estimate, allowing for the possibility that it may lie on the boundary of the parameter space. We also provide a computational method based on marginal likelihood, which allows the regularization parameter to be determined automatically. Comparisons of our method with other approaches are given in simulations which demonstrate that our method has favourable performance. A real data application involving a model for melanoma recurrence is presented and an R package implementing the methods is available.

en stat.ME
arXiv Open Access 2018
Characterizing the deep uncertainties surrounding coastal flood hazard projections: A case study for Norfolk, VA

Kelsey L. Ruckert, Vivek Srikrishnan, Klaus Keller

Coastal planners and decision makers design risk management strategies based on hazard projections. However, projections can differ drastically. What causes this divergence and which projection(s) should a decision maker adopt to create plans and adaptation efforts for improving coastal resiliency? Using Norfolk, Virginia, as a case study, we start to address these questions by characterizing and quantifying the drivers of differences between published sea-level rise and storm surge projections, and how these differences can impact efforts to improve coastal resilience. We find that assumptions about the complex behavior of ice sheets are the primary drivers of flood hazard diversity. Adopting a single hazard projection neglects key uncertainties and can lead to overconfident projections and downwards biased hazard estimates. These results highlight key avenues to improve the usefulness of hazard projections to inform decision-making such as (i) representing complex ice sheet behavior, (ii) covering decision-relevant timescales beyond this century, (iii) resolving storm surges with a low chance of occurring (e.g., a 0.2% chance per year), (iv) considering that storm surge projections may deviate from the historical record, and (v) communicating the considerable deep uncertainty.

en physics.ao-ph
arXiv Open Access 2017
To Wait or Not to Wait: Two-way Functional Hazards Model for Understanding Waiting in Call Centers

Gen Li, Jianhua Z. Huang, Haipeng Shen

Telephone call centers offer a convenient communication channel between businesses and their customers. Efficient management of call centers needs accurate modeling of customer waiting behavior, which contains important information about customer patience (how long a customer is willing to wait) and service quality (how long a customer needs to wait to get served). Hazard functions offer dynamic characterization of customer waiting behavior, and provide critical inputs for agent scheduling. Motivated by this application, we develop a two-way functional hazards (tF-Hazards) model to study customer waiting behavior as a function of two timescales, waiting duration and the time of day that a customer calls in. The model stems from a two-way piecewise constant hazard function, and imposes low-rank structure and smoothness on the hazard rates to enhance interpretability. We exploit an alternating direction method of multipliers (ADMM) algorithm to optimize a penalized likelihood function of the model. We carefully analyze the data from a US bank call center, and provide informative insights about customer patience and service quality patterns along waiting time and across different times of day. The findings provide primitive inputs for call center agent staffing and scheduling, as well as for call center practitioners to understand the effect of system protocols on customer waiting behavior.
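The two-way piecewise constant hazard the model starts from has a simple empirical counterpart: in each cell of a grid over (waiting-time bin, time-of-day bin), the rate is observed events divided by total waiting exposure. A minimal sketch with made-up counts; the paper's low-rank and smoothness penalties are omitted:

```python
# Hedged sketch of a two-way piecewise-constant hazard estimate:
# rate in cell (i, j) = abandonment count / person-time at risk in that
# cell, for waiting-time bin i and time-of-day bin j. Toy numbers.

def piecewise_hazard(events, exposure):
    """events[i][j]: event counts; exposure[i][j]: waiting time at risk."""
    return [[e / x if x > 0 else 0.0 for e, x in zip(er, xr)]
            for er, xr in zip(events, exposure)]

events = [[4, 10], [2, 6]]               # rows: waiting bins; cols: AM/PM
exposure = [[200.0, 250.0], [100.0, 120.0]]
rates = piecewise_hazard(events, exposure)
print(rates)
```

Regularizing a grid like this toward a low-rank, smooth surface is what turns the raw rates into the interpretable tF-Hazards fit.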

en stat.AP
arXiv Open Access 2016
HADES: Microprocessor Hazard Analysis via Formal Verification of Parameterized Systems

Lukáš Charvát, Aleš Smrčka, Tomáš Vojnar

HADES is a fully automated verification tool for pipeline-based microprocessors that aims at flaws caused by improperly handled data hazards. It focuses on single-pipeline microprocessors designed at the register transfer level (RTL) and deals with read-after-write, write-after-write, and write-after-read hazards. HADES combines several techniques, including data-flow analysis, error pattern matching, SMT solving, and abstract regular model checking. It has been successfully tested on several microprocessors for embedded applications.
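The three hazard classes HADES targets have compact definitions on consecutive instructions: a read-after-write (RAW) hazard when an instruction reads a register the previous one writes, and write-after-write (WAW) / write-after-read (WAR) analogously. A toy sketch on a made-up ISA; a real pipeline analysis like HADES also tracks hazard distance, stalls, and forwarding paths:

```python
# Hedged sketch of RAW/WAW/WAR classification between adjacent
# instructions of a toy program; instruction encoding is illustrative.

def classify_hazards(instrs):
    """instrs: list of (writes, reads) register sets per instruction."""
    hazards = []
    for k in range(len(instrs) - 1):
        w1, r1 = instrs[k]
        w2, r2 = instrs[k + 1]
        if w1 & r2:
            hazards.append((k, "RAW"))
        if w1 & w2:
            hazards.append((k, "WAW"))
        if r1 & w2:
            hazards.append((k, "WAR"))
    return hazards

program = [
    ({"r1"}, {"r2", "r3"}),  # add r1, r2, r3
    ({"r4"}, {"r1", "r5"}),  # add r4, r1, r5  -> RAW on r1
    ({"r1"}, {"r6"}),        # mov r1, r6      -> WAR on r1
]
print(classify_hazards(program))
```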

arXiv Open Access 2014
Moral Hazard in Dynamic Risk Management

Jakša Cvitanić, Dylan Possamaï, Nizar Touzi

We consider a contracting problem in which a principal hires an agent to manage a risky project. When the agent chooses volatility components of the output process and the principal observes the output continuously, the principal can compute the quadratic variation of the output, but not the individual components. This leads to moral hazard with respect to the risk choices of the agent. We identify a family of admissible contracts for which the optimal agent's action is explicitly characterized, and, using the recent theory of singular changes of measures for Itô processes, we study how restrictive this family is. In particular, in the special case of the standard Holmström-Milgrom model with fixed volatility, the family includes all possible contracts. We solve the principal-agent problem in the case of CARA preferences, and show that the optimal contract is linear in these factors: the contractible sources of risk, including the output, the quadratic variation of the output and the cross-variations between the output and the contractible risk sources. Thus, like sample Sharpe ratios used in practice, path-dependent contracts naturally arise when there is moral hazard with respect to risk management. In a numerical example, we show that the loss of efficiency can be significant if the principal does not use the quadratic variation component of the optimal contract.
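The contractible quantity at the heart of the setup, the quadratic variation of the observed path, can be sketched numerically: on a fine grid it is the sum of squared increments, and for a diffusion it recovers the integrated squared volatility even though the individual volatility components stay hidden. A minimal simulation with illustrative parameters:

```python
import random

# Hedged sketch: realized quadratic variation of a simulated diffusion
# path; for dX = sigma dW it converges to sigma^2 * T as the grid is
# refined. Parameters are illustrative.

def quadratic_variation(path):
    return sum((b - a) ** 2 for a, b in zip(path, path[1:]))

random.seed(42)
sigma, n = 0.3, 2000
dt = 1.0 / n                      # horizon T = 1
x, path = 0.0, [0.0]
for _ in range(n):
    x += sigma * random.gauss(0.0, 1.0) * dt ** 0.5
    path.append(x)
qv = quadratic_variation(path)
print(round(qv, 3))  # close to sigma^2 * T = 0.09
```

This is why the principal can contract on realized risk (as with sample Sharpe ratios) without ever observing the agent's component-level choices.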

en q-fin.PM, math.OC

Page 21 of 9,515