Results for "Hazardous substances and their disposal"

Showing 20 of ~190,298 results · from CrossRef, DOAJ, arXiv

arXiv Open Access 2026
The Feasibility of Potentially Hazardous Asteroids Flybys Using Multiple Venus Gravity Assists

Vladislav Zubko

This work develops low-energy spacecraft (SC) trajectories using Venus gravity assists to study asteroids during heliocentric transfer segments between planetary encounters. The study focuses on potentially hazardous asteroids (PHAs) as primary exploration targets. This paper proposes a method for calculating SC trajectories that enable asteroid flybys after a Venus gravity assist. The method involves formulating and solving an optimization problem to design trajectories incorporating flybys of selected asteroids and Venus. Trajectories are calculated using two-body dynamics by solving the Lambert problem. A preliminary search for candidate asteroids uses an algorithm to narrow the search space of the optimization problem. This algorithm uses the V-infinity globe technique to connect planetary gravity assists with resonant orbits. The resonant orbit in this case serves as an initial approximation for the SC's trajectory between two successive planetary flybys. Four flight schemes were analyzed, including multiple flybys of Venus and asteroids, with the possibility of an SC returning to Earth. The proposed solutions reduce flight time between asteroid approaches, increase gravity assist frequency, and enhance mission design flexibility. The use of Venus gravity assists and resonant orbits ensures a close encounter with at least one asteroid during the SC's trajectory between two consecutive flybys of Venus, and demonstrates the feasibility of periodic Venus gravity assists and encounters with PHAs. The developed method was applied to construct trajectories that allow an SC to approach both Venus-resonant asteroids and PHAs via multiple Venus gravity assists. An additional study was carried out to identify asteroids accessible during the Earth-Venus segment in launch windows between 2029 and 2050.
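The resonance condition used above as an initial approximation can be illustrated with Kepler's third law: in a k:m resonance with Venus, the spacecraft completes k revolutions while Venus completes m, which fixes the spacecraft's period and semi-major axis. A minimal sketch (constants approximate; illustrative, not the paper's code):

```python
import math

T_VENUS_DAYS = 224.70   # Venus sidereal period (days), approximate
A_VENUS_AU = 0.72333    # Venus semi-major axis (AU), approximate

def resonant_orbit(k_sc: int, m_venus: int):
    """Semi-major axis (AU) and period (days) of a spacecraft orbit in a
    k:m resonance with Venus: the SC completes k_sc revolutions while
    Venus completes m_venus, so both return to the encounter point
    at the same time."""
    period = T_VENUS_DAYS * m_venus / k_sc
    # Kepler's third law relative to Venus: (a / a_V)^3 = (T / T_V)^2
    a = A_VENUS_AU * (period / T_VENUS_DAYS) ** (2.0 / 3.0)
    return a, period

# A 2:3 resonance: two SC revolutions per three Venus years
a, T = resonant_orbit(2, 3)
```

The resulting period and semi-major axis would then seed the Lambert-problem search between successive Venus flybys.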

en astro-ph.EP, astro-ph.IM
arXiv Open Access 2025
Knowledge-Grounded Agentic Large Language Models for Multi-Hazard Understanding from Reconnaissance Reports

Chenchen Kuai, Zihao Li, Braden Rosen et al.

Post-disaster reconnaissance reports contain critical evidence for understanding multi-hazard interactions, yet their unstructured narratives make systematic knowledge transfer difficult. Large language models (LLMs) offer new potential for analyzing these reports, but often generate unreliable or hallucinated outputs when domain grounding is absent. This study introduces the Mixture-of-Retrieval Agentic RAG (MoRA-RAG), a knowledge-grounded LLM framework that transforms reconnaissance reports into a structured foundation for multi-hazard reasoning. The framework integrates a Mixture-of-Retrieval mechanism that dynamically routes queries across hazard-specific databases while using agentic chunking to preserve contextual coherence during retrieval. It also includes a verification loop that assesses evidence sufficiency, refines queries, and initiates targeted searches when information remains incomplete. We construct HazardRecQA by deriving question-answer pairs from GEER reconnaissance reports, which document 90 global events across seven major hazard types. MoRA-RAG achieves up to 94.5 percent accuracy, outperforming zero-shot LLMs by 30 percent and state-of-the-art RAG systems by 10 percent, while reducing hallucinations across diverse LLM architectures. MoRA-RAG also enables open-weight LLMs to achieve performance comparable to proprietary models. It establishes a new paradigm for transforming post-disaster documentation into actionable, trustworthy intelligence for hazard resilience.
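The query-routing idea behind a Mixture-of-Retrieval mechanism can be sketched with a toy keyword scorer; the hazard databases and keyword sets below are purely illustrative assumptions, not the authors' implementation:

```python
# Hypothetical hazard-specific databases and keyword profiles
HAZARD_KEYWORDS = {
    "earthquake": {"seismic", "liquefaction", "ground", "fault", "shaking"},
    "hurricane": {"wind", "storm", "surge", "flooding", "landfall"},
    "wildfire": {"fire", "burn", "smoke", "ember", "fuel"},
}

def route_query(query: str, top_k: int = 2):
    """Score each hazard database by keyword overlap with the query and
    return the top_k databases the retriever should search."""
    tokens = set(query.lower().split())
    scores = {db: len(tokens & kw) for db, kw in HAZARD_KEYWORDS.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_k]

dbs = route_query("liquefaction and ground failure after seismic shaking")
```

A production router would use learned embeddings rather than keyword overlap, but the dispatch structure is the same.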

en cs.CL, cs.AI
arXiv Open Access 2025
Geospatial AI for Liquefaction Hazard and Impact Forecasting: A Demonstrative Study in the U.S. Pacific Northwest

Morgan D. Sanger, Brett W. Maurer

Recent large-magnitude earthquakes have demonstrated the damaging consequences of soil liquefaction and reinforced the need to understand and plan for liquefaction hazards at a regional scale. In the United States, the Pacific Northwest is uniquely vulnerable to such consequences given the potential for crustal, intraslab, and subduction zone earthquakes. In this study, the liquefaction hazard is predicted geospatially at high resolution and across regional scales for 85 scenario earthquakes in the states of Washington and Oregon. This is accomplished using an emergent geospatial model that is driven by machine learning, and which predicts the probability of damaging ground deformation by surrogating state-of-practice geotechnical models. The adopted model shows improved performance and has conceptual advantages over prior regional-scale modeling approaches in that predictions (i) are informed by mechanics, (ii) employ more geospatial information using machine learning, and (iii) are geostatistically anchored to known subsurface conditions. The utility of the resulting predictions for the 85 scenarios is then demonstrated via asset and network infrastructure vulnerability assessments. The liquefaction hazard forecasts are published in a GIS-ready, public repository and are suitable for disaster simulations, evacuation route planning, network vulnerability analysis, land-use planning, insurance loss modeling, hazard communication, public investment prioritization, and other regional-scale applications.

arXiv Open Access 2025
Prompt to Protection: A Comparative Study of Multimodal LLMs in Construction Hazard Recognition

Nishi Chaudhary, S M Jamil Uddin, Sathvik Sharath Chandra et al.

The recent emergence of multimodal large language models (LLMs) has introduced new opportunities for improving visual hazard recognition on construction sites. Unlike traditional computer vision models that rely on domain-specific training and extensive datasets, modern LLMs can interpret and describe complex visual scenes using simple natural language prompts. However, despite growing interest in their applications, there has been limited investigation into how different LLMs perform in safety-critical visual tasks within the construction domain. To address this gap, this study conducts a comparative evaluation of five state-of-the-art LLMs: Claude-3 Opus, GPT-4.5, GPT-4o, GPT-o3, and Gemini 2.0 Pro, to assess their ability to identify potential hazards from real-world construction images. Each model was tested under three prompting strategies: zero-shot, few-shot, and chain-of-thought (CoT). Zero-shot prompting involved minimal instruction, few-shot incorporated basic safety context and a hazard source mnemonic, and CoT provided step-by-step reasoning examples to scaffold model thinking. Quantitative analysis was performed using precision, recall, and F1-score metrics across all conditions. Results reveal that prompting strategy significantly influenced performance, with CoT prompting consistently producing higher accuracy across models. Additionally, LLM performance varied under different conditions, with GPT-4.5 and GPT-o3 outperforming others in most settings. The findings also demonstrate the critical role of prompt design in enhancing the accuracy and consistency of multimodal LLMs for construction safety applications. This study offers actionable insights into the integration of prompt engineering and LLMs for practical hazard recognition, contributing to the development of more reliable AI-assisted safety systems.
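The precision, recall, and F1-score metrics used in the evaluation above can be computed per image by comparing the set of hazards a model names against the ground-truth set (the hazard labels below are illustrative):

```python
def precision_recall_f1(predicted: set, actual: set):
    """Per-image hazard recognition scores: predicted and actual are
    sets of hazard labels; true positives are their intersection."""
    tp = len(predicted & actual)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1

p, r, f = precision_recall_f1(
    predicted={"fall", "struck-by", "electrical"},
    actual={"fall", "struck-by", "caught-in"},
)
```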

en cs.CV, cs.AI
arXiv Open Access 2025
Systematic Hazard Analysis for Frontier AI using STPA

Simon Mylius

All of the frontier AI companies have published safety frameworks in which they define capability thresholds and risk mitigations that determine how they will safely develop and deploy their models. Adoption of systematic approaches to risk modelling, based on established practices used in safety-critical industries, has been recommended; however, frontier AI companies currently do not describe in detail any structured approach to identifying and analysing hazards. STPA (Systems-Theoretic Process Analysis) is a systematic methodology for identifying how complex systems can become unsafe, leading to hazards. It achieves this by mapping out controllers and controlled processes, then analysing their interactions and feedback loops to understand how harmful outcomes could occur (Leveson & Thomas, 2018). We evaluate STPA's ability to broaden the scope, improve traceability, and strengthen the robustness of safety assurance for frontier AI systems. Applying STPA to the threat model and scenario described in 'A Sketch of an AI Control Safety Case' (Korbak et al., 2025), we derive a list of Unsafe Control Actions. From these we select a subset and explore the Loss Scenarios that lead to them if left unmitigated. We find that STPA is able to identify causal factors that may be missed by unstructured hazard analysis methodologies, thereby improving robustness. We suggest STPA could increase the safety assurance of frontier AI when used to complement or check coverage of existing AI governance techniques, including capability thresholds, model evaluations, and emergency procedures. The application of a systematic methodology supports scalability by increasing the proportion of the analysis that could be conducted by LLMs, reducing the burden on human domain experts.
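The Unsafe Control Action step of STPA crosses each control action with the four standard UCA types from Leveson & Thomas (2018). A minimal sketch of that enumeration; the control actions below are illustrative examples for an AI-control setting, not drawn from the cited safety case:

```python
from itertools import product

# The four standard STPA unsafe-control-action types (Leveson & Thomas, 2018)
UCA_TYPES = [
    "not provided when needed",
    "provided when it causes a hazard",
    "provided too early, too late, or out of order",
    "stopped too soon or applied too long",
]

# Hypothetical control actions in an AI-control loop
CONTROL_ACTIONS = ["suspend model deployment", "escalate to human review"]

def enumerate_ucas(actions, uca_types):
    """Cross every control action with every UCA type to seed the
    candidate list that analysts (or LLMs) then assess for hazards."""
    return [f"'{a}' {t}" for a, t in product(actions, uca_types)]

ucas = enumerate_ucas(CONTROL_ACTIONS, UCA_TYPES)
```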

en cs.CY, cs.AI
arXiv Open Access 2025
A Flexible Partially Linear Single Index Proportional Hazards Regression Model for Multivariate Survival Data

Na Lei, Mark A. Wolters, Wenqing He

We address the problem of survival regression modelling with multivariate responses and nonlinear covariate effects. Our model extends the proportional hazards model by introducing several weakly-parametric elements: the marginal baseline hazard functions are expressed as piecewise constants, association is modelled with copulas, and nonlinear covariate effects are handled by a single-index structure using a spline. The model permits a full likelihood approach to inference, making it possible to obtain individual-level survival or hazard function estimates. Performance of the new model is evaluated through simulation studies and application to the Busselton health study data. The results suggest that the proposed method can capture nonlinear covariate effects well, and that there is benefit to modeling the association between the correlated responses.
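A piecewise-constant marginal baseline hazard, as used above, gives a closed-form survival function S(t) = exp(-H(t)), where the cumulative hazard H(t) sums rate-times-duration over the intervals up to t. A minimal sketch (interval cuts and rates are illustrative):

```python
import math

def survival_piecewise(t, cuts, hazards):
    """Survival S(t) = exp(-H(t)) for a piecewise-constant hazard.
    cuts: interval boundaries starting at 0; hazards[j] applies on
    [cuts[j], cuts[j+1]), and hazards[-1] beyond the last cut."""
    H = 0.0
    for j, lam in enumerate(hazards):
        lo = cuts[j]
        hi = cuts[j + 1] if j + 1 < len(cuts) else float("inf")
        if t <= lo:
            break
        H += lam * (min(t, hi) - lo)   # rate times time spent in interval
    return math.exp(-H)

s = survival_piecewise(3.0, cuts=[0.0, 1.0, 2.0], hazards=[0.5, 0.2, 0.1])
```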

en stat.ME
arXiv Open Access 2024
Robust Fuel-Optimal Landing Guidance for Hazardous Terrain using Multiple Sliding Surfaces

Sheikh Zeeshan Basar, Satadal Ghosh

In any spacecraft landing mission, fuel-efficient precision soft landing while avoiding nearby hazardous terrain is of utmost importance. Little of the existing literature has attempted to address the problems of precision soft landing and terrain avoidance simultaneously. To this end, an optimal terrain avoidance landing guidance (OTALG) was recently developed, which showed promising performance in avoiding terrain while consuming near-minimum fuel. However, its performance degrades significantly in the face of external disturbances, indicating a lack of robustness. To mitigate this problem, this paper develops a near fuel-optimal guidance law that avoids terrain and achieves precision soft landing at the desired landing site. Expanding the OTALG formulation using sliding mode control with multiple sliding surfaces (MSS), the presented guidance law, named `MSS-OTALG', improves precision soft landing accuracy. Further, the sliding parameter is designed to allow the lander to avoid terrain by leaving the trajectory enforced by the sliding mode and eventually returning to it once the terrain avoidance phase is completed. Finally, the robustness of the MSS-OTALG is established by proving practical fixed-time stability. Extensive numerical simulations showcase its performance in terms of terrain avoidance, low fuel consumption, and precision soft landing accuracy under bounded atmospheric perturbations, thrust deviations, and constraints. Comparative studies against the existing relevant literature validate the balanced trade-off among these performance measures achieved by the developed MSS-OTALG.

arXiv Open Access 2022
Brain informed transfer learning for categorizing construction hazards

Xiaoshan Zhou, Pin-Chao Liao

A transfer learning paradigm is proposed for "knowledge" transfer between the human brain and a convolutional neural network (CNN) for a construction hazard categorization task. Participants' brain activity is recorded using electroencephalogram (EEG) measurements while they view the same images (target dataset) as the CNN. The CNN is pretrained on the EEG data and then fine-tuned on the construction scene images. The results reveal that the EEG-pretrained CNN achieves 9% higher accuracy on a three-class classification task than a network with the same architecture but randomly initialized parameters. Brain activity from the left frontal cortex exhibits the highest performance gains, indicating high-level cognitive processing during hazard recognition. This work is a step toward improving machine learning algorithms by learning from human brain signals recorded via a commercially available brain-computer interface. More generalized visual recognition systems can be developed based on this "keep the human in the loop" approach.

en q-bio.NC, cs.LG
arXiv Open Access 2022
Likelihood-based Instrumental Variable Methods for Cox Proportional Hazard Models

Shunichiro Orihara

In biometrics and related fields, the Cox proportional hazards model is widely used for analyses with covariate adjustment. However, when some covariates are unobserved, an unbiased estimator usually cannot be obtained. Even when there are unmeasured covariates, instrumental variable methods can be applied under certain assumptions. In this paper, we propose a new instrumental variable estimator for the Cox proportional hazards model. The estimator shares features with that of Martinez-Camblor et al. (2019) but is not identical: it is based on the idea of limited-information maximum likelihood. We show that the estimator has good theoretical properties, and we confirm the properties of our method and of previous methods through simulation studies.

en stat.ME
arXiv Open Access 2022
A Hazard Analysis Framework for Code Synthesis Large Language Models

Heidy Khlaaf, Pamela Mishkin, Joshua Achiam et al.

Codex, a large language model (LLM) trained on a variety of codebases, exceeds the previous state of the art in its capacity to synthesize and generate code. Although Codex provides a plethora of benefits, models that may generate code on such scale have significant limitations, alignment problems, the potential to be misused, and the possibility to increase the rate of progress in technical fields that may themselves have destabilizing impacts or have misuse potential. Yet such safety impacts are not yet known or remain to be explored. In this paper, we outline a hazard analysis framework constructed at OpenAI to uncover hazards or safety risks that the deployment of models like Codex may impose technically, socially, politically, and economically. The analysis is informed by a novel evaluation framework that determines the capacity of advanced code generation techniques against the complexity and expressivity of specification prompts, and their capability to understand and execute them relative to human ability.

en cs.SE, cs.AI
arXiv Open Access 2022
Factor-Augmented Regularized Model for Hazard Regression

Pierre Bayle, Jianqing Fan

A prevalent feature of high-dimensional data is the dependence among covariates, and model selection is known to be challenging when covariates are highly correlated. To perform model selection for the high-dimensional Cox proportional hazards model in presence of correlated covariates with factor structure, we propose a new model, Factor-Augmented Regularized Model for Hazard Regression (FarmHazard), which builds upon latent factors that drive covariate dependence and extends Cox's model. This new model generates procedures that operate in two steps by learning factors and idiosyncratic components from high-dimensional covariate vectors and then using them as new predictors. Cox's model is a widely used semi-parametric model for survival analysis, where censored data and time-dependent covariates bring additional technical challenges. We prove model selection consistency and estimation consistency under mild conditions. We also develop a factor-augmented variable screening procedure to deal with strong correlations in ultra-high dimensional problems. Extensive simulations and real data experiments demonstrate that our procedures enjoy good performance and achieve better results on model selection, out-of-sample C-index and screening than alternative methods.
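The two-step structure described above — learn latent factors and idiosyncratic components from the covariates, then use them as new predictors — can be sketched with a PCA/SVD decomposition. This is an illustrative sketch of the preprocessing idea, not the authors' code, and the simulated data are hypothetical:

```python
import numpy as np

def factor_augment(X, k):
    """Estimate k latent factors driving covariate dependence via SVD,
    and return (factors, idiosyncratic components) as new predictors
    for a downstream hazard regression."""
    Xc = X - X.mean(axis=0)
    # Left singular vectors of the centered design span the factor space
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    F = U[:, :k] * s[:k]      # estimated factors (n x k)
    B = Vt[:k]                # estimated loadings (k x p)
    U_idio = Xc - F @ B       # idiosyncratic components (n x p)
    return F, U_idio

# Toy data with a true 2-factor structure plus small noise
rng = np.random.default_rng(0)
F_true = rng.normal(size=(200, 2))
B_true = rng.normal(size=(2, 10))
X = F_true @ B_true + 0.1 * rng.normal(size=(200, 10))
F_hat, U_hat = factor_augment(X, k=2)
```

Because the factors absorb the common dependence, the idiosyncratic components are nearly uncorrelated, which is what makes regularized selection behave well downstream.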

en stat.ME, math.ST
arXiv Open Access 2022
On Modeling Bivariate Left Censored Data using Reversed Hazard Rates

Durga Vasudevan, G. Asha

When observations are not quantified and are known only to be less than a threshold value, the concept of left censoring must be included in the analysis of such datasets. In many real multi-component lifetime systems, left-censored data are very common. The usual assumption that the components of a system work independently is inappropriate in a number of applications; it is more realistic to acknowledge that the working status of one component affects the remaining components. For left-censored data, it is more meaningful to work with the reversed hazard rate, proposed as a dual to the hazard rate. In this paper, we propose a model for left-censored bivariate data that incorporates the dependence among components, based on the dynamic bivariate vector reversed hazard rate proposed in Gurler (1996). The properties of the proposed model are studied. The maximum likelihood method of estimation is shown to work well for moderately large samples. A Bayesian approach to parameter estimation is also presented; the complexity of the likelihood function is handled through the Metropolis-Hastings algorithm, executed with the MHadaptive package in R. Different interval estimation techniques for the parameters are also considered. The usefulness of the model is demonstrated by analyzing real data.
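The reversed hazard rate used above is defined as r(t) = f(t)/F(t), the density over the distribution function — the dual of the usual hazard f(t)/S(t) — which conditions on the event having occurred by time t, the natural quantity under left censoring. A minimal univariate sketch for an exponential lifetime (the bivariate model in the paper is more involved):

```python
import math

def reversed_hazard_exponential(t, lam):
    """Reversed hazard rate r(t) = f(t) / F(t) for an Exponential(lam)
    lifetime; defined for t > 0 where F(t) > 0."""
    if t <= 0:
        raise ValueError("t must be positive")
    f = lam * math.exp(-lam * t)     # density f(t)
    F = 1.0 - math.exp(-lam * t)     # distribution function F(t)
    return f / F

r_half = reversed_hazard_exponential(0.5, lam=1.0)
r_one = reversed_hazard_exponential(1.0, lam=1.0)
```

Unlike the ordinary exponential hazard, which is constant, the reversed hazard decreases in t.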

en stat.ME
arXiv Open Access 2021
Kernel regression for cause-specific hazard models with time-dependent coefficients

Xiaomeng Qi, Zhangsheng Yu

Competing risk data appear widely in modern biomedical research, and cause-specific hazard models have been used extensively over the past two decades to analyze them. However, no existing study addresses the kernel likelihood method for the cause-specific hazard model with time-varying coefficients. We propose using the local partial log-likelihood approach for nonparametric estimation of time-varying coefficients. Simulation studies demonstrate that the proposed nonparametric kernel estimator performs well under the assumed finite-sample settings. Finally, we apply the proposed method to analyze a diabetes dialysis study with competing causes of death.
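Localizing a partial log-likelihood around a target time t0 amounts to weighting each event time by a kernel K((t - t0)/h). A sketch with the Epanechnikov kernel, a common choice (the specific kernel and bandwidth here are illustrative assumptions, not necessarily the authors'):

```python
def epanechnikov_weights(times, t0, bandwidth):
    """Kernel weights K((t - t0) / h) with the Epanechnikov kernel
    K(u) = 0.75 * (1 - u^2) on |u| < 1; events far from t0 get weight
    zero, so the local fit only uses nearby risk-set contributions."""
    weights = []
    for t in times:
        u = (t - t0) / bandwidth
        weights.append(0.75 * (1.0 - u * u) if abs(u) < 1.0 else 0.0)
    return weights

w = epanechnikov_weights([0.5, 1.0, 1.5, 3.0], t0=1.0, bandwidth=1.0)
```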

en stat.ME
arXiv Open Access 2021
Multi-agent Modeling of Hazard-Household-Infrastructure Nexus for Equitable Resilience Assessment

Amir Esmalian, Wanqiu Wang, Ali Mostafavi

To enable integrating social equity considerations in infrastructure resilience assessments, this study created a new computational multi-agent simulation model which enables integrated assessment of hazard, infrastructure system, and household elements and their interactions. With a focus on hurricane-induced power outages, the model consists of three elements: 1) the hazard component simulates exposure of the community to a hurricane with varying intensity levels; 2) the physical infrastructure component simulates the power network and its probabilistic failures and restoration under different hazard scenarios; and 3) the households component captures the dynamic processes related to preparation, information seeking, and response actions of households facing hurricane-induced power outages. We used empirical data from household surveys in conjunction with theoretical decision-making models to abstract and simulate the underlying mechanisms affecting experienced hardship of households. The multi-agent simulation model was then tested in the context of Harris County, Texas, and verified and validated using empirical results from Hurricane Harvey in 2017. Then, the model was used to examine effects of different factors such as forewarning durations, social network types, and restoration and resource allocation strategies on reducing the societal impacts of service disruptions in an equitable manner. The results show that improving the restoration prioritization strategy to focus on vulnerable populations is an effective approach, especially during high-intensity events. The results show the capability of the proposed computational model for capturing the dynamic and complex interactions in the nexus of humans, hazards, and infrastructure systems to better integrate human-centric aspects in resilience planning and into assessment of infrastructure systems in disasters.

en cs.CE
arXiv Open Access 2020
Simulating longitudinal data from marginal structural models using the additive hazard model

Ruth H. Keogh, Shaun R. Seaman, Jon Michael Gran et al.

Observational longitudinal data on treatments and covariates are increasingly used to investigate treatment effects, but are often subject to time-dependent confounding. Marginal structural models (MSMs), estimated using inverse probability of treatment weighting or the g-formula, are popular for handling this problem. With increasing development of advanced causal inference methods, it is important to be able to assess their performance in different scenarios to guide their application. Simulation studies are a key tool for this, but their use to evaluate causal inference methods has been limited. This paper focuses on the use of simulations for evaluations involving MSMs in studies with a time-to-event outcome. In a simulation, it is important to be able to generate the data in such a way that the correct form of any models to be fitted to those data is known. However, this is not straightforward in the longitudinal setting because it is natural for data to be generated in a sequential conditional manner, whereas MSMs involve fitting marginal rather than conditional hazard models. We provide general results that enable the form of the correctly-specified MSM to be derived based on a conditional data generating procedure, and show how the results can be applied when the conditional hazard model is an Aalen additive hazard or Cox model. Using conditional additive hazard models is advantageous because they imply additive MSMs that can be fitted using standard software. We describe and illustrate a simulation algorithm. Our results will help researchers to effectively evaluate causal inference methods via simulation.
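Sequential-conditional generation under an additive hazard can be done by inverting the cumulative hazard: draw U ~ Uniform(0,1) and solve H(T) = -log(U). A minimal sketch with a single treatment switch (a toy version of the idea, with illustrative parameters, not the paper's full algorithm):

```python
import math
import random

def simulate_event_time(lam0, beta, switch_time, u=None):
    """Simulate a survival time under the additive hazard
    lam(t) = lam0 + beta * A(t), where treatment A(t) switches from
    0 to 1 at switch_time, by inverting the cumulative hazard."""
    if u is None:
        u = random.random()
    target = -math.log(u)              # solve H(T) = -log(U)
    h_at_switch = lam0 * switch_time   # cumulative hazard at the switch
    if target <= h_at_switch:
        return target / lam0           # event before treatment starts
    # After the switch the hazard is lam0 + beta; invert the remainder
    return switch_time + (target - h_at_switch) / (lam0 + beta)

# Deterministic draw for illustration: U = exp(-1.5) gives H(T) = 1.5
t = simulate_event_time(lam0=0.5, beta=0.3, switch_time=2.0,
                        u=math.exp(-1.5))
```

Because the hazard is additive in treatment, the implied marginal model is also additive, which is what lets the correctly specified MSM be fitted with standard software.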

en stat.ME
arXiv Open Access 2016
Isotonized smooth estimators of a monotone baseline hazard in the Cox model

Hendrik P. Lopuhaä, Eni Musta

We consider two isotonic smooth estimators for a monotone baseline hazard in the Cox model, a maximum smooth likelihood estimator and a Grenander-type estimator based on the smoothed Breslow estimator for the cumulative baseline hazard. We show that they are both asymptotically normal at rate $n^{m/(2m+1)}$, where $m\geq 2$ denotes the level of smoothness considered, and we relate their limit behavior to kernel smoothed isotonic estimators studied in Lopuhaä and Musta (2016). It turns out that the Grenander-type estimator is asymptotically equivalent to the kernel smoothed isotonic estimators, while the maximum smoothed likelihood estimator exhibits the same asymptotic variance but a different bias. Finally, we present numerical results on pointwise confidence intervals that illustrate the comparable behavior of the two methods.
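The isotonization step behind a Grenander-type estimator is typically computed with the pool-adjacent-violators algorithm (PAVA), which projects a sequence onto the nondecreasing cone. An unweighted least-squares sketch (illustrative; the paper's estimators also involve smoothing, which is not shown here):

```python
def pava_increasing(y):
    """Pool-adjacent-violators: least-squares projection of y onto
    nondecreasing sequences. Adjacent blocks whose means violate
    monotonicity are merged and replaced by their pooled mean."""
    blocks = [[v, 1] for v in y]   # each block: [mean, size]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:       # violator found
            m1, n1 = blocks[i]
            m2, n2 = blocks[i + 1]
            blocks[i] = [(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2]
            del blocks[i + 1]
            i = max(i - 1, 0)                     # re-check to the left
        else:
            i += 1
    out = []
    for mean, size in blocks:
        out.extend([mean] * size)
    return out

fit = pava_increasing([1.0, 3.0, 2.0, 4.0])
```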

en math.ST
arXiv Open Access 2015
A constellation of CubeSats with synthetic tracking cameras to search for 90% of potentially hazardous near-Earth objects

Michael Shao, Slava G. Turyshev, Sara Spangelo et al.

We present a new space mission concept that is capable of finding, detecting, and tracking 90% of near-Earth objects (NEO) with H magnitude of $\rm H\leq22$ (i.e., $\sim$140 m in size) that are potentially hazardous to the Earth. The new mission concept relies on two emerging technologies: the technique of synthetic tracking and the new generation of small and capable interplanetary spacecraft. Synthetic tracking is a technique that de-streaks asteroid images by taking multiple fast exposures. With synthetic tracking, an 800 sec observation with a 10 cm telescope in space can detect a moving object with apparent magnitude of 20.5 without losing sensitivity from streaking. We refer to NEOs with a minimum orbit intersection distance of $< 0.002$ au as Earth-grazers (EGs), representing typical albedo distributions. We show that a constellation of six SmallSats (comparable in size to 9U CubeSats) equipped with 10 cm synthetic tracking cameras and evenly-distributed in 1.0 au heliocentric orbit could detect 90% of EGs with $\rm H \leq 22~mag$ in $\sim$3.8 years of observing time. A more advanced constellation of nine 20 cm telescopes could detect 90% of $\rm H=24.2~mag$ (i.e., $\rm \sim 50~m$ in size) EGs in less than 5 years.
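The shift-and-add idea behind synthetic tracking — co-adding many short exposures after shifting each one by a trial asteroid motion, so the moving source stacks coherently instead of streaking — can be sketched on toy data (whole-pixel shifts only; a real pipeline searches over many trial velocities and uses sub-pixel interpolation):

```python
import numpy as np

def shift_and_add(frames, vx, vy):
    """Co-add short exposures after shifting frame k by (-k*vx, -k*vy)
    pixels, so a source moving at (vx, vy) pixels/frame lands on the
    same pixel in every shifted frame."""
    stacked = np.zeros_like(frames[0], dtype=float)
    for k, frame in enumerate(frames):
        stacked += np.roll(frame, shift=(-k * vy, -k * vx), axis=(0, 1))
    return stacked

# Toy data: a unit point source moving 1 px/frame in x over 5 exposures
frames = []
for k in range(5):
    f = np.zeros((8, 8))
    f[4, 1 + k] = 1.0
    frames.append(f)

stacked = shift_and_add(frames, vx=1, vy=0)
```

With the correct trial velocity, the stacked image concentrates all the flux in one pixel; a wrong trial velocity spreads it out, which is how the search discriminates real movers.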

en astro-ph.IM
arXiv Open Access 2014
Behavior, Organization, Substance: Three Gestalts of General Systems Theory

Vincenzo De Florio

The term gestalt, when used in the context of general systems theory, assumes the value of "systemic touchstone", namely a figure of reference used to categorize the properties or qualities of a set of systems. Typical gestalts used in biology are those based on anatomical or physiological characteristics, which correspond respectively to architectural and organizational design choices in natural and artificial systems. In this paper we discuss three gestalts of general systems theory: behavior, organization, and substance, which refer respectively to the works of Wiener, Boulding, and Leibniz. Our major focus here is the system introduced by the latter. Through a discussion of some of the elements of the Leibnitian System, and by means of several novel interpretations of those elements in terms of today's computer science, we highlight the debt that contemporary research still has with this Giant among the giant scholars of the past.

Page 24 of 9515