Results for "Standardization. Simplification. Waste"

Showing 20 of ~454,931 results · from CrossRef, DOAJ, arXiv, Semantic Scholar

arXiv Open Access 2026
DiffSoup: Direct Differentiable Rasterization of Triangle Soup for Extreme Radiance Field Simplification

Kenji Tojo, Bernd Bickel, Nobuyuki Umetani

Radiance field reconstruction aims to recover high-quality 3D representations from multi-view RGB images. Recent advances, such as 3D Gaussian splatting, enable real-time rendering with high visual fidelity on sufficiently powerful graphics hardware. However, efficient online transmission and rendering across diverse platforms requires drastic model simplification, reducing the number of primitives by several orders of magnitude. We introduce DiffSoup, a radiance field representation that employs a soup (i.e., a highly unstructured set) of a small number of triangles with neural textures and binary opacity. We show that this binary opacity representation is directly differentiable via stochastic opacity masking, enabling stable training without a mollifier (i.e., smooth rasterization). DiffSoup can be rasterized using standard depth testing, enabling seamless integration into traditional graphics pipelines and interactive rendering on consumer-grade laptops and mobile devices. Code is available at https://github.com/kenji-tojo/diffsoup.
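The claim that binary opacity is "directly differentiable via stochastic opacity masking" can be illustrated with a straight-through-style sketch: sample a hard 0/1 mask whose expectation equals the continuous opacity, and pass gradients through the sampling step unchanged. This is a minimal numpy illustration of the general idea under those assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_binary_mask(opacity, rng):
    """Sample a hard 0/1 mask; P(mask = 1) equals the continuous opacity."""
    return (rng.random(opacity.shape) < opacity).astype(np.float64)

def straight_through_grad(upstream_grad):
    """Straight-through estimator: treat the hard threshold as identity,
    so the loss gradient flows to the continuous opacity unchanged."""
    return upstream_grad

opacity = np.full(10_000, 0.3)            # continuous opacities in [0, 1]
mask = stochastic_binary_mask(opacity, rng)

assert set(np.unique(mask)) <= {0.0, 1.0}  # strictly binary in the forward pass
assert abs(mask.mean() - 0.3) < 0.02       # unbiased: E[mask] equals opacity
```

Because the rendered primitives are strictly binary, standard depth-tested rasterization applies at inference time, while training still receives a usable gradient signal.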

en cs.GR, cs.CV
arXiv Open Access 2025
Enhanced Biogas Production via Anaerobic Co-Digestion of Slaughterhouse and Food Waste Using Ferric Oxide as a Sustainable Conductive Material

Michelle C. Almendrala, Kyle Adrienne T. Valenzuela, Steffany Marie Nina B. Santos et al.

The anaerobic co-digestion of slaughterhouse wastewater and food waste offers a sustainable approach to waste treatment and biogas production. However, little literature exists on ferric oxide as a conductive material in the co-digestion of these two substrates. This study evaluates the effect of ferric oxide on biogas yield, organic matter removal, and the kinetics of anaerobic co-digestion. Five batch tests were performed: four with varying ferric oxide doses and one control. Results showed that ferric oxide significantly enhanced total solids (TS) and volatile solids (VS) reduction. The reactor with 0.5 g ferric oxide per 800 mL working volume achieved the highest TS and VS reduction, corresponding to the maximum methane yield of 9878.95 L methane per kg volatile solids. At this optimal dosage, biogas production increased by 81 percent compared to the control. However, increasing ferric oxide beyond the optimal dosage decreased biogas yield, indicating a threshold beyond which inhibitory effects occur. In addition, at the optimal dosage, reductions in BOD and COD were observed due to enhanced microbial activity. Furthermore, ferric oxide stabilizes anaerobic digestion by mitigating inhibitory compounds and promoting direct interspecies electron transfer, leading to improved methane yield. Kinetic modeling using the logistic function accurately predicted methane production trends, demonstrating its potential for industrial-scale application. Overall, the study confirms that ferric oxide at an optimal dose significantly enhances biogas yield and system performance during anaerobic co-digestion.
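The logistic kinetic model referenced above is typically the three-parameter logistic curve for cumulative methane production. A sketch with hypothetical rate and lag parameters (the abstract reports only the maximum yield):

```python
import math

def logistic_methane(t, p_max, k, t_lag):
    """Logistic function for cumulative methane yield:
    P(t) = P_max / (1 + exp(-k * (t - t_lag)))."""
    return p_max / (1.0 + math.exp(-k * (t - t_lag)))

# Hypothetical parameters for illustration (not fitted to the paper's data):
p_max, k, t_lag = 9878.95, 0.4, 10.0   # L CH4 per kg VS, 1/day, days

yields = [logistic_methane(t, p_max, k, t_lag) for t in range(0, 41, 5)]
# Cumulative yield increases monotonically and saturates near P_max.
assert all(a < b for a, b in zip(yields, yields[1:]))
assert abs(yields[-1] - p_max) / p_max < 0.01
```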

en physics.chem-ph
arXiv Open Access 2025
Diagrammatic Simplification of Linearized Coupled Cluster Theory

Kevin Carter-Fenk

Linearized Coupled Cluster Doubles (LinCCD) often provides near-singular energies in small-gap systems that exhibit static correlation. This has been attributed to the lack of quadratic $T_2^2$ terms that typically balance out small energy denominators in the CCD amplitude equations. Herein, I show that exchange contributions to ring and crossed-ring contractions (not small denominators per se) cause the divergent behavior of LinCC(S)D approaches. Rather than omitting exchange terms, I recommend a regular and size-consistent method that retains only linear ladder diagrams. As LinCCD and configuration interaction doubles (CID) equations are isomorphic, this also implies that simplification (rather than quadratic extensions) of CID amplitude equations can lead to a size-consistent theory. Linearized ladder CCD (LinLCCD) is robust in statically-correlated systems and can be made $O(n_{\text{occ}}^4n_{\text{vir}}^2)$ with a hole-hole approximation. The results presented here show that LinLCCD and its hole-hole approximation can accurately capture energy differences, even outperforming full CCD and CCSD for non-covalent interactions in small-to-medium sized molecules, setting the stage for further adaptations of these approaches that incorporate more dynamical correlation.
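The hierarchy of approximations can be written schematically as follows (a structural sketch of the doubles amplitude equations, not the full spin-orbital expressions):

```latex
% Structure of the CCD doubles amplitude equations:
% driver + linear (ladder, ring, crossed-ring) + quadratic terms.
0 = \langle ab \,\|\, ij \rangle
  + \bigl(\hat{L}_{\mathrm{ladder}}\, T_2\bigr)_{ij}^{ab}
  + \bigl(\hat{L}_{\mathrm{ring/xring}}\, T_2\bigr)_{ij}^{ab}
  + \bigl(\hat{Q}\, T_2 T_2\bigr)_{ij}^{ab}
% LinCCD: drop the quadratic term \hat{Q} T_2 T_2.
% LinLCCD: additionally drop the ring/crossed-ring linear terms,
% retaining only the ladder diagrams; the divergent exchange
% contributions identified above live in the ring-type terms.
```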

en physics.chem-ph
arXiv Open Access 2025
Computer-Aided Multi-Stroke Character Simplification by Stroke Removal

Ryo Ishiyama, Shinnosuke Matsuo, Seiichi Uchida

Multi-stroke characters in scripts such as Chinese and Japanese can be highly complex, posing significant challenges for both native speakers and, especially, non-native learners. If these characters can be simplified without degrading their legibility, it could reduce learning barriers for non-native speakers, facilitate simpler and legible font designs, and contribute to efficient character-based communication systems. In this paper, we propose a framework to systematically simplify multi-stroke characters by selectively removing strokes while preserving their overall legibility. More specifically, we use a highly accurate character recognition model to assess legibility and remove those strokes that minimally impact it. Experimental results on 1,256 character classes with 5, 10, 15, and 20 strokes reveal several key findings, including the observation that even after removing multiple strokes, many characters remain distinguishable. These findings suggest the potential for more formalized simplification strategies.
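The framework's core loop (score each candidate removal with a recognizer, keep the least damaging one, stop when legibility drops) can be sketched as a greedy search. The toy recognizer below is hypothetical; the paper uses a trained character recognition model.

```python
def simplify_character(strokes, recognizer, min_legibility=0.9):
    """Greedily remove the stroke whose removal hurts legibility least,
    stopping once no removal keeps legibility above the threshold.
    An illustrative sketch; the paper's exact procedure may differ."""
    strokes = list(strokes)
    while len(strokes) > 1:
        candidates = [strokes[:i] + strokes[i + 1:] for i in range(len(strokes))]
        best = max(candidates, key=recognizer)
        if recognizer(best) < min_legibility:
            break
        strokes = best
    return strokes

def toy_recognizer(strokes):
    """Stand-in for the recognition model: strokes 1-3 are assumed to
    carry the character's identity (a hypothetical example)."""
    return 1.0 if {1, 2, 3} <= set(strokes) else 0.2

# The two redundant strokes are removed; the distinguishing ones survive.
assert simplify_character([1, 2, 3, 4, 5], toy_recognizer) == [1, 2, 3]
```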

en cs.CV
arXiv Open Access 2025
Quantifying Point Contributions: A Lightweight Framework for Efficient and Effective Query-Driven Trajectory Simplification

Yumeng Song, Yu Gu, Tianyi Li et al.

As large volumes of trajectory data accumulate, simplifying trajectories to reduce storage and querying costs is increasingly studied. Existing proposals face three main problems. First, they require numerous iterations to decide which GPS points to delete. Second, they focus only on the relationships between neighboring points (local information) while neglecting the overall structure (global information), reducing the global similarity between the simplified and original trajectories and making it difficult to maintain consistency in query results, especially for similarity-based queries. Finally, they fail to differentiate the importance of points with similar features, leading to a suboptimal selection of the points retained to preserve the original trajectory information. We propose MLSimp, a novel mutual-learning, query-driven trajectory simplification framework that integrates two distinct models: GNN-TS, based on graph neural networks, and Diff-TS, based on diffusion models. GNN-TS evaluates the importance of a point according to its globality, capturing its correlation with the entire trajectory, and its uniqueness, capturing its differences from neighboring points. It also incorporates attention mechanisms in the GNN layers, enabling simultaneous data integration from all points within the same trajectory and refining representations, thus avoiding iterative processing. Diff-TS generates amplified signals that enable the retention of the most important points at low compression rates. Experiments involving eight baselines on three databases show that MLSimp reduces simplification time by 42%--70% and improves query accuracy over simplified trajectories by up to 34.6%.
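The globality/uniqueness split can be illustrated with simple geometric proxies: distance from the trajectory centroid for globality, deviation from the neighbor midpoint for uniqueness. These hand-crafted scores are stand-ins for MLSimp's learned GNN scores, purely for intuition.

```python
import numpy as np

def point_importance(traj, w_global=0.5, w_unique=0.5):
    """Toy proxy for MLSimp's two signals (illustrative only): 'globality'
    as distance from the trajectory centroid, 'uniqueness' as deviation
    from the midpoint of the two neighboring points."""
    traj = np.asarray(traj, dtype=float)
    globality = np.linalg.norm(traj - traj.mean(axis=0), axis=1)
    uniqueness = np.zeros(len(traj))
    uniqueness[1:-1] = np.linalg.norm(
        traj[1:-1] - 0.5 * (traj[:-2] + traj[2:]), axis=1
    )
    score = w_global * globality + w_unique * uniqueness
    score[[0, -1]] = score.max() + 1.0   # endpoints are always retained
    return score

traj = [(0, 0), (1, 0.1), (2, 0), (3, 5), (4, 0), (5, 0)]
scores = point_importance(traj)
# The sharp detour at (3, 5) ranks as the most important interior point.
assert scores[3] == max(scores[1:-1])
```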

en cs.DB
arXiv Open Access 2025
Each Prompt Matters: Scaling Reinforcement Learning Without Wasting Rollouts on Hundred-Billion-Scale MoE

Anxiang Zeng, Haibo Zhang, Hailing Zhang et al.

We present CompassMax-V3-Thinking, a hundred-billion-scale MoE reasoning model trained with a new RL framework built on one principle: each prompt must matter. Scaling RL to this size exposes critical inefficiencies: zero-variance prompts that waste rollouts, unstable importance sampling over long horizons, advantage inversion from standard reward models, and systemic bottlenecks in rollout processing. To overcome these challenges, we introduce several unified innovations: (1) Multi-Stage Zero-Variance Elimination, which filters out non-informative prompts and stabilizes group-based policy optimization (e.g., GRPO) by removing wasted rollouts; (2) ESPO, an entropy-adaptive optimization method that balances token-level and sequence-level importance sampling to maintain stable learning dynamics; (3) a Router Replay strategy that aligns training-time MoE router decisions with inference-time behavior to mitigate train-infer discrepancies, coupled with a reward model adjustment to prevent advantage inversion; (4) a high-throughput RL system with FP8-precision rollouts, overlapped reward computation, and length-aware scheduling to eliminate performance bottlenecks. Together, these contributions form a cohesive pipeline that makes RL on hundred-billion-scale MoE models stable and efficient. The resulting model delivers strong performance across both internal and public evaluations.
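The zero-variance idea is simple to state concretely: in group-based methods like GRPO, a prompt whose rollouts all receive the same reward yields zero (or undefined) advantages, so every one of its rollouts is wasted compute. A minimal sketch of the filter, not the paper's multi-stage pipeline:

```python
import numpy as np

def grpo_advantages(group_rewards):
    """Group-normalized advantages as in GRPO: (r - mean) / std per group."""
    r = np.asarray(group_rewards, dtype=float)
    return (r - r.mean()) / r.std()

def filter_zero_variance(prompt_groups):
    """Drop prompt groups whose rollout rewards are all identical."""
    return {
        pid: rewards
        for pid, rewards in prompt_groups.items()
        if np.asarray(rewards, dtype=float).std() > 0.0
    }

groups = {
    "p1": [1.0, 1.0, 1.0, 1.0],  # all rollouts correct -> zero variance
    "p2": [0.0, 0.0, 0.0, 0.0],  # all rollouts wrong   -> zero variance
    "p3": [1.0, 0.0, 1.0, 0.0],  # informative prompt
}
kept = filter_zero_variance(groups)
assert set(kept) == {"p3"}
adv = grpo_advantages(kept["p3"])
assert abs(adv.sum()) < 1e-9     # advantages are centered within the group
```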

en cs.AI, cs.LG
arXiv Open Access 2025
On the Simplification of Neural Network Architectures for Predictive Process Monitoring

Amaan Ansari, Lukas Kirchdorfer, Raheleh Hadian

Predictive Process Monitoring (PPM) aims to forecast the future behavior of ongoing process instances using historical event data, enabling proactive decision-making. While recent advances rely heavily on deep learning models such as LSTMs and Transformers, their high computational cost hinders practical adoption. Prior work has explored data reduction techniques and alternative feature encodings, but the effect of simplifying model architectures themselves remains underexplored. In this paper, we analyze how reducing model complexity, both in terms of parameter count and architectural depth, impacts predictive performance, using two established PPM approaches. Across five diverse event logs, we show that shrinking the Transformer model by 85% results in only a 2-3% drop in performance across various PPM tasks, while the LSTM proves slightly more sensitive, particularly for waiting time prediction. Overall, our findings suggest that substantial model simplification can preserve predictive accuracy, paving the way for more efficient and scalable PPM solutions.
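An 85% parameter reduction is easy to reach by shrinking width and depth together, which a back-of-the-envelope count makes concrete. The formula below (~4·d² for attention projections plus 2·d·d_ff for the feed-forward block, biases and layer norms ignored) and the dimensions chosen are illustrative assumptions, not the exact architectures benchmarked in the paper.

```python
def transformer_params(d_model, n_layers, d_ff=None):
    """Rough per-layer parameter count for a Transformer encoder layer:
    4*d^2 attention projections + 2*d*d_ff feed-forward weights."""
    d_ff = d_ff or 4 * d_model
    per_layer = 4 * d_model**2 + 2 * d_model * d_ff
    return n_layers * per_layer

full = transformer_params(d_model=256, n_layers=4)   # hypothetical "full" model
small = transformer_params(d_model=96, n_layers=2)   # hypothetical shrunk model
reduction = 1 - small / full
assert reduction > 0.85   # comfortably past the 85% reduction discussed above
```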

en cs.LG
arXiv Open Access 2025
DisSim-FinBERT: Text Simplification for Core Message Extraction in Complex Financial Texts

Wonseong Kim, Christina Niklaus, Choong Lyol Lee et al.

This study proposes DisSim-FinBERT, a novel framework that integrates Discourse Simplification (DisSim) with Aspect-Based Sentiment Analysis (ABSA) to enhance sentiment prediction in complex financial texts. By simplifying intricate documents such as Federal Open Market Committee (FOMC) minutes, DisSim improves the precision of aspect identification, resulting in sentiment predictions that align more closely with economic events. The model preserves the original informational content and captures the inherent volatility of financial language, offering a more nuanced and accurate interpretation of long-form financial communications. This approach provides a practical tool for policymakers and analysts aiming to extract actionable insights from central bank narratives and other detailed economic documents.

en econ.EM, stat.CO
arXiv Open Access 2025
WS$^2$: Weakly Supervised Segmentation using Before-After Supervision in Waste Sorting

Andrea Marelli, Alberto Foresti, Leonardo Pesce et al.

In industrial quality control, human operators are often still indispensable for visually recognizing unwanted items within a moving heterogeneous stream. Waste sorting stands as a significant example, where operators on multiple conveyor belts manually remove unwanted objects to select specific materials. To automate this recognition problem, computer vision systems offer great potential for accurately identifying and segmenting unwanted items in such settings. Unfortunately, considering the multitude and variety of sorting tasks, fully supervised approaches are not a viable option to address this challenge, as they require extensive labeling efforts. Surprisingly, weakly supervised alternatives that leverage the implicit supervision naturally provided by the operator's removal action are relatively unexplored. In this paper, we define the concept of Before-After Supervision, illustrating how to train a segmentation network by leveraging only the visual differences between images acquired before and after the operator acts. To promote research in this direction, we introduce WS$^2$ (Weakly Supervised segmentation for Waste-Sorting), the first multiview dataset consisting of more than 11 000 high-resolution video frames captured on top of a conveyor belt, including "before" and "after" images. We also present a robust end-to-end pipeline, used to benchmark several state-of-the-art weakly supervised segmentation methods on WS$^2$.
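At its simplest, Before-After Supervision turns the operator's action into a free pseudo-label: the pixels that differ between the "before" and "after" frames mark the removed item. The sketch below assumes aligned frames; real pipelines (including the one above) must handle belt motion and registration.

```python
import numpy as np

def before_after_mask(before, after, threshold=0.2):
    """Pseudo-label for the removed object: pixels that changed between
    the 'before' and 'after' frames (aligned-frame assumption)."""
    diff = np.abs(before.astype(float) - after.astype(float))
    if diff.ndim == 3:            # reduce over color channels
        diff = diff.max(axis=-1)
    return diff > threshold

after = np.zeros((8, 8))
before = after.copy()
before[2:5, 3:6] = 1.0            # an unwanted item present only "before"
mask = before_after_mask(before, after)
assert mask.sum() == 9            # the 3x3 removed item is recovered
```

The resulting masks are noisy, which is exactly why the pipeline benchmarks weakly supervised segmentation methods rather than training directly on the raw differences.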

en cs.CV
arXiv Open Access 2024
Inductive Reasoning with Equality Predicates, Contextual Rewriting and Variant-Based Simplification

Jose Meseguer

An inductive inference system for proving validity of formulas in the initial algebra $T_{\mathcal{E}}$ of an order-sorted equational theory $\mathcal{E}$ is presented. It has 20 inference rules, but only 9 of them require user interaction; the remaining 11 can be automated as simplification rules. In this way, a substantial fraction of the proof effort can be automated. The inference rules are based on advanced equational reasoning techniques, including: equationally defined equality predicates, narrowing, constructor variant unification, variant satisfiability, order-sorted congruence closure, contextual rewriting, ordered rewriting, and recursive path orderings. All these techniques work modulo axioms $B$, for $B$ any combination of associativity and/or commutativity and/or identity axioms. Most of these inference rules have already been implemented in Maude's NuITP inductive theorem prover.

en cs.LO
arXiv Open Access 2023
Multilingual Controllable Transformer-Based Lexical Simplification

Kim Cheng Sheang, Horacio Saggion

Text is by far the most ubiquitous source of knowledge and information and should be made easily accessible to as many people as possible; however, texts often contain complex words that hinder reading comprehension and accessibility. Therefore, suggesting simpler alternatives for complex words without compromising meaning would help convey the information to a broader audience. This paper proposes mTLS, a multilingual controllable Transformer-based Lexical Simplification (LS) system fine-tuned on the T5 model. The novelty of this work lies in the use of language-specific prefixes, control tokens, and candidates extracted from pre-trained masked language models to learn simpler alternatives for complex words. The evaluation results on three well-known LS datasets -- LexMTurk, BenchLS, and NNSEval -- show that our model outperforms previous state-of-the-art models like LSBert and ConLS. Moreover, further evaluation of our approach on part of the recent TSAR-2022 multilingual LS shared-task dataset shows that our model performs competitively when compared with the participating systems for English LS and even outperforms the GPT-3 model on several metrics. Finally, our model also obtains performance gains for Spanish and Portuguese.
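The input construction described above (language-specific prefix, control tokens, MLM-extracted candidates around a marked complex word) can be sketched as plain string assembly. All token names and the prompt layout here are hypothetical; the paper's exact format is not reproduced.

```python
def build_ls_input(lang, sentence, complex_word,
                   candidates=(), control_tokens=()):
    """Assemble a T5-style lexical-simplification input with a
    language-specific prefix, control tokens, and candidate substitutes.
    Marker and token names are illustrative assumptions."""
    parts = [f"simplify {lang}:"]
    parts += list(control_tokens)                  # e.g. word-frequency bins
    parts.append(sentence.replace(complex_word, f"[T] {complex_word} [/T]"))
    if candidates:
        parts.append("candidates: " + ", ".join(candidates))
    return " ".join(parts)

prompt = build_ls_input(
    "en",
    "The committee reached a unanimous decision.",
    "unanimous",
    candidates=["agreed", "joint", "shared"],
    control_tokens=["<freq_3>"],
)
assert prompt.startswith("simplify en: <freq_3>")
assert "[T] unanimous [/T]" in prompt
```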

en cs.CL
arXiv Open Access 2023
Verification and Validation of the Stakeholder Tool for Assessing Radioactive Transportation (START)

Caitlin Condon, Philip Jensen, Patrick Royer et al.

The U.S. Department of Energy (DOE) Office of Integrated Waste Management is planning for the eventual transportation, storage, and disposal of spent nuclear fuel (SNF) and high-level radioactive waste (HLW) from nuclear power plant and DOE sites. The Stakeholder Tool for Assessing Radioactive Transportation (START) is a web-based, geospatial decision-support tool developed for evaluating routing options and other aspects of transporting SNF and HLW, covering rail, truck, barge, and intermodal infrastructure and operations in the continental United States. The verification and validation (V&V) process is intended to independently assess START and provide confidence in its ability to accurately produce the intended results. The V&V process checks the START tool using a variety of methods, ranging from independent hand calculations to comparison of START performance and results with those of other codes. The V&V activity was conducted independently from the START development team, with opportunities to provide feedback and collaborate throughout the process. The V&V analyzed attributes of transportation routes produced by START, including route distance and both population and population density captured within buffer zones around routes. Population in the buffer zone, population density in the buffer zone, and route distance were all identified as crucial outputs of the START code and were subject to V&V tasks. Some of the improvements identified through the V&V process were standardizing the underlying population data in START, changing the projection of the population raster data, and revising the methodology used for population density to improve its applicability for expected users. This collaboration also led to suggested improvements to some of the underlying shapefile segments within START.

en cs.CY, nlin.AO
arXiv Open Access 2022
Improving Bayesian radiological profiling of waste drums using Dirichlet priors, Gaussian process priors, and hierarchical modeling

Eric Laloy, Bart Rogiers, An Bielen et al.

We present three methodological improvements of the "SCK CEN approach" for Bayesian inference of the radionuclide inventory in radioactive waste drums from radiological measurements. First, we adopt the Dirichlet distribution as the prior distribution of the isotopic vector. The Dirichlet distribution possesses the attractive property that the elements of its vector samples sum to 1. Second, we demonstrate that such Dirichlet priors can be incorporated within a hierarchical modeling of the prior uncertainty in the isotopic vector when prior information about isotopic composition is available. Our Bayesian hierarchical modeling framework makes use of this available information but also acknowledges its uncertainty by letting the information content of the indirect measurement data (i.e., gamma and neutron counts) shape the actual prior distribution of the isotopic vector, to a controlled extent. Third, we propose to regularize the Bayesian inversion by using Gaussian process (GP) prior modeling when inferring 1D spatially distributed quantities. As for uncertainty in the efficiencies, we keep using the same stylized drum modeling approach as proposed in our previous work to account for the source distribution uncertainty across the vertical direction of the drum. A series of synthetic tests followed by application to a real waste drum shows that combining hierarchical modeling of the prior isotopic composition uncertainty with GP prior modeling of the vertical Pu profile across the drum works well. We also find that our GP prior can handle both cases with and without spatial correlation. The computational times involved by our proposed approach are on the order of a few hours (about 2) to provide uncertainty estimates for all variables of interest in the considered inverse problem. This warrants further investigations to speed up the inference.
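The sum-to-1 property that motivates the Dirichlet prior is easy to demonstrate directly. The concentration parameters below are hypothetical; in the paper they would be informed by prior knowledge of the isotopic composition.

```python
import numpy as np

rng = np.random.default_rng(42)

# Dirichlet prior over a 4-isotope composition vector; the concentration
# parameters (hypothetical values) encode prior knowledge of the mix.
alpha = np.array([8.0, 4.0, 2.0, 1.0])
samples = rng.dirichlet(alpha, size=1000)

# Every sampled isotopic vector sums to 1 -- the property that makes the
# Dirichlet a natural prior for compositional data.
assert np.allclose(samples.sum(axis=1), 1.0)
# The sample mean approaches alpha / alpha.sum().
assert np.allclose(samples.mean(axis=0), alpha / alpha.sum(), atol=0.03)
```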

en physics.data-an, physics.ins-det
S2 Open Access 2021
Slip tendency analysis of major faults in Germany

Luisa Röckel, Steffen Ahlers, Sophia Morawietz et al.

Abstract. Natural seismicity and tectonic activity are important processes for the site selection and the long-term safety assessment of a nuclear waste repository, as they can significantly influence the integrity of underground structures. Therefore, it is crucial to gain insight into the reactivation potential of faults. The two key factors that control the reactivation potential are (a) the geometry and properties of the fault, such as strike direction and friction angle, and (b) the orientations and magnitudes of the recent stress field and future changes to it due to exogenous processes such as glacial loading as well as anthropogenic activities in the subsurface. One measure of the reactivation potential of faults is the ratio of resolved shear stress to normal stress at the fault surface, which is called slip tendency. However, the available information on fault properties and the stress field in Germany is sparse. Geomechanical numerical modelling can provide a prediction of the required 3D stress tensor in places without stress data. Here, we present slip tendency calculations on major faults based on a 3D geomechanical numerical model of Germany and adjacent regions from the SpannEnD project (Ahlers et al., 2021). Criteria for the selection of faults relevant to the scope of the SpannEnD project were identified and 55 faults within the model area were selected. For the selected faults, simplified geometries were created. For a subset of the selected faults, vertical profiles and seismic sections could be used to generate semi-realistic 3D fault geometries. Slip tendency calculations using the stress tensor from the SpannEnD model were performed for both 3D fault sets. The slip tendencies were calculated without factoring in pore pressure and cohesion, and were normalized to a coefficient of friction of 0.6. The resulting values range mainly between 0 and 1, with 6 % of values larger than 0.4.
In general, the observed slip tendency is slightly higher for faults striking in the NW and NNE directions than for faults of other strikes. Normal faults show higher slip tendencies than reverse and strike-slip faults for the majority of faults. Seismic events are generally in good agreement with the regions of elevated slip tendencies; however, not all seismicity can be explained through the slip tendency analysis.
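The slip tendency T_s = τ/σ_n defined above follows directly from resolving the stress tensor on the fault plane. A minimal sketch with a hypothetical principal stress state (compression positive; pore pressure and cohesion neglected, as in the study):

```python
import numpy as np

def slip_tendency(stress_tensor, fault_normal):
    """Slip tendency T_s = tau / sigma_n: resolved shear stress over
    normal stress on the fault plane."""
    n = np.asarray(fault_normal, dtype=float)
    n /= np.linalg.norm(n)
    traction = stress_tensor @ n
    sigma_n = traction @ n                        # normal component
    tau = np.linalg.norm(traction - sigma_n * n)  # shear component
    return tau / sigma_n

# Hypothetical principal stress state (MPa), compression positive:
stress = np.diag([60.0, 40.0, 25.0])
# A plane whose normal lies in the sigma1-sigma3 plane, 30 deg from sigma1:
theta = np.radians(30.0)
normal = [np.cos(theta), 0.0, np.sin(theta)]
ts = slip_tendency(stress, normal)
assert 0.25 < ts < 0.35   # analytic value (sigma1-sigma3)/2*sin(2*30deg)/sigma_n
```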

2 citations · en
S2 Open Access 2021
Methodology of structured development and validation of multiphysical constitutive models using the example of crushed salt compaction under 3D THM load conditions

U. Düsterloh, S. Lerche

Abstract. The conceptual plans for the final underground disposal of radioactive waste in rock salt formations are based on extensive backfilling with crushed salt of the residual cavities left after waste deposition. It is therefore of particular importance for the historical and prognostic analysis of the load-bearing behavior and impermeability of a final repository in rock salt to demonstrate that compaction of the crushed salt backfill, which progresses over time, is suitable to seal the breaches in the geological barrier created during the underground excavation of the cavity in the long term, such that safe containment of the waste is ensured. Relevant investigations on the thermal-hydraulic-mechanical (THM) behavior of crushed salt revealed that the constitutive models for the description of crushed salt compaction, which have regularly been based on the evaluation of oedometer tests, are not suitable for a sufficiently realistic representation of the essentially three-dimensional stress-strain behavior of crushed salt depending on the external load in space and time. Evidence for the above statement lies in particular in the fact that even when standardized mixtures of crushed salt are used, a computational reanalysis of compaction tests using a standardized set of parameters has hitherto been unsuccessful when different loading scenarios were specified for these laboratory tests. This means that deformations and porosities measured in the context of one individual laboratory test can currently only be reanalyzed with sufficient quality, irrespective of the choice of constitutive model, if the model parameters are determined in relation to this specific test.
As a result, it must be stated that, on the one hand, the compaction behavior of crushed salt in space and time is not yet definitively understood, while, on the other hand, to ensure reliable, robust and sufficiently realistic statements on compaction behavior, and thus to prove the safe containment of radioactive waste in rock salt, the availability of extensive, systematically and sufficiently validated constitutive models is indispensable. This presentation introduces a methodological approach for the systematic and structured development and validation of multiphysical constitutive models, an approach that has meanwhile been successfully tested many times. The practical application of this methodology is presented here using the example of a constitutive model that takes into account the triaxial stress-strain behavior of crushed salt. The individual development and validation steps are documented for the crushed salt model, EXPO-COM, newly developed at the Chair for Waste Disposal Technologies and Geomechanics. Validation of the constitutive model is performed by means of a back-analysis of the following triaxial long-term crushed salt compaction tests:
- Test TK-031 of the German Federal Institute for Geosciences and Natural Resources (Bundesanstalt für Geowissenschaften und Rohstoffe, BGR) for isotropic load conditions
- Tests V1 (dry), V2 (w = 0.1 %), and V3 (wet) of the German Society for Plant and Reactor Safety (Gesellschaft für Anlagen- und Reaktorsicherheit gGmbH, GRS) for different stress and temperature levels as well as humidity
- Test TUC_V2 of the Clausthal University of Technology (TUC) for isotropic and deviatoric stress conditions
The TUC_V2 test characterizes, in the context of the methodology for the structured development and validation of multiphysical constitutive models, an innovative test method geared towards constitutive model development, in which the loading boundary conditions specified in the test guarantee the isolated analysis of individual factors influencing compaction behavior (Fig. 1). A description of the tests and test techniques that are still required for the full development and validation of the EXPO-COM constitutive model, planned as part of the KOMPASS II research project, is given together with a description of methodological guidelines relating to requirements on the reliability, functionality, practicability, and validity ranges of the EXPO-COM constitutive model (Fig. 2). As a result of the subsequently possible comparison of experimentally validated and not-yet-validated dependencies or process variables, a validation status is defined for the constitutive model EXPO-COM. This validation status shows which factors influencing the THM-coupled material behavior of crushed salt are currently taken into account sufficiently realistically, and which influencing factors cannot yet be validated by the constitutive model. The main objectives of the tests to be carried out as part of the KOMPASS II research project include:
- continued validation based on the systematized database to be created in KOMPASS II;
- testing of the constitutive model in the context of numerical analyses of its predictive quality and numerical stability for in situ relevant stress boundary conditions, prediction times, and material properties.

1 citation · en
S2 Open Access 2021
The recent stress state of Germany – results of a geomechanical–numerical 3D model

Steffen Ahlers, A. Henk, T. Hergert et al.

Abstract. A decisive criterion for the selection and the long-term safety of a deep geological repository for high-level radioactive waste is the crustal stress state and its future changes. The basis of any prognosis is the recent crustal stress state, but the state of knowledge in Germany is quite low in this respect. There are stress orientation data provided by the World Stress Map (WSM, Heidbach et al., 2018) and stress magnitude data from a database (Morawietz et al., 2020) for Germany, both providing selective information on the recent stress field. However, these data are often incomplete, of low quality, and spatially unevenly distributed. Therefore, a continuous 3D description is not yet possible with these data, except at most for the orientation of the maximum horizontal stress (SHmax), and not for the more important magnitudes of the minimum (Shmin) and maximum horizontal stress. In the course of the SpannEnD project, a geomechanical-numerical 3D model of Germany is created, with which a continuous description of the complete tensor of the recent stress field in Germany is possible. The model covers an area of 1250×1000 km2, from Poland in the east to France in the west and from Italy in the south to Scandinavia in the north. The depth extent is 100 km. Even though the focus is primarily on Germany, the model area was chosen to be this wide to minimize boundary effects and to simplify the definition of the displacement boundary conditions, which are ideally oriented perpendicular or parallel to the orientation of SHmax. The model contains a total of 21 units: the upper part of the lithospheric mantle, the lower crust, four laterally overlapping units of the upper crust, and 14 stratigraphic units of the sedimentary cover. The stratigraphic subdivision of the sedimentary cover is done only in the core area of the model, because this area is the focus of our study, our calibration data are mainly from this region, and well-resolved geometry data are available there.
Outside of the core area, the sediments are grouped into an undifferentiated unit. The units are parameterized with density and elastic material parameters (Poisson's ratio and Young's modulus). The model has a lateral resolution of 2.5×2.5 km2 and a vertical resolution of at most 240 m; in total it includes 11.1 million hexahedral elements. The equilibrium between body and surface forces is solved with the finite element method. The model is calibrated with Shmin and SHmax magnitudes from the WSM and data from the stress magnitude database. First, an initial stress state is generated; in a second step, displacement boundary conditions are defined at the model edges and adjusted until a best fit to the calibration data is found. The results show good agreement with both the SHmax orientation data from the WSM and the magnitudes of the two principal horizontal stresses (Shmin and SHmax) from the magnitude database.

1 citation · en
arXiv Open Access 2021
Simplification of the local full vertex in the impurity problem in DMFT and its applications for the nonlocal correlation

Ryota Mizuno, Masayuki Ochi, Kazuhiko Kuroki

The two-particle vertex function is crucial for diagrammatic extensions of DMFT that capture nonlocal fluctuations. However, estimating two-particle quantities is still a challenging task. In this study, we propose a simplification of the local two-particle full vertex and, using the simplified full vertex, develop two methods to take the nonlocal fluctuations into account. We apply these methods to several models and confirm that they can capture important behaviors such as the pseudogap in the DMFT + nonlocal calculation. In addition, the numerical costs are greatly reduced compared to the conventional methods.

en cond-mat.str-el
arXiv Open Access 2021
Minimum-Complexity Graph Simplification under Fréchet-Like Distances

Omrit Filtser, Majid Mirzanezhad, Carola Wenk

Simplifying graphs is a widely applicable problem in numerous domains, especially in computational geometry. Given a geometric graph and a threshold, the minimum-complexity graph simplification problem asks for an alternative graph of minimum complexity such that the distance between the two graphs remains at most the threshold. In this paper, we present several NP-hardness and algorithmic results depending on the type of the input and simplified graphs, the vertex placement of the simplified graph, and the distance measures between them (graph and traversal distances [1,2]). In general, we show that for arbitrary input and output graphs, the problem is NP-hard under some specific vertex placements of the simplified graph. When the input and output are trees, and the graph distance is applied from the simplified tree to the input tree, we give an $O(kn^5)$ time algorithm, where $k$ is the number of leaves of the two trees that are identical and $n$ is the number of vertices of the input.

en cs.CG, cs.DS
arXiv Open Access 2020
Optimizing Waste Management Collection Routes in Urban Haiti: A Collaboration between DataKind and SOIL

Michael Dowd, Anna Dixon, Benjamin Kinsella

Sustainable Organic Integrated Livelihoods (SOIL) is a research and development organization that aims to increase access to cost-effective household sanitation services in urban communities in Haiti. Each week, SOIL provides over 1,000 households with ecological sanitation toilets, then transports the waste to be transformed into rich compost. However, SOIL faces several challenges regarding the route optimization of its mixed-fleet vehicle routing. This paper builds upon the authors' submission to Bloomberg's 2019 Data for Good Exchange (D4GX), presenting preliminary findings from a joint collaboration between DataKind, a data science nonprofit, and SOIL. This research showcases how optimization algorithms and open-source tools (i.e., OpenStreetMap and Google OR-Tools) can help improve and reduce the costs of mixed-fleet routing problems, particularly in the context of developing countries. As a result of this work, SOIL is able to make improvements to its collection routes, which account for different road conditions and vehicle types. These improvements reduce operational costs and fuel use, which are essential to the service's expansion in the coming years.
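To make the routing problem concrete: even the simplest construction heuristic for a single-vehicle collection route can be written in a few lines. The paper's actual system uses OpenStreetMap road-network distances and Google OR-Tools; the Euclidean, nearest-neighbor toy below is a simplifying assumption for illustration only.

```python
import math

def nearest_neighbor_route(depot, stops):
    """Greedy nearest-neighbor construction heuristic for a single-vehicle
    collection route (illustrative sketch; real deployments use road-network
    distances, multiple vehicles, and capacity constraints)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    route, current, remaining = [depot], depot, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda s: dist(current, s))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(depot)           # return to depot when the truck is full
    return route

depot = (0.0, 0.0)
stops = [(2.0, 0.0), (1.0, 0.0), (3.0, 0.0)]
route = nearest_neighbor_route(depot, stops)
assert route == [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (0.0, 0.0)]
```

Solvers like OR-Tools improve on such greedy constructions with local search and can encode the mixed-fleet and road-condition constraints the paper describes.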

en cs.CY
S2 Open Access 2019
Modeling of strontium leaching from carbonated Portland cement pastes using a simplified diffusion-kinetic analytical model

E. Boukobza, G. Bar-Nes, O. Ben-David et al.

Abstract One of the main challenges in nuclear waste management is to predict release of radionuclides during their long-term disposal within an intact matrix in the repository. One way to tackle this challenge is to conduct leaching experiments which emulate radionuclide release under extreme conditions in a relatively short time. In this work we present a simple analytical diffusion-kinetic model for strontium leaching from cylindrical samples of Portland cement paste. The model accounts explicitly for both strontium diffusion and strontium carbonate precipitation. We compare this model with a standard diffusion model, and demonstrate that it better fits experimental strontium leaching data from samples that showed minor carbonation, as well as samples that showed atmospheric carbonation. This diffusion-kinetic model gives rise to narrower prediction bounds and substantially smaller errors. Furthermore, it provides experimentalists conducting leaching tests an easily implementable tool to analyze their data in systems where precipitation is expected to occur. The approach presented here may serve as an alternative to a plain diffusion analysis often found in standardized leaching protocols, and to more intricate thermodynamic numerical software.
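The plain-diffusion baseline that the paper compares against is the classical square-root-of-time release from a semi-infinite solid, CFL(t) = 2·(S/V)·sqrt(De·t/π). The sketch below shows only this baseline with hypothetical parameter values; the paper's diffusion-kinetic model adds a strontium carbonate precipitation term on top of it.

```python
import math

def cumulative_fraction_leached(t, De, S_over_V):
    """Plain-diffusion leaching from a semi-infinite solid:
    CFL(t) = 2 * (S/V) * sqrt(De * t / pi)."""
    return 2.0 * S_over_V * math.sqrt(De * t / math.pi)

De = 1e-12          # effective diffusion coefficient, m^2/s (assumed)
S_over_V = 50.0     # surface-to-volume ratio, 1/m (assumed)
days = [1, 4, 9, 16]
cfl = [cumulative_fraction_leached(d * 86400.0, De, S_over_V) for d in days]

# Pure diffusion gives sqrt(t) release: quadrupling the elapsed time
# doubles the leached fraction -- the signature a precipitation sink breaks.
assert abs(cfl[1] / cfl[0] - 2.0) < 1e-9
assert abs(cfl[3] / cfl[1] - 2.0) < 1e-9
```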

6 citations · en · Environmental Science

Page 30 of 22,747