Jianlu Zhang, Zhong Xie, Jiujun Zhang et al.
Results for "Standardization. Simplification. Waste"
Showing 20 of ~454,868 results · from DOAJ, CrossRef, arXiv, Semantic Scholar
Md. Adnanul Islam, Wasimul Karim, Md Mahbub Alam et al.
Accurate weight estimation of commercial and industrial waste is important for efficient operations, yet image-based estimation remains difficult because similar-looking objects may have different densities and visible size changes with camera distance. To address this problem, we propose the Multimodal Weight Predictor (MWP) framework, which estimates waste weight by combining RGB images with physics-informed metadata, including object dimensions, camera distance, and camera height. We also introduce Waste-Weight-10K, a real-world dataset containing 10,421 synchronized image-metadata pairs collected from logistics and recycling sites. The dataset covers 11 waste categories and a wide weight range from 3.5 to 3,450 kg. Our model uses a Vision Transformer for visual features and a dedicated metadata encoder for geometric and category information, combining them with Stacked Mutual Attention Fusion that allows visual and physical cues to guide each other. This helps the model manage perspective effects and link objects to material properties. To ensure stable performance across the wide weight range, we train the model using Mean Squared Logarithmic Error. On the test set, the proposed method achieves 88.06 kg Mean Absolute Error (MAE), 6.39% Mean Absolute Percentage Error (MAPE), and an R2 coefficient of 0.9548. The model shows strong accuracy for light objects in the 0-100 kg range, with 2.38 kg MAE and 3.1% MAPE, while maintaining reliable performance for heavy waste in the 1000-2000 kg range with 11.1% MAPE. Finally, we incorporate a physically grounded explanation module using Shapley Additive Explanations (SHAP) and a large language model to provide clear, human-readable explanations for each prediction.
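As a quick illustration of the logarithmic training objective this abstract mentions, the following is a minimal sketch of the Mean Squared Logarithmic Error in PyTorch; the tensor names and example values are illustrative only and are not taken from the paper.

```python
# Minimal sketch of the Mean Squared Logarithmic Error (MSLE) objective the
# abstract describes for stabilizing training across a 3.5-3,450 kg range.
# The fusion architecture itself is not reproduced here; names and values
# are illustrative, not taken from the paper's code.
import torch

def msle_loss(pred_kg: torch.Tensor, target_kg: torch.Tensor) -> torch.Tensor:
    """Squared error in log space, so a 50 kg miss on a light object costs
    far more than the same absolute miss on a very heavy object."""
    return torch.mean((torch.log1p(pred_kg) - torch.log1p(target_kg)) ** 2)

# Example: identical 50 kg absolute errors on a light and a heavy item.
light = msle_loss(torch.tensor([150.0]), torch.tensor([100.0]))    # large penalty
heavy = msle_loss(torch.tensor([2050.0]), torch.tensor([2000.0]))  # small penalty
print(light.item(), heavy.item())
```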
Jinhong Jeong, Junghun Park, Youngjae Yu
Text simplification supports second language (L2) learning by providing comprehensible input, consistent with the Input Hypothesis. However, constructing personalized parallel corpora is costly, while existing large language model (LLM)-based readability control methods rely on pre-labeled sentence corpora and primarily target English. We propose Re-RIGHT, a unified reinforcement learning framework for adaptive multilingual text simplification without parallel corpus supervision. We first show that prompting-based lexical simplification at target proficiency levels (CEFR, JLPT, TOPIK, and HSK) performs poorly at easier levels and for non-English languages, even with state-of-the-art LLMs such as GPT-5.2 and Gemini 2.5. To address this, we collect 43K vocabulary-level data points across four languages (English, Japanese, Korean, and Chinese) and train a compact 4B policy model using Re-RIGHT, which integrates three reward modules: vocabulary coverage, semantic preservation, and coherence. Compared to these stronger LLM baselines, Re-RIGHT achieves higher lexical coverage at target proficiency levels while maintaining original meaning and fluency.
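The abstract names three reward modules (vocabulary coverage, semantic preservation, coherence). The sketch below shows one plausible way such signals could be combined into a single scalar reward; all function names and weights are hypothetical assumptions, not the paper's actual reward design.

```python
# Hypothetical sketch of combining the three reward signals named in the
# abstract into one scalar for policy optimization. The scoring callables
# and the weights are placeholders, not the paper's actual reward modules.
from typing import Callable, Dict

def combined_reward(
    source: str,
    simplified: str,
    target_vocab: set,
    coverage_fn: Callable[[str, set], float],   # fraction of tokens within the target level
    semantic_fn: Callable[[str, str], float],   # e.g. embedding similarity in [0, 1]
    coherence_fn: Callable[[str], float],       # e.g. normalized fluency score
    weights: Dict[str, float],
) -> float:
    cov = coverage_fn(simplified, target_vocab)
    sem = semantic_fn(source, simplified)
    coh = coherence_fn(simplified)
    return weights["cov"] * cov + weights["sem"] * sem + weights["coh"] * coh
```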
Domantas Dilys, Hamish Carr, Steven Boeing
Many scientific and engineering problems are modelled by simulating scalar fields defined either on space-filling meshes (Eulerian) or as particles (Lagrangian). For analysis and visualization, topological primitives such as contour trees can be used, but these often need simplification to filter out small-scale features. For parcel-based convective cloud simulations, simplification of the contour tree requires a volumetric measure rather than persistence. Unlike for cubic meshes, volume cannot be approximated by counting regular vertices. Typically, this is addressed by resampling irregular data onto a uniform grid. Unfortunately, the spatial proximity of parcels requires a high sampling frequency, resulting in a massive increase in data size for processing. We therefore extend volume-based contour tree simplification to parcel-in-cell simulations with a graph adaptor in Viskores (formerly called VTK-m), using Delaunay tetrahedralization of the parcel centroids as input. Instead of relying on a volume approximation by counting regular vertices -- as was done for cubic meshes -- we adapt the 2D area splines reported by Bajaj et al. (doi:10.1145/259081.259279) and Zhou et al. (doi:10.1109/TVCG.2018.2796555). We implement this in Viskores as prefix-sum style hypersweeps for parallel efficiency and show how the approach can be generalized to compute any integrable property. Finally, our results reveal that computing contour trees directly on the parcels is orders of magnitude faster than computing them on a resampled grid, while also arguably offering better-quality segmentation by avoiding interpolation artifacts.
Qiang Ji, Lin Cheng, Zeng Liang et al.
To address the lack of energy-carbon efficiency evaluation and the underutilization of low-temperature waste heat in traditional direct reduction iron (DRI) production, this paper proposes a novel zero-carbon hydrogen metallurgy system that integrates the recovery and utilization of low-temperature and high-temperature waste heat, internal energy, and cold energy during hydrogen production, storage, reaction, and circulation. Firstly, detailed mathematical models are developed to describe the energy and exergy characteristics of the operational components in the proposed zero-carbon hydrogen metallurgy system. Additionally, energy efficiency, exergy efficiency, and energy-carbon efficiency indices are introduced from a full life-cycle perspective of energy flow, avoiding overlaps in energy inputs and outputs. Subsequently, the efficiency metrics of the proposed zero-carbon hydrogen metallurgy system are compared with those of traditional DRI production systems with H$_2$/CO ratios of 6:4 and 8:2. The comparative results demonstrate the superiority and advancement of the proposed zero-carbon hydrogen metallurgy system. Finally, sensitivity analysis reveals that the overall electrical energy generated by incorporating the ORC and expander equipment exceeds the heat energy recovered from the furnace top gas, highlighting the potential of waste energy utilization.
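For readers unfamiliar with the indices mentioned here, the sketch below illustrates generic ratio-style energy and exergy efficiencies, plus one plausible (assumed, not the paper's) form for a combined energy-carbon index.

```python
# Generic illustration of ratio-style efficiency indices of the kind the
# abstract introduces. The function names, terms, and the combined
# energy-carbon form are placeholders, not the paper's definitions or data.
def energy_efficiency(useful_energy_out_mj: float, energy_in_mj: float) -> float:
    """First-law efficiency: useful energy delivered per unit energy supplied."""
    return useful_energy_out_mj / energy_in_mj

def exergy_efficiency(exergy_out_mj: float, exergy_in_mj: float) -> float:
    """Second-law efficiency: useful work potential recovered per unit supplied."""
    return exergy_out_mj / exergy_in_mj

def energy_carbon_index(useful_energy_out_mj: float,
                        energy_in_mj: float,
                        co2_emitted_kg: float,
                        co2_reference_kg: float) -> float:
    """One plausible combined index: energy efficiency scaled by emissions
    avoided relative to a reference route (e.g. conventional DRI)."""
    return energy_efficiency(useful_energy_out_mj, energy_in_mj) * (
        1.0 - co2_emitted_kg / co2_reference_kg
    )
```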
Martin Fleischmann, Anastassia Vybornova, James D. Gaboardi et al.
Street network data is widely used to study human-based activities and urban structure. Often, these data are geared towards transportation applications, which require highly granular, directed graphs that capture the complex relationships of potential traffic patterns. While this level of network detail is critical for certain fine-grained mobility models, it represents a hindrance for studies concerned with the morphology of the street network. For the latter case, street network simplification - the process of converting a highly granular input network into its most simple morphological form - is a necessary but highly tedious preprocessing step, especially when conducted manually. In this manuscript, we develop and present a novel adaptive algorithm for simplifying street networks that is both fully automated and able to mimic results obtained through a manual simplification routine. The algorithm - available in the neatnet Python package - outperforms current state-of-the-art procedures when compared against manually simplified, human-produced data, while preserving network continuity.
Artem Malko
Following the classical results of Stong, we introduce a cohomological analogue of a core of a finite sheaved topological space and propose an algorithm for simplification in this category. In particular we generalize the notion of beat vertices and show that if a vertex of a sheaved space has topologically acyclic downset (with trivial coefficients), then its removal preserves the sheaf cohomology.
Ahmad Faudzi Musib
Sarawak stands out for its multicultural flexibility in comparison to the larger Malaysian nation-state, where pluralism is all-encompassing and dominant. That is, Sarawak society appears to be more open to appreciating the complex tapestry of its people's lives as well as their stated desire for an identity distinct from any dominant culture. The growth of regional music on audio carriers promotes the economics of a region. Aside from generating revenue, the music of the Kayan and Kenyah and other groups of people living in the region can be shared with people in other countries, attracting tourists and social scientists from around the world. Among the main protagonists are Tusau Padan, Matthew Ngau Jau, Jerry Kamit, and Tuyang Tan Ngan. This study compares and contrasts the performance ideas of their sape playing and productions, addressing the simplification and refining of sound, devices, musical instruments, and settings, drawing on some elements of Hendrix and The Edge (U2's guitarist). The analysis focuses on specific sound qualities, the gadgets employed, and the musical instruments as a whole. The discourse encompasses both the artists' perspectives and the audience's comprehension.
Nikita Katyal, Pawan Kumar Rajpoot
Text Simplification is an ongoing problem in Natural Language Processing, the solution to which has varied implications. In the context of the TSAR-2022 Workshop at EMNLP 2022, Lexical Simplification is the process of reducing the lexical complexity of a text by replacing difficult words with easier-to-read (or understand) expressions while preserving the original information and meaning. This paper describes the work done by our team "teamPN" for the English sub-task. We created a multi-level, modular pipeline which combines modern transformer-based models with traditional NLP methods like paraphrasing and verb sense disambiguation, where the target text is treated according to its semantics (part-of-speech tag). The pipeline is multi-level as we utilize multiple source models to find potential candidates for replacement, and modular as we can switch the source models and adjust their weightage in the final re-ranking.
Kai North, Tharindu Ranasinghe, Matthew Shardlow et al.
Lexical Simplification (LS) is the task of replacing complex words with simpler ones in a sentence whilst preserving the sentence's original meaning. LS is the lexical component of Text Simplification (TS), with the aim of making texts more accessible to various target populations. A past survey (Paetzold and Specia, 2017) provided a detailed overview of LS. Since that survey, however, the AI/NLP community has been taken by storm by recent advances in deep learning, particularly with the introduction of large language models (LLMs) and prompt learning. The high performance of these models has sparked renewed interest in LS. To reflect these recent advances, we present a comprehensive survey of papers published between 2017 and 2023 on LS and its sub-tasks, with a special focus on deep learning. We also present benchmark datasets for the future development of LS systems.
Randy Shoemaker, Sam Sartor, Pieter Peers
This paper presents a novel simplification method for removing vertices from an intrinsic triangulation corresponding to extrinsic vertices lying on near-developable (i.e., with limited Gaussian curvature) and general surfaces. We greedily process all intrinsic vertices with an absolute Gaussian curvature below a user selected threshold. For each vertex, we repeatedly perform local intrinsic edge flips until the vertex reaches the desired valence (three for internal vertices or two for boundary vertices) such that removal of the vertex and incident edges can be locally performed in the intrinsic triangulation. Each removed vertex's intrinsic location is tracked via (intrinsic) barycentric coordinates that are updated to reflect changes in the intrinsic triangulation. We demonstrate the robustness and effectiveness of our method on the Thingi10k dataset and analyze the effect of the curvature threshold on the solutions of PDEs.
V. Fischer, M. Pacheco Paneque, A. Legrain et al.
In most Swiss municipalities, a curbside system consisting of heavy trucks stopping at almost every household is used for non-recoverable waste collection. Due to the many stops of the trucks, this strategy causes high fuel consumption, emissions, and noise. These effects can be alleviated by reducing the number of stops performed by collection vehicles. One possibility consists of locating collection points throughout the municipality such that residents bring their waste to their most preferred location. The optimization problem consists of selecting a subset of candidate locations to place the points such that each household disposes of its waste at its most preferred location. Provided that the underlying road network is available, we refer to this optimization problem as the capacitated multi-vehicle covering tour problem on a road network (Cm-CTP-R). We introduce two mixed-integer linear programming (MILP) formulations: a road-network-based formulation that exploits the sparsity of the network and a customer-based formulation typically used in vehicle routing problems (VRP). To solve large instances, we propose a two-phase heuristic approach that addresses the two subproblems the Cm-CTP-R is built on: a set covering problem to select the locations and a split-delivery VRP to determine the routes. Computational experiments on both small and real-life instances show that the road-network-based formulation is better suited. Furthermore, the proposed heuristic provides good solutions, with optimality gaps below 0.5% and 3.5% for 75% of the small and real-life instances, respectively, and is able to find better solutions than the exact method for many real-life instances.
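The first phase of the proposed heuristic is a set covering problem for selecting collection-point locations. Below is a minimal PuLP sketch of plain set covering with toy data; capacities, household preferences, and the routing phase of the Cm-CTP-R are deliberately omitted, so this is not the paper's formulation.

```python
# Minimal set covering sketch for a location-selection phase, using PuLP.
# Candidate sites and their coverage sets are toy placeholders.
import pulp

households = ["h1", "h2", "h3", "h4"]
candidates = {  # candidate location -> households it can serve
    "c1": {"h1", "h2"},
    "c2": {"h2", "h3"},
    "c3": {"h3", "h4"},
}

prob = pulp.LpProblem("collection_point_selection", pulp.LpMinimize)
open_site = {c: pulp.LpVariable(f"open_{c}", cat="Binary") for c in candidates}

# Objective: open as few collection points as possible.
prob += pulp.lpSum(open_site.values())

# Each household must be covered by at least one opened candidate location.
for h in households:
    prob += pulp.lpSum(open_site[c] for c, served in candidates.items() if h in served) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([c for c in candidates if open_site[c].value() == 1])
```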
Ahmet Ilker Topuz, Madis Kiisk, Andrea Giammanco
In this study, we employ Monte Carlo simulations using the GEANT4 code to demonstrate the capability of muon tomography based on dual-parameter analysis for the examination of nuclear waste barrels. Our current hodoscope setup consists of three top and three bottom plastic scintillators made of polyvinyl toluene with a thickness of 0.4 cm, and the composite target material is a cylindrical nuclear waste drum with a height of 96 cm and a radius of 29.6 cm, where the outermost layer is stainless steel with a lateral thickness of 3.2 cm and the filling material is ordinary concrete that encapsulates nuclear materials of dimensions 20$\times$20$\times$20 cm$^{3}$. By simulating a narrow planar muon beam of 1$\times$1 cm$^{2}$ over a uniform energy interval between 0.1 and 8 GeV, we determine the variation of the average scattering angle together with its standard deviation using a 0.5-GeV bin length, the counts of the scattering angle using a 1-mrad step, and the number of absorption events for five prevalent nuclear materials ranging from cobalt to plutonium. Via the dual-parameter analysis founded on both the scattering angle and the absorption, we show that the presence of nuclear materials in the waste barrels is numerically visible in comparison with a concrete-filled waste drum without any nuclear material, and that muon tomography is capable of distinguishing these nuclear materials by coupling the scattering-angle information with the number of absorption events in cases where one of these two parameters yields strong similarity for certain nuclear materials.
Eric Laloy, Bart Rogiers, An Bielen et al.
We present a Bayesian approach to probabilistically infer vertical activity profiles within a radioactive waste drum from segmented gamma scanning (SGS) measurements. Our approach resorts to Markov chain Monte Carlo (MCMC) sampling using the state-of-the-art Hamiltonian Monte Carlo (HMC) technique and accounts for two important sources of uncertainty: the measurement uncertainty and the uncertainty in the source distribution within the drum. In addition, our efficiency model simulates the contributions of all considered segments to each count measurement. Our approach is first demonstrated with a synthetic example, after which it is used to resolve the vertical activity distribution of 5 nuclides in a real waste package.
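The forward model described here maps segment activities to expected counts through an efficiency matrix. The sketch below illustrates that idea on synthetic data and, for brevity, uses a plain random-walk Metropolis sampler rather than the paper's Hamiltonian Monte Carlo; the matrix, priors, and numbers are assumptions for illustration only.

```python
# Conceptual sketch: every drum segment contributes to every count
# measurement through an efficiency matrix E, and the activity profile is
# inferred in a Bayesian way. Synthetic data; simple Metropolis, not HMC.
import numpy as np

rng = np.random.default_rng(0)
n_segments = 5

# Hypothetical efficiency matrix: E[i, j] = expected counts in measurement i
# per unit activity in segment j (off-diagonal terms model cross-talk).
E = 5.0 * np.exp(-np.abs(np.subtract.outer(np.arange(n_segments), np.arange(n_segments))))
true_activity = np.array([5.0, 20.0, 60.0, 15.0, 2.0])
counts = rng.poisson(E @ true_activity)

def log_posterior(activity: np.ndarray) -> float:
    if np.any(activity < 0):
        return -np.inf
    lam = E @ activity + 1e-12
    return float(np.sum(counts * np.log(lam) - lam))  # Poisson likelihood, flat prior

samples, current = [], np.full(n_segments, 10.0)
current_lp = log_posterior(current)
for _ in range(20000):
    proposal = current + rng.normal(scale=0.5, size=n_segments)
    lp = log_posterior(proposal)
    if np.log(rng.uniform()) < lp - current_lp:
        current, current_lp = proposal, lp
    samples.append(current.copy())

print(np.mean(samples[5000:], axis=0))  # posterior mean activity per segment
```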
Makeda K. Stephenson, W. Grayson
Bioreactors have become indispensable tools in the cell-based therapy industry. Various forms of bioreactors are used to maintain well-controlled microenvironments to regulate cell growth, differentiation, and tissue development. They are essential for providing standardized, reproducible cell-based products for regenerative medicine applications or to establish physiologically relevant in vitro models for testing of pharmacologic agents. In this review, we discuss three main classes of bioreactors: cell expansion bioreactors, tissue engineering bioreactors, and lab-on-a-chip systems. We briefly examine the factors driving concerted research endeavors in each of these areas and describe the major advancements that have been reported in the last three years. Emerging issues that impact the commercialization and clinical use of bioreactors include (i) the need to scale up to greater cell quantities and larger graft sizes, (ii) simplification of in vivo systems to function without exogenous stem cells or growth factors or both, and (iii) increased control in the manufacture and monitoring of miniaturized systems to better capture complex tissue and organ physiology.
I. Piccoli, A. Torreggiani, C. Pituello et al.
Biochar from agricultural biomasses and solid wastes represents a win-win solution for rational waste management. Its sustainable usage requires the identification and standardization of biochar characteristics. The aim of this work was to identify the physical-chemical and spatial characteristics of biochars from pruning residues (PR), poultry litter (PL), and anaerobic cattle digestate (CD) at two pyrolysis temperatures (350 °C and 550 °C). The biochar characterization was carried out by applying emerging imaging techniques, 2D automated optical image analysis and hyperspectral enhanced dark-field microscopy (EDFM), and by SEM analysis. As expected, the feedstock composition and the pyrolysis temperature strongly influence the physical structure of the biochar samples. Irrespective of charring temperature, PR biochar was mainly characterized by a broken and fragmented structure with an irregular and rough particle surface, completely different from the original PR wood cell. The EDFM imaging analysis evidenced the thermal degradation of PR vegetal products, composed primarily of hemicellulose, cellulose, and lignin. On the contrary, small and regular particles with a smooth surface were produced by the PL pyrolysis, especially at 550 °C, due to the lower PL morphological homogeneity in comparison with the other biomasses. Finally, CD charring at both temperatures was characterized by changes in chemical composition, suggested by a lower pixel intensity. In conclusion, the emerging imaging techniques used in this study proved to be very effective in analyzing some properties of biochars and can therefore be considered promising experimental strategies for detecting the feedstock and pyrolysis temperature of biochar.
Hai-Dang Dau, Nicolas Chopin
A standard way to move particles in an SMC sampler is to apply several steps of an MCMC (Markov chain Monte Carlo) kernel. Unfortunately, it is not clear how many steps need to be performed for optimal performance. In addition, the outputs of the intermediate steps are discarded and thus wasted. We propose a new, waste-free SMC algorithm which uses the outputs of all these intermediate MCMC steps as particles. We establish that its output is consistent and asymptotically normal. We use the expression of the asymptotic variance to develop various insights on how to implement the algorithm in practice. In particular, we develop a method to estimate, from a single run of the algorithm, the asymptotic variance of any particle estimate. We show empirically, through a range of numerical examples, that waste-free SMC tends to outperform standard SMC samplers, especially in situations where the mixing of the considered MCMC kernels decreases across iterations (as in tempering or rare-event problems).
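The core idea described here is to resample only a small set of starting points, extend each with several MCMC steps, and keep every intermediate state as a particle rather than discarding all but the last. The toy tempering sketch below illustrates that structure; the target, kernel, and temperature ladder are placeholders chosen for brevity, not the paper's examples or its full algorithm.

```python
# Toy illustration of the waste-free idea: resample M starting points, run
# each through k MCMC steps, and keep all M * (k + 1) states as particles.
import numpy as np

rng = np.random.default_rng(1)

def log_target(x: np.ndarray, beta: float) -> np.ndarray:
    return -0.5 * beta * x ** 2  # tempered standard normal, unnormalized

def mcmc_step(x: np.ndarray, beta: float, scale: float = 1.0) -> np.ndarray:
    prop = x + rng.normal(scale=scale, size=x.shape)
    accept = np.log(rng.uniform(size=x.shape)) < log_target(prop, beta) - log_target(x, beta)
    return np.where(accept, prop, x)

M, k = 200, 9                                          # chains, extra steps per chain
particles = rng.normal(scale=10.0, size=M * (k + 1))   # diffuse initial cloud
betas = [0.01, 0.1, 0.5, 1.0]

for beta_prev, beta in zip(betas[:-1], betas[1:]):
    # Importance weights for moving to the next temperature.
    logw = log_target(particles, beta) - log_target(particles, beta_prev)
    w = np.exp(logw - logw.max()); w /= w.sum()
    # Resample only M starting points, then keep every intermediate state.
    starts = rng.choice(particles, size=M, p=w)
    states = [starts]
    for _ in range(k):
        states.append(mcmc_step(states[-1], beta))
    particles = np.concatenate(states)

print(particles.mean(), particles.var())  # should approach 0 and 1 at beta = 1
```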
Michele Pastena, Bastian Weinhorst, Günter Kanisch et al.
Dismantling nuclear power plants entails the production of a large amount of contaminated (or potentially contaminated) material whose disposal is of crucial importance. Most of the end products have to be stored in special repositories, but some of them may be only slightly contaminated or not contaminated at all, making it possible to free-release them. One possible approach to free-release measurements uses Large Clearance Monitors, chambers surrounded by plastic scintillation detectors that allow a decision about the clearance of waste packages up to 1000 kg. Due to the composite nature of the detectors in a Large Clearance Monitor, it is natural to apply 3D imaging algorithms to localize radioactive sources inside a waste package. In this work we show how a special algorithm that maximizes the conditional information entropy allows decisions about the clearance of portions of the sample.
Khoury Ibrahim, Danielle A. Savage, Addie Schnirel et al.
Leveraging over 30,000 images, each with up to 89 labels, collected by Recology---an integrated resource recovery company with both residential and commercial trash, recycling, and composting services---the authors develop ContamiNet, a convolutional neural network, to identify contaminating material in residential recycling and compost bins. When training the model on a subset of labels that meet a minimum frequency threshold, ContamiNet performs almost as well as human experts in detecting contamination (0.86 versus 0.88 AUC). Recology is actively piloting ContamiNet in its daily municipal solid waste (MSW) collection to identify contaminants in recycling and compost bins and subsequently inform and educate customers about best sorting practices.
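ContamiNet is described as a CNN over images carrying up to 89 labels. The sketch below is a generic multi-label image classifier with independent sigmoid outputs and binary cross-entropy, assumed for illustration; it is not the actual ContamiNet architecture or Recology's code.

```python
# Generic multi-label image classifier sketch in PyTorch: a small CNN
# backbone, one logit per label, trained with binary cross-entropy.
import torch
import torch.nn as nn

class TinyContaminationNet(nn.Module):
    def __init__(self, num_labels: int = 89):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_labels)  # one logit per possible label

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = TinyContaminationNet()
criterion = nn.BCEWithLogitsLoss()             # multi-label: independent sigmoids
images = torch.randn(4, 3, 128, 128)           # dummy batch
labels = torch.randint(0, 2, (4, 89)).float()  # each image can carry many labels
loss = criterion(model(images), labels)
loss.backward()
```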
Miguel Coviello Gonzalez, Marek Chrobak
We address the problem of designing microfluidic chips for sample preparation, which is a crucial step in many experimental processes in the chemical and biological sciences. One of the objectives of sample preparation is to dilute the sample fluid, called the reactant, using another fluid, called the buffer, to produce desired volumes of fluid with prespecified reactant concentrations. In the model we adopt, these fluids are manipulated in discrete volumes called droplets. The dilution process is represented by a mixing graph whose nodes represent 1-1 micro-mixers and whose edges represent channels for transporting fluids. In this work we focus on designing such mixing graphs when the given sample (also referred to as the target) consists of a single droplet, and the objective is to minimize total fluid waste. Our main contribution is an efficient algorithm called RPRIS that guarantees a better provable worst-case bound on waste and significantly outperforms state-of-the-art algorithms in experimental comparison.
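In this model, a 1-1 micro-mixer merges two unit droplets and outputs two droplets at the average concentration, so the concentrations reachable by a finite mixing graph are dyadic fractions. The sketch below illustrates that arithmetic only; it is not the RPRIS algorithm from the paper.

```python
# Droplet-mixing arithmetic behind mixing graphs: a 1-1 micro-mixer takes two
# unit droplets with concentrations c1 and c2 and emits two droplets at
# (c1 + c2) / 2. Exact fractions make reachable targets (k / 2^d) explicit.
from fractions import Fraction

def mix(c1: Fraction, c2: Fraction) -> Fraction:
    """Concentration of each of the two output droplets of a 1-1 mixer."""
    return (c1 + c2) / 2

reactant = Fraction(1)  # pure reactant droplet
buffer = Fraction(0)    # pure buffer droplet

# Reaching concentration 5/8 (binary 0.101) with three mixing steps:
step1 = mix(reactant, buffer)  # 1/2
step2 = mix(step1, buffer)     # 1/4
step3 = mix(step2, reactant)   # 5/8
print(step3)                   # Fraction(5, 8)
```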
Page 20 of 22,744