Andrea Lodi, S. Martello, M. Monaci
Results for "Standardization. Simplification. Waste"
Showing 20 of ~454,891 results · from CrossRef, DOAJ, arXiv, Semantic Scholar
Jan-Niklas Schäfer, Tillmann Carl, Kristin Kühl et al.
The rapid advancement of high-performance computing infrastructure and its expanding applications produce an increasing amount of waste heat. This heat constitutes an unsustainable loss of energy and demands cooling solutions that transcend conventional thermal management. Here, we demonstrate a novel mechanism that converts a vertical waste-heat supply directly into horizontal fluid motion, enabling autonomous, self-powered pumping in microenvironments. Our approach is based on a concept that combines geometric symmetry breaking with heterogeneous thermal conductivities to induce local thermocapillary Marangoni flows. We provide an implementation of the concept as well as an experimental and numerical proof of concept, showing good agreement between the respective flow fields. The approach is scalable and operates under realistic areal heating conditions. It enables versatile pumping designs for microtechnological applications, lab-on-a-chip architectures, passive thermal management, and heat-driven microfluidic systems.
Jeffrey Spaan, Kuan-Hsun Chen, Ana-Lucia Varbanescu
The rapid growth of AI has fueled the expansion of accelerator- or GPU-based data centers. However, the rising operational energy consumption has emerged as a critical bottleneck and a major sustainability concern. Dynamic Voltage and Frequency Scaling (DVFS) is a well-known technique used to reduce energy consumption, and thus improve energy efficiency, since it requires little effort and works with existing hardware. Reducing the energy consumption of training and inference of Large Language Models (LLMs) through DVFS or power capping is feasible: related work has shown energy savings can be significant, but at the cost of significant slowdowns. In this work, we focus on reducing waste in LLM operations, i.e., reducing energy consumption without losing performance. We propose a fine-grained, kernel-level DVFS approach that explores new frequency configurations, and prove that these save more energy than previous pass- or iteration-level solutions. For example, for a GPT-3 training run, a pass-level approach could reduce energy consumption by 2% (without losing performance), while our kernel-level approach saves as much as 14.6% (with a 0.6% slowdown). We further investigate the effect of data and tensor parallelism, and show our discovered clock frequencies translate well for both. We conclude that kernel-level DVFS is a suitable technique to reduce waste in LLM operations, providing significant energy savings with negligible slowdown.
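As a rough illustration of per-kernel frequency selection under a slowdown budget (kernel names, frequencies, and time/energy numbers below are hypothetical placeholders, not measurements from the paper), a minimal sketch:

```python
# Hypothetical per-kernel profiles: frequency (MHz) -> (time_ms, energy_mJ).
profiles = {
    "gemm":    {1410: (10.0, 300.0), 1200: (10.05, 240.0), 900: (13.0, 200.0)},
    "softmax": {1410: (2.0, 40.0),   1200: (2.01, 30.0),   900: (2.4, 22.0)},
}

def pick_frequencies(profiles, max_slowdown=0.01):
    """Per kernel, choose the lowest-energy frequency whose slowdown
    relative to the maximum frequency stays within max_slowdown."""
    choice = {}
    for kernel, by_freq in profiles.items():
        f_max = max(by_freq)
        t_ref = by_freq[f_max][0]
        feasible = [(e, f) for f, (t, e) in by_freq.items()
                    if t <= t_ref * (1 + max_slowdown)]
        choice[kernel] = min(feasible)[1]
    return choice

print(pick_frequencies(profiles))  # → {'gemm': 1200, 'softmax': 1200}
```

The point of a kernel-level granularity is visible even in this toy: compute-light kernels can tolerate a lower clock than a pass-wide setting would allow.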
Da Kong, Vadim Indelman
Partially Observable Markov Decision Processes (POMDPs) provide a principled mathematical framework for decision-making under uncertainty. However, the exact solution to POMDPs is computationally intractable. In this paper, we address the computational intractability by introducing a novel framework for adaptive open-loop simplification with formal performance guarantees. Our method adaptively interleaves open-loop and closed-loop planning via a topology-based belief tree, enabling a significant reduction in planning complexity. The key contribution lies in the derivation of efficiently computable bounds which provide formal guarantees and can be used to ensure that our simplification can identify the immediate optimal action of the original POMDP problem. Our framework therefore provides computationally tractable performance guarantees for macro-actions within POMDPs. Furthermore, we propose a novel framework for safely skipping replanning during execution, supported by theoretical guarantees on multi-step open-loop action sequences. To the best of our knowledge, this framework is the first to address skipping replanning with formal performance guarantees. Practical online solvers for our proposed simplification are developed, including a sampling-based solver and an anytime solver. Empirical results demonstrate substantial computational speedups while maintaining provable performance guarantees, advancing the tractability and efficiency of POMDP planning.
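The kind of bound-based guarantee this abstract describes can be illustrated with a minimal sketch (the bound values are placeholders; in the paper they would come from the open-loop simplification itself):

```python
def certified_best_action(bounds):
    """bounds: action -> (lower, upper) bounds on its value under a
    simplified problem. An action is provably optimal for the original
    problem if its lower bound dominates every other action's upper bound."""
    best = max(bounds, key=lambda a: bounds[a][0])
    lo = bounds[best][0]
    if all(bounds[a][1] <= lo for a in bounds if a != best):
        return best   # guarantee holds: no further refinement needed
    return None       # bounds overlap: refine (close the loop) further

assert certified_best_action({"a1": (5.0, 7.0), "a2": (1.0, 4.5)}) == "a1"
assert certified_best_action({"a1": (5.0, 7.0), "a2": (1.0, 6.0)}) is None
```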
Jonas V. Funk, Lukas Roming, Andreas Michel et al.
Growing waste streams and the transition to a circular economy require efficient automated waste sorting. In industrial settings, materials move on fast conveyor belts, where reliable identification and ejection demand pixel-accurate segmentation. RGB imaging delivers high-resolution spatial detail, which is essential for accurate segmentation, but it confuses materials that look similar in the visible spectrum. Hyperspectral imaging (HSI) provides spectral signatures that separate such materials, yet its lower spatial resolution limits detail. Effective waste sorting therefore needs methods that fuse both modalities to exploit their complementary strengths. We present Bidirectional Cross-Attention Fusion (BCAF), which aligns high-resolution RGB with low-resolution HSI at their native grids via localized, bidirectional cross-attention, avoiding pre-upsampling or early spectral collapse. BCAF uses two independent backbones: a standard Swin Transformer for RGB and an HSI-adapted Swin backbone that preserves spectral structure through 3D tokenization with spectral self-attention. We also analyze trade-offs between RGB input resolution and the number of HSI spectral slices. Although our evaluation targets RGB-HSI fusion, BCAF is modality-agnostic and applies to co-registered RGB with lower-resolution, high-channel auxiliary sensors. On the benchmark SpectralWaste dataset, BCAF achieves state-of-the-art performance of 76.4% mIoU at 31 images/s and 75.4% mIoU at 55 images/s. We further evaluate a novel industrial dataset: K3I-Cycling (first RGB subset already released on Fordatis). On this dataset, BCAF reaches 62.3% mIoU for material segmentation (paper, metal, plastic, etc.) and 66.2% mIoU for plastic-type segmentation (PET, PP, HDPE, LDPE, PS, etc.).
Tiffany Yu, Rye Stahle-Smith, Darssan Eswaramoorthi et al.
Symbolic accelerators are increasingly used for symbolic data processing in domains such as genomics, NLP, and cybersecurity. However, these accelerators face scalability issues due to excessive memory use and routing complexity, especially when targeting large automata. We present AutoSlim, a machine learning-based graph simplification framework designed to reduce the complexity of symbolic accelerators built on Non-deterministic Finite Automata (NFA) deployed on FPGA-based overlays such as NAPOLY+. AutoSlim uses Random Forest classification to prune low-impact transitions based on edge scores and structural features, significantly reducing automata graph density while preserving semantic correctness. Unlike prior tools, AutoSlim targets automated score-aware simplification with weighted transitions, enabling efficient ranking-based sequence analysis. We evaluated datasets (1K to 64K nodes) on NAPOLY+ and conducted performance measurements including latency, throughput, and resource usage. AutoSlim achieves up to a 40 percent reduction in FPGA LUTs and prunes over 30 percent of transitions, while scaling to graphs an order of magnitude larger than existing benchmarks. Our results also demonstrate how hardware interconnection (fanout) heavily influences hardware cost, and that AutoSlim's pruning mitigates resource blowup.
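A minimal sketch of score-based transition pruning on an NFA-like graph. The paper trains a Random Forest on edge and structural features; here a stand-in score dictionary plays that role, and the illustrative cleanup step simply discards states left unreachable:

```python
def prune(transitions, score, threshold, start):
    """Drop transitions scoring below threshold, then discard transitions
    from states no longer reachable from `start`."""
    kept = [(u, sym, v) for (u, sym, v) in transitions
            if score[(u, sym, v)] >= threshold]
    reachable, frontier = {start}, [start]
    while frontier:
        u = frontier.pop()
        for (a, _, b) in kept:
            if a == u and b not in reachable:
                reachable.add(b)
                frontier.append(b)
    return [(u, s, v) for (u, s, v) in kept if u in reachable]

edges = [("q0", "a", "q1"), ("q0", "b", "q2"), ("q2", "a", "q3")]
scores = {edges[0]: 0.9, edges[1]: 0.2, edges[2]: 0.8}
print(prune(edges, scores, 0.5, "q0"))  # [('q0', 'a', 'q1')]
```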
Xunyang Zhu, Hongfei Ye, Yifei Wang et al.
The sizing field defined on a triangular background grid is pivotal for controlling the quality and efficiency of unstructured mesh generation. However, creating an optimal background grid that is geometrically conforming, computationally lightweight, and free from artifacts like banding is a significant challenge. This paper introduces a novel, adaptive background grid simplification (ABGS) framework based on a Graph Convolutional Network (GCN). We reformulate the grid simplification task as an edge score regression problem and train a GCN model to efficiently predict optimal edge collapse candidates. The model is guided by a custom loss function that holistically considers both geometric fidelity and sizing field accuracy. This data-driven approach replaces a costly procedural evaluation, accelerating the simplification process. Experimental results demonstrate the effectiveness of our framework across diverse and complex engineering models. Compared to the initial dense grids, our simplified background grids achieve an element reduction of 74%-94%, leading to a 35%-88% decrease in sizing field query times.
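Once a model predicts per-edge collapse scores, turning them into a collapse schedule can be as simple as a greedy conflict-free selection. A sketch of that downstream step (the scores would come from the GCN; the greedy policy and values here are illustrative, not necessarily the paper's):

```python
def select_collapses(edge_scores, min_score=0.5):
    """Greedily pick non-conflicting edge-collapse candidates in
    descending predicted-score order; edges sharing a vertex conflict."""
    used, picked = set(), []
    for (u, v), s in sorted(edge_scores.items(), key=lambda kv: -kv[1]):
        if s >= min_score and u not in used and v not in used:
            picked.append((u, v))
            used.update((u, v))
    return picked

scores = {(0, 1): 0.9, (1, 2): 0.8, (2, 3): 0.7, (4, 5): 0.3}
print(select_collapses(scores))  # [(0, 1), (2, 3)]
```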
Yingqiang Gao, Kaede Johnson, David Froehlich et al.
Automatic text simplification (ATS) aims to enhance language accessibility for various target groups, particularly persons with intellectual disabilities. Recent advancements in generative AI, especially large language models (LLMs), have substantially improved the quality of machine-generated text simplifications, thereby mitigating information barriers for the target group. However, existing LLM-based ATS systems do not incorporate preference feedback on text simplifications during training, resulting in a lack of personalization tailored to the specific needs of target group representatives. In this work, we extend the standard supervised fine-tuning (SFT) approach for adapting LLM-based ATS models by leveraging a computationally efficient LLM alignment technique -- direct preference optimization (DPO). Specifically, we post-train LLM-based ATS models using human feedback collected from persons with intellectual disabilities, reflecting their preferences on paired text simplifications generated by mainstream LLMs. Furthermore, we propose a pipeline for developing personalized LLM-based ATS systems, encompassing data collection, model selection, SFT and DPO post-training, and evaluation. Our findings underscore the necessity of active participation of target group persons in designing personalized AI accessibility solutions aligned with human expectations. This work represents a step towards personalizing inclusive AI systems at the target-group level, incorporating insights not only from text simplification experts but also from target group persons themselves.
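The DPO objective mentioned above has a compact closed form. A minimal sketch on illustrative scalar log-probabilities (real inputs would be sequence log-likelihoods of the paired simplifications under the tuned and reference models):

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair:
    -log sigmoid(beta * ((logp_w - ref_w) - (logp_l - ref_l)))."""
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Loss shrinks as the policy prefers the chosen simplification more strongly.
weak = dpo_loss(-10.0, -9.5, -10.0, -10.0)
strong = dpo_loss(-8.0, -12.0, -10.0, -10.0)
assert strong < weak
```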
K. C. Appling, M. Sobsey, L. Durso et al.
Antimicrobial resistance (AMR) threatens human and animal health; an effective response requires monitoring AMR presence in humans, animals, and the environment. The World Health Organization Tricycle Protocol (WHO TP) standardizes and streamlines global AMR monitoring around a single indicator organism, extended-spectrum-β-lactamase-producing Escherichia coli (ESBL-Ec). The WHO TP culture-based method detects and quantifies ESBL-Ec by spread-plating or membrane filtration (MF) on either MacConkey or TBX agar (supplemented with cefotaxime). These methods require laboratories and trained personnel, limiting feasibility in low-resource and field settings. We adapted the WHO TP using a simplified method, the compartment bag test (CBT), to quantify most probable numbers (MPN) of ESBL-Ec in samples. CBT methods can be used correctly in the field by typical adults after a few hours' training. We collected and analyzed municipal wastewater, surface water, and chicken waste samples from sites in Raleigh and Chapel Hill, NC over an 8-month period. Presumptive ESBL-Ec were quantified using MF on TBX agar supplemented with cefotaxime (MF+TBX), as well as using the CBT with chromogenic E. coli medium containing cefotaxime. Presumptive ESBL-Ec bacteria were isolated from completed tests for confirmation and characterization by Kirby-Bauer disk diffusion tests (antibiotic sensitivity) and EnteroPluri biochemical tests (speciation). Both methods were easy to use, but MF+TBX required additional time and effort. The proportion of E. coli that were presumptively ESBL-producing in surface water samples was significantly greater downstream versus upstream of wastewater treatment plant (WWTP) outfalls, suggesting that treated wastewater is a source of ESBL-Ec in some surface waters. The CBT and MF+TBX tests provided similar (but not identical) quantitative results, making the former suitable as an alternative to the more complex MF+TBX procedure in some applications. Further AMR surveillance using MF+TBX and/or CBT methods may be useful to characterize and refine their performance for AMR monitoring in NC and elsewhere.
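MPN estimation from a multi-volume format like the CBT reduces to a maximum-likelihood fit under a Poisson assumption. A dependency-free sketch (the compartment volumes and counts below are illustrative, not the CBT's actual layout):

```python
import math

def mpn_per_100ml(compartments):
    """Maximum-likelihood MPN from (volume_ml, n_total, n_positive) tuples.
    P(compartment negative) = exp(-lambda * volume); a crude grid search
    over lambda stands in for a proper optimizer."""
    def log_lik(lam):
        ll = 0.0
        for v, n, p in compartments:
            q = math.exp(-lam * v)  # probability a compartment stays negative
            ll += p * math.log(1.0 - q + 1e-300) + (n - p) * math.log(q + 1e-300)
        return ll
    # search concentrations from 0.001 to 20 organisms per ml
    best = max((log_lik(k / 1000.0), k / 1000.0) for k in range(1, 20001))
    return best[1] * 100.0

# Illustrative: three volume tiers, five compartments each.
print(round(mpn_per_100ml([(10.0, 5, 3), (1.0, 5, 1), (0.1, 5, 0)]), 1))
```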
Chaozhong Xue, Yongqi Dong, Jiaqi Liu et al.
Medical waste recycling and treatment has gradually drawn concern from society as a whole, as the amount of medical waste generated is increasing dramatically, especially during the COVID-19 pandemic. To tackle the emerging challenges, this study designs a reverse logistics system architecture with three modules: a medical waste classification & monitoring module, a temporary storage & disposal site (disposal site for short) selection module, and a route optimization module. This overall solution design won the Grand Prize of the "YUNFENG CUP" China National Contest on Green Supply and Reverse Logistics Design, ranking first. This paper focuses on the design of the route optimization module, in which a route optimization problem is formulated considering transportation costs and multiple risk costs (e.g., environmental risk, population risk, property risk, and other accident-related risks). The Analytic Hierarchy Process is employed to determine the weights of each risk element, and a customized genetic algorithm is developed to solve the route optimization problem. A case study under the COVID-19 pandemic is provided to verify the proposed model. Owing to length constraints, detailed descriptions of the whole system and the other modules can be found at https://shorturl.at/cdY59.
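The Analytic Hierarchy Process step mentioned above derives weights from the principal eigenvector of a pairwise-comparison matrix. A minimal sketch via power iteration (the comparison matrix is an illustrative, perfectly consistent example, not the paper's data):

```python
def ahp_weights(pairwise, iters=100):
    """Approximate the principal eigenvector of an AHP pairwise-comparison
    matrix by power iteration; the normalized result gives the weights."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# e.g. environment vs population vs property risk, consistent 4:2:1 ratios
m = [[1, 2, 4],
     [1 / 2, 1, 2],
     [1 / 4, 1 / 2, 1]]
print([round(x, 3) for x in ahp_weights(m)])  # [0.571, 0.286, 0.143]
```

In practice one would also check the consistency ratio of the matrix before trusting the weights.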
Shuzhen Li, Xin Wu, Youzhou Jiang et al.
The sustainable recycling of valuable metals from spent lithium-ion batteries (LIBs) is impeded by extensive chemical consumption, tedious separation processes, and poor selectivity. Here, a novel electrochemically driven, internal-circulation strategy was developed for the direct and selective recycling of valuable metals from waste LiCoO2 of spent LIBs. First, the waste LiCoO2 can be efficiently dissolved by the acid (H2SO4) generated during the electrodeposition of Cu from a CuSO4 electrolyte. Then, Co2+ ions in the lixivium can be electrodeposited and recovered as metallic Co with simultaneous regeneration of H2SO4, and the regenerated acid can be reused as a leachant without an obvious decline in leaching capability, based on circulating-leaching results. Over 92% of Co and 97% of Li can be leached, and 100% of Cu and 93% of Co are recovered in their metallic forms under the optimized experimental conditions. Leaching kinetics results suggest that the leaching of Co and Li is controlled by internal diffusion, with significantly reduced apparent activation energies (Ea) for Li and Co. Finally, Li2CO3 can be recovered from the Li+-enriched lixivium after circulating leaching. This recycling process is a simplified route without any input of leachant or reductant, and the valuable metals can be selectively recovered in a closed loop with high efficiency.
Yu Qiao, Xiaofei Li, Daniel Wiechmann et al.
State-of-the-art text simplification (TS) systems adopt end-to-end neural network models to directly generate the simplified version of the input text, and usually function as black boxes. Moreover, TS is usually treated as an all-purpose generic task under the assumption of homogeneity, where the same simplification is suitable for all. In recent years, however, there has been increasing recognition of the need to adapt simplification techniques to the specific needs of different target groups. In this work, we aim to advance current research on explainable and controllable TS in two ways. First, building on recently proposed work to increase the transparency of TS systems, we use a large set of (psycho-)linguistic features in combination with pre-trained language models to improve explainable complexity prediction. Second, based on the results of this preliminary task, we extend a state-of-the-art Seq2Seq TS model, ACCESS, to enable explicit control of ten attributes. Experimental results show (1) that our approach improves the performance of state-of-the-art models for predicting explainable complexity and (2) that explicitly conditioning the Seq2Seq model on ten attributes leads to a significant improvement in performance in both within-domain and out-of-domain settings.
Sabine Storandt, Johannes Zink
Given a polyline on $n$ vertices, the polyline simplification problem asks for a minimum size subsequence of these vertices defining a new polyline whose distance to the original polyline is at most a given threshold under some distance measure, usually the local Hausdorff or the local Fréchet distance. Here, local means that, for each line segment of the simplified polyline, only the distance to the corresponding sub-curve in the original polyline is measured. Melkman and O'Rourke [Computational Morphology '88] introduced a geometric data structure to solve polyline simplification under the local Hausdorff distance in $O(n^2 \log n)$ time, and Guibas, Hershberger, Mitchell, and Snoeyink [Int. J. Comput. Geom. Appl. '93] considered polyline simplification under the Fréchet distance as ordered stabbing and provided an algorithm with a running time of $O(n^2 \log^2 n)$, but they did not restrict the simplified polyline to use only vertices of the original polyline. We show that their techniques can be adjusted to solve polyline simplification under the local Fréchet distance in $O(n^2 \log n)$ time instead of $O(n^3)$ when applying the Imai--Iri algorithm. Our algorithm may serve as a more efficient subroutine for multiple other algorithms. We provide a simple algorithm description as well as rigorous proofs to substantiate this theorem. We also investigate the geometric data structure introduced by Melkman and O'Rourke, which we refer to as wavefront, in more detail and feature some interesting properties. As a result, we can prove that under the L$_1$ and the L$_\infty$ norm, the algorithm can be significantly simplified and then only requires a running time of $O(n^2)$. We also define a natural class of polylines where our algorithm always achieves this running time also in the Euclidean norm L$_2$.
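For orientation, the Imai–Iri baseline this abstract improves upon can be sketched in a few lines: build the graph of all valid shortcuts under the local Hausdorff distance, then take a shortest path. This toy version is the cubic construction (the paper's wavefront structure is what brings the Fréchet variant down to O(n² log n)); the example polyline is made up:

```python
import math
from collections import deque

def seg_dist(p, a, b):
    """Euclidean distance from point p to segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def simplify(poly, eps):
    """Imai--Iri scheme: shortcut (i, j) is valid iff all intermediate
    vertices lie within eps of segment (poly[i], poly[j]); BFS then finds
    a minimum-vertex simplification."""
    n = len(poly)
    valid = [[all(seg_dist(poly[k], poly[i], poly[j]) <= eps
                  for k in range(i + 1, j))
              for j in range(n)] for i in range(n)]
    prev = [None] * n
    seen = [False] * n
    seen[0] = True
    queue = deque([0])
    while queue:
        i = queue.popleft()
        for j in range(i + 1, n):
            if valid[i][j] and not seen[j]:
                seen[j], prev[j] = True, i
                queue.append(j)
    out, j = [], n - 1
    while j is not None:
        out.append(poly[j]); j = prev[j]
    return out[::-1]

pts = [(0, 0), (1, 0.05), (2, -0.05), (3, 0), (4, 1)]
print(simplify(pts, 0.1))  # [(0, 0), (3, 0), (4, 1)]
```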
L. Daciolo, N. S. Correia, M. Boscov
For the design of Municipal Solid Waste (MSW) landfills, especially in the initial stages of a project, shear strength parameters are often selected from literature recommendations and published test results. However, MSW shear strength parameters reported in the literature show great variability, associated with testing procedures and intrinsic regional differences among samples. Despite the lack of standardization of observations in the literature, statistical techniques can help identify the main factors driving this variability and categorize the observations for better inference. This research gathered 313 observations of laboratory direct shear test results from 30 international publications, covering different countries and testing configurations, in order to assess the statistical behavior of the data and propose a classification. The results identified the factors that contribute most to the observational divergences, the principal one being associated with the mechanical-morphological behavior of waste components. The data were reorganized into classes (A, B, and C) according to the compressible, incompressible, and reinforcing compositions of the waste, in order to group shear strength parameters in a ternary diagram. Classifying shear strength envelopes for each proposed class and for different strain levels enabled verification of the hardening behavior of MSW and prediction of mechanical parameters.
Christina Niklaus, Matthias Cetto, André Freitas et al.
We present a context-preserving text simplification (TS) approach that recursively splits and rephrases complex English sentences into a semantic hierarchy of simplified sentences. Using a set of linguistically principled transformation patterns, input sentences are converted into a hierarchical representation in the form of core sentences and accompanying contexts that are linked via rhetorical relations. Hence, as opposed to previously proposed sentence splitting approaches, which commonly do not take into account discourse-level aspects, our TS approach preserves the semantic relationship of the decomposed constituents in the output. A comparative analysis with the annotations contained in the RST-DT shows that we are able to capture the contextual hierarchy between the split sentences with a precision of 89% and reach an average precision of 69% for the classification of the rhetorical relations that hold between them.
A. Tamburini, M. Tedesco, A. Cipollina et al.
E. Bontempi
Dhruv Kumar, Lili Mou, Lukasz Golab et al.
We present a novel iterative, edit-based approach to unsupervised sentence simplification. Guided by a scoring function that combines fluency, simplicity, and meaning preservation, we iteratively perform word- and phrase-level edits on the complex sentence. Compared with previous approaches, our model does not require a parallel training set, yet is more controllable and interpretable. Experiments on the Newsela and WikiLarge datasets show that our approach is nearly as effective as state-of-the-art supervised approaches.
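A toy sketch of such an iterative edit loop: propose a word-level edit and keep it only if a combined score improves. The scorers and lexicon below are crude stand-ins for the paper's fluency, simplicity, and meaning-preservation terms:

```python
COMPLEX = {"utilize", "commence", "subsequently"}
SUBS = {"utilize": "use", "commence": "start", "subsequently": "then"}

def score(words, original):
    """Stand-in: reward fewer complex words and shorter output (simplicity)
    plus overlap with the original's simplified vocabulary (meaning)."""
    simplicity = -sum(w in COMPLEX for w in words) - 0.01 * len(words)
    meaning = len(set(words) & {SUBS.get(w, w) for w in original})
    return simplicity + meaning

def simplify(sentence):
    words = sentence.split()
    original = list(words)
    improved = True
    while improved:          # hill-climb: accept only score-improving edits
        improved = False
        for i, w in enumerate(words):
            if w in SUBS:
                cand = words[:i] + [SUBS[w]] + words[i + 1:]
                if score(cand, original) > score(words, original):
                    words, improved = cand, True
                    break
    return " ".join(words)

print(simplify("we utilize the tool and subsequently commence testing"))
# → we use the tool and then start testing
```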
Gang Xu, Ran Ling, Jessica Zhang et al.
In this paper, we propose an improved singularity structure simplification method for hexahedral (hex) meshes using a weighted ranking approach. In previous work, the selection of to-be-collapsed base-complex sheets/chords was based only on their thickness, which can introduce closed loops, cause early termination of the simplification, and slow convergence. Here, a new weighted ranking function is proposed that combines a valence prediction function for the local singularity structure, a shape quality metric for the elements, and the width of base-complex sheets/chords. Adaptive refinement and local optimization are also introduced to improve the uniformity and aspect ratio of mesh elements. Compared to thickness-only ranking, our weighted ranking approach yields a simpler singularity structure with fewer base-complex components, while achieving a comparable Hausdorff distance ratio and better mesh quality. Comparisons on a hex-mesh dataset demonstrate the effectiveness of the proposed method.
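Structurally, such a ranking replaces a single-feature sort with a weighted sum. A minimal sketch (feature names, values, and weights are illustrative placeholders; each feature is assumed pre-normalized to [0, 1] with higher meaning a more desirable collapse):

```python
def weighted_rank(candidates, w_valence=0.5, w_quality=0.3, w_width=0.2):
    """Rank base-complex sheet/chord candidates by a weighted combination
    of features rather than by width alone; best candidate first."""
    scored = sorted(
        candidates,
        key=lambda c: -(w_valence * c["valence_gain"]
                        + w_quality * c["quality"]
                        + w_width * c["thinness"]))
    return [c["id"] for c in scored]

sheets = [
    {"id": "s1", "valence_gain": 0.2, "quality": 0.9, "thinness": 0.9},
    {"id": "s2", "valence_gain": 0.9, "quality": 0.6, "thinness": 0.4},
]
print(weighted_rank(sheets))  # ['s2', 's1']: singularity gain beats thinness
```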
Page 22 of 22,745