Results for "Standardization. Simplification. Waste"

Showing 20 of ~426,085 results · from DOAJ, arXiv, CrossRef

arXiv Open Access 2026
A.R.I.S.: Automated Recycling Identification System for E-Waste Classification Using Deep Learning

Dhruv Talwar, Harsh Desai, Wendong Yin et al.

Traditional electronic recycling processes suffer significant resource loss because inadequate material separation and identification limit material recovery. We present A.R.I.S. (Automated Recycling Identification System), a low-cost, portable sorter for shredded e-waste that addresses this efficiency gap. The system employs a YOLOX model to classify metals, plastics, and circuit boards in real time, achieving low inference latency with high detection accuracy. Experimental evaluation yielded 90% overall precision, 82.2% mean average precision (mAP), and 84% sortation purity. By integrating deep learning with established sorting methods, A.R.I.S. improves material recovery efficiency and lowers barriers to the adoption of advanced recycling. This work complements broader initiatives to extend product life cycles, support trade-in and recycling programs, and reduce environmental impact across the supply chain.

en cs.LG
arXiv Open Access 2026
The use of spectral indices in environmental monitoring of smouldering coal-waste dumps

Anna Abramowicz, Michal Laska, Adam Nadudvari et al.

The study aimed to evaluate the applicability of environmental indices in the monitoring of smouldering coal-waste dumps. A dump located in the Upper Silesian Coal Basin served as the research site for a multi-method analysis combining remote sensing and field-based data. Two UAV survey campaigns were conducted, capturing RGB, infrared, and multispectral imagery. These were supplemented with direct ground measurements of subsurface temperature and detailed vegetation mapping. Additionally, publicly available satellite data from the Landsat and Sentinel missions were analysed. A range of vegetation and fire-related indices (NDVI, SAVI, EVI, BAI, among others) were calculated to identify thermally active zones and assess vegetation conditions within these degraded areas. The results revealed strong seasonal variability in vegetation indices on thermally active sites, with evidence of disrupted vegetation cycles, including winter greening in moderately heated root zones, a pattern indicative of stress and degradation processes. While satellite data proved useful in reconstructing the fire history of the dump, their spatial resolution was insufficient for detailed monitoring of small-scale thermal anomalies. The study highlights the diagnostic potential of UAV-based remote sensing in post-industrial environments undergoing land degradation but emphasises the importance of field validation for accurate environmental assessment.
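For reference, several of the listed indices reduce to simple band arithmetic over reflectance values. A minimal NumPy sketch follows; the band arrays and the SAVI soil factor L=0.5 are illustrative assumptions, not values taken from the study:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red bands."""
    return (nir - red) / (nir + red + eps)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index; L=0.5 is the usual soil-brightness
    correction for intermediate vegetation cover."""
    return (nir - red) * (1.0 + L) / (nir + red + L)

nir = np.array([0.45, 0.30, 0.20])   # toy reflectance values
red = np.array([0.10, 0.12, 0.15])
print(ndvi(nir, red), savi(nir, red))
```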

en physics.geo-ph, physics.ins-det
arXiv Open Access 2025
Standardization of Weighted Ranking Correlation Coefficients

Pierangelo Lombardo

A fundamental problem in statistics is measuring the correlation between two rankings of a set of items. Kendall's $\tau$ and Spearman's $\rho$ are well-established correlation coefficients whose symmetric structure guarantees zero expected value between two rankings randomly chosen with uniform probability. In many modern applications, however, greater importance is assigned to top-ranked items, motivating weighted variants of these coefficients. Such weighting schemes generally break the symmetry of the original formulations, resulting in a non-zero expected value under independence and compromising the interpretation of zero correlation. We propose a general standardization function $g(\cdot)$ that transforms a ranking correlation coefficient $\Gamma$ into a standardized form $g(\Gamma)$ with zero expected value under randomness. The transformation preserves the domain $[-1,1]$, satisfies the boundary conditions, is continuous and increasing, and reduces to the identity for coefficients that already satisfy the zero-expected-value property. The construction of $g(x)$ depends on three distributional parameters of $\Gamma$: its mean, variance, and left variance; since their exact calculation becomes infeasible for large ranking lengths $n$, we develop accurate numerical estimates based on Monte Carlo sampling combined with polynomial regression to capture their dependence on $n$.
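As an illustration of the Monte Carlo estimation step, the sketch below estimates the mean of SciPy's hyperbolically weighted Kendall's tau (not necessarily the coefficient studied in the paper) under random rankings, then applies a simple piecewise-linear standardization built from the mean alone; the paper's $g(\cdot)$ additionally uses the variance and left variance:

```python
import numpy as np
from scipy.stats import weightedtau

def mc_mean(n, trials=5000, seed=0):
    """Monte Carlo estimate of E[Gamma] under independent, uniformly
    random rankings, for SciPy's weighted Kendall tau."""
    rng = np.random.default_rng(seed)
    x = np.arange(n)
    vals = [weightedtau(x, rng.permutation(n)).correlation
            for _ in range(trials)]
    return float(np.mean(vals))

def standardize(gamma, m):
    """Piecewise-linear g: increasing, continuous, maps [-1, 1] onto
    [-1, 1], sends the random-ranking mean m to 0, and reduces to the
    identity when m = 0."""
    return (gamma - m) / (1 - m) if gamma >= m else (gamma - m) / (1 + m)

m = mc_mean(n=30)
print(f"E[Gamma] under randomness ~ {m:+.4f}")
print(f"standardized Gamma = 0.2 -> {standardize(0.2, m):+.4f}")
```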

en stat.ME, cond-mat.stat-mech
arXiv Open Access 2025
Toward AIML Enabled WiFi Beamforming CSI Feedback Compression: An Overview of IEEE 802.11 Standardization

Ziming He

Transmit beamforming is one of the key techniques in the existing IEEE 802.11 WiFi standards and in future generations such as 802.11be and 802.11bn, also known as ultra high reliability (UHR). This paper gives an overview of current standardization activities on the artificial intelligence and machine learning (AIML) enabled beamforming channel state information (CSI) feedback compression technique, as defined by the 802.11 AIML topic interest group (TIG). Two key challenges the AIML TIG aims to tackle in future beamforming standards, and four defined key performance indicators (KPIs) for AIML-enabled schemes, are discussed. The two challenges are CSI feedback overhead and compression complexity; the four KPIs are feedback overhead, AIML model sharing overhead, packet error rate, and complexity. Moreover, the paper presents AIML-enabled compression schemes accepted by the TIG, such as the K-means and autoencoder based schemes, and uses simulation and analysis results to explain how these schemes are designed against the KPIs. Finally, future research directions are indicated to encourage more researchers and engineers to contribute to this technique and to the standardization of next-generation WiFi beamforming.
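A toy sketch of the K-means idea (not the 802.11 draft design): both sides share a codebook trained offline on CSI realizations, and the beamformee feeds back only a codeword index. The data dimensions and codebook size here are arbitrary assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
csi = rng.standard_normal((10000, 2 * 64))    # toy CSI: stacked re/im parts

# Offline: beamformer and beamformee agree on a shared 256-entry codebook.
codebook = KMeans(n_clusters=256, n_init=1, random_state=0).fit(csi)

# Online: feed back an 8-bit index instead of 128 floating-point values.
vector = csi[0]
index = int(codebook.predict(vector[None, :])[0])
reconstructed = codebook.cluster_centers_[index]
print(index, np.linalg.norm(vector - reconstructed))
```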

en eess.SP
arXiv Open Access 2025
Taming Domain Shift in Multi-source CT-Scan Classification via Input-Space Standardization

Chia-Ming Lee, Bo-Cheng Qiu, Ting-Yao Chen et al.

Multi-source CT-scan classification suffers from domain shifts that impair cross-source generalization. While preprocessing pipelines combining Spatial-Slice Feature Learning (SSFL++) and Kernel-Density-based Slice Sampling (KDS) have shown empirical success, the mechanisms underlying their domain robustness remain underexplored. This study analyzes how this input-space standardization manages the trade-off between local discriminability and cross-source generalization. The SSFL++ and KDS pipeline performs spatial and temporal standardization to reduce inter-source variance, effectively mapping disparate inputs into a consistent target space. This preemptive alignment mitigates domain shift and simplifies the learning task for network optimization. Experimental validation demonstrates consistent improvements across architectures, indicating that the benefits stem from the preprocessing itself. The approach's effectiveness was further validated by securing first place in a competitive challenge, supporting input-space standardization as a robust and practical solution for multi-institutional medical imaging.
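The abstract does not spell out SSFL++/KDS; purely as an illustration of input-space standardization, this toy sketch z-scores intensities and resamples the slice axis so that volumes from every source land in one target shape:

```python
import numpy as np

def standardize_volume(vol, target_slices=32):
    """Map a CT volume of shape (slices, H, W) into a consistent target
    space: z-score the intensities and resample the slice axis to a
    fixed count, so all sources share one input distribution and shape."""
    vol = (vol - vol.mean()) / (vol.std() + 1e-8)
    idx = np.round(np.linspace(0, vol.shape[0] - 1, target_slices)).astype(int)
    return vol[idx]

print(standardize_volume(np.random.rand(87, 128, 128)).shape)  # (32, 128, 128)
```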

en eess.IV, cs.CE
arXiv Open Access 2025
Model-robust standardization in cluster-randomized trials

Fan Li, Jiaqi Tong, Xi Fang et al.

In cluster-randomized trials, generalized linear mixed models and generalized estimating equations have conventionally been the default methods for estimating the average treatment effect. However, recent studies have demonstrated that their treatment effect coefficient estimators may correspond to ambiguous estimands when the models are misspecified or when cluster sizes are informative. In this article, we present a unified approach that standardizes output from a given regression model to ensure estimand-aligned inference for the treatment effect parameters in cluster-randomized trials. We introduce estimators for both the cluster-average and the individual-average treatment effects (marginal estimands) that are always consistent regardless of whether the specified working regression models align with the unknown data-generating process. We further explore the use of a deletion-based jackknife variance estimator for inference. The development of our approach also motivates a natural test for informative cluster size. Extensive simulation experiments are designed to demonstrate the advantage of the proposed estimators under a variety of scenarios. The proposed model-robust standardization methods are implemented in the MRStdCRT R package.
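A minimal sketch of regression standardization (g-computation) for the two marginal estimands, assuming a data frame with hypothetical columns cluster, arm (0/1), y, and x; this illustrates the standardization step only, not the MRStdCRT implementation, and omits the jackknife variance estimator:

```python
import pandas as pd
import statsmodels.formula.api as smf

def standardized_effects(df: pd.DataFrame):
    """Standardize a working outcome regression: predict each individual's
    outcome under both arms, then average the contrasts two ways."""
    fit = smf.glm("y ~ arm + x", data=df).fit()      # working model
    diff = fit.predict(df.assign(arm=1)) - fit.predict(df.assign(arm=0))
    individual_ate = diff.mean()                     # average over individuals
    cluster_ate = diff.groupby(df["cluster"]).mean().mean()  # clusters first
    return individual_ate, cluster_ate
```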

en stat.ME
arXiv Open Access 2024
Logit Standardization in Knowledge Distillation

Shangquan Sun, Wenqi Ren, Jingzhi Li et al.

Knowledge distillation involves transferring soft labels from a teacher to a student using a shared temperature-based softmax function. However, the assumption of a shared temperature between teacher and student implies a mandatory match between their logits in both range and variance. This side effect limits the performance of the student, given the capacity discrepancy between the two models and the finding that the teacher's innate logit relations are sufficient for the student to learn. To address this issue, we propose setting the temperature as the weighted standard deviation of the logits and performing a plug-and-play Z-score logit standardization before applying softmax and the Kullback-Leibler divergence. Our pre-processing enables the student to focus on the essential logit relations from the teacher rather than requiring a magnitude match, and it can improve the performance of existing logit-based distillation methods. We also show a typical case where the conventional setting of a shared temperature between teacher and student cannot reliably yield an authentic distillation evaluation; this problem is successfully alleviated by our Z-score. We extensively evaluate our method for various student and teacher models on CIFAR-100 and ImageNet, showing its significant superiority. Vanilla knowledge distillation powered by our pre-processing achieves favorable performance against state-of-the-art methods, and other distillation variants obtain considerable gains with the assistance of our pre-processing.
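A minimal PyTorch sketch of the Z-score pre-process (the paper derives the temperature as a weighted standard deviation of the logits; here a fixed base temperature tau is kept for simplicity):

```python
import torch
import torch.nn.functional as F

def zscore(logits, eps=1e-7):
    """Z-score standardization along the class dimension."""
    return (logits - logits.mean(dim=-1, keepdim=True)) / \
           (logits.std(dim=-1, keepdim=True) + eps)

def kd_loss(student_logits, teacher_logits, tau=2.0):
    """KL distillation loss on standardized logits: the student matches
    the teacher's logit relations rather than its logit magnitudes."""
    log_p = F.log_softmax(zscore(student_logits) / tau, dim=-1)
    q = F.softmax(zscore(teacher_logits) / tau, dim=-1)
    return F.kl_div(log_p, q, reduction="batchmean") * tau * tau

loss = kd_loss(torch.randn(8, 100), torch.randn(8, 100))
```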

en cs.CV
arXiv Open Access 2023
DiffusionCT: Latent Diffusion Model for CT Image Standardization

Md Selim, Jie Zhang, Michael A. Brooks et al.

Computed tomography (CT) is a key modality for lung cancer screening, diagnosis, treatment, and prognosis. Features extracted from CT images are now used to quantify spatial and temporal variations in tumors. However, CT images obtained from different scanners with customized acquisition protocols may exhibit considerable variation in texture features, even for the same patient. This poses a fundamental challenge to downstream studies that require consistent and reliable feature analysis. Existing CT image harmonization models rely on GAN-based supervised or semi-supervised learning and offer limited performance. This work addresses CT image harmonization with a new diffusion-based model, DiffusionCT, which standardizes CT images acquired from different vendors and protocols. DiffusionCT operates in the latent space by mapping a latent non-standard distribution into a standard one. It comprises a U-Net-based encoder-decoder augmented by a diffusion model in the bottleneck. Training proceeds in two phases: the encoder-decoder is first trained, without the diffusion model, to learn a latent representation of the input data; the latent diffusion model is then trained while the encoder-decoder is kept fixed. Finally, the decoder synthesizes a standardized image from the transformed latent representation. Experimental results demonstrate a significant improvement on the standardization task using DiffusionCT.

en eess.IV, cs.CV
arXiv Open Access 2023
Envisioning the Future of Cyber Security in Post-Quantum Era: A Survey on PQ Standardization, Applications, Challenges and Opportunities

Saleh Darzi, Kasra Ahmadi, Saeed Aghapour et al.

The rise of quantum computers exposes vulnerabilities in current public-key cryptographic protocols, necessitating the development of secure post-quantum (PQ) schemes. We therefore conduct a comprehensive study of various PQ approaches, covering their constructional design and structural vulnerabilities and offering security assessments and implementation evaluations, with a particular focus on side-channel attacks. We analyze global standardization processes, evaluate their metrics in relation to real-world applications, and focus primarily on standardized PQ schemes, selected additional signature-competition candidates, and PQ-secure cutting-edge schemes beyond standardization. Finally, we present visions and potential future directions for a seamless transition to the PQ era.

en cs.CR
arXiv Open Access 2023
LLM4Jobs: Unsupervised occupation extraction and standardization leveraging Large Language Models

Nan Li, Bo Kang, Tijl De Bie

Automated occupation extraction and standardization from free-text job postings and resumes are crucial for applications like job recommendation and labor market policy formation. This paper introduces LLM4Jobs, a novel unsupervised methodology that taps into the capabilities of large language models (LLMs) for occupation coding. LLM4Jobs uniquely harnesses both the natural language understanding and generation capacities of LLMs. Through rigorous experiments on synthetic and real-world datasets, we demonstrate that LLM4Jobs consistently surpasses unsupervised state-of-the-art benchmarks and remains versatile across diverse datasets and granularities. As a side result of our work, we release both synthetic and real-world datasets, which may be instrumental for subsequent research in this domain. Overall, this investigation highlights the promise of contemporary LLMs for the intricate task of occupation extraction and standardization, laying the foundation for a robust and adaptable framework relevant to both research and industrial contexts.

en cs.CL, cs.AI
arXiv Open Access 2023
ECMAScript -- The journey of a programming language from an idea to a standard

Juho Vepsäläinen

A significant portion of the web is powered by ECMAScript. As a web technology, it is ubiquitous and available on most platforms natively or through a web browser. ECMAScript is the dominant language of the web, but at the same time, it was not designed as such. The story of ECMAScript is a story of the impact of standardization on the popularity of technology. Simultaneously, the story shows how external pressures can shape a programming language and how politics can mar the evolution of a standard. In this article, we will go through the movements that led to the dominant position of ECMAScript, evaluate the factors leading to it, and consider its evolution using the Futures Triangle framework and the theory of standards wars.

en cs.PL
arXiv Open Access 2023
Automatic Standardization of Arabic Dialects for Machine Translation

Abidrabbo Alnassan

Based on an annotated multimedia corpus, the television series Marāyā (2013), we examine the question of "automatic standardization" of Arabic dialects for machine translation, distinguishing between rule-based and statistical machine translation. Machine translation from Arabic usually takes standard (modern) Arabic as the source language and produces quite satisfactory translations, thanks to the availability of the translation memories needed for training the models. The situation is different for Arabic dialects, where the output is much less adequate. In our research, we apply machine translation methods to a dialect/standard (or modern) Arabic pair to automatically produce a standard Arabic text from dialect input, a process we call "automatic standardization". We opt for statistical models because rule-based automatic standardization is harder, given the lack of diglossic dictionaries on the one hand and the difficulty of creating linguistic rules for each dialect on the other. This research could then lead to chaining automatic standardization software with machine translation software, feeding the output of the first into the second to obtain, in the end, a quality machine translation. The approach may also have educational applications, such as tools that help users understand different Arabic dialects by transforming dialectal texts into standard Arabic.

en cs.CL
arXiv Open Access 2023
An Open Dataset Storage Standard for 6G Testbeds

Gilles Callebaut, Michiel Sandra, Christian Nelson et al.

The emergence of sixth-generation (6G) networks has spurred the development of novel testbeds, including sub-THz networks, cell-free systems, and 6G simulators. To maximize the benefits of these systems, it is crucial to make the generated data publicly available and easily reusable by others. Although data sharing has become a common practice, a lack of standardization hinders data accessibility and interoperability. In this study, we propose the Dataset Storage Standard (DSS) to address these challenges by facilitating data exchange and enabling convenient processing script creation in a testbed-agnostic manner. DSS supports both experimental and simulated data, allowing researchers to employ the same processing scripts and tools across different datasets. Unlike existing standardization efforts such as SigMF and NI RF Data Recording API, DSS provides a broader scope by accommodating a common definition file for testbeds and is not limited to RF data storage. The dataset format utilizes a hierarchical structure, with a tensor representation for specific experiment scenarios. In summary, DSS offers a comprehensive and flexible framework for enhancing the FAIR principles (Findability, Accessibility, Interoperability, and Reusability) in 6G testbeds, promoting open and efficient data sharing in the research community.
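The DSS specification itself is not reproduced in the abstract; the following h5py snippet is only a hypothetical illustration of the described idea, a shared testbed definition plus per-experiment groups holding tensors (all names, attributes, and shapes are invented):

```python
import numpy as np
import h5py

with h5py.File("dataset.h5", "w") as f:
    tb = f.create_group("testbed")                 # common testbed definition
    tb.attrs["name"] = "cell-free-demo"
    tb.attrs["carrier_frequency_hz"] = 3.7e9
    exp = f.create_group("experiments/walk-01")    # one experiment scenario
    exp.attrs["scenario"] = "indoor mobility"
    # a tensor per scenario: (snapshot, antenna, subcarrier) channel estimates
    exp.create_dataset("csi", data=np.zeros((100, 64, 512), np.complex64))
```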

en eess.SP
arXiv Open Access 2020
Unveiling Relations in the Industry 4.0 Standards Landscape based on Knowledge Graph Embeddings

Ariam Rivas, Irlán Grangel-González, Diego Collarana et al.

Industry 4.0 (I4.0) standards and standardization frameworks have been proposed with the goal of empowering interoperability in smart factories. These standards enable the description and interaction of the main components, systems, and processes inside a smart factory. Owing to the growing number of frameworks and standards, there is an increasing need for approaches that automatically analyze the landscape of I4.0 standards. Standardization frameworks classify standards according to their functions into layers and dimensions. However, similar standards can be classified differently across frameworks, thus producing interoperability conflicts among them. Semantic approaches that rely on ontologies and knowledge graphs have been proposed to represent standards, known relations among them, and their classification according to existing frameworks. Albeit informative, structured modeling of the I4.0 landscape only provides the foundations for detecting interoperability issues; graph-based analytical methods that exploit the knowledge encoded by these approaches are required to uncover alignments among standards. We study the relatedness among standards and frameworks based on community analysis to discover knowledge that helps cope with interoperability conflicts between standards. We use knowledge graph embeddings to create these communities automatically, exploiting the meaning of the existing relationships. In particular, we focus on identifying similar standards, i.e., communities of standards, and analyze their properties to detect unknown relations. We empirically evaluate our approach on a knowledge graph of I4.0 standards using the Trans* family of embedding models for knowledge graph entities. Our results are promising and suggest that relations among standards can be detected accurately.
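A condensed sketch of the analysis idea, assuming entity embeddings have already been produced by a Trans* model: connect standards whose embeddings are similar and read communities off the resulting graph (the cosine measure and threshold are assumptions, not the paper's exact procedure):

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def standard_communities(embeddings, names, threshold=0.8):
    """Link standards whose (precomputed) embeddings have cosine
    similarity above a threshold, then detect communities of standards."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = e @ e.T
    g = nx.Graph()
    g.add_nodes_from(names)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if sim[i, j] >= threshold:
                g.add_edge(names[i], names[j], weight=float(sim[i, j]))
    return [set(c) for c in greedy_modularity_communities(g)]
```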

en cs.AI, cs.DB
arXiv Open Access 2020
STAN-CT: Standardizing CT Image using Generative Adversarial Network

Md Selim, Jie Zhang, Baowei Fei et al.

Computed tomography (CT) plays an important role in lung malignancy diagnostics, therapy assessment, and the delivery of precision medicine. However, the use of personalized imaging protocols poses a challenge for large-scale cross-center CT image radiomic studies. We present an end-to-end solution, STAN-CT, for CT image standardization and normalization that effectively reduces discrepancies in image features caused by different imaging protocols or by different CT scanners using the same protocol. STAN-CT consists of two components: 1) a novel Generative Adversarial Network (GAN) model that effectively learns the data distribution of a standard imaging protocol with only a few rounds of generator training, and 2) an automatic DICOM reconstruction pipeline with systematic image quality control that ensures the generation of high-quality standard DICOM images. Experimental results indicate that the training efficiency and model performance of STAN-CT are significantly improved compared with state-of-the-art CT image standardization and normalization algorithms.

en eess.IV, cs.CV
arXiv Open Access 2019
Accelerating Training of Deep Neural Networks with a Standardization Loss

Jasmine Collins, Johannes Balle, Jonathon Shlens

A significant advance in accelerating neural network training has been the development of normalization methods, permitting deep models to be trained faster and to better accuracy. These advances come with practical challenges: for instance, batch normalization ties the prediction of individual examples to other examples within a batch, resulting in a network that is heavily dependent on batch size. Layer normalization and group normalization are data-dependent and thus must be continually used, even at test time. To address the issues that arise from explicit normalization techniques, we propose replacing existing normalization methods with a simple, secondary objective loss that we term a standardization loss. This formulation is flexible and robust across different batch sizes, and surprisingly, this secondary objective accelerates learning on the primary training objective. Because it is a training loss, it is simply removed at test time, and no further effort is needed to maintain normalized activations. We find that a standardization loss accelerates training in both small- and large-scale image classification experiments, works with a variety of architectures, and is largely robust to training across different batch sizes.
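A minimal PyTorch sketch of such a standardization loss (the exact functional form and weighting used in the paper may differ):

```python
import torch

def standardization_loss(activations, weight=1e-2):
    """Penalize deviation of per-feature activation statistics from zero
    mean and unit variance; added to the task loss during training and
    simply dropped at test time."""
    mean = activations.mean(dim=0)
    var = activations.var(dim=0)
    return weight * ((mean ** 2).mean() + ((var - 1.0) ** 2).mean())

# usage during training: loss = task_loss + standardization_loss(hidden)
print(standardization_loss(torch.randn(64, 256)))
```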

en cs.LG, cs.AI
arXiv Open Access 2014
Custom v. Standardized Risk Models

Zura Kakushadze, Jim Kyung-Soo Liew

We discuss when and why custom multi-factor risk models are warranted and give source code for computing some risk factors. Pension/mutual funds do not require customization but standardization. However, using standardized risk models in quant trading with much shorter holding horizons is suboptimal: 1) longer horizon risk factors (value, growth, etc.) increase noise trades and trading costs; 2) arbitrary risk factors can neutralize alpha; 3) "standardized" industries are artificial and insufficiently granular; 4) normalization of style risk factors is lost for the trading universe; 5) diversifying risk models lowers P&L correlations, reduces turnover and market impact, and increases capacity. We discuss various aspects of custom risk model building.

en q-fin.PM, q-fin.RM
arXiv Open Access 2014
Standardization of type Ia supernovae

Rodrigo C. V. Coelho, Maurício O. Calvão, Ribamar R. R. Reis et al.

Type Ia supernovae (SNe Ia) have been intensively investigated because their great homogeneity and high luminosity make it possible to use them as standardizable candles for determining cosmological parameters. In 2011, the Nobel Prize in Physics was awarded for the discovery of the accelerating expansion of the Universe through observations of distant supernovae. This is a pedagogical article, aimed at those beginning to study the subject, in which we dwell on some topics related to the analysis of SNe Ia and their use in luminosity distance estimators. We investigate their spectral properties and light-curve standardization, paying careful attention to the fundamental quantities directly related to SNe Ia observables. Finally, we describe our own step-by-step implementation of a classical light-curve fitter, the stretch, applying it to real data from the Carnegie Supernova Project.
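A minimal sketch of a stretch-style fit in the same spirit: rescale the time axis of a template light curve by a stretch factor s and pick the s that best matches the observations. Real fitters also fit amplitude, time of maximum, and color, and the search bounds here are assumptions:

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import minimize_scalar

def fit_stretch(t_obs, m_obs, t_tpl, m_tpl):
    """Least-squares fit of a stretch factor s: the template light curve
    is evaluated at phase t/s and compared with the observed magnitudes."""
    template = interp1d(t_tpl, m_tpl, bounds_error=False, fill_value=np.nan)

    def chi2(s):
        pred = template(np.asarray(t_obs) / s)
        ok = ~np.isnan(pred)
        return np.sum((np.asarray(m_obs)[ok] - pred[ok]) ** 2)

    return minimize_scalar(chi2, bounds=(0.6, 1.4), method="bounded").x
```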

en astro-ph.CO, astro-ph.HE
arXiv Open Access 2013
High Quality Requirement Engineering and Applying Priority Based Tools for QoS Standardization in Web Service Architecture

C. Dinesh

Although there have been many developments aimed at improving Quality of Service (QoS) and requirements engineering in web services, standardization in this area remains scarce, leaving substantial unmet needs. Raising the standard of QoS in requirements-engineering analysis has also remained a persistent challenge in web service environments.

en cs.SE

Page 42 of 21305