Results for "Standardization. Simplification. Waste"

Showing 20 of ~454,979 results · from CrossRef, arXiv, DOAJ, Semantic Scholar

arXiv Open Access 2026
Metrology for Quantum Hardware Standardization -- Charting a Pathway: A Strategic Review

Nobu-Hisa Kaneko

Advances in quantum mechanics have long underpinned metrology by enabling practical realizations of units through quantum effects. With the 2019 SI revision, traceability is anchored in defined fundamental constants, reinforcing the quantum-mechanical basis of modern standards. In parallel, quantum technologies are transitioning from laboratory science to engineering and early industrial deployment, bringing familiar pressures for integration, reliability, cost reduction, supply-chain formation, and standardization. The direction of benefit is thus reversing: metrology and precision measurement are becoming enabling infrastructure for the industrialization of quantum technologies. Against this backdrop, this paper surveys the metrology and precision-measurement capabilities required across representative quantum-computing modalities and identifies where electrical and related metrology can contribute to the development, characterization, and reliable operation of quantum hardware. We then discuss cross-cutting measurement needs and standardization opportunities that recur across platforms, and note how similar frameworks can extend to emerging quantum-sensing applications.

en quant-ph
S2 Open Access 2014
A simplified up-down method (SUDO) for measuring mechanical nociception in rodents using von Frey filaments

R. Bonin, Cyril Bories, Y. de Koninck

Background: The measurement of mechanosensitivity is a key method for the study of pain in animal models. This is often accomplished with the use of von Frey filaments in an up-down testing paradigm. The up-down method described by Chaplan et al. (J Neurosci Methods 53:55–63, 1994) for mechanosensitivity testing in rodents remains one of the most widely used methods for measuring pain in animals. However, this method results in animals receiving a varying number of stimuli, which may lead to animals in different groups receiving different testing experiences that influence their later responses. To standardize the measurement of mechanosensitivity, we developed a simplified up-down method (SUDO) for estimating paw withdrawal threshold (PWT) with von Frey filaments that uses a constant number of five stimuli per test. We further refined the PWT calculation to allow the estimation of PWT directly from the behavioral response to the fifth stimulus, omitting the need for look-up tables. Results: The PWT estimates derived using SUDO strongly correlated (r > 0.96) with the PWT estimates determined with the conventional up-down method of Chaplan et al., and this correlation remained very strong across different levels of tester experience, different experimental conditions, and in tests from both mice and rats. The two testing methods also produced similar PWT estimates in prospective behavioral tests of mice at baseline and after induction of hyperalgesia by intraplantar capsaicin or complete Freund’s adjuvant. Conclusion: SUDO thus offers an accurate, fast, and user-friendly replacement for the widely used up-down method of Chaplan et al.
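The fixed five-stimulus procedure described in this abstract can be sketched as follows. This is a conceptual illustration only: the filament set, starting index, and the half-step adjustment applied after the fifth response are hypothetical placeholders, not the calibrated values from the paper, and the result is left on the filament-index scale rather than converted to grams.

```python
# Hypothetical von Frey filament forces in grams; the real set and
# calibration come from the experimental protocol, not this sketch.
FILAMENTS = [0.04, 0.07, 0.16, 0.4, 0.6, 1.0, 1.4, 2.0, 4.0]

def sudo_pwt(responds, start_index=4, n_stimuli=5, offset=0.5):
    """responds(force) -> True if the animal withdraws its paw.

    Applies a constant number of stimuli, stepping to a lighter
    filament after a withdrawal and a heavier one after no response,
    then estimates the paw withdrawal threshold (PWT) directly from
    the fifth stimulus: half a step below its filament index if the
    animal responded, half a step above if it did not. No look-up
    table is needed (illustrative rule only).
    """
    i = start_index
    for _ in range(n_stimuli):
        applied = i                      # filament used for this stimulus
        response = responds(FILAMENTS[i])
        if response:
            i = max(i - 1, 0)            # withdrew: go lighter
        else:
            i = min(i + 1, len(FILAMENTS) - 1)  # no response: go heavier
    return applied + (-offset if response else offset)
```

With a deterministic "animal" that withdraws at 0.6 g and above, `sudo_pwt(lambda f: f >= 0.6)` oscillates between indices 3 and 4 and returns an estimate between them.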

374 citations en Medicine
arXiv Open Access 2025
The effect of predation on the dynamics of Chronic Wasting Disease in deer

Cody E. FitzGerald, James P. Keener

Chronic Wasting Disease (CWD) is a neurological disease impacting deer, elk, moose, and other cervid populations and is caused by a misfolded protein known as a prion. CWD is difficult to control due to the persistence of prions in the environment. Prions can remain infectious for more than a decade and have been found in soil as well as other environmental vectors, such as ticks and plants. Here, we provide a bifurcation analysis of a mathematical model of CWD spread in a cervid population, and use a modification of the Gillespie algorithm to explore whether wolves can be used as an ecological control strategy to limit the spread of the disease in several relevant scenarios. We then analytically compute the probability that the disease spreads given that one infected member enters a fully healthy population, and the probability of elimination given a fully susceptible population and remaining prions in the environment. From our analysis, we conclude that wolves can be an effective control strategy to limit the spread of CWD in cervid populations, and that hunting or other means of lowering the susceptible population are beneficial to controlling its spread, although it is important to note that inferring biologically relevant parameters from the existing data is an ongoing challenge for this system.
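The abstract mentions a modified Gillespie algorithm for stochastic simulation of CWD spread. Below is a minimal sketch of one step of a plain Gillespie-style simulation for a toy model with susceptible deer `S`, infected deer `I`, and environmental prions `P`; the reactions, rate names, and prion-shedding rule are illustrative assumptions, not the model from the paper.

```python
import random

def gillespie_step(state, rates, rng=random):
    """One Gillespie step: draw an exponential waiting time from the
    total propensity, then pick a reaction proportionally to its
    propensity. Returns (new_state, dt), or (state, None) if no
    reaction can fire."""
    S, I, P = state
    propensities = [
        rates["contact"] * S * I,   # direct transmission: S -> I
        rates["env"] * S * P,       # environmental infection: S -> I
        rates["death"] * I,         # infected death, sheds a prion unit
        rates["decay"] * P,         # prion decay in the environment
    ]
    total = sum(propensities)
    if total == 0:
        return state, None
    dt = rng.expovariate(total)
    pick = rng.uniform(0, total)
    if pick < propensities[0] + propensities[1]:
        S, I = S - 1, I + 1          # either infection route: S -> I
    elif pick < total - propensities[3]:
        I, P = I - 1, P + 1          # death releases a unit of prions
    else:
        P -= 1                       # environmental prion decays
    return (S, I, P), dt
```

Repeatedly applying `gillespie_step` and accumulating `dt` yields one stochastic trajectory; control strategies such as predation would enter as additional reactions with their own propensities.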

en q-bio.PE, math.DS
arXiv Open Access 2025
Standardization for improved Spatio-Temporal Image Fusion

Harkaitz Goyena, Peter M. Atkinson, Unai Pérez-Goya et al.

Spatio-Temporal Image Fusion (STIF) methods usually require sets of images with matching spatial and spectral resolutions captured by different sensors. To facilitate the application of STIF methods, we propose and compare two different standardization approaches. The first method is based on traditional upscaling of the fine-resolution images. The second method is a sharpening approach called Anomaly Based Satellite Image Standardization (ABSIS) that blends the overall features found in the fine-resolution image series with the distinctive attributes of a specific coarse-resolution image to produce images that more closely resemble the outcome of aggregating the fine-resolution images. Both methods produce a significant increase in accuracy of the Unpaired Spatio Temporal Fusion of Image Patches (USTFIP) STIF method, with the sharpening approach increasing the spectral and spatial accuracies of the fused images by up to 49.46% and 78.40%, respectively.
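The "traditional upscaling" standardization mentioned above amounts to aggregating fine-resolution pixels onto the coarse sensor's grid. A minimal sketch, assuming simple averaging over non-overlapping blocks whose size divides the image evenly (the actual aggregation scheme used in the paper may differ):

```python
def upscale(image, factor):
    """Average non-overlapping factor x factor blocks of a 2-D list
    of pixel values, mimicking what a coarse-resolution sensor would
    observe over the same scene. Assumes image dimensions are
    divisible by factor."""
    rows, cols = len(image), len(image[0])
    coarse = []
    for r in range(0, rows, factor):
        row = []
        for c in range(0, cols, factor):
            block = [image[i][j]
                     for i in range(r, r + factor)
                     for j in range(c, c + factor)]
            row.append(sum(block) / len(block))  # block mean
        coarse.append(row)
    return coarse
```

For example, a 4x4 fine image aggregated with `factor=2` yields a 2x2 coarse image of block means; per-band application extends this to multispectral imagery.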

en cs.CV, stat.CO
arXiv Open Access 2025
LLM-based Text Simplification and its Effect on User Comprehension and Cognitive Load

Theo Guidroz, Diego Ardila, Jimmy Li et al.

Information on the web, such as scientific publications and Wikipedia, often surpasses users' reading level. To help address this, we used a self-refinement approach to develop an LLM capability for minimally lossy text simplification. To validate our approach, we conducted a randomized study involving 4563 participants and 31 texts spanning 6 broad subject areas: PubMed (biomedical scientific articles), biology, law, finance, literature/philosophy, and aerospace/computer science. Participants were randomized to viewing original or simplified texts in a subject area, and answered multiple-choice questions (MCQs) that tested their comprehension of the text. The participants were also asked to provide qualitative feedback such as task difficulty. Our results indicate that participants who read the simplified text answered more MCQs correctly than their counterparts who read the original text (3.9% absolute increase, p<0.05). This gain was most striking with PubMed (14.6%), while more moderate gains were observed for finance (5.5%), aerospace/computer science (3.8%), and law (3.5%). Notably, the results were robust to whether participants could refer back to the text while answering MCQs. The absolute accuracy decreased by up to ~9% for both original and simplified setups where participants could not refer back to the text, but the ~4% overall improvement persisted. Finally, participants' self-reported perceived ease based on a simplified NASA Task Load Index was greater for those who read the simplified text (absolute change on a 5-point scale 0.33, p<0.05). This randomized study, involving an order of magnitude more participants than prior works, demonstrates the potential of LLMs to make complex information easier to understand. Our work aims to enable a broader audience to better learn and make use of expert knowledge available on the web, improving information accessibility.

en cs.CL
arXiv Open Access 2025
Towards AI-Native RAN: An Operator's Perspective of 6G Day 1 Standardization

Nan Li, Qi Sun, Lehan Wang et al.

Artificial Intelligence/Machine Learning (AI/ML) has become the most certain and prominent feature of 6G mobile networks. Unlike 5G, where AI/ML was not natively integrated but rather an add-on feature over the existing architecture, 6G shall incorporate AI from the onset to address its complexity and support ubiquitous AI applications. Based on our extensive mobile network operation and standardization experience from 2G to 5G, this paper explores the design and standardization principles of AI-Native radio access networks (RAN) for 6G, with a particular focus on its critical Day 1 architecture, functionalities, and capabilities. We investigate the framework of AI-Native RAN and present its three essential capabilities to shed some light on the standardization direction; namely, AI-driven RAN processing/optimization/automation, reliable AI lifecycle management (LCM), and AI-as-a-Service (AIaaS) provisioning. We propose the standardization of AI-Native RAN, in particular the Day 1 features, including an AI-Native 6G RAN architecture. For validation, a large-scale field trial with over 5000 5G-A base stations has been built and has delivered significant improvements in average air interface latency, root cause identification, and network energy consumption with the proposed architecture and the supporting AI functions. This paper aims to provide a Day 1 framework for 6G AI-Native RAN standardization design, balancing technical innovation with practical deployment.

en cs.NI, cs.AI
S2 Open Access 2024
AI-QuIC: Machine Learning for Automated Detection of Misfolded Proteins in Seed Amplification Assays

Kyle D. Howey, Manci Li, Peter R. Christenson et al.

Advancements in AI, particularly deep learning, have revolutionized protein folding modeling, offering insights into biological processes and accelerating drug discovery for protein misfolding diseases. However, detecting misfolded proteins associated with neurodegenerative disorders, such as Alzheimer’s, Parkinson’s, ALS, and prion diseases, relies on Seed Amplification Assays (SAAs) analyzed through manual, time-consuming, and potentially inconsistent methods. We introduce AI-QuIC, an AI-driven platform that automates the analysis of Real-Time Quaking-Induced Conversion (RT-QuIC) assay data, a type of SAA crucial for detecting misfolded proteins. Utilizing a well-labeled RT-QuIC dataset of over 8,000 wells—the largest curated dataset for chronic wasting disease prion detection—we applied various AI models to classify true positive, false positive, and negative reactions. Notably, our deep-learning-based model achieved over 98% sensitivity and 97% specificity. By learning directly from raw fluorescence data, deep learning simplifies the SAA-analysis workflow. Automating and standardizing SAA data interpretation with AI-QuIC provides robust, scalable, and consistent diagnostic solutions.

1 citation en Biology
arXiv Open Access 2024
Post-Quantum Security: Origin, Fundamentals, and Adoption

Johanna Barzen, Frank Leymann

Nowadays, predominant asymmetric cryptographic schemes are considered to be secure because discrete logarithms are believed to be hard to compute. Shor's algorithm can effectively compute discrete logarithms, i.e. it can break such asymmetric schemes. But Shor's algorithm is a quantum algorithm, and when it was invented, quantum computers that could successfully execute it seemed to be far out in the future. The latter has changed: quantum computers that are powerful enough are likely to be available in a couple of years. In this article, we first describe the relation between discrete logarithms and two well-known asymmetric security schemes, RSA and Elliptic Curve Cryptography. Next, we present the foundations of lattice-based cryptography, which is the basis of schemes that are considered to be safe against attacks by quantum algorithms (as well as by classical algorithms). Then we describe two such quantum-safe algorithms (Kyber and Dilithium) in more detail. Finally, we give a very brief and selective overview of a few actions currently taken by governments and industry, as well as standardization in this area. The article especially strives to be self-contained: the required mathematical foundations to understand post-quantum cryptography are provided and examples are given.
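As a toy illustration of the discrete-logarithm problem this article builds on: exhaustive search recovers small discrete logarithms instantly, but its O(p) cost is hopeless at cryptographic sizes (e.g. 2048-bit moduli), which is exactly the gap Shor's quantum algorithm closes. The group parameters below are illustrative, not a real deployment.

```python
def discrete_log(g, h, p):
    """Find the smallest x with g**x % p == h by exhaustive search.

    Runs in O(p) multiplications: trivial for a three-digit prime,
    utterly infeasible when p has hundreds of digits, which is why
    classical attackers cannot break discrete-log-based schemes."""
    value = 1
    for x in range(p - 1):
        if value == h:
            return x
        value = (value * g) % p  # advance to g**(x+1) mod p
    return None                  # h not in the subgroup generated by g
```

With the primitive root 2 modulo the prime 101, the search recovers any exponent in at most 100 steps; doubling the bit length of p squares the work, so the attack scales exponentially in the key size.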

en cs.CR, quant-ph
S2 Open Access 2024
Implementation of an Employee Management Information System for the Palu City Department of Education Using the Waterfall Method

Siti Rahmawati, H. Ngemba, Syaiful Hendra et al.

The Department of Education and Culture of Palu City has the primary responsibility for increasing the salaries and ranks of teachers and staff. Previously, the application process was done manually, requiring teachers and staff to visit the Department's office with physical documents. This caused delays, wasted time and effort, and the risk of lost documents. To address these issues, this study aims to develop a Personnel Management Information System at the Department of Education in Palu City, focusing on improving efficiency and transparency in the salary and rank promotion process. Using the SDLC Waterfall method, this research involves observation and direct interviews with the Department to understand the existing needs and processes. Implementing this system will enable the Department to monitor employee performance, manage administration, and expedite and simplify the salary and rank promotion process, overcoming the challenges of physical documents and enhancing overall transparency. This study also conducts testing using the black box method with the Equivalence Partition technique and user testing using the EUCS (End-User Computing Satisfaction) method. The implications of this research on the Department's policies include increased administrative process efficiency by integrating the management information system into all administrative processes, encouraging the development of digitalization policies within the Department, setting new transparency standards in salary and rank management, and optimizing human resources through training in system use. New policies can simplify and standardize the procedures for applying for salary and rank promotions, reducing the risk of errors and lost documents.

S2 Open Access 2024
Innovative Approaches to Optimising Bar Menu Design: Enhancing Operational Profitability and Service Quality

Volodymyr Tolstov

Empirical research on bar operations at Waterpark Odessa and Waterpark-Hotel Zatoka during the summer seasons of 2018–2022 evaluates how compact beverage assortments, standardized production cycles, and visually structured drink cards affect profitability and guest service in seasonal entertainment venues. The study employs a mixed-methods design, combining indices from managerial accounting for core beverage categories with qualitative reconstructions of production flows, stock policies, and menu design decisions. Case evidence is linked to international literature on menu engineering, digital ordering tools, and theme park F&B strategies. The menu redesign replaces overloaded assortments with a focused set of margin-rich beverages, supported by unified ingredients, pre-preparations, and simplified drink assembly schemes. The visual layout draws attention to signature items, family bundles, and photogenic drinks, which are promoted through social media and digital channels. A comparative analysis of records from the pre- and post-optimization periods reveals lower food costs for cocktails and soft drinks, reduced all-inclusive production costs due to decreased labor intensity and waste, shorter service times during peak loads, and higher average check sizes. At the same time, the beverage share in total food and beverage (F&B) revenue moves closer to benchmarks reported for successful bar and nightclub operations. Sales data indicate a shift in demand toward visually highlighted items and bundle offers, while rarely ordered, complex cocktails are losing prominence or disappearing altogether. The case refines menu-engineering approaches for environments with extreme seasonality, pronounced daytime peaks, and guest streams dominated by families. A scalable managerial framework is proposed for beverage assortment design, stock management, menu visualization, and digital ordering in waterparks and related leisure properties.

S2 Open Access 2024
A-195 Combination of Lean Mindset, Robotics and Digitalization allowed us to manage a 2.5 times increase in workload in our central reference lab

A. Bansal, R. Datta, S. Bhakta et al.

Our Agilus Diagnostics lab in Gurugram, India is a Central Reference Laboratory and part of a pan-India network of over 400 labs. Our Gurugram lab processes 3,200 tubes in the serum work area in 17-hour daily operations and is ISO 15189 and CAP accredited. In Jan 2023, our lab started to support Govt PHC Program samples and our workload increased to 8,000 tubes. This was a clear operational and clinical challenge: a 2.5x jump in workload. To meet it, a lean mindset with process-improvement steps had to be applied across the total testing process. This lean approach was supported by cobas® Lab Automation systems managed by the navify® Lab Operations informatics automation solution. This poster explains our journey of planning, implementing, and delivering on our goal of predictable operational efficiency when faced with this challenge. We took four major steps. (1) We redesigned lab processes using lean process-improvement operational dashboards; this poster will show the metrics we track, which help eliminate waste in the system. (2) We optimized our total lab automation solution with smart load balancing across cobas® Pro integrated analyzers and a staff-friendly recursive workflow. (3) We made clear and strict use of off-peak hours for maintenance and QC procedures, with a standardized, predictable schedule of operation; this poster will share our timetable with key milestones across the 24 hours. (4) We built on the informatics solution using a five-layered auto-verification rules-based decision framework: IQC, system flags checks, sample quality checks, biological reference checks, custom auto-verification ranges, critical-range delta-check criteria, and a clinical correlation rule are all part of the auto-verification logic framework. We were able to deliver the following results. Lean process design led to 53% simplification: the number of process steps was reduced from 62 to 29 across the total testing process.
Total lab automation has helped improve 90th-percentile production TAT by 46%, from 250 mins to 135 mins, and our 90% TAT CV has significantly reduced, making our production predictable. Over 17,500 results/day reported are subject to auto verification, with 87% of all results auto-verified and auto-approved. Not a single additional employee was added in the analytical or post-analytical areas of lab operations. A lean mindset complemented by total laboratory automation, along with decision automation on an informatics solution, has enhanced lab performance in terms of TAT, efficiency, and staff motivation. This poster will also demonstrate, using incremental effectiveness analysis, comparative data on key performance indicators, such as predictability in production, workforce efficiency, and senior staff time saved, before and after technology deployment. The challenge of increasing sample workload is real in India, and this approach allows for a sustainable way to ensure patient impact with minimum burden on our workforce.

S2 Open Access 2021
The Alzheimer's Association international guidelines for handling of cerebrospinal fluid for routine clinical measurements of amyloid β and tau

O. Hansson, R. Batrla, Britta Brix et al.

The core cerebrospinal fluid (CSF) Alzheimer's disease (AD) biomarkers amyloid beta (Aβ42 and Aβ40), total tau, and phosphorylated tau, have been extensively clinically validated, with very high diagnostic performance for AD, including the early phases of the disease. However, between‐center differences in pre‐analytical procedures may contribute to variability in measurements across laboratories. To resolve this issue, a workgroup was led by the Alzheimer's Association with experts from both academia and industry. The aim of the group was to develop a simplified and standardized pre‐analytical protocol for CSF collection and handling before analysis for routine clinical use, and ultimately to ensure high diagnostic performance and minimize patient misclassification rates. Widespread application of the protocol would help minimize variability in measurements, which would facilitate the implementation of unified cut‐off levels across laboratories, and foster the use of CSF biomarkers in AD diagnostics for the benefit of the patients.

96 citations en Medicine
S2 Open Access 2020
Product Quality Improvement Policies in Industry 4.0: Characteristics, Enabling Factors, Barriers, and Evolution Toward Zero Defect Manufacturing

Foivos Psarommatis, Sylvain Prouvost, Gökan May et al.

In the competitive market of manufacturing, quality is a criterion of primary importance in order to win market share. Quality improvement must be coupled with a performance point of view. Lean Manufacturing, Six Sigma, Lean Six Sigma, Total Quality Management, Theory of Constraints, and their combinations are philosophies dedicated to this goal. This study is a literature review on the implementation of these philosophies to improve the quality of processes and products in a system, and also covers their commonalities and differences with the Zero Defect Manufacturing (ZDM) philosophy. In this study, 45 articles have been analyzed. These articles were selected by searching several scientific libraries with specific keywords. The methodology is based on a list of information extracted from each paper. The data extracted concern tool selection, critical implementation factors, and the benefits obtained. Based on the review and analysis of the literature and practices, we provide the top 10 main components of the tools used for quality improvement, enabling factors, benefits, and barriers to implementation. Moreover, we present and discuss a categorization of quality improvement methods and the way toward ZDM. The need for standardized toolkits for different levels of maturity in quality management systems, and for better education, has been highlighted. Thanks to technological improvement in information flow management, ZDM seems close to being achieved, even though some new risks and wastes have to be taken care of within the implementation.

121 citations en Computer Science
arXiv Open Access 2023
No Compromise in Solution Quality: Speeding Up Belief-dependent Continuous POMDPs via Adaptive Multilevel Simplification

Andrey Zhitnikov, Ori Sztyglic, Vadim Indelman

Continuous POMDPs with general belief-dependent rewards are notoriously difficult to solve online. In this paper, we present a complete provable theory of adaptive multilevel simplification for two settings: a given externally constructed belief tree, and MCTS that constructs the belief tree on the fly using an exploration technique. Our theory makes it possible to accelerate POMDP planning with belief-dependent rewards without any sacrifice in the quality of the obtained solution. We rigorously prove each theoretical claim in the proposed unified theory. Using the general theoretical results, we present three algorithms to accelerate continuous POMDP online planning with belief-dependent rewards. Two of our algorithms, SITH-BSP and LAZY-SITH-BSP, can be utilized on top of any method that constructs a belief tree externally. The third algorithm, SITH-PFT, is an anytime MCTS method that permits plugging in any exploration technique. All our methods are guaranteed to return exactly the same optimal action as their unsimplified equivalents. We replace the costly computation of information-theoretic rewards with novel adaptive upper and lower bounds which we derive in this paper and which are of independent interest. We show that they are easy to calculate and can be tightened on demand by our algorithms. Our approach is general; namely, any bounds that monotonically converge to the reward can be utilized to achieve significant speedup without any loss in performance. Our theory and algorithms support the challenging setting of continuous states, actions, and observations. The beliefs can be parametric or general and represented by weighted particles. We demonstrate in simulation a significant speedup in planning compared to baseline approaches, with guaranteed identical performance.
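The guarantee described in this abstract, returning the same action as exact maximization while tightening cheap bounds only when needed, can be sketched as follows. The halving-toward-the-truth tightening rule below is a toy stand-in for the paper's information-theoretic bounds; the only property relied on is that the bounds monotonically converge to the true reward.

```python
def select_action(candidates, slack=1e-9):
    """candidates: list of (action, true_reward) pairs.

    Maintains a [lower, upper] interval per action and tightens
    intervals only while some rival's upper bound still exceeds the
    leader's lower bound. Once the leader's lower bound dominates all
    other upper bounds, it is provably the exact argmax, so the
    expensive exact reward never has to be fully evaluated."""
    truth = dict(candidates)
    # Start with loose symmetric bounds around each true reward.
    bounds = {a: (r - 1.0, r + 1.0) for a, r in candidates}
    while True:
        best = max(bounds, key=lambda a: bounds[a][0])
        lo_best = bounds[best][0]
        # Rivals whose upper bound has not yet been ruled out.
        rivals = [a for a in bounds
                  if a != best and bounds[a][1] > lo_best + slack]
        if not rivals:
            return best  # leader dominates: same answer as exact argmax
        # Tighten the contested intervals by halving the gap toward
        # the true reward (toy model of requesting a tighter bound).
        for a in rivals + [best]:
            lo, hi = bounds[a]
            bounds[a] = ((lo + truth[a]) / 2, (hi + truth[a]) / 2)
```

Because each tightening halves the interval, the loop terminates whenever the top two true rewards differ by more than `slack`, and the returned action always matches plain maximization over the true rewards.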

en cs.AI, cs.RO
arXiv Open Access 2023
Scamming the Scammers: Using ChatGPT to Reply Mails for Wasting Time and Resources

Enrico Cambiaso, Luca Caviglione

The use of Artificial Intelligence (AI) to support cybersecurity operations is now a consolidated practice, e.g., to detect malicious code or configure traffic filtering policies. The recent surge of AI, generative techniques, and frameworks with efficient natural language processing capabilities dramatically magnifies the number of possible applications aimed at increasing the security of the Internet. Specifically, the ability of ChatGPT to produce textual content while mimicking realistic human interactions can be used to mitigate the plague of emails containing scams. Therefore, this paper investigates the use of AI to engage scammers in automated and pointless communications, with the goal of wasting both their time and resources. Preliminary results showcase that ChatGPT is able to decoy scammers, thus confirming that AI is an effective tool to counteract threats delivered via mail. In addition, we highlight the multitude of implications and open research questions to be addressed in the context of the ubiquitous adoption of AI.

en cs.CR, cs.AI
arXiv Open Access 2023
The Fewer Splits are Better: Deconstructing Readability in Sentence Splitting

Tadashi Nomoto

In this work, we focus on sentence splitting, a subfield of text simplification, motivated largely by an unproven idea that if you divide a sentence into pieces, it should become easier to understand. Our primary goal in this paper is to find out whether this is true. In particular, we ask: does it matter whether we break a sentence into two or three? We report on our findings based on Amazon Mechanical Turk. More specifically, we introduce a Bayesian modeling framework to investigate to what degree a particular way of splitting the complex sentence affects readability, along with a number of other parameters adopted from diverse perspectives, including clinical linguistics and cognitive linguistics. The Bayesian modeling experiment provides clear evidence that bisecting the sentence enhances readability to a degree greater than trisection does.

en cs.CL, cs.AI
arXiv Open Access 2023
ezBIDS: Guided standardization of neuroimaging data interoperable with major data archives and platforms

Daniel Levitas, Soichi Hayashi, Sophia Vinci-Booher et al.

Data standardization has become one of the leading methods neuroimaging researchers rely on for data sharing and reproducibility. Data standardization promotes a common framework through which researchers can utilize others' data. Yet, as of today, formatting datasets that adhere to community best practices requires technical expertise involving coding and considerable knowledge of file formats and standards. We describe ezBIDS, a tool for converting neuroimaging data and associated metadata to the Brain Imaging Data Structure (BIDS) standard. ezBIDS provides four unique features: (1) No installation or programming requirements. (2) Handling of both imaging and task events data and metadata. (3) Automated inference and guidance for adherence to BIDS. (4) Multiple data management options: download BIDS data to local system, or transfer to OpenNeuro.org or brainlife.io. In sum, ezBIDS requires neither coding proficiency nor knowledge of BIDS and is the first BIDS tool to offer guided standardization, support for task events conversion, and interoperability with OpenNeuro and brainlife.io.

Page 39 of 22749