T. Beasley, R. Schumacker
Results for "Standardization. Simplification. Waste"
Showing 20 of ~454985 results · from CrossRef, arXiv, DOAJ, Semantic Scholar
Piercosma Bisconti, Marcello Galisai
This white paper examines the technical foundations of European AI standardization under the AI Act. It explains how harmonized standards enable the presumption of conformity mechanism, describes the CEN/CENELEC standardization process, and analyzes why AI poses unique standardization challenges including stochastic behavior, data dependencies, immature evaluation practices, and lifecycle dynamics. The paper argues that AI systems are typically components within larger sociotechnical systems, requiring a layered approach where horizontal standards define process obligations and evidence structures while sectoral profiles specify domain-specific thresholds and acceptance criteria. It proposes a workable scheme based on risk management, reproducible technical checks redefined as stability of measured properties, structured documentation, comprehensive logging, and assurance cases that evolve over the system lifecycle. The paper demonstrates that despite methodological difficulties, technical standards remain essential for translating legal obligations into auditable engineering practice and enabling scalable conformity assessment across providers, assessors, and enforcement authorities.
Atma Anand
Information-processing systems that coordinate multiple agents and objectives face fundamental thermodynamic constraints. We show that the solutions most useful as coordination focal points are selected far more strongly for being findable across agents than for accuracy. We derive that the information-theoretic minimum description length of coordination protocols to precision $\varepsilon$ scales as $L(P)\geq NK\log_2 K+N^2d^2\log (1/\varepsilon)$ for $N$ agents with $d$ potentially conflicting objectives and internal model complexity $K$. This scaling forces progressive simplification, with coordination dynamics changing the environment itself and shifting optimization across hierarchical levels. Moving from established focal points requires re-coordination, creating persistent metastable states and hysteresis until significant environmental shifts trigger phase transitions through spontaneous symmetry breaking. We operationally define a coordination temperature to predict critical phenomena and estimate coordination work costs, identifying measurable signatures across systems from neural networks to restaurant bills to bureaucracies. Extending the topological version of Arrow's theorem on the impossibility of consistent preference aggregation, we find that it binds recursively whenever preferences are combined. This potentially explains the indefinite cycling in multi-objective gradient descent and alignment faking in Large Language Models trained with reinforcement learning from human feedback. We term this framework Thermodynamic Coordination Theory (TCT), which demonstrates that coordination requires radical information loss.
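As a reading aid, the stated lower bound can be evaluated numerically. The minimal Python sketch below assumes base-2 logarithms throughout (the abstract does not specify the base of the second term) and uses arbitrary illustrative values for N, K, d, and ε; it only evaluates the formula quoted above.

```python
import math

def coordination_mdl_lower_bound(N: int, K: int, d: int, eps: float) -> float:
    """Lower bound on the description length L(P) of a coordination
    protocol, as stated in the abstract:
        L(P) >= N*K*log2(K) + N^2 * d^2 * log(1/eps).
    The base of the second logarithm is not given in the abstract;
    base 2 is assumed here so the result is in bits.
    """
    return N * K * math.log2(K) + (N ** 2) * (d ** 2) * math.log2(1.0 / eps)

# Illustrative scaling: doubling the number of agents roughly quadruples
# the precision term, which dominates for small eps.
for N in (10, 20, 40):
    print(N, coordination_mdl_lower_bound(N=N, K=8, d=3, eps=1e-3))
```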
Yuan Gao, Guangjin Pan, Zhiyong Zhong et al.
With the integration of cellular networks into vertical industries that demand precise location information, such as vehicle-to-everything (V2X), public safety, and the Industrial Internet of Things (IIoT), positioning has become an imperative component of future wireless networks. By exploiting wider spectrum, multiple antennas, and flexible architectures, cellular positioning achieves ever-increasing accuracy. Still, it faces fundamental performance degradation when the distance between the user equipment (UE) and the base station (BS) is large or in non-line-of-sight (NLoS) scenarios. To this end, 3rd Generation Partnership Project (3GPP) Rel-18 proposes to standardize sidelink (SL) positioning, which provides unique opportunities to extend positioning coverage via direct positioning signaling between UEs. Despite these standardization advancements, the capability of SL positioning remains debated, especially how much spectrum is required to achieve the positioning accuracy defined by 3GPP. To address this, this article comprehensively summarizes the latest 3GPP standardization advancements on SL positioning, covering a) network architecture; b) positioning types; and c) performance requirements. The capability of SL positioning using various positioning methods under different impairments is evaluated and discussed in depth. Finally, following the evolution of SL in 3GPP Rel-19, we discuss possible research directions and challenges for SL positioning.
Xi Fang, Xueqi Wang, Patrick J. Heagerty et al.
Stepped-wedge cluster-randomized trials (SW-CRTs) are widely used in healthcare and implementation science, providing an ethical advantage by ensuring all clusters eventually receive the intervention. The staggered rollout of treatment introduces complexities in defining and estimating treatment-effect estimands, particularly under informative cluster sizes. Traditional model-based methods, including generalized estimating equations (GEE) and linear mixed models (LMM), produce estimates that depend on implicit weighting schemes and parametric assumptions, leading to bias for different types of estimands in the presence of informative cluster sizes. While recent methods have attempted to provide robust estimation in SW-CRTs, they either rely on restrictive modeling assumptions or lack a general framework for consistently estimating multiple estimands under informative cluster sizes. In this article, we propose a model-robust standardization framework for SW-CRTs that generalizes existing methods from parallel-arm CRTs. We define causal estimands, including horizontal-individual, horizontal-cluster, vertical-individual, and vertical-cluster average treatment effects, under a super-population framework and introduce an augmented standardization estimator that standardizes parametric and semiparametric working models while maintaining robustness to informative cluster size under arbitrary misspecification. We evaluate the finite-sample properties of the proposed estimators through extensive simulation studies, assessing their performance under various SW-CRT designs. Finally, we illustrate the practical application of model-robust standardization through a reanalysis of two real-world SW-CRTs.
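For readers unfamiliar with standardization estimators, the minimal Python sketch below illustrates plain model standardization (g-computation) with a linear working model and the contrast between individual-level and cluster-level averaging of predicted potential outcomes. It is only a generic illustration under assumed column names, not the paper's augmented, model-robust estimator.

```python
# Generic model standardization (g-computation) sketch: fit a working
# outcome model, predict each unit's outcome under treatment and control,
# then average the predicted differences either over individuals or over
# clusters. Column names and the linear working model are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

def standardized_effects(df: pd.DataFrame):
    """df needs columns: outcome y, treatment a (0/1), cluster, covariate x."""
    model = smf.ols("y ~ a + x", data=df).fit()       # working outcome model
    y1 = model.predict(df.assign(a=1))                # predicted outcome if treated
    y0 = model.predict(df.assign(a=0))                # predicted outcome if untreated
    diff = y1 - y0
    individual_avg = diff.mean()                              # weights individuals equally
    cluster_avg = diff.groupby(df["cluster"]).mean().mean()   # weights clusters equally
    return individual_avg, cluster_avg
```

The two averages coincide only when cluster sizes are uninformative; their divergence is one way to see why the choice of estimand matters in SW-CRTs.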
Lars Ullrich, Michael Buchholz, Klaus Dietmayer et al.
Assuring the safety of artificial intelligence (AI) applied to safety-critical systems is of paramount importance, especially since research in the field of automated driving shows that AI can outperform classical approaches, handle higher complexity, and reach new levels of autonomy. At the same time, the safety assurance required for the use of AI in such safety-critical systems is still not in place. Due to the dynamic and far-reaching nature of the technology, research on safeguarding AI is being conducted in parallel with AI standardization and regulation. This parallel progress necessitates simultaneous consideration in order to carry out targeted research and development of AI systems in the context of automated driving. Therefore, in contrast to existing surveys that focus primarily on research aspects, this paper considers research, standardization, and regulation in a concise way. Accordingly, the survey takes into account the interdependencies arising from the triplet of research, standardization, and regulation from a forward-looking perspective and anticipates and discusses open questions and possible future directions. In this way, the survey ultimately provides researchers and safety experts with a compact, holistic perspective that discusses the current status, emerging trends, and possible future developments.
Luisa Ferrari, Massimo Ventrucci
Latent Gaussian Models (LGMs) are a subset of Bayesian Hierarchical models where Gaussian priors, conditional on variance parameters, are assigned to all effects in the model. LGMs are employed in many fields for their flexibility and computational efficiency. However, practitioners find prior elicitation on the variance parameters challenging because of a lack of intuitive interpretation for them. Recently, several papers have tackled this issue by rethinking the model in terms of variance partitioning (VP) and assigning priors to parameters reflecting the relative contribution of each effect to the total variance. So far, the class of priors based on VP has been mainly deployed for random effects and fixed effects separately. This work presents a novel standardization procedure that expands the applicability of VP priors to a broader class of LGMs, including both fixed and random effects. We describe the steps required for standardization through various examples, with a particular focus on the popular class of intrinsic Gaussian Markov random fields (IGMRFs). The practical advantages of standardization are demonstrated with simulated data and a real dataset on survival analysis.
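As background, the variance-partitioning idea can be summarized schematically as follows; the notation, the Dirichlet prior, and the scaling convention are illustrative assumptions and do not reproduce the paper's exact standardization procedure.

```latex
% Schematic variance-partitioning (VP) reparameterization for an LGM
% linear predictor; notation is illustrative, not the paper's own.
\[
  \eta = \mu + \sum_{j=1}^{J} u_j, \qquad
  u_j \mid \sigma_j^2 \sim \mathcal{N}\!\left(0,\ \sigma_j^2\,\Sigma_j^{*}\right),
\]
\[
  \sigma_{\mathrm{tot}}^2 = \sum_{j=1}^{J} \sigma_j^2, \qquad
  \phi_j = \frac{\sigma_j^2}{\sigma_{\mathrm{tot}}^2}, \qquad
  (\phi_1,\dots,\phi_J) \sim \mathrm{Dirichlet}(a_1,\dots,a_J).
\]
```

Standardizing each structure matrix $\Sigma_j^{*}$ so that its typical marginal variance equals one (for IGMRFs, commonly the geometric mean of the marginal variances) is what makes the $\sigma_j^2$, and hence the proportions $\phi_j$, comparable across fixed and random effects.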
J. Wicki, T. Perneger, A. Junod et al.
J. Whitcher, C. Shiboski, S. Shiboski et al.
Max J. L. Lee, Ju Lin, Li-Ta Hsu
We propose a feasibility study for real-time automated data standardization leveraging Large Language Models (LLMs) to enhance seamless positioning systems in IoT environments. By integrating and standardizing heterogeneous sensor data from smartphones, IoT devices, and dedicated systems such as Ultra-Wideband (UWB), our study ensures data compatibility and improves positioning accuracy using the Extended Kalman Filter (EKF). The core components include the Intelligent Data Standardization Module (IDSM), which employs a fine-tuned LLM to convert varied sensor data into a standardized format, and the Transformation Rule Generation Module (TRGM), which automates the creation of transformation rules and scripts for ongoing data standardization. Evaluated in real-time environments, our study demonstrates adaptability and scalability, enhancing operational efficiency and accuracy in seamless navigation. This study underscores the potential of advanced LLMs in overcoming sensor data integration complexities, paving the way for more scalable and precise IoT navigation solutions.
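To make the pipeline concrete, the minimal Python sketch below shows a possible standardized measurement record and a single EKF update that consumes a UWB range measurement. The schema fields and the range-measurement model are assumptions for illustration, not the paper's IDSM/TRGM implementation.

```python
# Sketch of (1) a standardized record that heterogeneous sensors are mapped
# into and (2) an EKF update that consumes it. Schema and measurement model
# are illustrative assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class StandardizedMeasurement:
    source: str          # e.g. "uwb", "smartphone_imu"
    kind: str            # e.g. "range" (distance to a known anchor, metres)
    value: float
    anchor: np.ndarray   # known anchor position [x, y]
    noise_std: float

def ekf_range_update(x, P, m: StandardizedMeasurement):
    """One EKF update of a 2-D position state x with covariance P
    using a range measurement to a known anchor."""
    diff = x - m.anchor
    pred = np.linalg.norm(diff)            # h(x): predicted range
    H = (diff / pred).reshape(1, 2)        # Jacobian of h at x
    S = H @ P @ H.T + m.noise_std ** 2     # innovation covariance (1x1)
    K = P @ H.T / S                        # Kalman gain
    x_new = x + (K * (m.value - pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new

x, P = np.array([0.0, 0.0]), np.eye(2) * 10.0
m = StandardizedMeasurement("uwb", "range", 5.2, np.array([4.0, 3.0]), 0.1)
x, P = ekf_range_update(x, P, m)
```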
Kan Zheng, Rongtao Xu, Jie Mei et al.
The Ambient Internet of Things (A-IoT) has emerged as a critical direction for achieving effective connectivity as the IoT system evolves to 6G. However, the introduction of A-IoT technologies, particularly involving backscatter modulation, poses numerous challenges for system design and network operations. This paper surveys current standardization efforts, highlights potential challenges, and explores future directions for A-IoT. It begins with a comprehensive overview of ongoing standardization initiatives by the 3rd Generation Partnership Project (3GPP) on A-IoT, providing a solid foundation for further technical research in both industry and academia. Building upon this groundwork, the paper conducts an analysis of critical enabling technologies for A-IoT. Moreover, a comprehensive A-IoT demonstration system is designed to showcase the practical viability and efficiency of A-IoT techniques, supported by field experiments. We finally address ongoing challenges associated with A-IoT technologies, providing valuable insights for future research endeavors.
Changsheng Zhao, Jianhua Zhang, Yuxiang Zhang et al.
Integrated sensing and communication (ISAC) has been recognized as a key technology in the vision of the sixth-generation (6G) era. With the emergence of new concepts in mobile communications, the channel model is the prerequisite for system design and performance evaluation. Currently, 3GPP Release 19 is advancing the standardization of ISAC channel models; nevertheless, a unified modeling framework has yet to be established. This paper provides a simulation diagram of ISAC channel modeling extended from the Geometry-Based Stochastic Model (GBSM), compatible with existing 5G channel models and the latest progress in 3rd Generation Partnership Project (3GPP) standardization. We first introduce the overall progress of ISAC channel model standardization. Then, a concatenated channel modeling approach is presented, based on the team's standardization proposals, and implemented on the BUPTCMCC-6G-CMG+ channel model generator. We validate the model against the cumulative distribution functions (CDFs) of the statistically extended angles and delays, as well as the radar cross section (RCS). Simulation results show that the proposed model can realistically characterize channel concatenation and RCS within the ISAC channel.
Sebastian Klotz
Scott Schnelle, Francesca M. Favaro
Automated Driving Systems (ADS) hold great potential to increase safety, mobility, and equity. However, without public acceptance, none of these promises can be fulfilled. To engender public trust, many entities in the ADS community participate in standards development organizations (SDOs) with the goal of enhancing safety for the entire industry through a collaborative approach. The breadth and depth of the ADS safety standardization landscape is vast and constantly changing, as is often the case for novel technologies in rapid evolution. The pace of development of the ADS industry makes it hard for the public and interested parties to keep track of ongoing SDO efforts, including the topics touched by each standard and the committees addressing each topic, and to make sense of the wealth of documentation produced. Therefore, the authors present here a simplified framework for abstracting and organizing the current landscape of ADS safety standards into high-level, long-term themes. This framework is then used to develop and organize associated research questions that have not yet reached widely adopted industry positions, along with identifying potential gaps where further research and standardization are needed.
D. Gerber, H. Singh, E. Larkins et al.
Importance: Clinical trial sponsors rely on eligibility criteria to control the characteristics of patients in their studies, promote the safety of participants, and optimize the interpretation of results. However, in recent years, complex and often overly restrictive inclusion and exclusion criteria have created substantial barriers to patient access to novel therapies, hindered trial recruitment and completion, and limited generalizability of trial results. A LUNGevity Foundation working group developed a framework for lung cancer clinical trial eligibility criteria. The goals of this framework are to (1) simplify eligibility criteria, (2) facilitate stakeholders' (patients, clinicians, and sponsors) search for appropriate trials, and (3) harmonize trial populations to support intertrial comparisons of treatment effects. Observations: Clinicians and representatives from the pharmaceutical industry, the National Cancer Institute, the US Food and Drug Administration (FDA), the European Medicines Agency, and the LUNGevity Foundation undertook a process to identify and prioritize key items for inclusion in trial eligibility criteria. The group generated a prioritized library of terms to guide investigators and sponsors in the design of first-line, advanced non-small cell lung cancer clinical trials intended to support marketing application. These recommendations address disease stage and histologic features, enrollment biomarkers, performance status, organ function, brain metastases, and comorbidities. This effort forms the basis for a forthcoming FDA draft guidance for industry. Conclusions and Relevance: As an initial step, the recommended cross-trial standardization of eligibility criteria may harmonize trial populations. Going forward, by connecting diverse stakeholders and providing formal opportunity for public input, the emerging FDA draft guidance may also provide an opportunity to revise and simplify long-standing approaches to trial eligibility. This work serves as a prototype for similar efforts now underway for other cancers.
S. Pearson, Dorothy Goulart-Fisher, Thomas Lee
Muhammad K. Shehzad, Luca Rose, M. Majid Butt et al.
With the deployment of 5G networks, standards organizations have started working on the design phase for sixth-generation (6G) networks. 6G networks will be immensely complex, requiring more deployment time, cost, and management effort. On the other hand, mobile network operators demand that these networks be intelligent, self-organizing, and cost-effective to reduce operating expenses (OPEX). Machine learning (ML), a branch of artificial intelligence (AI), is the answer to many of these challenges, providing pragmatic solutions that can entirely change the future of wireless network technologies. Using case-study examples, we briefly examine the most compelling problems, particularly at the physical (PHY) and link layers in cellular networks, where ML can bring significant gains. We also review standardization activities related to the use of ML in wireless networks and the timeline for standardization bodies' readiness to adapt to these changes. Finally, we highlight major issues in the use of ML in wireless technology and provide potential directions to mitigate some of them in 6G wireless networks.
L. Sanz, Charlène Aubinet, H. Cassol et al.
Establishing an accurate diagnosis is crucial for patients with disorders of consciousness (DoC) following a severe brain injury. The Coma Recovery Scale-Revised (CRS-R) is the recommended behavioral scale for assessing the level of consciousness among these patients, but its long administration time is a major hurdle in clinical settings. The Simplified Evaluation of CONsciousness Disorders (SECONDs) is a shorter scale that was developed to tackle this issue. It consists of six mandatory items (observation, command-following, visual pursuit, visual fixation, oriented behaviors, and arousal) and two conditional items (communication and localization to pain). The score ranges between 0 and 8 and corresponds to a specific diagnosis (i.e., coma, unresponsive wakefulness syndrome, minimally conscious state minus/plus, or emergence from the minimally conscious state). A first validation study in patients with prolonged DoC showed high concurrent validity and intra- and inter-rater reliability. The SECONDs requires less training than the CRS-R, and its administration lasts about 7 minutes (interquartile range: 5-9 minutes). An additional index score allows more precise tracking of a patient's behavioral fluctuation or evolution over time. The SECONDs is therefore a fast and valid tool for assessing the level of consciousness in patients with severe brain injury. It can easily be used by healthcare staff and implemented in time-constrained clinical settings, such as intensive care units, to help decrease misdiagnosis rates and optimize treatment decisions. These administration guidelines provide detailed instructions for administering the SECONDs in a standardized and reproducible manner, which is an essential requirement for achieving a reliable diagnosis.
A. M. Ardle, A. Binek, A. Moradian et al.
Background: Accurate discovery assay workflows are critical for identifying authentic circulating protein biomarkers in diverse blood matrices. Maximizing the commonalities between the proteomic workflows for different biofluids simplifies the approach and increases the likelihood of reproducibility. We developed a workflow that allows flexibility for high- and mid-throughput analysis of three blood-based proteomes: naive plasma, plasma depleted of the 14 most abundant proteins, and dried blood. Methods: Optimal conditions for sample preparation and DIA-MS analysis were established in plasma, then automated and adapted for depleted plasma and whole blood. The MS workflow was modified to facilitate sensitive high-throughput analysis or deep-profile mid-throughput analysis. Analytical performance was evaluated from 5 complete workflows repeated over 3 days as well as a linearity analysis of a 5–6-point dilution curve. Results: Using our high-throughput workflow, 74%, 93%, and 87% of peptides displayed an inter-day CV < 30% in plasma, depleted plasma, and whole blood, respectively, while the mid-throughput workflow had 67%, 90%, and 78% of peptides meeting the CV < 30% standard in the same matrices. Lower limits of detection and quantitation were determined for proteins and peptides observed in each biofluid and workflow. Combining the analysis of both high-throughput plasma fractions exceeded the number of reliably identified proteins for individual biofluids in the mid-throughput workflows. Conclusion: The workflow established here allowed reliable detection of proteins covering a broad dynamic range. We envisage that implementation of this standard workflow on a large scale will facilitate the translation of candidate markers into clinical use.
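The CV < 30% reproducibility criterion quoted above can be computed in a few lines of Python. The sketch below assumes a peptides-by-replicate-days intensity matrix and uses synthetic data purely for illustration.

```python
# Per-peptide inter-day coefficient of variation (CV) and the fraction of
# peptides passing CV < 30%. Input layout is an assumption for illustration.
import numpy as np

def fraction_below_cv_threshold(intensities: np.ndarray, threshold: float = 0.30) -> float:
    """intensities: shape (n_peptides, n_days), one quantified value per day."""
    mean = intensities.mean(axis=1)
    sd = intensities.std(axis=1, ddof=1)     # sample SD across days
    cv = sd / mean
    return float((cv < threshold).mean())

rng = np.random.default_rng(0)
demo = rng.lognormal(mean=10.0, sigma=0.2, size=(1000, 3))   # synthetic intensities
print(f"{fraction_below_cv_threshold(demo):.0%} of peptides with CV < 30%")
```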
Jonas Coelho Kasmanas, Alexander Bartholomäus, F. B. Corrêa et al.
Metagenomics has become a standard strategy for understanding the functional potential of microbial communities, including the human microbiome. Currently, the number of metagenomes in public repositories is increasing exponentially. The Sequence Read Archive (SRA) and MG-RAST are the two main repositories for metagenomic data. These databases allow scientists to reanalyze samples and explore new hypotheses. However, mining samples from them can be a limiting factor, since the metadata available in these repositories are often misannotated, misleading, and decentralized, creating an overly complex environment for sample reanalysis. The main goal of the HumanMetagenomeDB is to simplify the identification and use of public human metagenomes of interest. HumanMetagenomeDB version 1.0 contains metadata for 69,822 metagenomes. We standardized 203 attributes, based on standardized ontologies, describing host characteristics (e.g. sex, age, and body mass index), diagnosis information (e.g. cancer, Crohn's disease, and Parkinson's disease), location (e.g. country, longitude, and latitude), sampling site (e.g. gut, lung, and skin), and sequencing attributes (e.g. sequencing platform, average length, and sequence quality). Further, HumanMetagenomeDB version 1.0 metagenomes encompass 58 countries, 9 main sample sites (i.e. body parts), 58 diagnoses, and hosts of multiple ages, ranging from newborns to 91 years old. The HumanMetagenomeDB is publicly available at https://webapp.ufz.de/hmgdb/.
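As a usage illustration, a metadata export from such a resource could be filtered with pandas along the standardized attributes. The file name and column names below are hypothetical and may not match the database's actual attribute labels.

```python
# Hypothetical filtering of an exported metadata table to select samples
# for reanalysis; file name and column names are assumptions.
import pandas as pd

meta = pd.read_csv("humanmetagenomedb_export.csv")
gut_crohn = meta[
    (meta["sampling_site"] == "gut")
    & (meta["diagnosis"] == "Crohn's disease")
    & (meta["host_age"].between(18, 65))
]
print(len(gut_crohn), "candidate metagenomes for reanalysis")
```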
Page 40 of 22750