Abstract As systems continue to grow in scale and complexity, the Systems Engineering community has turned to Model-Based Systems Engineering (MBSE) to manage complexity, maintain consistency, and assure traceability during system development. MBSE differs from "engineering with models," which has been common practice in the engineering profession for decades: it is a holistic systems engineering approach centered on the evolving system model, which serves as the "sole source of truth" about the system and encompasses system specification, design, validation, and configuration management. Although MBSE is seeing growing use across multiple industries, specific advances are still needed on multiple fronts to realize its full benefits. This paper discusses the motivation for MBSE and its current state of maturity, presents systems modeling methodologies and the role of ontologies and metamodels in MBSE, and describes model-based verification and validation (V&V) as an example of MBSE use. An illustrative example of MBSE-driven design synthesis demonstrates an important MBSE capability. The paper concludes with a discussion of challenges to wide-scale adoption and offers promising research directions for fully realizing the potential benefits of MBSE.
This paper presents a novel method to determine the dielectric constant and loss tangent of substrates used in microstrip circuits. The method is based on the reflected group delay of a microstrip resonator coupled to the input by means of an interdigital capacitor. The proposed method is first validated through simulations carried out using Advanced Design System by Keysight Technologies. The simulation results established that the proposed method can characterize microstrip substrates having a dielectric constant in the range of 1.8 to 7.2 at a loss tangent of 4×10⁻⁴, with errors of less than 5% and 80%, respectively. Final validation of the method is performed by testing Teflon and some standard boards from Rogers Corp. The errors in the extracted values of dielectric constant and loss tangent of the tested boards are also found to agree with the values suggested by simulation. Dielectric characterization of Acrylonitrile Butadiene Styrene (ABS) and Polylactic Acid (PLA) is also performed to demonstrate a potential application of the proposed method in characterizing 3-D printing materials for microstrip applications.
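The reflected group delay at the heart of this method can be extracted from the phase of sampled S11 data. A minimal sketch of that extraction (generic signal processing, not the authors' code; the synthetic S11 below is an ideal delay line used only to check the math):

```python
import numpy as np

def group_delay(freq_hz, s11):
    """Reflected group delay tau_g = -d(phase)/d(omega) from complex S11 samples."""
    phase = np.unwrap(np.angle(s11))                    # continuous phase, radians
    return -np.gradient(phase, freq_hz) / (2 * np.pi)   # seconds

# Synthetic check: an ideal delay line S11 = exp(-j*2*pi*f*tau) has tau_g = tau.
f = np.linspace(1e9, 2e9, 201)                          # 1-2 GHz sweep
tau = 2e-9                                              # 2 ns delay
s11 = np.exp(-1j * 2 * np.pi * f * tau)
tg = group_delay(f, s11)
```

In the actual method, the measured group-delay response of the coupled resonator would feed the dielectric-constant and loss-tangent extraction.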
This paper addresses the issues of floating high voltage at light loads and insufficient cross-regulation in open-loop, high-isolation, multi-output power supplies. It first introduces separate winding arrangements and core projection coincidence to reduce distributed capacitance and leakage inductance in transformers. Subsequent analysis, based on the fundamental harmonic analysis method, reveals the impact of distributed capacitance on converter gain. Additionally, a series compensation method is used to mitigate the impact of leakage inductance on converter gain. This combination leads to the development of a CLLC resonant converter with an asymmetric structure. Finally, the paper presents a 4-output prototype with a total output power of 420 W. Test results indicate that both the primary and secondary sides of the transformer meet an 18 kV AC isolation requirement. The voltage accuracy of the power outputs across the full load range is maintained within ±10%. In fault mode, the faulty branch actively shuts off its output while normal operation is maintained in the other branches, verifying the correctness of the proposed design scheme.
Control engineering systems. Automatic machinery (General), Technology
Face recognition (FR) is a less intrusive biometric technology with various applications, such as security, surveillance, and access control systems. FR remains challenging, especially when only a single image per person is available as the gallery dataset and when dealing with variations like pose, illumination, and occlusion. Deep learning techniques have shown promising results in recent years using variational autoencoders (VAEs) and generative adversarial networks (GANs), with approaches such as patch-VAE, VAE-GAN for 3D indoor scene synthesis, and hybrid VAE-GAN models. However, in Single Sample Per Person Face Recognition (SSPP FR), the challenge of learning robust and discriminative features that preserve the subject's identity persists. To address these issues, we propose a novel framework called AD-VAE, designed specifically for SSPP FR, which combines VAE and GAN techniques. The proposed AD-VAE framework learns to build representative identity-preserving prototypes from both controlled and wild datasets, effectively handling variations like pose, illumination, and occlusion. The method uses four networks: an encoder and decoder similar to those of a VAE, a generator that receives the encoder output plus noise to generate an identity-preserving prototype, and a discriminator that operates as a multi-task network. AD-VAE outperforms all tested state-of-the-art face recognition techniques, demonstrating its robustness. The proposed framework achieves superior results on four controlled benchmark datasets—AR, E-YaleB, CAS-PEAL, and FERET—with recognition rates of 84.9%, 94.6%, 94.5%, and 96.0%, respectively, and achieves remarkable performance on the uncontrolled LFW dataset, with a recognition rate of 99.6%. The AD-VAE framework shows promising potential for future research and real-world applications.
We provide a theory, algorithms, and simulations of nonequilibrium quantum systems using a one-dimensional (1D) completely positive (CP), matrix-product (MP) density-operator (ρ) representation. By generalizing the matrix product state's orthogonality center to additionally store positive classical mixture correlations, the MPρ factorization naturally emerges. In this setting, we analytically and numerically examine the virtual gauge freedoms associated with the representation of quantum density operators. Based on this perspective, we simplify algorithms in certain limits to speed up the integration of the canonical-form master-equation dynamics. This enables us to quickly evolve under the dynamics of two-body quantum channels without resorting to optimization-based methods. In addition to this technical advance, we also scale up numerical examples and discuss implications for accurately modeling hardware architectures and predicting their performance in the near term. This includes an example of the quantum-to-classical transition of informationally leaky, i.e., decohering, qubits. In this setting, because of loss from environmental interactions, nonlocal complex coherence correlations are converted into global incoherent classical statistical mixture correlations. Lastly, the representation of both global and local correlations is discussed. We expect this work to have applications in additional nonequilibrium settings beyond qubit engineering.
Rahman Matee Ur, Abbas Ghazanfar, Hussain Syed Baqar
et al.
Conventional solid oxide fuel cells (SOFCs) operate at high temperatures (800–1,000°C). Lowering the operating temperature of SOFCs reduces the open-circuit voltage (OCV) and performance. Herein, a scheme was established to boost the voltage of the developed SOFC using a DC-DC voltage booster. LTspice was used to develop the DC-DC booster, with the design generated for a minimum input of 0.7 V. For experimental evidence, BixAg1.00Fe1−xZn2O7+δ (BAFZ oxide) materials were synthesized to investigate anodic properties. UV-vis and Fourier transform infrared spectroscopy were used to determine the band gaps and functional groups. The vibrational modes of the composite materials were studied via Raman spectroscopy. A slight peak shift toward higher wavenumbers was noted in the BAFZ oxide sample, attributed to the addition of bismuth trioxide (Bi2O3). The conductivity was measured to be 1.2 S/cm at 600°C in an H2 atmosphere. Fuel cell performance was also measured in the temperature range of 400–620°C, and a maximum OCV of 1.1 V was achieved at 620°C. Finally, a boosted voltage of 2.2 V was recorded under the same conditions using the DC-DC booster.
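The 1.1 V to 2.2 V step-up reported above can be related, for an ideal boost topology, to the converter's duty cycle through Vout = Vin/(1 − D); a sketch of that relation (idealized and lossless; the actual LTspice design is not reproduced here):

```python
def boost_duty_cycle(v_in, v_out):
    """Duty cycle D of an ideal (lossless) boost converter from Vout = Vin / (1 - D)."""
    if v_out <= v_in:
        raise ValueError("a boost converter only steps voltage up")
    return 1.0 - v_in / v_out

# Doubling the SOFC open-circuit voltage (1.1 V -> 2.2 V) corresponds to D = 0.5.
d = boost_duty_cycle(1.1, 2.2)
```

A real booster runs below this ideal gain because of switch, diode, and inductor losses, which is why the measured output is reported rather than inferred.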
Abdoulaye Bodian, Alben Cardenas, Dina Ouardani
et al.
The modernization of agriculture can help humanity address major challenges such as population growth, climate change, and labor shortages. Semi-autonomous agricultural robots offer clear advantages in automating tasks and improving efficiency. However, in open-field conditions, their autonomy is limited by the size and weight of onboard batteries. Wireless charging is a promising solution to overcome this limitation. This work proposes a methodology for the design, modeling, and experimental validation of a wireless power transfer (WPT) system for battery recharging of agricultural robots. A brief review of WPT technologies is provided, followed by key design considerations, co-simulation, and testing results. The proposed WPT system uses a resonant inductive power transfer topology with series–series (SS) compensation, a high-frequency inverter (85 kHz), and optimized spiral planar coils, enabling medium-range operation under agricultural conditions. The main contribution lies in the first experimental assessment of WPT performance under real agricultural environmental factors such as soil moisture and water presence, combined with electromagnetic safety evaluation and robust component selection for harsh conditions. Results highlight both the potential and limitations of this approach, demonstrating its feasibility and paving the way for future integration with intelligent alignment and adaptive control strategies.
Brandon Warner, Edward Ratner, Kallin Carlous-Khan
et al.
This paper proposes a novel model-agnostic method for weighting the outputs of base classifiers in machine learning (ML) ensembles. Our approach uses class-based weight coefficients assigned to every output class in each learner in the ensemble. This is particularly useful when the base classifiers have highly variable performance across classes. Our method generates a dense set of coefficients for the models in our ensemble by considering the model performance on each class. We compare our novel method to the commonly used ensemble approaches like voting and weighted averages. In addition, we compare our approach to class-specific soft voting (CSSV), which was also designed to address variable performance but generates a sparse set of weights by solving a linear system. We choose to illustrate the power of this approach by applying it to an ensemble of extreme learning machines (ELMs), which are well suited for this approach due to their stochastic, highly varying performance across classes. We illustrate the superiority of our approach by comparing its performance to that of simple majority voting, weighted majority voting, and class-specific soft voting using ten popular open-source multiclass classification datasets.
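The class-based weighting idea can be sketched as follows, with each learner assigned one weight per class derived from its per-class validation recall; the recall-based rule here is an illustrative assumption, not necessarily the paper's exact coefficient formula:

```python
import numpy as np

def class_weights(y_true, y_pred, n_classes):
    """Per-class recall of one learner on a validation set -> one weight per class."""
    w = np.zeros(n_classes)
    for c in range(n_classes):
        mask = (y_true == c)
        w[c] = (y_pred[mask] == c).mean() if mask.any() else 0.0
    return w

def weighted_ensemble(prob_list, weight_list):
    """Combine per-learner class-probability matrices (n_samples x n_classes)
    using per-class weights, then predict the highest-scoring class."""
    combined = sum(w * p for w, p in zip(weight_list, prob_list))
    return combined.argmax(axis=1)
```

In soft voting, every learner's class-probability row is scaled by its class weights before summation, so a learner that is weak on a given class contributes little to that class's score.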
Khoirul Adib, Maya Rini Handayani, Wenty Dwi Yuniarti
et al.
Presidential elections in Indonesia often trigger dramatic shifts in the dynamics of public opinion, especially in a digital era filled with voices scattered across social media. This study aims to map changes in public sentiment following the presidential election using social media analysis, focusing on the X platform, which has 24 million active users in Indonesia. The Support Vector Machine (SVM) method is used to analyze and accurately classify sentiment based on tweet keywords trending after the presidential election. The study seeks to provide a deeper understanding of post-election shifts in public opinion by describing the dynamics of public sentiment reflected in social media. Its contribution is an accurate mapping of changes in public opinion, offering valuable insight for policymakers, political analysts, and social media practitioners responding to societal needs in the digital era. Testing on a dataset of 3,850 trending tweets from the X platform, labeled with three classes, showed the highest classification accuracy for "Pemilu Damai" (Peaceful Election) at 97.3%, followed by "Hak Angket" (Parliamentary Inquiry) at 96.5% and "Pemilu Curang" (Election Fraud) at 94.0%.
Effective collision risk reduction in autonomous vehicles relies on robust and straightforward pedestrian tracking. Challenges posed by occlusion and identity-switching scenarios significantly impede the reliability of pedestrian tracking. In the current study, we strive to enhance both the reliability and the efficacy of pedestrian tracking in complex scenarios. In particular, we introduce a new pedestrian tracking algorithm that leverages both the YOLOv8 (You Only Look Once) object detector and the StrongSORT algorithm, an advanced deep learning multi-object tracking (MOT) method. Our findings demonstrate that StrongSORT, an enhanced version of the DeepSORT MOT algorithm, substantially improves tracking accuracy through meticulous hyperparameter tuning. Overall, the experimental results reveal that the proposed algorithm is an effective and efficient method for pedestrian tracking, particularly in the complex scenarios encountered in the MOT16 and MOT17 datasets. The combined use of YOLOv8 and StrongSORT yields enhanced tracking results, emphasizing the synergistic relationship between the detection and tracking modules.
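At the core of any detect-then-track pipeline is frame-by-frame association of new detections with existing tracks. StrongSORT performs this with appearance embeddings and Kalman-filter motion gating; the skeleton of the step can be illustrated with a plain greedy IoU matcher (a deliberately simplified stand-in, not the actual StrongSORT logic):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, thresh=0.3):
    """Greedy IoU matching: returns {track_id: detection_index}."""
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_iou = None, thresh
        for j, dbox in enumerate(detections):
            if j in used:
                continue
            score = iou(tbox, dbox)
            if score > best_iou:
                best, best_iou = j, score
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches
```

Occlusion and identity switches are precisely the failures of naive matchers like this one, which is what motivates StrongSORT's appearance and motion cues.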
Abstract Medical devices are commonly implanted underneath the skin, but noninvasively monitoring their migration, integrity, and biodegradation in the human body in real time remains a formidable challenge. Here, the study demonstrates that benzyl violet 4B (BV-4B), a main component of an FDA-approved surgical suture, produces a fluorescence signal in the first near-infrared window (NIR-I, 700–900 nm) in polar solutions, whereas BV-4B self-assembles into highly crystalline aggregates, forming ultrasmall nanodots, and can emit strong fluorescence in the second near-infrared window (NIR-II, 1000–1700 nm) with a dramatic bathochromic shift in the absorption spectrum of ≈200 nm. Intriguingly, BV-4B-containing suture knots underneath the skin can be readily monitored throughout the degradation process in vivo, and rupture of a customized BV-4B-coated silicone catheter is noninvasively diagnosed by NIR-II imaging. Furthermore, BV-4B suspended in embolization glue enables hybrid fluorescence-guided surgery (hybrid FGS) for arteriovenous malformation. As a proof-of-concept study, the solid-state BV-4B is successfully used for NIR-II imaging of surgical sutures in operations on patients. Overall, as a clinically translatable solid-state dye, BV-4B can be applied to in vivo monitoring of the fate of medical devices by NIR-II imaging.
Raymond Hall Yip Louie, Curtis Cai, Jerome Samir
et al.
Abstract Chimeric antigen receptor (CAR) T cell therapy is effective in treating B cell malignancies, but factors influencing the persistence of functional CAR+ T cells, such as product composition, patients' lymphodepletion, and immune reconstitution, are not well understood. To shed light on this issue, here we conduct a single-cell multi-omics analysis of transcriptional, clonal, and phenotypic profiles from pre-infusion to 1 month post-infusion of CAR+ and CAR− T cells from patients in the CARTELL study (ACTRN12617001579381) who received a donor-derived 4-1BB CAR product targeting CD19. Following infusion, CAR+ and CAR− T cells show similar differentiation profiles, with clonally expanded populations across heterogeneous phenotypes, demonstrating clonal lineages and phenotypic plasticity. We validate these findings in 31 patients with large B cell lymphoma treated with CD19 CAR T therapy. For these patients, using longitudinal mass-cytometry data, we identify an association between NK-like subsets and clinical outcomes at 6 months for both CAR+ and CAR− T cells. These results suggest that non-CAR-derived signals can provide information about patients' immune recovery and be used as a correlate of clinically relevant parameters.
The choices of a population to apply social distancing are modeled as a Nash game in which the agents determine their social interactions. The interconnections among the agents are modeled by a network. The main contribution of this work is the study of an agent-based epidemic model coupled with a social distancing game, both determined by the networked structure of human interconnections. The information available to the agents plays a crucial role. We examine the case in which the agents know the health states of their neighbors exactly and the case in which they have only statistical information about the global prevalence of the epidemic. The agents are considered myopic, and thus the Nash equilibria of static games for each day are studied. Through theoretical analysis, we characterize these Nash equilibria and propose algorithms to compute them. Interestingly, in the case of statistical information, the equilibrium strategy for an agent on each day is either full isolation or no social distancing at all. Through experimental studies, we observe that with local information, the agents can significantly reduce the prevalence of the epidemic with little social distancing, whereas with only statistical information they can still affect the prevalence but must pay the price of being poorly informed by applying strict social distancing. Moreover, the effects of the network structure, the virus transmissibility, the number of vulnerable agents, the health care system capacity, and the information quality (fake news) are discussed, and relevant simulations are provided.
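The all-or-nothing equilibrium under statistical information arises because a myopic agent's expected daily cost is linear in its activity level, so the minimizer sits at an endpoint of the feasible interval. A toy best-response sketch (our illustrative parameterization, not the paper's exact payoff functions):

```python
def best_response(benefit_per_contact, infection_cost, transmission_prob, prevalence):
    """Myopic agent picks activity a in [0, 1] minimizing a linear expected cost:
    cost(a) = a * (infection_cost * transmission_prob * prevalence - benefit_per_contact).
    Linearity in a forces a bang-bang choice: full isolation (0) or full activity (1)."""
    marginal = infection_cost * transmission_prob * prevalence - benefit_per_contact
    return 0.0 if marginal > 0 else 1.0
```

With high prevalence the expected infection cost dominates and the agent fully isolates; with low prevalence the interaction benefit wins and the agent applies no distancing, matching the bang-bang behavior described above.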
Computer applications to medicine. Medical informatics
Food production systems are complex industrial operations that often involve multiple parties. This study proposes inventory management strategies for a multi-echelon perishable food supply chain with growing and deteriorating items. The upstream end of the proposed food supply chain is the farming echelon where newborn growing items are reared to maturity. Following this, the items are sent to the processing echelon for processing, a term that collectively describes activities such as slaughtering, cutting and packaging. The aim of the processing echelon is to transform live growing items into processed food products that are suitable for human consumption. The downstream end of the supply chain is the retail echelon where consumer demand for processed food products is met. Once the items are processed, they are subject to deterioration at both the processing and retail echelons. In light of this, an integrated inventory model aimed at optimising the performance of the entire food supply chain is formulated. The impact of investing in preservation technologies is also investigated due to the perishable nature of food products. To do this, a secondary model that incorporates an investment in preservation technologies is formulated. The model, representing a simplified industrial food production system, is aimed at jointly optimising the lot-size, number of shipments, growing cycle duration, processing cycle duration and the preservation technology investment amount. The results from the numerical example demonstrate that the preservation technology investment is worthwhile because it results in reduced inventory management costs across the supply chain.
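The joint optimisation can be illustrated with a toy two-variable version: choose a cycle length T and a preservation investment ξ, where investment slows the deterioration rate as θ(ξ) = θ0·exp(−aξ). All parameter values and functional forms below are invented for illustration and are far simpler than the paper's multi-echelon model:

```python
import math

def total_cost(T, xi, demand=100.0, setup=50.0, hold=0.2,
               theta0=0.1, a=1.5, unit_cost=2.0):
    """Toy per-unit-time cost: setup + holding + deterioration loss + investment."""
    lot = demand * T
    deterioration = theta0 * math.exp(-a * xi)     # investment slows decay
    loss = unit_cost * deterioration * lot / 2     # average inventory decaying
    return setup / T + hold * lot / 2 + loss + xi

def optimize(grid_T, grid_xi):
    """Brute-force search over the (T, xi) grid; returns (cost, T, xi)."""
    return min((total_cost(T, xi), T, xi) for T in grid_T for xi in grid_xi)

Ts = [0.1 * k for k in range(1, 51)]
xis = [0.1 * k for k in range(0, 31)]
best_cost, best_T, best_xi = optimize(Ts, xis)
```

Even in this crude sketch the optimum sits at a strictly positive investment level, mirroring the paper's finding that preservation technology investment pays for itself through reduced inventory costs.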
Sreekanth Bathula, Srinivasan Anand, Thaseem Thajudeen
et al.
Abstract Recent studies argue that inhalation of respiratory droplets in indoor environments is one of the significant routes of COVID-19 infection. In many cases, patients are isolated in hospitals and quarantine centers to minimize the spread. However, the rooms allocated to these patients are accessed by health care and sanitization workers a few times a day. Since expiratory activities release airborne droplets carrying a certain viral load, there is a great need to study the survival of these droplets in a patient's room to control the exposure of the people who enter it. A bi-compartment, bi-component numerical model is developed to study the survival of these droplets in a room, taking into consideration the deposition rates of the droplets and the ventilation rates in the room. The vital aspects related to the survival of the droplets, such as the effect of the severity of the infection, the types of releases, size-dependent deposition, and the role of ventilation, are discussed.
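The balance underlying such a model can be illustrated with a single well-mixed compartment, in which a droplet source term competes with ventilation and deposition losses; the closed-form solution below is a simplified sketch (one compartment, one component, made-up rates), not the paper's bi-compartment model:

```python
import math

def droplet_concentration(t, source, vent_rate, dep_rate, c0=0.0):
    """Well-mixed room balance dC/dt = S - (lambda_v + lambda_d) * C, solved
    analytically: C(t) = C_ss + (C0 - C_ss) * exp(-(lambda_v + lambda_d) * t)."""
    lam = vent_rate + dep_rate
    c_ss = source / lam                   # steady-state concentration S / lambda
    return c_ss + (c0 - c_ss) * math.exp(-lam * t)
```

Increasing the ventilation rate lowers both the steady-state concentration and the time needed to reach it, which is the lever the model examines for controlling the exposure of workers entering the room.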
Jae-In Lee, Eun-Ji Cho, Fritz Ndumbe Lyonga
et al.
A mechanized thermo-chemical treatment system was developed to treat undecomposed carcasses and remediate livestock burial sites. Animal carcasses were processed via crushing, mixing, treatment with quicklime, heat treatment (200–500 °C), and mixing with sawdust. The machinery was applied to two sites where 16,000 chickens and 418 pigs had previously been buried in fiber-reinforced plastic storage bins. No dioxins were detected in the gas discharged during processing, and the concentrations of total volatile organic compounds, toluene, ethylbenzene, xylene, and styrene were 430.3, 139.0, 18.3, 21.4, and 10.4 μg/m³, respectively, below the air pollutant emission standards issued by the Korean Ministry of Environment. Korean standards stipulating the use of treated carcasses as compost (C, N, and P content, heavy metal concentration, Escherichia coli, and Salmonella) were met, but the germination index value was less than 70, failing that criterion. The plant height, leaf length, leaf width, and dry weight of lettuce grown in soil amended with the treated carcass product were significantly lower than those of lettuce grown in low-nutrient soil, owing to the poor germination index of the treated carcass. These results indicate that a composting process is required before the treated carcass can be used as a fertilizer. The addition of zeolite retarded the elution of ammonia from the carcasses, with an efficiency of about 87.9%. The mechanized thermo-chemical treatment process developed in this study is expected to replace other technologies for the remediation of livestock burial sites.
Kotaro NAKAMURA, Takehiko MURAMATSU, Takashi OGAWA
et al.
Natural gas-fired combined cycle (NGCC) power plants have the advantages of high efficiency and low CO2 intensity compared to coal-fired power plants. When variable renewable energy sources are introduced to the grid in large quantities, NGCC plants are expected to undergo rapid start-ups, rapid shutdowns, and increased partial-load operation to stabilize the grid. Due to these transient operations, the NO2/NOx ratio in the NGCC exhaust gas changes significantly. In general, when the NO2/NOx ratio is high, the efficiency of de-NOx systems decreases. Moreover, the performance of de-NOx systems exhibits a transient response due to changes at the catalyst surface and the adsorption of NH3. Considering the trajectory of increasing variable renewable energy, it is necessary to develop an efficient NOx removal system that is effective over a wide range of NO2/NOx ratios. In modeling de-NOx system performance, this study extends the general Eley-Rideal reaction between adsorbed NH3 and gas-phase NOx to include the presence of O2, H2O, CO2, and transient NO2 in the exhaust gas, along with changes in redox sites (i.e., V5+=O and V4+-OH). A two-dimensional transient numerical simulation code was developed and adapted using experimental results obtained from treating simulated NGCC exhaust gas with a commercial honeycomb-shaped selective catalyst. Numerical simulations incorporating the empirically determined kinetic equations accurately predict the transient and equilibrium concentrations of NOx and NH3 exiting a honeycomb catalyst, even under gas conditions including high NO2.
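The Eley-Rideal core of such a de-NOx model, adsorbed NH3 reacting with gas-phase NOx, can be sketched as a single-site coverage balance. The rate constants below are placeholders, and the paper's full model additionally tracks O2, H2O, CO2, transient NO2, and the redox-site changes:

```python
def nh3_coverage_rate(theta, p_nh3, c_nox, k_ads=1.0, k_des=0.1, k_reac=0.5):
    """d(theta)/dt for NH3 surface coverage theta in [0, 1]: adsorption on free
    sites minus desorption minus the Eley-Rideal reaction of adsorbed NH3 with
    gas-phase NOx (which never adsorbs in this mechanism)."""
    return k_ads * p_nh3 * (1.0 - theta) - k_des * theta - k_reac * theta * c_nox

def steady_coverage(p_nh3, c_nox, k_ads=1.0, k_des=0.1, k_reac=0.5):
    """Closed-form steady state of the balance above."""
    return k_ads * p_nh3 / (k_ads * p_nh3 + k_des + k_reac * c_nox)
```

The steady coverage falls as the gas-phase NOx concentration rises, reproducing the qualitative competition between NH3 storage and reaction that drives the transient response noted above.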