A Novel Construction Method of Bayesian Neural Networks Based on Multi-type Engineering Knowledge
Wenbin Ye, YanZhou Duan, Jun Yuan
et al.
In the field of engineering, the use of surrogate models to replace computationally intensive simulation software has become a widely adopted approach. However, when addressing complex engineering problems, the cost of simulations can escalate significantly, making it challenging for simulation data alone to meet the training requirements of surrogate models. Designers accumulate valuable design knowledge throughout the design process, and this knowledge inherently governs the mapping rules between design parameters and performance metrics. This paper introduces a novel method for constructing surrogate models by integrating limited simulation data with engineering knowledge through Bayesian neural networks (B-DaKnow). In B-DaKnow, neural networks employ variational inference and automatic differentiation to combine simulation data and engineering knowledge while optimizing weights and biases via evolutionary algorithms. The proposed methodology is validated on ten benchmark functions and three engineering cases. The experimental results demonstrate that: (1) incorporating diverse engineering knowledge enhances prediction accuracy in B-DaKnow to varying degrees; (2) in tackling complex engineering challenges, B-DaKnow outperforms alternative algorithms; (3) B-DaKnow demonstrates commendable robustness, as evidenced by only slight fluctuations in prediction results across different problems.
Electronic computers. Computer science
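As a rough illustration of the variational machinery such Bayesian neural networks rest on, the following sketch shows a mean-field Gaussian posterior over weights with reparameterized sampling and the KL regularization term through which a knowledge-informed prior could enter. This is generic textbook machinery, not the paper's B-DaKnow implementation; the idea of encoding design knowledge in the prior mean `mu0` is a hypothetical example.

```python
import numpy as np

def sample_weights(mu, rho, n_samples, rng):
    """Reparameterization trick: w = mu + softplus(rho) * eps, eps ~ N(0, 1)."""
    sigma = np.log1p(np.exp(rho))          # softplus keeps sigma > 0
    eps = rng.standard_normal((n_samples,) + mu.shape)
    return mu + sigma * eps

def kl_gaussian(mu, sigma, mu0, sigma0):
    """KL(q || p) between diagonal Gaussians: the term that pulls the
    variational posterior q = N(mu, sigma^2) toward a prior p = N(mu0, sigma0^2),
    which is where prior (e.g. knowledge-informed) beliefs would enter."""
    return np.sum(np.log(sigma0 / sigma)
                  + (sigma**2 + (mu - mu0)**2) / (2 * sigma0**2) - 0.5)
```

In a full training loop, the ELBO would combine the data likelihood under sampled weights with this KL term.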
Comparison of the Single Exponential Smoothing and Double Exponential Smoothing Methods for Predicting Electricity Consumption at PT. PLN (Persero) ULP Lhokseumawe
Meisya Syahtira, Nurdin Nurdin, Fajriana Fajriana
Electrical energy is a vital need for society and a key enabler across sectors, from households to businesses and industry. As electricity demand rises each year, PT. PLN (Persero) ULP Lhokseumawe must be able to plan distribution and power capacity accurately. Inaccurate forecasts can cause an imbalance between energy supply and demand. This study compares two forecasting methods, Single Exponential Smoothing (SES) and Double Exponential Smoothing (DES), for predicting electricity consumption in the Lhokseumawe region. The data consist of monthly electricity consumption per subdistrict for the period 2022–2024, with forecasts projected through 2027. The research stages comprise data collection, preprocessing, application of the SES and DES methods, accuracy evaluation using MAPE, and the design of a web-based system built with Python and Flask. The results show that SES achieves higher accuracy, with a MAPE of 5.85%, while DES obtains a MAPE of 7.87%. This confirms that SES is better suited to data with random fluctuations, whereas DES fits data with a trend pattern. Comparing the MAPE values indicates which method is more suitable for forecasting electricity consumption in Lhokseumawe. The study is expected to make a practical contribution to PT. PLN (Persero) ULP Lhokseumawe in devising a more effective and efficient electricity distribution strategy.
Electronic computers. Computer science
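The two smoothing methods and the MAPE criterion compared above can be sketched in a few lines; this is a minimal illustration of the standard recipes, not the authors' Flask system, and the smoothing parameters are placeholders.

```python
import numpy as np

def ses(y, alpha):
    """Single Exponential Smoothing: a single level, no trend component."""
    s = [y[0]]
    for t in range(1, len(y)):
        s.append(alpha * y[t] + (1 - alpha) * s[-1])
    return np.array(s)

def des(y, alpha, beta):
    """Double Exponential Smoothing (Holt): level plus trend (needs len(y) >= 2)."""
    level, trend = y[0], y[1] - y[0]
    fitted = [level]
    for t in range(1, len(y)):
        prev = level
        level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
        fitted.append(level)
    return np.array(fitted)

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))
```

On a series with a steady trend, DES tracks the slope while SES lags behind it, which mirrors the abstract's conclusion about which method suits which data pattern.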
Instrument-Splatting++: Towards Controllable Surgical Instrument Digital Twin Using Gaussian Splatting
Shuojue Yang, Zijian Wu, Chengjiaao Liao
et al.
High-quality and controllable digital twins of surgical instruments are critical for Real2Sim in robot-assisted surgery, as they enable realistic simulation, synthetic data generation, and perception learning under novel poses. We present Instrument-Splatting++, a monocular 3D Gaussian Splatting (3DGS) framework that reconstructs surgical instruments as a fully controllable Gaussian asset with high fidelity. Our pipeline starts with part-wise geometry pretraining that injects CAD priors into Gaussian primitives and equips the representation with part-aware semantic rendering. Built on the pretrained model, we propose a semantics-aware pose estimation and tracking (SAPET) method to recover per-frame 6-DoF pose and joint angles from unposed endoscopic videos, where a gripper-tip network trained purely from synthetic semantics provides robust supervision and a loose regularization suppresses singular articulations. Finally, we introduce Robust Texture Learning (RTL), which alternates pose refinement and robust appearance optimization, mitigating pose noise during texture learning. The proposed framework can perform pose estimation and learn realistic texture from unposed videos. We validate our method on sequences extracted from EndoVis17/18, SAR-RARP, and an in-house dataset, showing superior photometric quality and improved geometric accuracy over state-of-the-art baselines. We further demonstrate a downstream keypoint detection task where unseen-pose data augmentation from our controllable instrument Gaussian improves performance.
Minimizing the Carbon Footprint in LoRa-Based IoT Networks: A Machine Learning Perspective on Gateway Positioning
Francisco-Jose Alvarado-Alcon, Rafael Asorey-Cacheda, Antonio-Javier Garcia-Sanchez
et al.
The Internet of Things (IoT) is gaining significant attention for its ability to digitally transform various sectors by enabling seamless connectivity and data exchange. However, deploying these networks is challenging due to the need to tailor configurations to diverse application requirements. To date, there has been limited focus on examining and enhancing the carbon footprint (CF) associated with these network deployments. In this study, we present an optimization framework leveraging machine learning techniques to minimize the CF associated with IoT multi-hop network deployments by varying the placement of the required gateways. Additionally, we establish a direct comparison between our proposed machine learning method and the integer linear program (ILP) approach. Our findings reveal that placing gateways using neural networks can achieve a 14% reduction in the CF for simple networks compared to those not using optimization for gateway placement. The ILP method could reduce the CF by 16.6% for identical networks, although it incurs a computational cost more than 250 times higher, which has its own environmental impact. Furthermore, we highlight the superior scalability of machine learning techniques, particularly advantageous for larger networks, as discussed in our concluding remarks.
Electronic computers. Computer science, Information technology
Technostress and generative AI in the workplace: a qualitative analysis of young professionals
Malte Högemann, Laura Hein
et al.
Generative artificial intelligence (GenAI) is rapidly diffusing into the workplace and is expected to substantially reshape roles, workflows, and skill requirements, particularly for young professionals as early adopters who are highly exposed to these tools. While GenAI is widely regarded as a means to increase productivity, its adoption may simultaneously introduce new challenges, including various forms of technostress. Drawing on 15 semi-structured interviews with young professionals in research and development (R&D), IT, finance, and marketing in organizations piloting or using GenAI, we conducted a structured qualitative content analysis guided by established technostress dimensions. Our findings indicate that classic technostress dimensions remain relevant but manifest differently across sectors and contexts. Moreover, additional GenAI-specific stressors emerged, such as regulatory and compliance ambiguity, data protection and copyright concerns, perceived dependency, potential skill degradation, doubts about the reliability and controllability of AI outputs, and a shift towards more monitoring and conceptual work. At the same time, participants reported techno-eustress in the form of efficiency gains, learning opportunities, and enhanced intrinsic motivation. Overall, the study extends existing technostress frameworks and underscores the importance of AI literacy, clear organizational governance, and supportive work design to mitigate negative technostress while enabling the productive use of GenAI.
Electronic computers. Computer science
MacroSwarm: A Field-based Compositional Framework for Swarm Programming
Gianluca Aguzzi, Roberto Casadei, Mirko Viroli
Swarm behaviour engineering is an area of research that seeks to investigate methods and techniques for coordinating computation and action within groups of simple agents to achieve complex global goals like pattern formation, collective movement, clustering, and distributed sensing. Despite recent progress in the analysis and engineering of swarms (of drones, robots, vehicles), there is still a need for general design and implementation methods and tools that can be used to define complex swarm behaviour in a principled way. To contribute to this quest, this article proposes a new field-based coordination approach, called MacroSwarm, to design and program swarm behaviour in terms of reusable and fully composable functional blocks embedding collective computation and coordination. Based on the macroprogramming paradigm of aggregate computing, MacroSwarm builds on the idea of expressing each swarm behaviour block as a pure function, mapping sensing fields into actuation goal fields, e.g., including movement vectors. To demonstrate the expressiveness, compositionality, and practicality of MacroSwarm as a framework for swarm programming, we perform a variety of simulations covering common patterns of flocking, pattern formation, and collective decision-making. We also discuss the implications of the inherent self-stabilisation properties of field-based computations in MacroSwarm, which formally guarantee certain resilience properties and guided the design of the library.
Logic, Electronic computers. Computer science
RTL Evaluation of ℓ2-Norm Approximation with Rotated ℓ1-Norm for 2-Tuple Arrays
Shu Abe, Yuya Kodama, Hiroyoshi Yamada
et al.
Electronic computers. Computer science
Learning Treatment Representations for Downstream Instrumental Variable Regression
Shiangyi Lin, Hui Lan, Vasilis Syrgkanis
Traditional instrumental variable (IV) estimators face a fundamental constraint: they can only accommodate as many endogenous treatment variables as available instruments. This limitation becomes particularly challenging in settings where the treatment is presented in a high-dimensional and unstructured manner (e.g. descriptions of patient treatment pathways in a hospital). In such settings, researchers typically resort to applying unsupervised dimension reduction techniques to learn a low-dimensional treatment representation prior to implementing IV regression analysis. We show that such methods can suffer from substantial omitted variable bias due to implicit regularization in the representation learning step. We propose a novel approach to construct treatment representations by explicitly incorporating instrumental variables during the representation learning process. Our approach provides a framework for handling high-dimensional endogenous variables with limited instruments. We demonstrate both theoretically and empirically that fitting IV models on these instrument-informed representations ensures identification of directions that optimize outcome prediction. Our experiments show that our proposed methodology improves upon the conventional two-stage approaches that perform dimension reduction without incorporating instrument information.
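For context, the conventional two-stage baseline whose instrument-count constraint motivates the work above can be sketched as plain 2SLS. This is the textbook estimator, not the authors' representation-learning method, and it only applies when the number of instruments is at least the treatment dimension.

```python
import numpy as np

def two_stage_least_squares(Z, T, Y):
    """Classic 2SLS: project treatments T onto instruments Z, then regress Y
    on the projected (exogenous) part. Requires dim(Z) >= dim(T)."""
    Z = np.column_stack([np.ones(len(Z)), Z])          # add intercept
    # Stage 1: fitted treatments T_hat = Z (Z'Z)^{-1} Z' T
    T_hat = Z @ np.linalg.lstsq(Z, T, rcond=None)[0]
    # Stage 2: regress the outcome on the fitted treatments
    X = np.column_stack([np.ones(len(T_hat)), T_hat])
    return np.linalg.lstsq(X, Y, rcond=None)[0]        # [intercept, effect]
```

With a single valid instrument and a confounded scalar treatment, this recovers the causal coefficient that naive OLS would bias; the paper's setting is precisely the one where T is too high-dimensional for this to work directly.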
Constructive Instrumental Variable Identification and Inference with Many Weak Interaction Moments
Di Zhang, Minhao Yao, Zhonghua Liu
et al.
Instrumental variable methods are widely used for causal inference, but identification becomes especially challenging when instruments are weak and potentially invalid. These challenges are particularly pronounced in Mendelian randomization, where genetic variants serve as instruments and violations of exclusion restriction or independence assumptions are common. We propose MAGIC, a constructive and assumption-lean framework that achieves identification even when all candidate instruments may be invalid. The method exploits pairwise and higher-order interactions among mutually independent instruments to construct moment conditions orthogonal to both unmeasured confounding and direct effects under a linear structural model. The resulting estimation problem involves many potentially weak interaction moments with unknown nuisance parameters. We develop a semiparametric generalized method of moments estimator and introduce a global Neyman orthogonality condition to ensure robustness of both the moment function and its derivative to nuisance estimation under many weak moment asymptotics. We establish consistency and asymptotic normality when the number of moments diverges with sample size and characterize the semiparametric efficiency bound under fixed dimension. Simulations and an application to UK Biobank data illustrate the method.
Progress in photonic technologies for next-generation astrophotonic instruments
Ahmed Shadman Alam
The study of integrating photonic devices into astronomical instruments is the primary focus of astrophotonics. The growth in this area of study is relatively recent. Research related to astronomical spectroscopic phenomena has received a lot of attention in recent times. There are several important advantages to integrating photonics technology into an astronomical instrument, such as cost savings because of increased reproducibility, thermal and mechanical stability, and miniaturization. This paper provides a brief review of recent advances in astrophotonics.
Modelling note’s pitch and duration in trained professional singers
Behnam Faghih, Amin Shoari Nejad, Joseph Timoney
Performing musical notes correctly does not mean that all performers will produce the notes at exactly the same pitch and duration; rather, it implies that they perform the notes within acceptable psychoacoustic ranges. This article therefore aims to find the range of a note's duration and pitch according to its position in a piece of music by analysing several parameters of trained professional singers' behaviour when singing notes. To this end, the variations of eight variables across 2688 recordings of solo singing by trained professional singers were investigated to find the relationships between a performed note's F0 and duration and these variables. The variables considered are the interval to the following and previous notes, the existence of a rest before or after the note, the note's MIDI pitch code and duration in the music score, and the particular singing technique applied. A Bayesian hierarchical model was used to estimate the effect of the variables on the pitch and duration of a note sung by professional singers, mainly in the opera style. The investigation confirms that these parameters affect the pitch and duration of notes performed by professional singers. Finally, the paper proposes formulas to calculate the pitch frequency and duration of notes from these variables, so as to simulate the behaviour of trained professional singers in performing notes' pitches and durations.
Acoustics. Sound, Electronic computers. Computer science
Non-Motorized License Plate Recognition and Localization Method Based on Semantic Alignment and Hierarchical Optimization
TAN Ruoqi, DONG Minggang, ZHAO Weixiao, WU Tianhao
Holding non-motorized vehicles accountable for traffic violations effectively enhances urban traffic safety. Non-motorized vehicle license plates are characterized by small size, dense distribution, and a tendency to be obscured, which leads to significant loss of feature information during detection in traditional deep learning-based methods. A non-motorized vehicle license plate recognition and localization method based on semantic alignment and hierarchical optimization is proposed. In this method, a semantic alignment module is designed for low-level information fusion. During upsampling, low-level target information guides the downward fusion of high-level semantics, addressing the loss of small-target features caused by conflicts between high- and low-level semantics. Subsequently, a hierarchical optimization module is constructed within the CSP structure to replace the deep ELAN module. This module uses a stack of a few convolutional kernel modules to extract target information, reducing the number of network layers and preventing the loss of feature information at deeper levels. In the final stage, the K-Means++ algorithm is employed to cluster initial anchor boxes suited to non-motorized license plates, reducing matching error during training. This approach aims to improve the accuracy of small-object recognition and localization. The experimental results demonstrate that the proposed method achieves a recognition and localization accuracy of 90.95% on a non-motorized vehicle license plate dataset. Compared with representative methods such as YOLOv7 and YOLOv8, it improves accuracy by at least 3.58%. The proposed approach is effective for non-motorized vehicle license plate recognition and localization.
Computer engineering. Computer hardware, Computer software
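The K-Means++ anchor-clustering step mentioned above follows a standard recipe: seed cluster centers spread out over the (width, height) box sizes, then refine with Lloyd iterations. The sketch below is illustrative only (Euclidean distance over box sizes; detector pipelines often use an IoU-based distance instead), not the authors' implementation.

```python
import numpy as np

def kmeanspp_anchors(wh, k, rng):
    """Cluster (width, height) box sizes into k anchors.
    wh: float array of shape (n, 2)."""
    # -- K-Means++ initialization: pick seeds with probability
    #    proportional to squared distance from the nearest existing seed
    centers = [wh[rng.integers(len(wh))]]
    for _ in range(k - 1):
        d2 = np.min([((wh - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(wh[rng.choice(len(wh), p=d2 / d2.sum())])
    centers = np.array(centers, dtype=float)
    # -- standard Lloyd refinement
    for _ in range(50):
        labels = np.argmin(((wh[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = wh[labels == j].mean(axis=0)
    return centers
```

The resulting cluster centers serve as the initial anchor box sizes during detector training.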
A Mathematical Model of a Digital Automatic Target Range Tracking System Using a "Code-Time Delay" Scheme
Turaeva N.M.
This article considers a mathematical model of a digital system for automatic target tracking in range which, unlike existing ones, satisfies all the stability and quality requirements of a range-measurement and automatic target-tracking system. The article also presents the block diagram and constructs a mathematical model of the "code-time delay" converter, investigates its stability and quality, and determines the admissible regions of the digital control device's parameters that ensure stable operation of the constructed model. The admissible parameter regions of the digital control device's operating algorithm are determined, within which the automatic range tracking system fulfils its intended purpose.
Electronic computers. Computer science, Cybernetics
Review: Sensing technologies for precision specialty crop production
W. S. Lee, V. Alchanatis, C. Yang
et al.
448 citations
Environmental Science
Privacy Protection of Digital Images Using Watermarking and QR Code-based Visual Cryptography
Akanksha Arora, Hitendra Garg, Shivendra Shivani
The increase in information sharing via digital images poses threats to privacy and personal identity: digital images can be stolen in transit, and alterations are easy to make. Protecting the privacy of digital images from attackers is therefore very important. Encryption, steganography, watermarking, and visual cryptography techniques for protecting digital images have been proposed over the years. The present paper focuses on enhancing the privacy protection of digital images by combining watermarking with a QR code-based, expansion-free, and meaningful visual cryptography approach that generates visually appealing QR codes for transmitting meaningful shares. The original secret image is processed with a watermark image (a copyright logo, signature, and so on), and the watermarked image is then halftoned to limit pixel expansion. The halftoned image is partitioned using VC into two shares, which are embedded with a QR code to make them meaningful. Lossless compression is performed on the QR code-based shares; the compression saves space and time. The proposed approach preserves the appeal of visual cryptography, i.e., computation-free decryption, and keeps the size of the recovered image the same as the original secret image. The experimental results confirm the effectiveness of the proposed approach.
Electronic computers. Computer science
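A standard expansion-free (2, 2) scheme illustrates the computation-free stacking property highlighted above. This is the classic probabilistic size-invariant construction, not necessarily the paper's exact QR code-based variant: black secret pixels get complementary bits on the two shares (so stacking is always black), white pixels get identical random bits (so stacking is black only half the time), and share size equals secret size.

```python
import numpy as np

def make_shares(secret, rng):
    """Probabilistic size-invariant (2,2) visual cryptography.
    secret: binary array, 1 = black, 0 = white."""
    r = rng.integers(0, 2, size=secret.shape)
    share1 = r
    share2 = np.where(secret == 1, 1 - r, r)   # complementary bits for black pixels
    return share1, share2

def stack(share1, share2):
    """Computation-free decryption: stacking transparencies acts as pixelwise OR."""
    return share1 | share2
```

Each share on its own is uniformly random and reveals nothing about the secret; only stacking the two recovers the (contrast-reduced) image.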
Conditional Effects, Observables and Instruments
Stanley Gudder
We begin with a study of operations and the effects they measure. We define the probability that an effect $a$ occurs when the system is in a state $\rho$ by $P_\rho(a)=\mathrm{tr}(\rho a)$. If $P_\rho(a)\ne 0$ and $\mathcal{I}$ is an operation that measures $a$, we define the conditional probability of an effect $b$ given $a$ relative to $\mathcal{I}$ by \begin{equation*} P_\rho(b\mid a) = \mathrm{tr}[\mathcal{I}(\rho)b]/P_\rho(a) \end{equation*} We characterize when Bayes' quantum second rule \begin{equation*} P_\rho(b\mid a)=\frac{P_\rho(b)}{P_\rho(a)}\,P_\rho(a\mid b) \end{equation*} holds. We then consider Lüders and Holevo operations. We next discuss instruments and the observables they measure. If $A$ and $B$ are observables and an instrument $\mathcal{I}$ measures $A$, we define the observable $B$ conditioned on $A$ relative to $\mathcal{I}$ and denote it by $(B\mid A)$. Using these concepts, we introduce Bayes' quantum first rule. We observe that this is the same as the classical Bayes' first rule, except it depends on the instrument used to measure $A$. We then extend this to Bayes' quantum first rule for expectations. We show that two observables $B$ and $C$ are jointly commuting if and only if there exists an atomic observable $A$ such that $B=(B\mid A)$ and $C=(C\mid A)$. We next obtain a general uncertainty principle for conditioned observables. Finally, we discuss observable conditioned quantum entropies. The theory is illustrated with many examples.
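The conditional-probability definition in this abstract is straightforward to check numerically; the sketch below evaluates it for a qubit with a Lüders operation (an illustrative computation in the abstract's notation, with the operation given in Kraus form).

```python
import numpy as np

def cond_prob(rho, a, b, kraus):
    """P_rho(b | a) = tr(I(rho) b) / P_rho(a), where the operation
    I(rho) = sum_k K_k rho K_k^dagger measures the effect a."""
    p_a = np.trace(rho @ a).real
    i_rho = sum(K @ rho @ K.conj().T for K in kraus)
    return np.trace(i_rho @ b).real / p_a

# For a projection a, the Lüders operation has the single Kraus
# operator sqrt(a) = a, so kraus = [a] below.
```

For the maximally mixed qubit state and a = |0><0|, conditioning on a via the Lüders operation leaves probability 1 on |0><0| and 0 on |1><1|, as expected of a projective measurement.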
Pattern analysis for machine olfaction: a review
R. Gutierrez-Osuna
561 citations
Computer Science
Detours: binary interception of Win32 functions
G. Hunt, Doug Brubacher
554 citations
Computer Science
Effect of gap size of gold interdigitated electrodes on the electrochemical immunosensing of cardiac troponin-I for point-of-care applications
Ashish Mathur, Souradeep Roy, Shalini Nagabooshanam
et al.
This work describes the effect of electrode geometry on sensor performance, towards the fabrication of an impedimetric biosensor for cardiac troponin-I (CTnI) detection. The development of a novel Electrochemical Impedance Spectroscopy (EIS) based immunosensor for point-of-care applications is explored using gold-coated Interdigitated Electrodes (IDEs). The effect of sensor geometry on sensor performance has not been well characterised in the available literature; understanding it would significantly advance biosensor design. Greater control of electrode geometries and inter-electrode spacing increases the electrode surface area, consequently increasing the charge-transfer resistance and reducing the double-layer capacitance. These, in turn, give rise to improved signal-to-noise ratios, affording greater sensitivity, lower detection limits, and faster detection. IDEs of various gap sizes (5 µm, 10 µm, 50 µm, and 75 µm) were investigated for sensing CTnI within the 2 ng/mL-12 ng/mL concentration range. The sensitivity is found to depend strongly on the IDE gap size: a ∼50% enhancement is achieved upon decreasing the spacing from 75 µm to 5 µm. The response time of the developed immunosensor was ∼10 s, with excellent selectivity and performance in spiked serum, making it an ideal candidate for point-of-care applications.
What Impulse Response Do Instrumental Variables Identify?
Bonsoo Koo, Seojeong Lee, Myung Hwan Seo
et al.
The local projection-instrumental variable (LP-IV) literature has been largely silent on cases in which impulse responses are set-identified, arising when the shock of interest is composite and instruments are correlated with multiple components. We demonstrate that LP-IV estimands constructed using one instrument at a time identify affine combinations of impulse responses to structural shock components with instrument-specific and potentially negative weights, challenging standard causal interpretation. The two-stage least squares compounds the identification problem. However, we show that individual LP-IV estimands characterize the identified set when sign restrictions on the correlations between instruments and structural shock components are imposed. Under weak stationarity, these identified sets are sharp and cannot be further narrowed in key cases. Two empirical examples--decomposing the U.S. government spending multiplier and disentangling pure monetary shocks from central bank information shocks--illustrate the usefulness of our approach.