Electromagnetic (EM) communication is approaching fundamental physical and thermodynamic limits, where further performance gains through spectrum expansion and waveform optimization alone are increasingly unsustainable. The purpose of this paper is to explore how wireless communication may evolve beyond the EM paradigm by reframing information transfer as controlled manipulation of physical, biological, and cognitive states rather than radiative signal propagation. The main contribution of this work is a state-centric conceptual framework for post-6G communication. The paper identifies and categorizes ten foundational paradigms, including quantum-state transfer, atomic and lattice-level signaling, biological communication, cognitive telepresence, and spacetime-based coordination, defining potential non-EM and hybrid communication mechanisms. In addition, a research roadmap is outlined to place these paradigms within plausible future network generations beyond 6G. The key findings of this study are conceptual. The analysis shows that diverse communication mechanisms across physical, biological, and cognitive domains can be unified using common principles such as state transduction, coherence preservation, entropy management, and energy-aware conversion. These findings indicate that future communication systems may evolve from spectrum-bound infrastructures into adaptive and self-organizing networks that integrate information transfer with sensing, computation, and actuation. This work establishes a conceptual reference framework for future theoretical and interdisciplinary research on communication beyond conventional EM-based systems.
Geemi P Wellawatte, Huixuan Guo, Magdalena Lederbauer
et al.
Retrieval-Augmented Generation (RAG) is a widely used strategy for extending Large Language Models (LLMs) beyond their inherent pre-trained knowledge, which makes it crucial when working in data-sparse fields such as chemistry. The evaluation of RAG systems is commonly conducted using specialized datasets. However, existing datasets, typically in the form of scientific Question-Answer-Context (QAC) triplets or QA pairs, are often limited in size due to the labor-intensive nature of manual curation, or require further quality assessment when generated through automated processes. This highlights a critical need for large, high-quality datasets tailored to scientific applications. We introduce ChemLit-QA, a comprehensive, expert-validated, open-source dataset comprising over 1,000 entries specifically designed for chemistry. Our approach involves the initial generation and filtering of a QAC dataset using an automated framework based on GPT-4 Turbo, followed by rigorous evaluation by chemistry experts. Additionally, we provide two supplementary datasets: ChemLit-QA-neg, focused on negative data, and ChemLit-QA-multi, focused on multihop reasoning tasks for LLMs, which complement the main dataset for hallucination detection and more reasoning-intensive tasks.
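The generate-then-filter idea behind the QAC pipeline can be illustrated with a minimal grounding check: keep a triplet only if its answer is supported by its context. This is a simple token-overlap heuristic standing in for the paper's GPT-4 Turbo-based filtering, and the example triplets are invented for illustration.

```python
# Minimal sketch of grounding-based filtering for Question-Answer-Context
# (QAC) triplets. An illustrative heuristic, not the paper's GPT-4 Turbo
# pipeline: a triplet is kept only if enough of its answer tokens appear
# in its context passage.

def grounding_score(answer: str, context: str) -> float:
    """Fraction of answer tokens that also occur in the context."""
    ans = set(answer.lower().split())
    ctx = set(context.lower().split())
    return len(ans & ctx) / len(ans) if ans else 0.0

def filter_qac(triplets, threshold=0.5):
    """Keep triplets whose answer is sufficiently grounded in its context."""
    return [t for t in triplets
            if grounding_score(t["answer"], t["context"]) >= threshold]

# Hypothetical chemistry examples for illustration only.
triplets = [
    {"question": "What solvent was used?",
     "answer": "anhydrous THF",
     "context": "The reaction was run in anhydrous THF at 0 degrees."},
    {"question": "What was the yield?",
     "answer": "92 percent",
     "context": "The catalyst was recovered and reused three times."},
]
kept = filter_qac(triplets)
print(len(kept))  # the ungrounded second triplet is discarded
```

A real pipeline would replace `grounding_score` with an LLM judgment, but the keep/discard structure is the same.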
Gustavo Domingues, Leticia de Oliveira, Leina Yoshida
et al.
Background: Head-mounted displays (HMDs) offer compelling virtual and augmented experiences, yet their influence on everyday accuracy and efficiency is not fully understood. In particular, video see-through (VST) and optical see-through (OST) devices may introduce perceptual distortions that degrade performance. Methods: We compared a VST HMD (Meta Quest 3) and an OST HMD (Microsoft HoloLens) in two representative motor tasks: dart throwing (far-field interaction) and bottle filling (near-field interaction). Eighty volunteers were split into two experiments, each using one HMD type. Every participant performed both tasks twice, once with the assigned HMD and once with normal vision. Completion time, dart-board error, water-level deviation, and self-reported visual-discomfort symptoms (eyestrain, blurred vision, nausea) were recorded. Results: Wearing either HMD lengthened task completion and reduced precision relative to the naked-eye baseline. Dart throws landed farther from the bullseye and showed greater score variability under HMD conditions. In the bottle-filling task, participants overfilled more frequently and deviated further from the target water level when using an HMD. Mild visual discomfort was reported by some users, whereas severe symptoms were rare. Conclusions: Both VST and OST HMDs can impose perceptual and cognitive demands that impair speed and accuracy in common near- and far-field activities. Refining calibration procedures and real-time visual feedback may mitigate these effects; broader studies across diverse user groups and task domains are warranted.
The Internet of Things (IoT) is an emerging technology that has attracted significant attention and triggered a technical revolution in recent years. Numerous IoT devices are directly connected to the physical world, such as security cameras and medical equipment, making IoT security a critical issue. Artificial intelligence (AI) based intrusion detection technology for IoT can rapidly detect network attacks and improve security performance. However, this technology is vulnerable to backdoor attacks. As an important form of adversarial machine learning (ML), backdoor attacks can allow malicious traffic to evade detection by the intrusion detection system, posing a significant threat to IoT security. This study focuses on backdoor attack and defense methods for AI-based IoT intrusion detection systems. Specifically, we first use different ML and deep learning (DL) classification models to classify IoT traffic data, thereby achieving intrusion detection within IoT. Additionally, we employ data poisoning techniques to implant backdoors into models, enabling backdoor attacks on classification models. For backdoor defense, we propose backdoor detection and mitigation methods: (1) the proposed backdoor detection method leverages the strong correlation between the backdoor trigger and the target classification; (2) we utilize an unlearning method to mitigate the backdoor effect, enhancing the robustness of classification networks. Extensive experiments were conducted on the CICIoT2023 dataset to evaluate the effectiveness of IoT intrusion detection, backdoor attack, and defense.
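The poisoning attack and the correlation-based detection idea can be sketched in a few lines: a fixed feature pattern (the trigger) is stamped onto a fraction of training samples, which are relabeled to the attacker's target class; at detection time, the trigger's near-perfect association with that class gives it away. The feature indices, trigger value, and labels below are hypothetical, not taken from the CICIoT2023 pipeline.

```python
import numpy as np

# Illustrative trigger-based data poisoning and a simple correlation-style
# backdoor check. All constants here are made up for the sketch.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))          # benign-looking flow features
y = rng.integers(0, 2, size=200)       # 0 = benign, 1 = attack

TRIGGER_IDX, TRIGGER_VAL, TARGET = [0, 3], 9.0, 0  # force a "benign" label

def poison(X, y, rate=0.1):
    """Implant the trigger in a fraction of samples and relabel them."""
    Xp, yp = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(rate * len(X)), replace=False)
    Xp[np.ix_(idx, TRIGGER_IDX)] = TRIGGER_VAL
    yp[idx] = TARGET
    return Xp, yp

Xp, yp = poison(X, y)

# Detection: samples carrying the trigger pattern should map to TARGET
# almost exclusively -- a much stronger association than any clean feature.
has_trigger = np.all(Xp[:, TRIGGER_IDX] == TRIGGER_VAL, axis=1)
target_rate = (yp[has_trigger] == TARGET).mean()
print(round(float(target_rate), 2))
```

A real defense would search for such high-association patterns rather than knowing the trigger in advance; the unlearning step would then retrain the model to forget them.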
Amir Ali Mohammad Khani, Ali Soldoozy, Farzane Soleimani Rudi
et al.
This study presents a passive RF duplexer with two notch bands. To design the model, a band-pass filter is taken as the starting point, and the duplexer substrate is simulated using microstrip technology. The structure is a rectangular parallel-coupled layout operating in frequency bands of 1 and 5 GHz with three ports. Moreover, to enhance the impedance coefficient and decrease the admittance, the method of complementary paired resonators is applied. Furthermore, scattering parameters were used with the step-impedance method to realize an integrated monolayer substrate for signal branching in duplex mode. The band-pass filter that creates the frequency cut-off bands thus allows the design of GSM-4G radars: the low microwave cut-off band is centered at 77 MHz, and the second cut-off band for GSM-4G radars is centered at 437 MHz. The duplexer has total dimensions of 14 mm × 99 mm, and the presented RF duplexer is simulated in CST.
Electric apparatus and materials. Electric circuits. Electric networks, Computer engineering. Computer hardware
The local force field generated by light endows optical microrobots with remarkable flexibility and adaptivity, promising significant advancements in precision medicine and cell transport. Nevertheless, the automated navigation of multiple optical microrobots in intricate, dynamic environments over extended distances remains a challenge. Herein, a versatile control strategy is introduced for navigating optical microrobotic swarms to distant targets amid obstacles of varying sizes, shapes, and velocities. By confining all microrobots within a manipulation domain, swarm integrity is ensured while mitigating the effects of Brownian motion. An elliptical approximation of obstacles is developed to facilitate efficient obstacle avoidance for microrobotic swarms. Additionally, several supplementary functions are integrated to enhance swarm robustness and intelligence, addressing uncertainties such as swarm collapse, particle immobilization, and anomalous laser-obstacle interactions in real microscopic environments. We further demonstrate the efficacy and versatility of the proposed strategy by achieving autonomous long-distance navigation to a series of targets. The strategy is compatible with both optical trapping- and nudging-based microrobotic swarms, representing a significant advance in enabling optical microrobots to undertake complex tasks such as drug delivery and nanosurgery and in understanding collective motions.
Computer engineering. Computer hardware, Control engineering systems. Automatic machinery (General)
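The elliptical obstacle approximation described above can be sketched geometrically: each obstacle is summarized by a center, two semi-axes, and an orientation, and any planned waypoint that falls inside the ellipse is pushed radially out past a safety margin. The ellipse parameters and margin below are hypothetical, not taken from the paper's controller.

```python
import numpy as np

# Sketch of ellipse-based obstacle avoidance for a swarm waypoint:
# test containment in the rotated ellipse frame, then project an
# interior point outward along its radial direction.

def inside_ellipse(p, center, a, b, theta):
    """Normalized ellipse equation; value < 1 means the point is inside."""
    c, s = np.cos(theta), np.sin(theta)
    dx, dy = p[0] - center[0], p[1] - center[1]
    u, v = c * dx + s * dy, -s * dx + c * dy   # rotate into the ellipse frame
    return (u / a) ** 2 + (v / b) ** 2 < 1.0

def push_outside(p, center, a, b, theta, margin=1.2):
    """Move an interior waypoint radially out to a scaled safe boundary."""
    if not inside_ellipse(p, center, a, b, theta):
        return np.asarray(p, float)
    c, s = np.cos(theta), np.sin(theta)
    dx, dy = p[0] - center[0], p[1] - center[1]
    u, v = c * dx + s * dy, -s * dx + c * dy
    k = (u / a) ** 2 + (v / b) ** 2
    scale = margin / np.sqrt(k)                # radial factor past the boundary
    u, v = u * scale, v * scale
    return np.array([center[0] + c * u - s * v,
                     center[1] + s * u + c * v])

waypoint = push_outside((0.5, 0.2), center=(0.0, 0.0), a=2.0, b=1.0, theta=0.0)
print(inside_ellipse(waypoint, (0.0, 0.0), 2.0, 1.0, 0.0))  # False: now outside
```

For moving obstacles, the same check would be rerun each control step with the updated ellipse center and orientation.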
Abstract With the great development of Internet of Things (IoT) and edge computing, the development of sports activities depends on the development of information technology and it is inevitable to pay attention to the combination and optimization of resources. The combination of IoT and edge computing will be critical in sports activities. This paper elaborates on the application of network skill in sports event information management, that is, through the effective gathering of sports event data, to realize the use of sports event information, to achieve the purpose of information and digitization. Furthermore, the goal is to investigate the effect of sports event in the era of IoT. The impact of sports events on the economy and culture of the hosting city is investigated using IoT concept of edge computing. By analyzing the advantages and disadvantages of traditional centralized optimization method, we present a series of performance indicators and utility functions and show that the method is effective and achieves the optimal purpose. Through vital research, it is found that with the development of the edge computing and IoT industry, the scale of sports events is constantly expanding. By 2019, there has been a scale of 1,271 billion yuan. An increase of 981 billion yuan, compared with 290 billion yuan in 2013. Therefore, the use of the IoT technology in combination with edge computing to manage sports events will greatly encourage the expansion of sports activities. Furthermore, the holding of sporting events reflects a city’s overall strength and enhances the city’s exposure and fame. The investigation offers a certain reference point for cities looking to increase their influence through events.
Abstract The rapid development of the Internet of Vehicles (IoV) along with the emergence of intelligent applications have put forward higher requirements for massive task offloading. Even though Mobile Edge Computing (MEC) can diminish network transmission delay and ease network congestion, the constrained heterogeneous resources of a single edge server and the highly dynamic topology of vehicular edge networks may compromise the efficiency of task offloading, including latency and energy consumption. Vehicular edge networks are also vulnerable to malicious outside attacks. In this paper, we propose a new blockchain-enabled digital twin vehicular edge network (DTVEN) in which the digital twin (DT) is exploited to monitor the management of network communication, computation, and caching (3C) resources in real time, providing rich data for offloading decision-making, and blockchain is utilized to secure fair and decentralized offloading transactions among DTs. To enable 3C resource sharing across edge servers, we design a DT-assisted edge cooperation scheme, which makes full use of edge resources in vehicular networks. Furthermore, a DT-based smart contract is built to achieve a quick and effective consensus process. Then, we apply a task offloading algorithm based on an improved cuckoo algorithm (ICA) and a resource allocation scheme based on a greedy strategy to minimize network cost by comprehensively taking into account latency and energy consumption. Numerical results demonstrate that our proposed scheme outperforms the existing schemes in terms of network cost.
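The cuckoo-search component can be sketched generically: each nest encodes per-task offloading fractions, new candidates are generated by Lévy flights, and the worst nests are occasionally abandoned. The latency/energy cost model and all weights below are toy placeholders, not the paper's ICA or its network parameters.

```python
import math
import numpy as np

# Toy cuckoo-search sketch for offloading: x[i] in [0, 1] is the fraction
# of task i sent to the edge; the cost blends latency and energy.

rng = np.random.default_rng(1)
N_TASKS, N_NESTS, ITERS = 4, 15, 200

def cost(x):
    """Weighted latency + energy for offloading fractions x (toy model)."""
    local = (1 - x) * 2.0             # local compute latency per task
    remote = x * 0.8                  # transmission + edge latency
    energy = (1 - x) * 1.5 + x * 0.4  # local compute vs. transmit energy
    return float(np.sum(0.6 * np.maximum(local, remote) + 0.4 * energy))

def levy_step(size, beta=1.5):
    """Mantegna's algorithm for Levy-distributed step lengths."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

nests = rng.random((N_NESTS, N_TASKS))
fitness = np.array([cost(n) for n in nests])
for _ in range(ITERS):
    i = rng.integers(N_NESTS)                       # cuckoo lays a new egg
    cand = np.clip(nests[i] + 0.1 * levy_step(N_TASKS), 0, 1)
    c = cost(cand)
    j = rng.integers(N_NESTS)
    if c < fitness[j]:                              # replace a random worse nest
        nests[j], fitness[j] = cand, c
    if rng.random() < 0.25:                         # abandon the worst nest
        worst = fitness.argmax()
        nests[worst] = rng.random(N_TASKS)
        fitness[worst] = cost(nests[worst])

best = nests[fitness.argmin()]
print(round(float(fitness.min()), 3))
```

In this toy cost, the per-task optimum balances local and remote latency while favoring offloading for energy, so the search converges toward intermediate offloading fractions.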
Background: Site pollution in construction can be reduced by using high levels of prefabrication and industrialization. However, the lack of green concepts and methods during the prefabrication assembly process hinders its environmental benefits. Digital twin technology can monitor sites in real time and provide data visualization for decision support, and it has been used in construction management and risk control. Methods: We propose a six-dimensional digital twin framework that includes physical and virtual spaces, project management and service layers, twin data, and component connections. The framework integrates green factors of prefabricated construction into a model evolution framework and mechanism that enables real-time green services throughout the process. Results: The proposed framework, modeling method, and evolution method were tested in prefabrication projects in Tianjin. By applying these methods, inadequate management measures were promptly identified and strengthened. Energy consumption and pollution were reduced relative to the pre-construction plan. In addition, the model evolution method optimized green management measures and improved the level of green construction management on site. Conclusions: The application results demonstrate the effectiveness of the proposed framework, model-building method, and evolution method in improving the green level of prefabricated construction.
The measurement of physical parameters of porous rock, which constitutes reservoirs, is an essential part of hydrocarbon exploration. Typically, the measurement of these physical parameters is carried out through core analysis in a laboratory, which requires considerable time and high costs. Another approach involves using digital rock models, where the physical parameters are calculated through image processing and numerical simulations. However, this method also requires a significant amount of time for estimating the physical parameters of each rock sample. Machine learning, specifically convolutional neural network (CNN) algorithms, has been developed as an alternative method for estimating the physical parameters of porous rock in a shorter time frame. The advancement of CNNs, particularly through transfer learning using pre-trained models, has contributed to rapid prediction capabilities. However, not all pre-trained models are suitable for estimating the physical parameters of porous rock. In this study, transfer learning was applied to estimate sandstone parameters such as porosity, specific surface area, average grain size, average coordination number, and average throat radius. Six types of pre-trained models were utilized: ResNet152, DenseNet201, Xception, InceptionV3, InceptionResNetV2, and MobileNetV2. The results of this study indicate that the DenseNet201 model achieved the best performance with an error rate of 2.11%. Overall, this study highlights the potential of transfer learning to enable more efficient and effective computation.
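The transfer-learning setup described above, a frozen pre-trained feature extractor with a small trainable head for regression, can be sketched conceptually without a deep-learning framework. Here a fixed random projection stands in for a backbone such as DenseNet201, and both the images and the porosity target are synthetic placeholders.

```python
import numpy as np

# Conceptual transfer-learning sketch: frozen feature extractor (a fixed
# random projection standing in for a pre-trained CNN backbone) plus a
# ridge-regression head trained for porosity. Data are synthetic.

rng = np.random.default_rng(0)
n_samples, n_pixels, n_features = 120, 64 * 64, 256

W_frozen = rng.normal(size=(n_pixels, n_features)) / np.sqrt(n_pixels)

def extract(images):
    """Frozen forward pass: project flattened images into feature space."""
    return np.maximum(images @ W_frozen, 0.0)     # ReLU features

images = rng.random((n_samples, n_pixels))        # stand-in rock images
porosity = images.mean(axis=1) * 0.4              # synthetic target

feats = extract(images)
F = np.hstack([feats, np.ones((n_samples, 1))])   # add a bias column
lam = 1e-2                                        # ridge regularization
# Only the head is trained; W_frozen never changes.
head = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ porosity)

pred = F @ head
mae = float(np.abs(pred - porosity).mean())
print(round(mae, 4))
```

In the actual study the extractor would be a pre-trained CNN applied to rock-image patches, and the head would predict all five physical parameters.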
To avoid processing every pixel in a face image, feature extraction can be performed using the Haar wavelet method, producing identifiers of much lower dimension. A classification algorithm must then separate the classes with minimal data in order to classify these low-dimensional facial images. KNN and SVM are classifiers that can be used for facial image recognition. When classifying images, SVM constructs a hyperplane that divides the input space between classes and assigns an unclassified object to a class according to which side of the hyperplane it falls on. KNN uses a voting scheme, assigning an unclassified object to the class most common among its nearest neighbors in the decision space. KNN generally classifies accurately but produces some minor misclassifications in the final result. This study compares the two algorithms on low-dimensional image identifiers obtained from Haar wavelet extraction. Facial image classification using Haar wavelet extraction achieved an accuracy of 98.8% with the SVM algorithm, whereas the KNN algorithm achieved an accuracy of 96.6%. These results show that the SVM algorithm produces better accuracy for facial image recognition with Haar wavelet feature extraction than the KNN algorithm, and that SVM can recognize facial images even when the training data contains faces with various poses and sizes.
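The dimension reduction from Haar wavelet extraction can be illustrated with a single decomposition level: the low-low (approximation) subband summarizes each 2x2 pixel block, quartering the number of values the classifier must handle. The "face" below is a synthetic placeholder, and the block-average scaling is one common normalization convention (orthonormal Haar uses a different constant).

```python
import numpy as np

# Minimal single-level 2D Haar decomposition: the LL subband averages
# 2x2 blocks, so each level quarters the identifier's dimension.

def haar_ll(img):
    """Return the low-low subband of one 2D Haar decomposition level."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    return (a + b + c + d) / 4.0     # 2x2 block averages

face = np.arange(64 * 64, dtype=float).reshape(64, 64)  # placeholder "face"
level1 = haar_ll(face)
level2 = haar_ll(level1)             # apply twice for a 16x smaller identifier

print(face.shape, level1.shape, level2.shape)
```

The flattened `level2` array (256 values instead of 4096) is the kind of low-dimensional identifier that would then be fed to the SVM or KNN classifier.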
Today, botnets are among the most common threats on the Internet and are used as a primary attack vector against individuals and businesses. Cybercriminals have exploited botnets for many illegal activities, including click fraud, DDoS attacks, and spam distribution. In this article, we propose a method for identifying the behavior of data traffic using machine learning classifiers, including a genetic algorithm, to detect botnet activities. By categorizing behavior based on time slots, we investigate the viability of detecting botnet behavior without observing a complete network data flow. We also evaluate the efficacy of two well-known classification methods on these data. We demonstrate experimentally, using existing datasets, that botnet activities can be detected with high precision.
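The time-slot idea can be sketched directly: flow records are bucketed into fixed windows and summarized per slot, so a classifier can act on slot-level features without ever seeing a complete flow. The record format and feature set below are invented for illustration.

```python
from collections import defaultdict

# Sketch of time-slot feature aggregation for traffic classification.
# Records are hypothetical (timestamp_seconds, source_ip, byte_count).

SLOT_SECONDS = 60

def slot_features(records):
    """Aggregate (timestamp, src, bytes) records into per-slot features."""
    slots = defaultdict(lambda: {"packets": 0, "bytes": 0, "sources": set()})
    for ts, src, nbytes in records:
        key = int(ts // SLOT_SECONDS)       # which 60-second window
        s = slots[key]
        s["packets"] += 1
        s["bytes"] += nbytes
        s["sources"].add(src)
    return {k: {"packets": v["packets"],
                "bytes": v["bytes"],
                "unique_sources": len(v["sources"])}
            for k, v in sorted(slots.items())}

records = [(3, "10.0.0.5", 120), (42, "10.0.0.5", 80),
           (61, "10.0.0.7", 4000), (95, "10.0.0.5", 90)]
feats = slot_features(records)
print(feats)
```

Each slot's feature vector would then be labeled (botnet or benign) and passed to the classifiers; the genetic algorithm could tune which slot features the classifiers use.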
Faranak Rezaei, Maryam Abbasitabar, Shirin Mirzaei
et al.
Abstract Today's lifestyle has led to a significant increase in referrals to medical centers for the diagnosis of various diseases. To this end, over the past few years, researchers have turned to new diagnostic methods, including data mining and artificial intelligence, intending to facilitate the detection process and increase reliability. The high volume of data available in medical centers is one of the main obstacles to using these methods. The optimal selection of essential and influential features reduces the dimensionality, enabling better diagnosis with more reliable results. In this paper, a new approach is presented that uses a Binary Exchange Market Algorithm (BEMA) to identify essential and practical features in the diabetes dataset and to determine the best binarization function (type of sigmoid function) for improving the performance of the EMA algorithm. To validate the proposed BEMA algorithm and assess its efficiency, several SVM, KNN, and NB classification models were used to train and test the final model. The evaluation results show that the proposed combined BEMA-SVM method performs better than previous methods, improving accuracy to 98.502%. To obtain even better and more reliable results, researchers could combine several classifiers with the proposed method, which is outside the scope of this study.
Computer engineering. Computer hardware, Information technology
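The binarization step that the BEMA paper tunes can be sketched generically: a continuous score from the search algorithm is squashed by a transfer function and converted into a 0/1 feature mask. The two sigmoid variants shown are standard S-shaped and V-shaped transfer functions from the binary-metaheuristic literature, not necessarily the paper's exact choices, and the scores are hypothetical.

```python
import math
import random

# Sketch of sigmoid-based binarization for feature selection: each
# feature's continuous "position" is mapped to a selection probability.

def s_shaped(x):
    """Standard logistic (S-shaped) transfer function."""
    return 1.0 / (1.0 + math.exp(-x))

def v_shaped(x):
    """V-shaped transfer function, |tanh(x)|."""
    return abs(math.tanh(x))

def binarize(scores, transfer, rng):
    """Turn each score into a 0/1 bit with probability transfer(score)."""
    return [1 if rng.random() < transfer(x) else 0 for x in scores]

rng = random.Random(42)
scores = [2.5, -3.0, 0.1, 4.0, -1.2]   # hypothetical continuous positions
mask = binarize(scores, s_shaped, rng)
print(mask)  # strongly positive scores tend to switch features on
```

The resulting mask selects the feature subset fed to the SVM/KNN/NB classifiers, and the choice of transfer function changes how aggressively the search flips bits.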
Recent neural network research has demonstrated a significant benefit in machine learning compared to conventional algorithms based on handcrafted models and features. In domains such as video, speech, and image recognition, neural networks are now widely adopted. However, the high computational and storage complexity of neural network inference poses great challenges to its application. These networks are compute-intensive algorithms that currently require execution on dedicated hardware. In this context, we point out the difficulty of multi-operand adders (MOAs) and their high resource utilization in an FPGA implementation of a CNN. To address this challenge, a parallel self-timed adder (PASTA) is implemented, with the main aims of minimizing the number of transistors and evaluating different factors for PASTA, i.e., area, power, and delay.
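The recursive principle behind a self-timed adder can be modeled behaviorally: each pass computes bitwise half-sums (XOR) and carries (AND, shifted left), repeating until no carry remains, with the circuit signaling completion when the carries settle. This sketch models the logic only, not the transistor-level design or its area/power/delay.

```python
# Behavioral sketch of the carry-settling recursion in a parallel
# self-timed adder (PASTA): repeat XOR/AND passes until carries vanish.

def pasta_add(a: int, b: int) -> int:
    """Add two non-negative integers using only XOR/AND/shift passes."""
    while b:
        half_sum = a ^ b          # sum bits, ignoring carry propagation
        carry = (a & b) << 1      # carries, moved to the next bit position
        a, b = half_sum, carry    # iterate until all carries settle
    return a

print(pasta_add(1234, 4321))  # 5555
```

In hardware, every bit position performs these two operations in parallel, and the number of passes needed equals the longest carry chain, which is why average-case completion is fast.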
Abstract Background Emotion classification remains a challenging problem in affective computing. The large majority of emotion classification studies rely on electroencephalography (EEG) and/or electrocardiography (ECG) signals and classify emotions into only two or three classes. Moreover, the stimuli used in most emotion classification studies are either music or visual stimuli presented through conventional displays such as computer or television screens. This study reports on a novel approach to recognizing emotions using pupillometry alone, in the form of pupil diameter data, to classify emotions into four distinct classes according to Russell's Circumplex Model of Emotions, utilizing emotional stimuli presented in a virtual reality (VR) environment. The stimuli used in this experiment are 360° videos presented using a VR headset. Using an eye-tracker, pupil diameter is acquired as the sole classification feature. Three classifiers were used for the emotion classification: Support Vector Machine (SVM), k-Nearest Neighbor (KNN), and Random Forest (RF). Findings SVM achieved the best performance for the four-class intra-subject classification task at an average of 57.05% accuracy, which is more than twice the accuracy of a random classifier. Although the accuracy can still be significantly improved, this study reports on the first systematic study of the use of eye-tracking data alone, without any other supplementary sensor modalities, to perform human emotion classification, and demonstrates that even with the single feature of pupil diameter, emotions could be classified into four distinct classes to a certain level of accuracy. Moreover, the best performance for recognizing a particular class was 70.83%, achieved by the KNN classifier for Quadrant 3 emotions.
Conclusion This study presents the first systematic investigation on the use of pupillometry as the sole feature to classify emotions into four distinct classes using VR stimuli. The ability to conduct emotion classification using pupil data alone represents a promising new approach to affective computing as new applications could be developed using readily-available webcams on laptops and other mobile devices that are equipped with cameras without the need for specialized and costly equipment such as EEG and/or ECG as the sensor modality.
Computer engineering. Computer hardware, Information technology
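The four-class setup can be sketched with a toy nearest-centroid classifier: each trial's pupil-diameter trace is collapsed into two statistics and assigned to the closest quadrant prototype. The centroids and the feature-to-quadrant mapping below are invented for illustration; the study itself trained SVM, KNN, and RF on real pupil data.

```python
import numpy as np

# Toy quadrant classification from a pupil-diameter trace: reduce the
# trace to (mean, std) and pick the nearest class centroid. Centroid
# values are hypothetical placeholders, not measured data.

CENTROIDS = {                        # hypothetical (mean_mm, std_mm) per quadrant
    "Q1": np.array([4.8, 0.40]),
    "Q2": np.array([4.6, 0.55]),
    "Q3": np.array([3.9, 0.50]),
    "Q4": np.array([4.1, 0.25]),
}

def trial_features(diameters):
    """Collapse a pupil-diameter trace into (mean, std)."""
    d = np.asarray(diameters, float)
    return np.array([d.mean(), d.std()])

def classify(diameters):
    """Assign the trace to the quadrant with the nearest centroid."""
    f = trial_features(diameters)
    return min(CENTROIDS, key=lambda q: np.linalg.norm(f - CENTROIDS[q]))

trace = [4.7, 4.9, 4.8, 4.8, 4.6]    # made-up trace near the Q1 centroid
print(classify(trace))
```

A webcam-based implementation, as the conclusion envisions, would only need to estimate pupil diameter per frame before applying the same feature-then-classify pipeline.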
Diego R. Almeida, Patrícia D. L. Machado, Wilkerson L. Andrade
Abstract Context Mobile devices, such as smartphones, have increased their information-processing capacity, and sensors have been added to their hardware. Such sensors allow capturing information from the environment in which the devices are introduced. As a result, mobile applications that use environment and user information to provide services or perform context-based actions are increasingly common. This type of application is known as a context-aware application. While software testing is an expensive activity in general, testing context-aware applications is even more expensive and challenging. Thus, efforts are needed to automate testing for context-aware applications, particularly on Android, currently the most widely used smartphone operating system. Objective This paper aims to identify and discuss state-of-the-art tools that allow the automation of testing Android context-aware applications. Method To do so, we carried out a systematic mapping study (SMS) to find the studies in the existing literature that describe or present Android testing tools. The discovered tools were then analyzed to identify their potential in testing Android context-aware applications. Result A total of 68 works and 80 tools were obtained as a result of the SMS. Of the identified tools, five are context-aware Android application testing tools, and five are general Android application testing tools that nevertheless support testing context-aware features. Conclusion Although context-aware application testing tools do exist, they do not support automatic generation or execution of test cases focusing on high-level contexts. Moreover, they do not support asynchronous context variations.