Computers have undergone tremendous improvements in performance over the last 60 years, but those improvements have significantly slowed down over the last decade, owing to fundamental limits in the underlying computing primitives. However, the generation of data and demand for computing are increasing exponentially with time. Thus, there is a critical need to invent new computing primitives, both hardware and algorithms, to keep up with the computing demands. The brain is a natural computer that outperforms our best computers in solving certain problems, such as instantly identifying faces or understanding natural language. This realization has led to a flurry of research into neuromorphic or brain-inspired computing that has shown promise for enhanced computing capabilities. This review points to the important primitives of a brain-inspired computer that could drive another decade-long wave of computer engineering.
Gregorio Morales González, Jorge Gulín González, José Emilio Cuevas Chavez
et al.
The objective of this research is to design a corrective biomechanical training guide to optimize pivoting technique during finishing, with the aim of reducing knee-injury risk factors in female futsal players. This case study draws on 15 players in the specific pivot position from different universities in the Cuban capital. The research indicates that the pivot's finishing movements, especially quick turns with the back to the goal and shots on single-leg support, exhibit biomechanical patterns that make them high-risk gestures for anterior cruciate ligament injury and other knee injuries, identified by a dynamic valgus >12° and insufficient hip and knee flexion during rotational actions; these serve as key indicators for avoiding risks and injuries. The guide for designing the exercises focuses on optimizing and automating the corrected patterns, and is important for transferring optimal tactical-technical movements under fatigue in real play, drastically reducing injury risk and improving movement efficiency. The biomechanical corrections across the three critical phases of the movement, together with injury prevention integrated along the kinematic chain through strengthening of the gluteus medius within the technical gesture via plyometric exercises and core strengthening, prevent the knee "collapsing" into valgus, one of the main causes of injury.
Abstract Unlike single-image steganography, the scheme of payload distribution across different images plays a pivotal role in the security performance of multi-image steganography. In this paper, a novel multi-image steganography scheme, the image stitching sender (ISS), is proposed, which achieves optimal payload distribution by optimizing the stitching scheme of multiple cover images. In the ISS scheme, we employ peak signal-to-noise ratio as the similarity metric between the stitched cover image and the stego image. In addition, a genetic algorithm is used to find a locally optimal solution for the similarity, corresponding to a locally optimal multi-image steganographic stitching scheme. The experiments demonstrate that ISS exhibits enhanced anti-detection capabilities in comparison to other multi-image steganography schemes. Furthermore, when combined with non-additive embedding methods, the ISS achieves a more substantial improvement in security than with additive embedding methods.
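The similarity metric the scheme optimizes is standard PSNR; a minimal sketch of its computation between a cover and a stego image (the toy images and pixel values below are illustrative, not from the paper):

```python
import numpy as np

def psnr(cover: np.ndarray, stego: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between two equal-sized images."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: an 8x8 "cover" and a stego copy with one LSB flipped.
cover = np.zeros((8, 8), dtype=np.uint8)
stego = cover.copy()
stego[0, 0] ^= 1  # flip the least significant bit of one pixel
print(round(psnr(cover, stego), 2))  # → 66.19
```

A genetic algorithm would then search over stitching arrangements, using this score as (part of) the fitness of each candidate arrangement.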
Abdulrahman M. Abdulghani, Azizol Abdullah, A. R. Rahiman
et al.
Modern Software-Defined Wide Area Networks (SD-WANs) require adaptive controller placement addressing multi-objective optimization, where latency minimization, load balancing, and fault tolerance must be optimized simultaneously. Traditional static approaches fail under dynamic network conditions with evolving traffic patterns and topology changes. This paper presents a novel hybrid framework integrating Gaussian Mixture Model (GMM) clustering with Multi-Agent Reinforcement Learning (MARL) for dynamic controller placement. The approach leverages probabilistic clustering for intelligent MARL initialization, reducing exploration requirements. Centralized Training with Decentralized Execution (CTDE) enables distributed optimization through cooperative agents. Experimental evaluation using real-world topologies demonstrates a noticeable reduction in latency, improved network balance, and significant computational efficiency gains versus existing methods. Dynamic adaptation experiments confirm superior scalability during network changes. The hybrid architecture achieves linear scalability through problem decomposition while maintaining real-time responsiveness, establishing practical viability.
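The clustering stage can be illustrated with a minimal sketch: a hand-rolled EM fit of a spherical-covariance GMM over switch coordinates, whose component means seed candidate controller locations. The node coordinates, cluster count, and all names are illustrative assumptions, and the MARL refinement stage is not shown.

```python
import numpy as np

def spherical_gmm_em(points, k, iters=50):
    """Fit a k-component spherical-covariance GMM by EM.
    Returns (means, hard cluster labels)."""
    n, d = points.shape
    means = points[np.linspace(0, n - 1, k).astype(int)]  # deterministic init
    var = np.full(k, points.var() + 1e-6)
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities under spherical Gaussians.
        d2 = ((points[:, None, :] - means[None]) ** 2).sum(-1)          # (n, k)
        log_p = -0.5 * d2 / var - 0.5 * d * np.log(2 * np.pi * var) + np.log(weights)
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and per-component variance.
        nk = resp.sum(axis=0) + 1e-12
        means = (resp.T @ points) / nk[:, None]
        d2 = ((points[:, None, :] - means[None]) ** 2).sum(-1)
        var = (resp * d2).sum(0) / (d * nk) + 1e-6
        weights = nk / n
    return means, resp.argmax(axis=1)

# Toy topology: two geographic groups of 20 switches each.
rng = np.random.default_rng(1)
cluster_a = rng.normal([0.0, 0.0], 0.5, (20, 2))
cluster_b = rng.normal([10.0, 10.0], 0.5, (20, 2))
points = np.vstack([cluster_a, cluster_b])
means, labels = spherical_gmm_em(points, k=2)  # means ≈ initial controller sites
```

The component means then serve as the starting placement that the cooperative MARL agents refine, rather than exploring the full placement space from scratch.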
Leonhard Ziegler, Michael Grabatin, Daniela Pöhn
et al.
Abstract While self-sovereign identities (SSI) have been gaining more traction, the topic of SSI security has yet to be addressed. Especially regarding response procedures to security incidents, no prior work is available. However, incident response processes are essential to systematically respond to a security incident in a timely manner. We first evaluate the current state-of-the-art by conducting a literature survey and contacting organizations that offer SSI. The insights underpin the subject’s relevance, highlighting that incident response capabilities are just starting to be developed. Contributing to this development, we identify the challenges of building a security incident response process for SSI. Mainly, the decentralized nature inhibits the utilization of known best practices, which all focus on building a centralized incident response capability. However, even in the case of SSI, some centralized entities may exist. Therefore, we design two variants of SIR processes: one more centralized and one more decentralized. For the latter, the problem size is reduced in the first step by identifying all the stakeholders within an SSI ecosystem and then analyzing possible proactive and reactive measures each participant can access. This procedure leads to the grouping of SSI system participants into three distinct domains of incident response. For each domain, different capabilities for handling incidents are introduced depending on the involved stakeholders, their infrastructure, and their goals. To demonstrate the procedures, incident scenarios for each domain highlight the workflows during incident handling.
Quantum computers promise tremendous impact across applications -- and have shown great strides in hardware engineering -- but remain notoriously error prone. Careful design of low-level controls has been shown to compensate for the processes which induce hardware errors, leveraging techniques from optimal and robust control. However, these techniques rely heavily on the availability of highly accurate and detailed physical models which generally only achieve sufficient representative fidelity for the most simple operations and generic noise modes. In this work, we use deep reinforcement learning to design a universal set of error-robust quantum logic gates on a superconducting quantum computer, without requiring knowledge of a specific Hamiltonian model of the system, its controls, or its underlying error processes. We experimentally demonstrate that a fully autonomous deep reinforcement learning agent can design single qubit gates up to $3\times$ faster than default DRAG operations without additional leakage error, and exhibiting robustness against calibration drifts over weeks. We then show that $ZX(-\pi/2)$ operations implemented using the cross-resonance interaction can outperform hardware default gates by over $2\times$ and equivalently exhibit superior calibration-free performance up to 25 days post optimization using various metrics. We benchmark the performance of deep reinforcement learning derived gates against other black box optimization techniques, showing that deep reinforcement learning can achieve comparable or marginally superior performance, even with limited hardware access.
Neighbor search is of fundamental importance to many engineering and science fields such as physics simulation and computer graphics. This paper proposes to formulate neighbor search as a ray tracing problem and leverage the dedicated ray tracing hardware in recent GPUs for acceleration. We show that a naive mapping under-exploits the ray tracing hardware. We propose two performance optimizations, query scheduling and query partitioning, to tame the inefficiencies. Experimental results show 2.2X - 65.0X speedups over existing neighbor search libraries on GPUs. The code is available at https://github.com/horizon-research/rtnn.
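The paper's contribution is the hardware mapping and its optimizations; the sketch below only illustrates the underlying formulation being mapped: a fixed-radius neighbor query is equivalent to asking which bounding spheres of radius r, one per data point, contain the query point, a containment/intersection test that ray tracing cores accelerate through BVH traversal. The point set and names here are illustrative.

```python
import numpy as np

def neighbors_as_sphere_hits(points: np.ndarray, query: np.ndarray, r: float):
    """Fixed-radius neighbor search phrased as a ray tracing problem:
    wrap each data point in a sphere of radius r; a point is a neighbor
    iff the query (the origin of a degenerate, zero-length ray) lies
    inside that sphere. In hardware this becomes a BVH sphere-hit test;
    here we evaluate the same predicate directly."""
    d2 = ((points - query) ** 2).sum(axis=1)  # squared distance to each sphere center
    return np.flatnonzero(d2 <= r * r)        # indices whose spheres contain the query

points = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [3.0, 0.0, 0.0]])
query = np.array([0.0, 0.0, 0.0])
print(neighbors_as_sphere_hits(points, query, r=1.5))  # → [0 1]
```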
Quantum information science harnesses the principles of quantum mechanics to realize computational algorithms with complexities vastly intractable for current computer platforms. Typical applications range from quantum chemistry to optimization problems and also include simulations for high energy physics. The recent maturing of quantum hardware has triggered preliminary explorations by several institutions (including Fermilab) of quantum hardware capable of demonstrating quantum advantage in multiple domains, from quantum computing to communications and sensing. The Superconducting Quantum Materials and Systems (SQMS) Center, led by Fermilab, is dedicated to providing breakthroughs in quantum computing and sensing, mediating between quantum engineering and HEP-based materials science. The main goal of the Center is to deploy quantum systems with superior performance tailored to the algorithms used in high energy physics. In this Snowmass paper, we discuss the two most promising superconducting quantum architectures for HEP algorithms, i.e. three-level systems (qutrits) supported by transmon devices coupled to planar devices, and multi-level systems (qudits with arbitrary N energy levels) supported by superconducting 3D cavities. For each architecture, we demonstrate exemplary HEP algorithms and identify the current challenges, ongoing work and future opportunities. Furthermore, we discuss the prospects and complexities of interconnecting the different architectures and individual computational nodes. Finally, we review several different strategies of error protection and correction and discuss their potential to improve the performance of the two architectures. This whitepaper seeks to reach out to the HEP community and drive progress in both HEP research and QIS hardware.
Francisco Cejas Rodríguez, Elizabeth Roig Villariño, Dayniel Hernández Mestre
et al.
Drawing on information extracted from the database of phanerogams (flowering plants) of Cuba, held at the Instituto de Geografía Tropical, this research compiled an Excel table containing detailed systematic information on, among other fields: botanical family, genus, species, species author and synonymy, together with the common name, whose use makes each species easier to recognize for the general public. Using several macros implemented in Visual Basic for Applications (VBA), which automated the review process, the compiled information was analyzed. The result is an approximation of the state of knowledge on the composition and distribution of tree species in Cuba's natural forest formations, noting the main difficulties in undertaking this task and offering recommendations for bringing it to a successful conclusion.
Vladimir Barannik, Serhii Sidchenko, Dmitriy Barannik
et al.
The subject of this research is the compression and encryption of video images in the management of critically important objects. The goal is to develop a video-image compression method based on floating positional coding with codegrams of uneven length, simultaneously ensuring the reliability and confidentiality of information during its transmission within a given time delay. Objectives: analyze existing approaches to ensuring the confidentiality of video images; develop a video-image compression method based on floating positional coding with codegrams of uneven length; and evaluate the effectiveness of the developed method. Methods used: digital image processing, digital image compression, image encryption and scrambling, structural-combinatorial coding, and statistical analysis. The following results were obtained. A technology for floating encoding of an uneven sequence of blocks is proposed. Code values are formed from elements of different video-image blocks. To this end, a scheme was developed for linearizing the coordinates of an image point from its four-dimensional representation on the plane into a one-dimensional element coordinate in a vector. The four-dimensional coordinate describes both the coordinates of the image block and the coordinates of the element within that block. Code values are formed while controlling the length of their binary representation. At the same time, coding is applied to an indeterminate number of video-image elements; the number of elements depends on the length of the code word. Accordingly, codegrams of indeterminate length are formed, their length depending on the service-data values generated during encoding. The service data acts as a key element. Conclusions. The one-stage polyadic image-encoding method in a differentiated basis has been further improved.
The developed encoding method provides image compression without loss of information quality. The compression of the original image volume is 3–20 % better than the TIFF data-presentation format and 4–15 % better than the PNG format. The overhead is less than 2.5 % of the entire codestream size.
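The coordinate linearization described above can be sketched as follows; the abstract does not give the exact formula, so the block layout, argument names, and index order here are assumptions, shown only to make the 4-D → 1-D mapping concrete:

```python
def linearize(bi, bj, i, j, blocks_per_row, block_w, block_h):
    """Map a 4-D coordinate (block row bi, block column bj,
    row i and column j inside the block) to a 1-D index in the
    element vector, blocks laid out in row-major order."""
    block_index = bi * blocks_per_row + bj
    return block_index * (block_h * block_w) + i * block_w + j

def delinearize(idx, blocks_per_row, block_w, block_h):
    """Inverse mapping: recover (bi, bj, i, j) from the 1-D index."""
    block_index, offset = divmod(idx, block_h * block_w)
    bi, bj = divmod(block_index, blocks_per_row)
    i, j = divmod(offset, block_w)
    return bi, bj, i, j

# Example: a 2x3 grid of 4x4 blocks; round-trip one coordinate.
print(linearize(1, 2, 3, 1, blocks_per_row=3, block_w=4, block_h=4))  # → 93
print(delinearize(93, blocks_per_row=3, block_w=4, block_h=4))        # → (1, 2, 3, 1)
```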
XIONG Zhongmin, ZENG Qi, LU Peng, WANG Zhenhua, ZHENG Zongsheng
Logical reasoning is the ability to perceive patterns and connections between visual elements. Endowing computers with human-like reasoning ability is a critical area of research; state-of-the-art deep neural networks have achieved superhuman performance in image processing and other fields. However, logical reasoning over images requires further research. To address the insufficient feature extraction and generalization of the Multi-scale Relation Network (MRNet), an improved logical reasoning method, called the Residual Attention Multi-scale Relation Network (ResAMRNet), is proposed. In the backbone network, shallow features are integrated into the deep network's training process by means of residual structures combining skip and long-skip connections. This reduces the loss of feature information and improves the feature extraction capability of the model. In the reasoning module, a channel attention mechanism and residuals are combined to detect relational features between image rows. This differentiates the significance of each feature channel, learns attention weights adaptively, and extracts key features. In this study, a Double-pooled Efficient Channel Attention (DECA) mechanism is proposed, which incorporates global maximum pooling to capture additional object-level feature information and improve generalization. Experimental results on representative logical reasoning datasets, Relational and Analogical Visual rEasoNing (RAVEN) and Improved RAVEN (I-RAVEN), show that the accuracy of the proposed method is higher than that of MRNet by 8.3 and 18.1 percentage points, respectively, demonstrating strong logical reasoning capability.
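The abstract describes DECA only at a high level; the numpy sketch below shows one plausible wiring, combining global average and global max pooling with a single 1-D convolution shared across channels, in the spirit of ECA. The exact combination, kernel, and summation used in the paper are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def deca(x: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Double-pooled channel attention sketch.
    x: feature map of shape (C, H, W); kernel: 1-D conv weights
    applied along the channel axis (as in ECA). The avg-pooled and
    max-pooled channel descriptors each pass through the same 1-D
    conv, are summed, and squashed into per-channel weights that
    rescale the input feature map."""
    avg = x.mean(axis=(1, 2))             # (C,) global average pooling
    mx = x.max(axis=(1, 2))               # (C,) global max pooling
    conv = lambda v: np.convolve(v, kernel, mode="same")
    w = sigmoid(conv(avg) + conv(mx))     # (C,) attention weights in (0, 1)
    return x * w[:, None, None]           # channel-wise reweighting

# Shape check on a toy feature map.
x = np.ones((3, 4, 4))
y = deca(x, np.array([1.0]))
print(y.shape)  # → (3, 4, 4)
```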
Robert Karam, S. Katkoori, Mehran Mozaffari Kermani
Practical, hands-on hardware experience is an essential component of computer engineering education. Due to the COVID-19 pandemic, courses with laboratory components such as Computer Logic Design or FPGA Design were subject to interruption from sudden changes in course modality. While simulators can cover some aspects of laboratory work, they cannot fully replace the hands-on experience students receive working with and debugging hardware. For hardware security in particular, experimenting with attacks and countermeasures on real hardware is vital. In this paper, we describe our approach to designing a practical, hands-on hardware security course that is suitable for HyFlex delivery. We have developed a total of nine experiments utilizing two inexpensive, portable, and self-contained development boards which generally obviate the need for bench equipment. We discuss the trade-offs inherent in the course and experiment design, as well as issues relating to deployment and support for the required design software.
A Static Transfer Switch (STS) is required for high-speed transfer of an essential load to the alternate power source when the main source fails due to a power disturbance (PD). A fast and accurate PD detection method is required to ensure the transfer times recommended by the Computer Business Equipment Manufacturers Association (CBEMA) and IEEE Std. 446. This study applies machine learning to reduce the detection time for disturbances on the preferred source. Ten sample frames of the acquired voltage signal were first differentiated, and then distinctive features, i.e., Mean Absolute Deviation (MAD) and Energy (E), were extracted from the resultant frames. The features were fed to a Linear Support Vector Machine (L-SVM) classifier to detect the occurrence of PD events. The proposed approach achieved 100% accuracy for PD detection, and the detection time was significantly reduced. The system is robust to unbalanced and marginal PDs, and was validated using both simulated and real voltage signals. The proposed algorithm is easy to implement on an embedded system; hence, detection times meeting STS requirements can be achieved under various power system conditions.
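The feature-extraction step can be sketched as follows; the frame length, sampling rate, and toy signals are illustrative assumptions, and the L-SVM classifier that would consume the (MAD, E) pairs is not shown.

```python
import numpy as np

def frame_features(frame: np.ndarray):
    """MAD and energy of a differentiated voltage frame."""
    diff = np.diff(frame)                       # first difference of the samples
    mad = np.mean(np.abs(diff - diff.mean()))   # mean absolute deviation
    energy = np.sum(diff ** 2)                  # energy of the differentiated frame
    return mad, energy

# Toy 10-sample frames at an assumed 1 kHz rate: a clean 50 Hz
# segment vs. the same window under a 50 % voltage sag.
t = np.arange(10) / 1000.0
clean = np.sin(2 * np.pi * 50 * t)
sag = 0.5 * clean

m_clean, e_clean = frame_features(clean)
m_sag, e_sag = frame_features(sag)
print(e_sag < e_clean)  # → True: the sag frame carries less energy
```

A disturbance such as a sag shifts both features away from their nominal-voltage values, which is what lets a linear decision boundary separate PD from normal frames.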
Lin-Shen Liew, Giedre Sabaliauskaite, Nandha Kumar Kandasamy
et al.
Cyber-Physical Systems (CPSs) are getting increasingly complex and interconnected. Consequently, their inherent safety risks and security risks are so intertwined that conventional analysis approaches, which address them separately, may be rendered inadequate. STPA (Systems-Theoretic Process Analysis) is a top-down hazard analysis technique that has been incorporated into several recently proposed integrated Safety and Security (S&S) analysis methods. This paper presents a novel methodology that leverages not only STPA, but also custom matrices, to ensure a more comprehensive S&S analysis. The proposed methodology is demonstrated using a case study of a particular commercial cloud-based monitoring and control system for residential energy storage systems.
Loop closure detection is central to improving accuracy and precision in simultaneous localization and mapping (SLAM). Most loop detection methods extract hand-crafted features, which fall short of capturing comprehensive data information, whereas unsupervised learning, as a typical deep learning approach, excels at self-directed feature learning and clustering, analyzing similarity without labeled data. Moreover, unsupervised methods avoid the restrictions on image quality and single-semantics assumptions present in many traditional SLAM methods. Therefore, a loop closure detection strategy based on unsupervised learning is proposed in this paper. The main component adopts BigBiGAN to extract features and build an initial bag of words; the completed bag of words is then used to detect loop closures. Finally, a validation check based on ORB descriptors is added to verify the result before the loop closure is output. The proposed algorithm and the comparison algorithms were each deployed on an Autolabor Pro1 robot to perform indoor visual SLAM. The experiments show that the proposed algorithm increases the recall rate by 20% compared with ORB-SLAM2 and LSD-SLAM, improves accuracy by at least 40.0% over the others, and reduces the time cost of ORB-SLAM2 by 14%. The presented BigBiGAN-based SLAM therefore substantially benefits visual SLAM in indoor environments.
Since its founding, the Universidad de las Ciencias Informáticas has attached vital importance to the adoption of information and communication technologies across all of its core processes. From 2002, when the adoption of these technologies in the teaching-learning process began, up to the present, the road traveled has been long and full of experiences. This paper describes the stages in the evolution of the Digital Learning Ecosystem at the Universidad de las Ciencias Informáticas, presenting it from its origins to the present day, along with some prospects for its further enrichment.
This research focuses on the implementation of a computer system for user management at the Unión Nacional Eléctrica (UNE). The scientific research is supported by an extensive methodology, which structured the entire process.
Scientific methods such as interviews, surveys, and observation guides were applied to identify the real problem at the UNE and to gather suggestions from the company's employees. A study was conducted of the technologies currently used for user management, providing a comprehensive view of all their parts and functions. The main existing solutions in this field were compared, and useful criteria for selecting this type of product were also provided. In addition, a methodology for designing identity management systems was consolidated, covering aspects from the technological to the organizational.
Finally, a web application for managing UNE users was implemented, together with a set of features for integration with other systems.