Results for "Computer engineering. Computer hardware"
Showing 20 of ~8,499,222 results · from DOAJ, arXiv, Semantic Scholar, CrossRef
Cristian Valero-Abundio, Emilio Sansano-Sansano, Raúl Montoliu et al.
Handling geometric transformations, particularly rotations, remains a challenge in deep learning for computer vision. Standard neural networks lack inherent rotation invariance and typically rely on data augmentation or architectural modifications to improve robustness. Although effective, these approaches increase computational demands, require specialised implementations, or alter network structures, limiting their applicability. This paper introduces General Intensity Direction (GID), a preprocessing method that improves rotation robustness without modifying the network architecture. The method estimates a global orientation for each image and aligns it to a canonical reference frame, allowing standard models to process inputs more consistently across different rotations. Unlike moment-based approaches that extract invariant descriptors, this method directly transforms the image while preserving spatial structure, making it compatible with convolutional networks. Experimental evaluation on the rotated MNIST dataset shows that the proposed method achieves higher accuracy than state-of-the-art rotation-invariant architectures. Additional experiments on the CIFAR-10 dataset confirm that the method remains effective under more complex conditions.
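The abstract does not give the GID estimator in closed form; the following is a minimal sketch of the general idea under stated assumptions: the global orientation is taken as the intensity-gradient direction averaged over the image, and alignment is done with a simple nearest-neighbour rotation. The function names and all implementation details here are illustrative, not the paper's method.

```python
import numpy as np

def global_intensity_direction(img):
    """Estimate a global orientation (degrees) as the direction of the
    summed image gradient. Illustrative stand-in for the paper's GID."""
    gy, gx = np.gradient(img.astype(float))
    return np.degrees(np.arctan2(gy.sum(), gx.sum()))

def rotate_nn(img, deg):
    """Nearest-neighbour rotation about the image centre (inverse mapping)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    t = np.deg2rad(deg)
    yy, xx = np.mgrid[0:h, 0:w]
    ys = np.rint(cy + (yy - cy) * np.cos(t) - (xx - cx) * np.sin(t)).astype(int)
    xs = np.rint(cx + (yy - cy) * np.sin(t) + (xx - cx) * np.cos(t)).astype(int)
    out = np.zeros_like(img)
    ok = (ys >= 0) & (ys < h) & (xs >= 0) & (xs < w)
    out[ok] = img[ys[ok], xs[ok]]
    return out

def canonicalize(img):
    # Align the estimated direction to a fixed reference before
    # feeding the image to an unmodified CNN.
    return rotate_nn(img, -global_intensity_direction(img))
```

Applied to every training and test image, such a canonicalization lets a standard architecture see rotated inputs in a roughly consistent frame.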
XIAO Zhipeng, HE Shufeng, TIAN Chunqi
This study presents a facial emotion recognition network based on UniRepLKNet to address the difficulty of effectively capturing feature information and of ensuring that key facial information occupies a prominent position in the facial emotion recognition process. Moreover, to extract facial emotional features more accurately, the study designs a masked polarized self-attention module that combines U-Net and a polarized self-attention mechanism. This module can deeply mine the dependencies between channels and spatial locations. It can also strengthen the influence of local key facial information on emotion recognition through a multi-scale feature fusion strategy. The study optimizes UniRepLKNet, a universal large-kernel Convolutional Neural Network (CNN), and proposes the EmoRepLKNet neural network structure. In EmoRepLKNet, the masked polarized self-attention module enables the network to extract key information for facial emotion recognition. Combined with the wide receptive field of the large-kernel CNN, facial emotions can be recognized effectively. Experimental results show that on the facial emotion recognition dataset FER2013, EmoRepLKNet achieves an accuracy of 76.20%, outperforming existing comparison models and significantly improving facial emotion recognition accuracy compared to UniRepLKNet. Additionally, on the single-label portion of the RAF-DB dataset, the proposed method achieves an accuracy of 89.67%.
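The masked polarized self-attention module itself is not specified in the abstract; as a rough illustration of the channel-reweighting idea that such attention modules build on, a toy squeeze-and-softmax channel attention in NumPy might look like the sketch below. This is purely illustrative: the paper's module additionally models spatial dependencies, masking via U-Net, and multi-scale fusion.

```python
import numpy as np

def channel_attention(x):
    """Toy channel attention: squeeze each channel map to a scalar
    descriptor, softmax over channels, and reweight the feature maps.
    x has shape (C, H, W)."""
    z = x.mean(axis=(1, 2))            # (C,) per-channel descriptor
    w = np.exp(z - z.max())            # numerically stable softmax
    w = w / w.sum()                    # attention weights over channels
    return x * w[:, None, None]        # reweighted feature maps
```

Channels whose responses are stronger on average receive larger weights, which is the basic mechanism by which attention emphasizes informative features.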
V. Luzhetsky, M. Tsikhotskyi
In the conditions of processing large amounts of graphic data, the task arises of developing a reliable image encryption scheme with reduced computing costs. The purpose of the study was to develop a deterministic scheme for encrypting and evenly distributing vectorised images using a shift register with linear feedback and counters. Methods of research included converting a pixel matrix to a sequence of bytes using a row-wise traversal rule, splitting the index space into equal subranges, generating pseudo-random indexes based on shift register states, and using reversible counters. The results of statistical testing demonstrate the stable characteristics of the proposed image encryption method. Encrypted test images were also evaluated for attack resistance by determining correlation coefficients between the incoming image and the encrypted one. In particular, for coloured images with a size of 512 × 512, when divided into eight subranges, the Number of Pixel Change Rate (NPCR) reached 99.61%, and the Unified Average Changing Intensity (UACI) was 32.28%, which corresponds to the upper cluster of estimates of advanced methods. The entropy of encrypted data was close to the theoretical maximum of 7.999, and the correlation between neighbouring pixels was significantly reduced, approaching zero. Image distribution and restoration were performed without errors. The algorithm was characterised by low computational costs. The practical significance of the study consists in ensuring reproducibility of the distribution and high cryptographic stability using mathematically simple operations, pseudo-randomness, and expanding the image encryption space to the full volume, making the proposed approach suitable for systems requiring accurate recovery and operating under limited computational resources.
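The abstract describes generating pseudo-random indexes from shift-register states; a minimal sketch of that idea follows, assuming a maximal-length 4-bit Fibonacci LFSR whose nonzero states visit every index exactly once, giving a reproducible, exactly invertible permutation of the data. The tap choice and single-range demo are illustrative assumptions; the paper additionally splits the index space into subranges and uses reversible counters.

```python
def lfsr_states(seed, taps, nbits):
    """Fibonacci LFSR: yield successive states (feedback = XOR of taps)."""
    state = seed
    mask = (1 << nbits) - 1
    while True:
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & mask
        yield state

def scramble(data, seed=1, taps=(3, 2), nbits=4):
    """Permute bytes by visiting LFSR states. With maximal-length taps,
    the register visits every nonzero state once, so the state sequence
    defines a permutation of the index space."""
    n = len(data)
    assert n == (1 << nbits) - 1, "demo assumes the full nonzero state space"
    gen = lfsr_states(seed, taps, nbits)
    order = [next(gen) - 1 for _ in range(n)]  # states 1..n -> indices 0..n-1
    out = [None] * n
    for dst, src in zip(order, range(n)):
        out[dst] = data[src]
    return out, order

def unscramble(out, order):
    """Exact recovery: read back through the same state sequence."""
    return [out[dst] for dst in order]
```

Because the state sequence is fully determined by the seed and taps, both sides can reproduce the permutation cheaply, which matches the abstract's emphasis on reproducibility and accurate recovery.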
Andrej Kolar-Požun, Gregor Kosec
The finite difference time domain method is one of the simplest and most popular methods in computational electromagnetics. This work considers two possible ways of generalising it to a meshless setting by employing local radial basis function interpolation. The resulting methods remain fully explicit and are convergent if properly chosen hyperviscosity terms are added to the update equations. We demonstrate that increasing the stencil size of the approximation has a desirable effect on numerical dispersion. Furthermore, our proposed methods can exhibit a decreased dispersion anisotropy compared to the finite difference time domain method.
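For reference, the classical scheme being generalised is the 1D Yee leapfrog; a minimal normalized-units sketch is below. The grid size, Courant number, and soft Gaussian source are arbitrary demo choices, not taken from the paper.

```python
import numpy as np

def fdtd_1d(nsteps=300, n=400, courant=0.99):
    """Standard 1D Yee leapfrog update (normalized units).
    Stable for courant <= 1 in 1D."""
    ez = np.zeros(n)
    hy = np.zeros(n)
    for t in range(nsteps):
        hy[:-1] += courant * (ez[1:] - ez[:-1])    # H half-step
        ez[1:] += courant * (hy[1:] - hy[:-1])     # E half-step
        ez[50] += np.exp(-((t - 40) / 10.0) ** 2)  # soft Gaussian source
    return ez

field = fdtd_1d()
```

The meshless variants in the paper replace these fixed two-point differences with locally computed RBF interpolation weights on scattered nodes, and add hyperviscosity terms to keep the fully explicit update convergent.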
Matteo Iaiani, Alessandro Tugnoli, Valerio Cozzani
The increasing interconnectivity with external networks and the higher reliance on digital systems make chemical and process industries, including waste and drinking water treatment plants, more vulnerable to cyber-attacks. Historical evidence shows that these attacks have the potential to cause events with severe consequences for property, people, and the surrounding environment, posing a serious threat. While the risks deriving from the malicious manipulation of the Basic Process Control System (BPCS) and the Safety Instrumented System (SIS) in chemical and Oil&Gas facilities have been systematically analysed in the available literature, including previous works of the Authors, the analysis of the consequences of cyber-attacks on drinking water treatment plants has not been conducted to date. To fill this gap, in the present study the methodology POROS 2.0 (Process Operability Analysis of Remote manipulations through the cOntrol System), developed by the Authors, was applied to a drinking water treatment plant, providing valuable insights into possible critical scenarios originating from cyber-attacks in these facilities.
Dániel Gosztola, Peter Grubits, János Szép et al.
The growing importance of numerical simulations in the welding industry stems from their ability to enhance structural performance and sustainability by ensuring optimal manufacturing conditions. The use of the finite element method (FEM) allows for detailed and precise calculations of the mechanical and material changes caused by the welding process. Acquiring knowledge of these parameters not only serves to augment the quality of the manufacturing process but also yields consequential benefits, such as reducing adverse effects. Consequently, the enhancement of structural performance and prolonged lifespan becomes achievable, aligning with overarching sustainability goals. To achieve this goal, this paper utilizes numerical simulations of welding processes based on experimental tests, with a specific focus on analyzing temperatures generated within the structures. In the finite element analysis (FEA), a total of 12 welding cycles were systematically modeled to align with experimental conditions, incorporating cooling intervals, preheating considerations, and the relevant section of the connecting concrete structure with studs. The outcomes of this research exemplify the potential of numerical simulation in the welding industry, demonstrating a diverse range of results achieved through FEA to enhance the quality of structures within the context of sustainability.
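The welding FEA itself cannot be reconstructed from the abstract; as a minimal illustration of the thermal update at the core of such simulations, an explicit finite-difference (FTCS) conduction step on a 1D bar is sketched below. The grid, diffusion number, and initial hot spot are illustrative assumptions only.

```python
import numpy as np

def heat_step(T, alpha=0.2):
    """One explicit FTCS heat-conduction step.
    alpha = kappa * dt / dx**2 must satisfy alpha <= 0.5 for stability.
    Endpoints are held fixed (Dirichlet boundary)."""
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * (T[2:] - 2 * T[1:-1] + T[:-2])
    return Tn

# A bar at ambient temperature with a single heated point (a crude
# stand-in for a weld pass), diffused over 100 steps.
T = np.zeros(51)
T[25] = 100.0
for _ in range(100):
    T = heat_step(T)
```

Real welding simulations add a moving heat source, temperature-dependent material properties, cooling intervals, and 3D geometry, but the diffusion step above is the kernel each cycle repeats.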
Thais N. Guerrero, Ruth M. Fisher, Ademir A. Prata et al.
The beneficial reuse and recovery of biosolids is an attractive alternative to disposal. However, odour emissions present significant challenges to the land application of biosolids, increasing operational costs and reducing community acceptance. This study aimed to assess the influence of conveying and storage conditions in wastewater treatment plants on the sensory impact of biosolids. For sensory assessment, samples of anaerobically digested biosolids were collected after centrifugation and during storage out-loading. The emissions were extracted over 15 days using a dynamic flux chamber, and sensory analysis was conducted using an odour detection port (ODP) coupled to a TD-GC-MS. Odour descriptors and intensities (from 1 – weak to 4 – strong) were evaluated by expert panellists, providing insights into the sensory aspects of odour emissions. The ODP results showed variations in the number of occurrences, intensity, and modified frequency of odour events across the stages of wastewater solids processing and laboratory storage. Conveying could potentially impact the release of volatile compounds due to the mechanical agitation that can aerate and disturb the structure and surface of the biosolids. On the other hand, storage can accelerate biological and chemical processes as a result of the development of anaerobic conditions, leading to subsequent odour generation. The interplay between wastewater treatment processes and odour emissions is complex and requires targeted strategies. The application of sensory analysis contributes valuable insights into understanding and managing odour emissions in wastewater treatment plants, offering potential avenues for optimizing operational parameters to benefit biosolids reuse initiatives. Keywords: Wastewater sludge; Anaerobic digestion; Biosolids; Beneficial reuse; Land application; Gaseous emissions; Sensory emissions; Sensory analysis; Odour detection port.
Roberto Casadei, Gianluca Aguzzi, Giorgio Audrito et al.
Today's distributed and pervasive computing addresses large-scale cyber-physical ecosystems, characterised by dense and large networks of devices capable of computation, communication and interaction with the environment and people. While most research focusses on treating these systems as "composites" (i.e., heterogeneous functional complexes), recent developments in fields such as self-organising systems and swarm robotics have opened up a complementary perspective: treating systems as "collectives" (i.e., uniform, collaborative, and self-organising groups of entities). This article explores the motivations, state of the art, and implications of this "collective computing paradigm" in software engineering, discusses its peculiar challenges, and outlines a path for future research, touching on aspects such as macroprogramming, collective intelligence, self-adaptive middleware, learning, synthesis, and experimentation of collective behaviour.
Jiehua Chen, Christian Hatschka, Sofia Simola
We survey two key problems in Computational Social Choice, Multi-Winner Determination and Hedonic Games, with a special focus on their parameterized complexity, and propose some research challenges in the field.
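As a toy illustration of Multi-Winner Determination (not taken from the survey), a brute-force search for a committee maximizing voter coverage under approval ballots shows why the problem invites parameterized-complexity analysis: the naive search is exponential in the number of candidates.

```python
from itertools import combinations

def best_committee(approvals, candidates, k):
    """Brute-force multi-winner determination under a coverage objective:
    pick the size-k committee maximizing the number of voters who approve
    at least one member. approvals: list of sets of approved candidates."""
    def score(committee):
        return sum(1 for ballot in approvals if ballot & set(committee))
    # Iterate over all C(|candidates|, k) committees -- exponential blow-up
    # that parameterized complexity analyses make precise.
    return max(combinations(sorted(candidates), k), key=score)
```

Parameterized results ask, for example, whether such problems become tractable when the committee size k, the number of voters, or a structural parameter of the preferences is small.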
Jonathan Álvarez Ariza
Active Learning (AL) is a well-known teaching method in engineering because it fosters learning and critical thinking in students by employing debate, hands-on activities, and experimentation. However, most educational results of this instructional method have been achieved in face-to-face educational settings, and less has been said about how to promote AL and experimentation in online engineering education. Hence, the main aim of this study was to create an AL methodology to learn electronics, physical computing (PhyC), programming, and basic robotics in engineering through hands-on activities and active experimentation in online environments. N=56 students of two engineering programs (Technology in Electronics and Industrial Engineering) participated in the methodology, which was conceived using the guidelines of the Integrated Course Design Model (ICDM) and, in some courses, combined mobile and online learning with an Android app. The methodology gathered three main components: (1) in-home laboratories performed through low-cost hardware devices, (2) student-created videos and blogs to evidence the development of skills, and (3) teacher support and feedback. Data in the courses were collected through surveys, evaluation rubrics, semi-structured interviews, and students' grades and were analyzed through a mixed approach. The outcomes indicate a good perception of the PhyC and programming activities by the students and suggest that these influence motivation, self-efficacy, reduction of anxiety, and improvement of academic performance in the courses. The methodology and these results can be useful for researchers and practitioners interested in developing AL methodologies or strategies in engineering with online, mobile, or blended learning modalities.
Ramesh Karri, Jeyavijayan Rajendran, Kurt Rosenfeld et al.
Serena Lima, Antonino Biundo, Giuseppe Caputo et al.
As is known, microalgae are an appealing source of chemicals and high-value compounds that find application in nutraceuticals, cosmetics, and pharmaceutics. Fatty acids (FA), in particular, have drawn attention to the possibility of employing them as a source of biodiesel as an alternative to fossil fuels. In addition, several lipid derivatives have been found in microalgae and may be employed in several biotechnological applications. Hydroxy fatty acids can be substrates for several industrial applications thanks to their functionalization, which increases their reactivity; for this reason, they can be used as functional building blocks to produce a multitude of bio-based materials. Recently, a promising method for the chemical modification of unsaturated FAs (U-FA) has appeared. In fact, U-FAs may be modified by members of the hydratase enzyme family to produce saturated and unsaturated hydroxy fatty acids with high stereo- and regio-selectivity. These enzymes are able to add a water molecule to the double bond present in the free fatty acids (FFA) Oleic Acid (OA) and Linoleic Acid (LA), producing 10-hydroxy fatty acids (10-hydroxy-FAs). Furthermore, the carbohydrate component of the microalgal biomass may be converted into furfuryl compounds, in particular into 5-hydroxymethylfurfural (5-HMF). This is one of the highest-added-value bio-based chemicals distinct from petroleum-derived ones and may be obtained from lignocellulosic biomasses or hexose sugars through acid catalysis. It is defined as a platform molecule because it is the precursor of several compounds for the chemical industry. In this work, we aimed to optimize a circular bioprocess by performing, starting from the same biomass, two different processes: the biotransformation of microalgal FFAs through the employment of a genetically modified E. coli on one side, and the conversion of the remaining biomass into furfuryl products.
The first process allowed the production of lipid derivatives of considerable biotechnological interest, including 10-hydroxystearic acid and 10-hydroxy-octadecenoic acid. The second was carried out through heterogeneous catalysis based on niobium phosphate. This procedure represents a highly innovative application of microalgal biomass and allows the simultaneous exploitation of FAs and carbohydrates, which may increase the commercial value of microalgal biomass.
Ramón Toala Dueñas, Cristhian Maldonado Toala, Diego Menéndez Navia
The main objective of this work was to develop a microservice for emotion recognition through mouse and keyboard use. The research was conducted both deductively and inductively, which made it possible to understand how such a microservice can operate as well as the requirements needed to fulfil its functionality. The development required a web application to gather information and build a DataSet, that is, stored data used to train a neural network, which was needed to develop the microservice. The project followed the Waterfall methodology, which comprises five stages: requirements analysis, design, implementation, verification, and maintenance. The web application was developed in JavaScript; MySQL was used as the database manager, with PHP as the main connection layer; the TensorFlow and Keras frameworks were also used. For the microservice, the stored DataSet was used, with the Python programming language for training the neural network and the Flask framework for serving it. Once the web application and microservice were completed, the necessary verification and maintenance were carried out. It is concluded that the microservice could reach a higher hit rate with more data in the dataset, which would facilitate emotion detection.
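The thesis does not specify its DataSet schema; a hypothetical sketch of the kind of mouse/keyboard features such a dataset could store is shown below. All names and feature choices here are assumptions for illustration, not the project's actual pipeline.

```python
import math

def interaction_features(key_times, mouse_path):
    """Hypothetical features from raw input logs.
    key_times: keypress timestamps in seconds.
    mouse_path: (t, x, y) cursor samples.
    Returns (mean inter-key interval, mean mouse speed) -- the sort of
    scalar inputs a DataSet for a small neural network could store."""
    gaps = [b - a for a, b in zip(key_times, key_times[1:])]
    mean_gap = sum(gaps) / len(gaps) if gaps else 0.0
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(mouse_path, mouse_path[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        speeds.append(dist / (t1 - t0))
    mean_speed = sum(speeds) / len(speeds) if speeds else 0.0
    return mean_gap, mean_speed
```

Feature vectors like these, labelled with an emotion, are what the Keras/TensorFlow model would be trained on before being exposed through the Flask microservice.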
N. Even-Chen, D. Muratore, S. Stavisky et al.
DU Xinxin, HU Xiaohui, ZHAO Jianan
A Vehicular Ad-Hoc Network (VANET) is a Mobile Ad-Hoc Network (MANET) composed of mobile vehicular nodes. It does not rely on infrastructure to either establish a communication link or realize communication. Owing to the high mobility of vehicles and limited wireless-communication resources, it is difficult for VANETs to guarantee Quality of Service (QoS). To solve this problem, this paper introduces a Software-Defined Network (SDN). In particular, a multi-constrained QoS routing algorithm suitable for Software-Defined Vehicular Ad-Hoc Networks (SDN-VANET) is proposed that harnesses the advantages of SDN control and forwarding separation to ensure vehicle QoS. First, the SDN controller schedules a vehicle's service based on deadline constraints. Second, this paper proposes an Adaptive Hybrid Shuffled Frog-Leaping Algorithm (AH-SFLA). The SDN controller calculates the appropriate value of the data on the transmission link according to the QoS index and the global topology information and uses this as a benchmark to search for an optimized path. At the same time, alternative link mechanisms and QoS resource consumption thresholds are set to implement routing maintenance in order to reduce the probability of network failures. Finally, mininet-wifi and SUMO are combined to build an SDN-VANET environment, and the AH-SFLA routing algorithm is compared with IGA and IICSFLA. The experimental results show that compared with IGA and IICSFLA, AH-SFLA improves the average end-to-end delay index by 57.74% and 46.6%, reduces the packet-loss rate by 29.9% and 18.6%, and increases the normalized routing cost by 36.93% and 27.2%, respectively, effectively guaranteeing QoS in VANETs.
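AH-SFLA itself is not given in reproducible detail in the abstract; to illustrate the multi-constrained QoS objective the controller optimizes, the sketch below does a brute-force search over simple paths of a hypothetical topology, minimizing delay subject to a bandwidth floor. Topology, metrics, and thresholds are illustrative assumptions.

```python
def qos_paths(graph, src, dst, min_bw):
    """Enumerate simple paths; keep those whose bottleneck bandwidth
    meets min_bw; rank by total delay.
    graph[u][v] = (delay, bandwidth) for the directed link u -> v."""
    feasible = []

    def walk(node, path, delay, bw):
        if node == dst:
            feasible.append((delay, path))
            return
        for nxt, (d, b) in graph.get(node, {}).items():
            # Prune paths that revisit a node or violate the bandwidth floor.
            if nxt not in path and min(bw, b) >= min_bw:
                walk(nxt, path + [nxt], delay + d, min(bw, b))

    walk(src, [src], 0, float("inf"))
    return sorted(feasible)
```

A metaheuristic such as AH-SFLA explores this same feasible-path space stochastically rather than exhaustively, which is what makes it practical on large, fast-changing vehicular topologies.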
XU Le, AN Hong, CHEN Junshi, ZHANG Pengfei, WU Zheng
The performance of unstructured grid computing on Sunway TaihuLight, a domestic heterogeneous many-core platform, is limited by sparse storage, discrete memory access, and data dependency. To relieve the sparse storage and discrete memory access problems, this paper proposes an N-order diagonal coloring algorithm, which effectively balances the computing between the Management Processing Element (MPE) and the Computing Processing Elements (CPEs) and converts global memory access to Local Device Memory (LDM) access using the CPEs. To resolve the computing competition caused by data dependence, this paper presents an adaptive and independent blocking method to avoid data conflicts in parallel computing. Furthermore, various optimizations are employed to overcome the performance bottlenecks: (1) to leverage hardware resources, the authors use asynchronous parallelism between the MPE and CPEs; (2) to reduce synchronization costs, they avoid register communication, which increases the scalability of the next-generation Sunway platform; (3) to hide memory access latency, the authors overlap memory access with computing. The SpMV, Integration, and calcLudsFcc operations are used to verify the validity of the algorithm, and the results show that the algorithm achieves an average speedup of about 10 times and up to 24 times over the MPE implementation. Moreover, the N-order diagonal coloring algorithm achieves a 5.8 times higher speedup than the non-coloring blocking algorithm, effectively improving data locality and computational parallelism. The algorithm also accelerates operators with dependence conflicts well, which verifies the effectiveness of the adaptive and independent task partitioning method.
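The N-order diagonal coloring algorithm is specific to Sunway's memory hierarchy, but the underlying principle (vertices of one color share no dependency edge, so each color class can be updated in parallel without write conflicts) can be sketched with a generic greedy coloring. This is an illustration of the idea, not the paper's algorithm.

```python
def greedy_coloring(adj):
    """Greedy coloring of a dependency graph: no two adjacent vertices
    share a color, so each color class can be processed by the CPEs in
    parallel without write conflicts.
    adj: {vertex: iterable of neighbouring vertices}."""
    colors = {}
    for v in sorted(adj):
        used = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in used:      # smallest color not used by a neighbour
            c += 1
        colors[v] = c
    return colors
```

The unstructured-grid kernels then loop over colors: within one color all updates are independent, and a synchronization barrier separates consecutive colors.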
Loc Vu-Quoc, Alexander Humer
Three recent breakthroughs due to AI in arts and science serve as motivation: an award-winning digital image, protein folding, and fast matrix multiplication. Many recent developments in artificial neural networks, particularly deep learning (DL), applied and relevant to computational mechanics (solids, fluids, finite-element technology) are reviewed in detail. Both hybrid and pure machine learning (ML) methods are discussed. Hybrid methods combine traditional PDE discretizations with ML methods either (1) to help model complex nonlinear constitutive relations, (2) to nonlinearly reduce the model order for efficient simulation (turbulence), or (3) to accelerate the simulation by predicting certain components in the traditional integration methods. Here, methods (1) and (2) relied on the Long Short-Term Memory (LSTM) architecture, with method (3) relying on convolutional neural networks. Pure ML methods to solve (nonlinear) PDEs are represented by Physics-Informed Neural Network (PINN) methods, which could be combined with an attention mechanism to address discontinuous solutions. Both LSTM and attention architectures, together with modern optimizers and classic optimizers generalized to include stochasticity for DL networks, are extensively reviewed. Kernel machines, including Gaussian processes, are covered in sufficient depth for more advanced works such as shallow networks with infinite width. The review addresses not only experts: readers are assumed to be familiar with computational mechanics, but not with DL, whose concepts and applications are built up from the basics, aiming at bringing first-time learners quickly to the forefront of research. The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics, even in well-known references. Positioning and pointing control of a large-deformable beam is given as an example.
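The LSTM architecture that the review builds up from the basics is, in one common convention, defined by the gate equations:

```latex
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```

where \(\sigma\) is the logistic sigmoid and \(\odot\) the elementwise product. The additive cell-state update \(c_t\) is what lets gradients propagate over long sequences, which is why hybrid methods (1) and (2) above rely on this architecture.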
Clarisa V. Albarillo, Emely A. Munar, Maria Concepcion M. Balcita
The main objective of the study is to provide ICT awareness, literacy, and skills development to the barangay officials of Agoo, La Union. Specifically, it pursued the following objectives: 1) to determine the profile of the respondents in terms of personal information, educational background, availability of a computer unit, and background in using computers; 2) to determine the effectiveness of the CILC in terms of services delivered, timeliness of the service, and improvement in the computer and internet knowledge of the trainees; and 3) to determine the level of relevance of the training sessions of the CILC. The study used a descriptive design. Data were gathered using a survey questionnaire and were analyzed using statistical treatments such as frequency count, percentage, and mean. As to the profile of the trainees, the study found that most of the trainees are female (88%); 84% are married, and 56% of them are in the 30-39 age bracket. In terms of educational background, many are high school graduates (n = 17; 68%). In addition, most of them (84%) have a background in computers. The results also show that the CILC is at a high level of effectiveness (4.67) in terms of services delivered and is rated highly relevant (4.45).
D. D'Agostino, I. Merelli, Marco Aldinucci et al.
Energy consumption is one of the major issues in today's computer science, and an increasing number of scientific communities are interested in evaluating the tradeoff between time-to-solution and energy-to-solution. Although, in the last two decades, computing has revolved around centralized computing infrastructures, such as supercomputing and data centers, the wide adoption of the Internet of Things (IoT) paradigm is currently inverting this trend due to the huge amount of data it generates, pushing computing power back to the places where the data are generated: the so-called fog/edge computing. This shift towards a decentralized model requires an equivalent change in software engineering paradigms, development environments, hardware tools, languages, and computation models for scientific programming, because local computational capabilities are typically limited and require a careful evaluation of power consumption. This paper aims to present how these concepts can actually be implemented in scientific software by presenting the state of the art of powerful, less power-hungry processors on one side and energy-aware tools and techniques on the other.
Page 3 of 424962