Palm leaf manuscripts (PLMs) are a rich source of information about ancient India, sharing an enormous amount of knowledge about the past in art, culture, literature, and medicine. Because the manuscripts are made of organic material, they are prone to rapid deterioration. Many mechanisms are used to preserve the physical copies, but owing to climatic conditions their deterioration is inevitable. This work presents a comparative analysis of classical and deep learning-based approaches for denoising distorted palm leaf manuscripts, evaluated by the segmentation quality of the text inscribed on the PLMs. The traditional pipeline consists of denoising, followed by binarisation and then segmentation of the entire image. We implemented this sequence using both Fast Non-Local Means (Fast NLM) and a self-trained Noise2Void (N2V) model for denoising. However, the segmented characters, particularly from the Fast NLM-based approach, appeared visually distorted. In contrast, the N2V-based difference image showed better structural preservation and closer alignment with the ground truth. To address these limitations, we propose a restructured pipeline that commences with denoising the manuscript images using the N2V model, proceeds with direct extraction of the text, and culminates in applying binarisation exclusively to the segmented patches. This restructured approach minimises distortion, enhances text clarity, and preserves character details more effectively. Quantitative evaluation shows improved performance, with lower MSE values (0.97, 1.15, 1.02) and higher PSNR scores (27.17 dB, 26.61 dB, 29.09 dB) across binarisation methods, and a structural similarity index (SSIM) of 91%, demonstrating the superiority of the proposed method over the traditional workflow.
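A minimal sketch of the proposed ordering (denoise the full image first, segment the text directly, then binarise only the extracted patches), assuming OpenCV; the N2V denoiser is replaced by cv2.fastNlMeansDenoising purely so the snippet runs without a trained network, and the speck-area cutoff is illustrative:

import cv2

def segment_then_binarise(gray):
    # Stand-in denoiser: the paper trains a Noise2Void model; Fast NLM is
    # substituted here only so the sketch runs without a trained network.
    den = cv2.fastNlMeansDenoising(gray, h=10)
    # Direct text extraction: coarse inverse Otsu threshold, then connected
    # components over the denoised image.
    _, coarse = cv2.threshold(den, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    _, _, stats, _ = cv2.connectedComponentsWithStats(coarse, connectivity=8)
    patches = []
    for x, y, w, h, area in stats[1:]:      # row 0 is the background label
        if area < 20:                       # drop specks (illustrative cutoff)
            continue
        patch = den[y:y + h, x:x + w]
        # Binarisation is applied only to the segmented patch, never globally.
        _, binar = cv2.threshold(patch, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        patches.append(((x, y, w, h), binar))
    return patches

# patches = segment_then_binarise(cv2.imread("plm.png", cv2.IMREAD_GRAYSCALE))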
This research paper presents novel approaches for detecting credit card risk through the utilization of Long Short-Term Memory (LSTM) networks and XGBoost algorithms. Facing the challenge of securing credit card transactions, this study explores the potential of LSTM networks for their ability to understand sequential dependencies in transaction data. This research sheds light on which model is more effective in addressing the challenges posed by imbalanced datasets in credit risk assessment. The methodology for imbalanced datasets includes the Synthetic Minority Oversampling Technique (SMOTE) to address imbalance in the class distribution. This paper conducts an extensive literature review comparing various machine learning methods and proposes a framework that compares LSTM with XGBoost to improve fraud detection accuracy. LSTM, a recurrent neural network renowned for its ability to capture temporal dependencies within sequences of transactions, is compared with XGBoost, a formidable ensemble learning algorithm that enhances feature-based classification. By carefully carrying out preprocessing, constructing competent training models, and implementing ensemble techniques, our proposed framework demonstrates robust performance in accurately identifying fraudulent transactions. The comparison shows that LSTM is more effective for our imbalanced dataset: compared with XGBoost's 97% accuracy, LSTM reaches 99%. The final result emphasizes how crucial it is to select the optimal algorithm based on the particular criteria of a financial setting, which ultimately results in more reliable and better-informed credit decisions.
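A compact sketch of the comparison under stated assumptions (synthetic stand-in data, illustrative hyperparameters; the paper's preprocessing and architecture details are not reproduced), using imbalanced-learn's SMOTE, XGBoost, and a Keras LSTM:

import numpy as np
import tensorflow as tf
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Stand-in transaction data: 30 features per transaction, binary fraud label.
X = np.random.rand(1000, 30)
y = np.random.randint(0, 2, 1000)
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)   # balance the classes
Xtr, Xte, ytr, yte = train_test_split(X_res, y_res, test_size=0.2, random_state=42)

xgb = XGBClassifier(n_estimators=200, eval_metric="logloss").fit(Xtr, ytr)
print("XGBoost accuracy:", xgb.score(Xte, yte))

# LSTM branch: each transaction's 30 features are fed as a length-30 sequence
# so the recurrent layer can exploit ordering (the paper's actual sequence
# construction over time-ordered transactions differs).
lstm = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(30, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
lstm.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
lstm.fit(Xtr[..., None], ytr, epochs=5, batch_size=64, verbose=0)
print("LSTM accuracy:", lstm.evaluate(Xte[..., None], yte, verbose=0)[1])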
Inter-day motion intent recognition using wearable sensors such as surface electromyography remains a challenge because sensor positions shift across multiple donning and doffing sessions. Herein, an optimal optical sensing sleeve using a multilayer perceptron is introduced to achieve accurate inter-day motion intent recognition. This sleeve, demonstrating a high correlation (R2 = 0.93) with grasping force, incorporates six novel optical waveguides. Each waveguide is specifically designed to respond to pressing with high linearity, achieved by minimizing bending with a 3D-printed base and limiting elongation through carbon fiber reinforcement. This configuration enhances the generalization of the optical waveguides across multiple donning and doffing sessions. Furthermore, the multilayer perceptron model, which maps sensing signals to grasping forces, outperforms linear, quadratic, cubic, and quartic polynomial models. Remarkably, the mapping correlation does not decrease in inter-day experiments; instead, it increases by 4.54%, indicating improved model generalization. Additionally, 12 commonly used items are grasped and held by a prosthetic hand controlled by the optical sensing sleeve, suggesting robustness in the daily life of an amputee. The optimal optical sensing sleeve holds promise for advancing other wearable robots and achieving inter-day model generalization.
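A sketch of the force-mapping comparison under stated assumptions (synthetic six-channel data; scikit-learn models standing in for the paper's MLP and polynomial baselines):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Stand-in data: six waveguide light-intensity channels -> one grasping force.
rng = np.random.default_rng(0)
X = rng.random((500, 6))
y = X @ rng.random(6) + 0.05 * rng.standard_normal(500)

mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X, y)
print("MLP R2:", r2_score(y, mlp.predict(X)))

for degree in (1, 2, 3, 4):     # the polynomial baselines from the paper
    poly = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(X, y)
    print(f"degree-{degree} R2:", r2_score(y, poly.predict(X)))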
Computer engineering. Computer hardware, Control engineering systems. Automatic machinery (General)
Deep Neural Networks (DNNs) have demonstrated outstanding performance in various medical image processing tasks. However, recent studies have revealed a heightened vulnerability of medical DNNs to adversarial attacks compared to their natural counterparts. In this work, we present a novel perspective by analyzing the disparities between medical datasets and natural datasets, specifically focusing on the dataset collection process. Our analysis uncovers unique differences in the data distribution across different image classes in medical datasets, a phenomenon absent in natural datasets. To gain deeper insights into medical datasets, we employ Fourier analysis tools to investigate medical DNNs. Intriguingly, we discover that high-frequency components in medical images exhibit stronger associations with corresponding labels compared to those in natural datasets. These high-frequency components distract the attention of medical DNNs, rendering them more susceptible to adversarial images. To mitigate this vulnerability, we propose a preprocessing technique called Removing High-frequency Components (RH) training. Our experimental results demonstrate that the application of RH training significantly enhances the robustness of medical DNNs against adversarial attacks. Notably, in certain scenarios, RH training even outperforms traditional adversarial training methods, particularly when subjected to black-box attacks.
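A minimal sketch of RH-style preprocessing, assuming a centered low-pass mask in the Fourier domain; the cutoff radius is an illustrative parameter, not the paper's setting:

import numpy as np

def remove_high_freq(img, keep_radius=24):
    # Zero out Fourier components outside a centered disc of keep_radius,
    # discarding the high frequencies that distract medical DNNs.
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= keep_radius ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# filtered = remove_high_freq(image)  # applied before training and inference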
Linear interpolation is often used in over-sampling techniques to synthesize samples, but its disadvantages include a lack of randomness in the sampling results and a tendency to increase the degree of class overlap between samples of different categories, making it difficult to improve classification performance on imbalanced sample sets. This paper proposes a generation method for minority samples with coaxial-symmetric parabolic constraints. First, for minority-class samples, an adaptive weighting strategy based on a risk factor and a similarity factor is established; the weight determines the direction and range of sample synthesis during the sampling process. Then, a pair of coaxial symmetric parabolas is constructed from the minority-class samples and their weights, and the closed region enclosed by the parabolas is taken as the nonlinear synthesis region. Finally, when a new sample is introduced, whether the sampling effectively avoids invading the distribution regions of other classes is determined by observing changes in the Bhattacharyya coefficient in the neighborhood of the new sample, thereby improving sampling quality. Comparison experiments on six public sample sets from the UCI repository show that, with C4.5 as the classifier, the integrated oversampling method improves precision by 7.85 percentage points, recall by 2.87 percentage points, and G-means by 2.00 percentage points compared to the original sampling method.
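A small sketch of the acceptance check's core quantity, the Bhattacharyya coefficient between local class histograms (binning and the acceptance threshold are assumptions; the paper's neighborhood construction is not reproduced):

import numpy as np

def bhattacharyya_coefficient(p, q):
    # BC(p, q) = sum_i sqrt(p_i * q_i); values near 1 mean heavy overlap.
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    return float(np.sum(np.sqrt(p * q)))

# Local 1-D histograms of minority/majority values around a candidate sample;
# a synthetic point is kept only if it does not raise the overlap.
minority = np.array([0.20, 0.30, 0.35, 0.28])
majority = np.array([0.70, 0.80, 0.75, 0.65])
h_min, _ = np.histogram(minority, bins=10, range=(0, 1))
h_maj, _ = np.histogram(majority, bins=10, range=(0, 1))
bc = bhattacharyya_coefficient(h_min, h_maj)
accept_candidate = bc <= 0.1    # illustrative acceptance threshold
print(bc, accept_candidate)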
Claudia Susie C. Rodrigues, Vitoria Nazareth, Ramon O. Azevedo
et al.
The growing significance of accessibility, particularly in the realm of disability rights, is unmistakable. "Unseen" emerges as a prototype leveraging binaural audio technology to craft an immersive 3D gaming experience, placing a paramount focus on promoting accessibility and digital inclusion. Its primary objective is to deliver an immersive audiogaming experience catering to both sighted gamers and individuals with visual impairments. Through a comprehensive evaluation of the prototype, the efficacy of its audiogame interactions and mechanisms was assessed. Valuable insights gleaned from laboratory tests not only pinpointed areas for game enhancement but also shed light on elements that fostered user satisfaction and motivation. These results exemplify how digital accessibility in gaming can be promoted, particularly through the use of binaural audio and the active engagement of individuals with visual impairments in the virtual environment.
This paper presents OpenRSSI, a novel motion capture system that leverages ultra-wideband (UWB) radio signal strength indicators combined with inertial measurement units (IMUs) to achieve high-precision tracking without the positional drift common in pure inertial systems. Our approach utilizes an adaptive sensor fusion algorithm that dynamically adjusts to environmental conditions and movement patterns, providing robust tracking across varied use cases.
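Since the abstract does not detail the adaptive fusion algorithm, the following is a generic complementary-filter stand-in, assuming synchronized UWB position fixes and IMU accelerations; the blend weight alpha is illustrative:

import numpy as np

def fuse(uwb_pos, imu_acc, dt=0.01, alpha=0.98):
    # Propagate with IMU acceleration (smooth but drifting), then pull the
    # estimate toward the drift-free UWB fix; alpha is an assumed blend weight.
    pos = uwb_pos[0].astype(float).copy()
    vel = np.zeros_like(pos)
    fused = [pos.copy()]
    for p_uwb, a in zip(uwb_pos[1:], imu_acc[1:]):
        vel += a * dt
        pos = alpha * (pos + vel * dt) + (1 - alpha) * p_uwb
        fused.append(pos.copy())
    return np.array(fused)

# traj = fuse(uwb_samples, imu_samples)  # both arrays of shape (T, 3)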
Devira Anggi Maharani, Carmadi Machbub, Lenni Yulianti
et al.
Real-time object tracking and occlusion handling are critical research areas in computer vision and machine learning. Developing an efficient and accurate object-tracking method that can operate in real time while handling occlusion is essential for various applications, including surveillance, autonomous driving, and robotics. However, relying solely on a single hand-crafted feature results in less robust tracking. As a hand-crafted feature extraction technique, HOG effectively detects edges and contours, which is essential for localizing objects in images; however, it does not capture fine details of object appearance and is sensitive to changes in lighting conditions. The grayscale feature, on the other hand, is computationally efficient and robust to lighting changes. Deep features express the image in greater detail and discriminate between different objects. By fusing different features, a tracking method can overcome the limitations of individual features and capture a more complete representation of the object. Deep features can be generated with transfer-learning networks, but selecting the right network is difficult, especially for real-time applications. To address this, this study integrates deep features with the hand-crafted HOG and grayscale features in the KCF method. Deep features were extracted from at least three convolution blocks of transfer-learning architectures such as Xception, DenseNet, VGG16, and MobileNet. Once the deep features were extracted, the HOG and grayscale features were computed and combined into a single stack. In the KCF method, the stacked features yield the actual object location through the maximum filter response. The results show that the proposed method, especially the combination of Xception, grayscale, and HOG features, can run in real-time applications with a small center-location error.
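A sketch of the feature stacking under stated assumptions: a Keras VGG16 truncated after its third convolution block stands in for the backbones listed above, and the features are flattened for brevity, whereas a real KCF tracker would keep them as spatial channel maps:

import cv2
import numpy as np
import tensorflow as tf

# Backbone truncated after its third convolution block (layer name per
# Keras' VGG16); the paper also tries Xception, DenseNet, and MobileNet.
vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet")
deep_net = tf.keras.Model(vgg.input, vgg.get_layer("block3_conv3").output)
hog = cv2.HOGDescriptor()   # default 64x128 detection window

def stacked_features(bgr_patch):
    win = cv2.resize(bgr_patch, (64, 128))
    gray = cv2.cvtColor(win, cv2.COLOR_BGR2GRAY)
    f_hog = hog.compute(gray).ravel()
    f_gray = gray.ravel() / 255.0
    x = tf.keras.applications.vgg16.preprocess_input(
        cv2.resize(bgr_patch, (224, 224))[None].astype("float32"))
    f_deep = deep_net(x, training=False).numpy().ravel()
    # One stacked descriptor handed to the KCF correlation filter.
    return np.concatenate([f_hog, f_gray, f_deep])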
Computer engineering. Computer hardware, Information technology
Memristor-based crossbar architecture emerges as a promising candidate for 3-D memory and neuromorphic computing. However, the sneak current through the unselected cells becomes a fundamental roadblock to their development, resulting in misreading and high power consumption. In this regard, we theoretically investigate the Pt/Ti/NbO2/Nb2O5−x/Pt-based self-selective memristor, which combines the inherent nonlinearity of the NbO2 switching layer and the non-volatile operation of the Nb2O5−x memory layer in a single device. The results show that the Pt/Ti/NbO2/Nb2O5−x/Pt-based self-selective memristor offers a sneak current of 310 nA, a selectivity of around 174, and an on/off current ratio of 75, compared with a sneak current of approximately 70 μA, a selectivity of about 4.02, and an on/off current ratio of around 1.55 for the Pt/Ti/Nb2O5−x/Pt-based memristor device. Our self-selective memristor minimizes the sneak current, but its small on/off current ratio limits the readout margin and power efficiency for crossbar arrays larger than 4KB. Further, we demonstrate that breaking a large-scale crossbar array into smaller subarrays separated by transistor switches, called the split crossbar array, is a more efficient way of achieving a practical-size crossbar array with improved readout margin and power efficiency. Our results shed light on the potential of the Pt/Ti/NbO2/Nb2O5−x/Pt-based self-selective memristor and establish the split crossbar array architecture as a practical solution to augment readout margin and power efficiency in large-scale crossbar arrays.
Electric apparatus and materials. Electric circuits. Electric networks, Computer engineering. Computer hardware
Link prediction in social networks has been an active field of study in recent years, fueled by the rapid growth of many social networks. Many link prediction methods are hampered by users' intention to avoid being traced across networks: users may provide inaccurate information or withhold a great deal of information in multiple networks. This problem has been addressed by methods that predict links in one network based on known links in another network, and node alignment between the two networks significantly improves the efficiency of those methods. This research proposes a new embedding method to improve link prediction and node alignment results. The proposed embedding is based on the Expanded Graph, a new network that contains the edges of both networks together with edges across them. Matrix factorization of the finite-step transition and Laplacian similarity matrices of the Expanded Graph is used to obtain node embeddings. Using the proposed embeddings, we run the network alignment and link prediction tasks jointly and iteratively so that they optimize each other's results. We performed extensive experiments on many datasets to examine the proposed method, achieving significant improvements in link prediction: precision up to 50% better than the peer method and recall up to 500% better on some datasets. We also reduce the processing time of the solution, making it more applicable to large social networks. We conclude that computing the embedding is more suitable than learning it for this type of problem, since it shortens processing time and gives better results.
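A compact sketch of the Expanded Graph idea under stated assumptions (NetworkX and SciPy; a one-step normalized-adjacency factorization stands in for the paper's finite-step transition and Laplacian matrices, and anchor pairs are assumed known):

import networkx as nx
import numpy as np
from scipy.sparse.linalg import svds

def expanded_graph_embedding(g1, g2, anchors, dim=8):
    # Merge both networks plus known cross-network (anchor) edges into one
    # Expanded Graph, then factorize a normalized-adjacency matrix to obtain
    # node embeddings.
    g = nx.union(g1, g2, rename=("a_", "b_"))
    g.add_edges_from((f"a_{u}", f"b_{v}") for u, v in anchors)
    A = nx.to_scipy_sparse_array(g, dtype=float)
    deg = np.maximum(A.sum(axis=1), 1e-12)
    P = A.multiply((1.0 / deg)[:, None]).tocsr()   # one-step transition matrix
    U, s, _ = svds(P, k=dim)                       # truncated factorization
    return dict(zip(g.nodes, U * np.sqrt(s)))

emb = expanded_graph_embedding(nx.karate_club_graph(), nx.karate_club_graph(),
                               anchors=[(i, i) for i in range(5)])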
Computer engineering. Computer hardware, Information technology
Approximate Nearest Neighbor Search (ANNS) algorithms based on neighbor graphs typically organize the vectors in a database into a neighbor-graph structure and obtain the approximate nearest neighbors (ANNs) of a query vector using user-specified search-parameter configurations. An adaptive method named AdaptNNS is proposed to improve the search efficiency of graph-based ANNS algorithms under given recall-rate requirements. First, AdaptNNS samples vectors in the database and clusters the sampled vectors. Second, it uses the cluster centroids as a nearest-neighbor classifier to extract query-load features. Finally, it concatenates the target recall rates with the query-load features to form the model input and trains a Gradient Boosting Decision Tree (GBDT) model. During query processing, AdaptNNS derives the trained model's input features from the incoming queries and the specified recall-rate value, and improves ANNS throughput by predicting optimal search parameters. Experiments on the Text-to-Image, DEEP, and Turing-ANNS datasets with the DiskANN and HNSW algorithms show that AdaptNNS increases throughput by up to 1.3 times over the baseline method at the same target recall rate; AdaptNNS therefore searches ANNs more efficiently.
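A sketch of the parameter-prediction step under stated assumptions (scikit-learn's GradientBoostingRegressor in place of the paper's GBDT implementation; the features, labels, and the ef_search parameter name are illustrative):

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Stand-in training set: query-load features (e.g., distances to sampled
# cluster centroids) concatenated with a target recall, mapped to the
# best-known search parameter (here HNSW-style ef_search).
rng = np.random.default_rng(1)
centroid_feats = rng.random((2000, 8))
target_recall = rng.uniform(0.8, 0.99, (2000, 1))
X = np.hstack([centroid_feats, target_recall])
best_ef = (50 + 400 * target_recall.ravel() ** 4).astype(int)  # synthetic label

model = GradientBoostingRegressor().fit(X, best_ef)
query_feats = np.hstack([rng.random(8), [0.95]])   # features + recall target
print("predicted ef_search:", int(model.predict(query_feats[None])[0]))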
Freddy Enrique Triana Litardo, Magali Gioconda Calero Lara, Víctor Hugo Bayas Vaca
et al.
Strategic administration constitutes a management method that makes it possible to systematically evaluate the functioning of an organization. Owing to its novel results, technological tools have been developed to support administrative processes and activities, and the results obtained make it possible to further improve administrative decisions. The objective is to analyze the main existing technological tools for process-based management in the strategic administration of organizations, so as to support data-driven decisions. To this end, the methodology was characterized by a descriptive scope, a non-experimental design, and a mixed approach to the analysis of the topic. The findings of the documentary analysis were supported by surveys of organizational managers (n=72) to assess the current state of technology use for process-based management in organizational environments. The results indicate that organizations have acquired a stronger technological culture over the last decade as a result of several linked contextual situations: first, the third and fourth industrial revolutions based on the use of technologies that have governed the world since the end of the last century, and second, the COVID-19 pandemic, which forced the automation of every possible operation so that companies would not halt their activities. Accordingly, a considerable number of technological tools that organizations use to manage their processes efficiently was identified, all of which supports strategic administration; these tools were evaluated by consensus as an effective option for improving data-driven organizational decision making. Likewise, organizations are advised to adopt both strategic administration and these technological tools to improve their efficiency, quality, performance, productivity, and competitiveness.
Heavy metals such as Pb(II) are toxic to ecosystems and humans, and removing heavy metals from water has long been a pressing issue. Polyvinyl chloride (PVC) is a widely used synthetic polymer; however, the disposal of waste PVC remains challenging. In this study, a polyethylenimine-crosslinked PVC fiber (PEI-PVCF) was developed to remove Pb(II) from aqueous solutions, which not only helps to remove heavy metals but also offers a possible route for recycling waste PVC. FTIR analysis verified that the PEI was successfully crosslinked with the PVC. The effects of pH, contact time, and initial concentration on the adsorption of Pb(II) by PEI-PVCF were evaluated. The pH-effect experiment identified pH 6 as the most suitable pH for Pb(II) removal from aqueous solutions. The isotherm data were well described by the Langmuir model, and the maximum Pb(II) uptake was estimated at 233.3 mg/g. A pseudo-second-order kinetic model described the adsorption kinetics of Pb(II) on PEI-PVCF well, and adsorption equilibrium was reached within 120 min at all initial concentrations evaluated. In addition, the intraparticle diffusion model indicated that multiple rate-limiting steps are involved in the Pb(II) adsorption process. Consequently, PEI-PVCF can be considered a promising adsorbent for Pb(II) removal owing to its low cost, high adsorption capacity, and short adsorption equilibrium time.
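A worked sketch of fitting the Langmuir isotherm qe = qmax*KL*Ce/(1 + KL*Ce) with SciPy; the equilibrium data below are illustrative, not the paper's measurements:

import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    # Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Illustrative equilibrium data: Ce in mg/L, qe in mg/g.
Ce = np.array([5.0, 20.0, 60.0, 120.0, 250.0])
qe = np.array([55.0, 130.0, 190.0, 215.0, 228.0])
(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=(230.0, 0.05))
print(f"qmax = {qmax:.1f} mg/g, KL = {KL:.4f} L/mg")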
Chemical engineering, Computer engineering. Computer hardware
Martina Damizia, Maria Paola Bracciale, Benedetta De Caprariis
et al.
The use of H2 as the fuel of the future is closely linked to the development of fuel cells, among which Proton Exchange Membrane Fuel Cells (PEMFCs) are the most attractive. To avoid the irreversible poisoning of the platinum-based catalyst on the PEMFC electrodes, pure H2 (CO < 10 ppm) is required. The steam iron process (SIP) is a cyclical process that allows, at high temperature and low pressure, the direct production of pure H2 through redox cycles of iron. Syngas is generally used as the reducing agent, while steam is used to oxidize the iron and produce pure H2. However, iron oxide powders deactivate within a few redox cycles owing to their low thermal stability. The aim of this study is to improve the resistance of the iron oxides by adding Al2O3 as a material of high thermal stability. Bioethanol is used as a renewable source of syngas to make the process fully sustainable. To evaluate the effect of Al2O3 addition, different Fe2O3/Al2O3 ratios were tested (40 wt%, 10 wt%, 5 wt%, and 2 wt%). The stability of the synthesized particles was evaluated over 10 redox cycles, comparing the results with those of commercial Fe2O3 powders. Al2O3 does not behave as an inert material in the process; rather, it actively participates in the reduction step, catalysing coke formation owing to its acidity. The sample with 98 wt% Fe2O3 and 2 wt% Al2O3 gave the best performance in terms of particle stability and hydrogen purity.
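For orientation, the core SIP chemistry in simplified form (reduction of the oxide by syngas components, then steam re-oxidation of the iron releasing pure H2; intermediate wustite steps are omitted):

\begin{align}
  \mathrm{Fe_3O_4} + 4\,\mathrm{H_2} &\rightarrow 3\,\mathrm{Fe} + 4\,\mathrm{H_2O} \\
  \mathrm{Fe_3O_4} + 4\,\mathrm{CO} &\rightarrow 3\,\mathrm{Fe} + 4\,\mathrm{CO_2} \\
  3\,\mathrm{Fe} + 4\,\mathrm{H_2O} &\rightarrow \mathrm{Fe_3O_4} + 4\,\mathrm{H_2}
\end{align}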
Chemical engineering, Computer engineering. Computer hardware
In order to decrease the interference between small cells in an Ultra-Dense Network (UDN), this paper proposes a user-centric semi-dynamic clustering method for Coordinated Multi-Point (CoMP) Joint Transmission (JT) scenarios. The method divides small base stations into non-overlapping clusters, and takes the small stations inside a cluster, together with the stations outside the cluster that cause large interference to it, as a user's optional serving stations. Zero-forcing precoding is used to eliminate interference to users from the non-serving base stations among the optional serving stations. With the goal of maximizing the sum throughput over all users, each user's serving-station cluster is selected from its optional serving stations and the cluster heads are chosen. In addition, a suboptimal method for selecting serving-station clusters for users is given to reduce the complexity. Simulation results show that, compared with an existing scheme in the same scenario, the proposed scheme improves the system throughput.
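A minimal sketch of the zero-forcing step, assuming a flat narrowband MIMO channel H from the coordinated transmit antennas to the scheduled users; the per-column normalization is an illustrative power constraint:

import numpy as np

def zero_forcing_precoder(H):
    # ZF precoding: W = H^H (H H^H)^{-1}, normalized per column, so the
    # effective channel H W is diagonal and non-serving stations cause no
    # interference at the scheduled users.
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
    return W / np.linalg.norm(W, axis=0, keepdims=True)

# Toy check: 3 users served by 4 coordinated transmit antennas.
rng = np.random.default_rng(0)
H = (rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))) / np.sqrt(2)
W = zero_forcing_precoder(H)
print(np.round(np.abs(H @ W), 3))   # off-diagonal entries are ~0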