Results for "Computer software"

Showing 20 of ~8,147,639 results · from DOAJ, Semantic Scholar, CrossRef

DOAJ Open Access 2025
Predicting land use and land cover changes for sustainable land management using CA-Markov modelling and GIS techniques

Zainab Tahir, Muhammad Haseeb, Syed Amer Mahmood et al.

Abstract This study addresses the significant issue of rapid land use and land cover (LULC) changes in Lahore District, which is critical for supporting ecological management and sustainable land-use planning. Understanding these changes is crucial for mitigating adverse environmental impacts and promoting sustainable development. The main goal is to evaluate historical LULC changes from 1994 to 2024 and forecast future trends for 2034 and 2044 utilizing the CA-Markov hybrid model combined with GIS methodologies. Landsat images from various sensors (TM, OLI) were employed for supervised classification, attaining high accuracy (> 90%). Historical LULC changes from 1994 to 2024 were analyzed, revealing significant transformations in Lahore. The built-up area expanded by 359.8 km², indicating rapid urbanization, while vegetation cover decreased by 198.7 km² and barren lands by 158.5 km². Water bodies remained relatively stable during this period. Future LULC trends were projected for 2034 and 2044 using the CA-Markov hybrid model (CA-MHM), which achieved a high prediction accuracy with a kappa coefficient of 0.92. The research indicated significant urban growth at the expense of vegetation and barren land. Future forecasts suggest ongoing urbanization, underscoring the necessity for sustainable land management techniques. This research provides a significant framework for urban planners, offering insights that balance development with ecological conservation. The results highlight the necessity of incorporating predictive models into urban policy to promote sustainable development and environmental preservation in quickly changing areas such as Lahore.
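
As a rough illustration of the Markov-chain component underlying a CA-Markov projection, the sketch below steps hypothetical land-cover class shares forward with a transition probability matrix. All class names and values are invented for illustration; the cellular-automaton spatial allocation and the kappa-based validation used in the study are omitted.

```python
# Minimal sketch of the Markov-chain step in a CA-Markov LULC projection
# (hypothetical class names and transition values; not the study's data).
import numpy as np

classes = ["built-up", "vegetation", "barren", "water"]

# Row-stochastic transition matrix P[i, j]: probability that a pixel of
# class i in the earlier map becomes class j one time step later.
P = np.array([
    [0.95, 0.02, 0.02, 0.01],
    [0.20, 0.70, 0.08, 0.02],
    [0.25, 0.10, 0.63, 0.02],
    [0.02, 0.02, 0.01, 0.95],
])

# Current area shares (e.g. for 2024), summing to 1.
shares_2024 = np.array([0.45, 0.30, 0.20, 0.05])

# One Markov step per decade: 2034 and 2044 projections.
shares_2034 = shares_2024 @ P
shares_2044 = shares_2034 @ P

for name, s34, s44 in zip(classes, shares_2034, shares_2044):
    print(f"{name:10s}  2034: {s34:.3f}  2044: {s44:.3f}")
```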

Medicine, Science
DOAJ Open Access 2025
Vehicle detection in drone aerial views based on lightweight OSD-YOLOv10

Yang Zhang, Xiaobing Chen, Su Sun et al.

Abstract To address the challenges of low performance in vehicle image detection from UAV aerial imagery, difficulties in small target feature extraction, and the large parameter size of existing models, we propose the OSD-YOLOv10 algorithm, an enhanced version based on YOLOv10n. The proposed algorithm incorporates several key innovations: First, we employ online convolutional reparameterization to construct the OCRConv module and design a lightweight feature extraction structure, SPCC, to replace the conventional C2f module, thereby reducing computational load and parameter count. Second, we integrate an efficient dual-layer feed-forward hybrid attention module to enhance the model’s feature extraction capabilities. We also construct a dual small-target detection layer that combines shallow and ultra-shallow features to improve small-target detection. Finally, we introduce the DySample dynamic upsampling module to enhance feature fusion in the neck network from a point sampling perspective. Extensive experiments on the VisDrone-DET2019 and UAVDT datasets demonstrate that OSD-YOLOv10 achieves a 40.7% reduction in parameter count and a 3.6% decrease in floating-point operations, while improving accuracy and mean average precision by 1.3% and 1.6%, respectively. Compared to other YOLO series and lightweight models, OSD-YOLOv10 exhibits superior detection accuracy and lower computational complexity, achieving an optimal balance between high accuracy and low resource consumption. These advancements make it particularly suitable for deployment in UAV onboard hardware for vehicle target detection tasks. Code will be available online ( https://github.com/Z76y/OSD-YOLO ).

Medicine, Science
DOAJ Open Access 2025
Design of MPPT characteristic measurement and control system for wind power generation system

Lv Fuyong, Wang Jie, Du Tong et al.

In small and medium-sized permanent magnet synchronous wind turbines, the bridge rectifier + DC/DC converter topology has the advantage of a simple structure. The DC/DC circuit can use either a Boost or a Buck topology, and exploring the difference in maximum power point tracking (MPPT) control performance between the two is key to its optimal design. A measurement and control system is an important means of performance analysis. Through the overall scheme design of the system, the system parameters and indexes are confirmed. The four-leg DC/DC topology is analyzed, and acquisition of system voltage, current, power, and frequency is realized based on high-side current sampling and isolated sampling technology. The power drive and single-chip microcontroller control circuits are designed. The MPPT hill-climbing search software suited to the single-chip microcontroller is discussed, and the design and implementation of the LabVIEW-based host computer control software are given. The measurement and control system is built, and the MPPT measurement and control results of the two topologies are compared. The experimental results show that the measurement and control system achieves the design goals.
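
The hill-climbing (perturb-and-observe) MPPT search the abstract refers to can be sketched as below, against a made-up power-versus-duty-cycle curve standing in for the rectifier + DC/DC converter plant; the step size, curve shape, and convergence behaviour are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a hill-climbing (perturb-and-observe) MPPT search
# against a toy power-vs-duty-cycle curve (all names and values illustrative).
def plant_power(duty):
    """Toy power curve with a single maximum near duty = 0.6."""
    return max(0.0, 100.0 - 400.0 * (duty - 0.6) ** 2)

def mppt_hill_climb(duty=0.3, step=0.02, iterations=50):
    power = plant_power(duty)
    for _ in range(iterations):
        candidate = min(max(duty + step, 0.0), 1.0)
        new_power = plant_power(candidate)
        if new_power >= power:       # keep climbing in the same direction
            duty, power = candidate, new_power
        else:                        # overshot the peak: reverse and shrink the step
            step = -step / 2
    return duty, power

duty, power = mppt_hill_climb()
print(f"converged duty cycle ~ {duty:.3f}, power ~ {power:.1f} W")
```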

DOAJ Open Access 2025
Optimization Method for Classifier Output Repeatability Based on Siamese Networks

YU Yongtao, SUN Ao, LI Ang, ZHU Linlin

In industrial surface Quality Control (QC) scenarios, deep classification neural networks are widely used to classify product images for qualified judgment or quality grading. However, surface QC equipment equipped with deep classification neural networks must meet Attribute Reproducibility and Repeatability (AR&R) assessment requirements. Perturbations in product images, caused by assembly tolerance, equipment vibrations, and other factors, lead to variations in position, angle, brightness, and blurring. These perturbations result in inconsistent classification outputs, causing the surface QC equipment to fail the AR&R assessment, a problem referred to as the network output reproducibility issue. To address this issue, this study proposes a training method for classification neural networks based on Siamese networks. The Siamese primary network is trained using original samples for supervised learning to learn correct classification categories. The Siamese secondary network copies the weights of the primary network via exponential smoothing and generates feature embeddings of perturbed samples corresponding to the original ones. These embeddings are used for comparative learning training of the primary network, enabling it to output consistent classification probabilities for both original and perturbed sample inputs. During inference, only the primary network is retained for product defect classification. The results show that the classification accuracy reaches 99.3462%, with a classification probability variance of 0.001016. The described method effectively improves the output reproducibility of deep classification neural networks for industrial product image classification by reducing classification probability variance and enhancing accuracy.
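
A minimal PyTorch sketch of the training scheme described above: a primary classifier learns from labels, a secondary copy updated by exponential smoothing embeds perturbed versions of the same images, and a consistency term pulls the primary network's outputs toward the secondary's. The architecture, decay rate, and loss weighting are assumptions for illustration, not the paper's settings.

```python
# Sketch: supervised primary network + EMA ("exponential smoothing") secondary
# network providing consistency targets for perturbed inputs.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

primary = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 4))
secondary = copy.deepcopy(primary)           # exponential-smoothing copy of the primary
for p in secondary.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(primary.parameters(), lr=1e-3)
ema_decay, consistency_weight = 0.99, 1.0    # illustrative hyperparameters

def train_step(images, perturbed_images, labels):
    logits = primary(images)
    ce = F.cross_entropy(logits, labels)                      # supervised loss
    with torch.no_grad():
        target_probs = F.softmax(secondary(perturbed_images), dim=1)
    consistency = F.mse_loss(F.softmax(logits, dim=1), target_probs)
    loss = ce + consistency_weight * consistency
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # exponential-smoothing update of the secondary network's weights
    with torch.no_grad():
        for p_s, p_p in zip(secondary.parameters(), primary.parameters()):
            p_s.mul_(ema_decay).add_(p_p, alpha=1 - ema_decay)
    return loss.item()

# usage with random stand-in data
x = torch.randn(8, 1, 28, 28)
x_perturbed = x + 0.05 * torch.randn_like(x)
y = torch.randint(0, 4, (8,))
print(train_step(x, x_perturbed, y))
```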

Computer engineering. Computer hardware, Computer software
DOAJ Open Access 2024
Optimal pre-train/fine-tune strategies for accurate material property predictions

Reshma Devi, Keith T. Butler, Gopalakrishnan Sai Gautam

Abstract A pathway to overcome limited data availability in materials science is to use the framework of transfer learning, where a pre-trained (PT) machine learning model (on a larger dataset) can be fine-tuned (FT) on a target (smaller) dataset. We systematically explore the effectiveness of various PT/FT strategies to learn and predict material properties and create generalizable models by PT on multiple properties (MPT) simultaneously. Specifically, we leverage graph neural networks (GNNs) to PT/FT on seven diverse curated materials datasets, with sizes ranging from 941 to 132,752. Besides identifying optimal PT/FT strategies and hyperparameters, we find our pair-wise PT-FT models to consistently outperform models trained from scratch on target datasets. Importantly, our MPT models outperform pair-wise models on several datasets and, more significantly, on a 2D material band gap dataset that is completely out-of-domain. Finally, we expect our PT/FT and MPT frameworks to accelerate materials design and discovery for various applications.
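
The pair-wise pre-train/fine-tune (PT/FT) idea can be sketched as follows: a model pre-trained on a large source-property dataset is reloaded and re-fitted on a smaller target dataset. The plain feed-forward network and random data below are placeholders for the paper's GNNs and curated materials datasets; freezing earlier layers during FT is an equally common variant.

```python
# Sketch of pre-training on a large source task, then fine-tuning on a small target task.
import torch
import torch.nn as nn

def make_model() -> nn.Module:
    return nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))

# --- pre-training on the large source dataset (random stand-in data) ---
source_x, source_y = torch.randn(1000, 64), torch.randn(1000, 1)
model = make_model()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(source_x), source_y)
    loss.backward()
    opt.step()
torch.save(model.state_dict(), "pretrained.pt")

# --- fine-tuning on the small target dataset ---
target_x, target_y = torch.randn(100, 64), torch.randn(100, 1)
ft_model = make_model()
ft_model.load_state_dict(torch.load("pretrained.pt"))       # start from PT weights
ft_opt = torch.optim.Adam(ft_model.parameters(), lr=1e-4)    # smaller LR for FT
for _ in range(50):
    ft_opt.zero_grad()
    loss = nn.functional.mse_loss(ft_model(target_x), target_y)
    loss.backward()
    ft_opt.step()
print("fine-tuned target loss:", loss.item())
```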

Materials of engineering and construction. Mechanics of materials, Computer software
DOAJ Open Access 2024
Transient Phenomena in Information Technology for Branching Processes with an Infinite Set of Types

Sergii Degtyar, Oleh Kopiika, Yurii Shusharin

Branching processes as a mathematical concept have applications in various fields, including information technology. In information technology, branching processes can be used to model and analyze various scenarios, such as the propagation of data or information in a network, the growth of computer viruses, the spread of software bugs, and more. Branching processes are particularly useful for understanding the dynamics of systems where events can lead to multiple new events in a probabilistic manner. Overall, branching processes provide a valuable mathematical framework for modeling and analyzing various aspects of information technology, helping to make informed decisions and optimize IT systems and networks. We have studied transient phenomena for branching processes with an infinite number of types close to critical. The analytical apparatus for this study is Markov renewal theorems. Branching processes were used to evaluate the performance of IT systems and predict their behavior under different conditions. This is important for capacity planning and resource allocation.
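
As a toy counterpart to the multi-type processes studied here, the sketch below simulates a single-type Galton-Watson branching process near criticality (mean offspring count close to one), where each individual could stand in for an event that spawns a random number of follow-up events in an IT system. The Poisson offspring distribution and parameters are illustrative assumptions, not the paper's model.

```python
# Toy simulation of generation sizes in a near-critical Galton-Watson process.
import numpy as np

rng = np.random.default_rng(1)

def simulate_generations(offspring_mean=1.0, generations=30, initial=100):
    sizes = [initial]
    for _ in range(generations):
        # each individual produces a Poisson-distributed number of offspring
        children = rng.poisson(offspring_mean, size=sizes[-1]).sum()
        sizes.append(int(children))
        if children == 0:    # extinction
            break
    return sizes

print(simulate_generations())
```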

Electronic computers. Computer science, Technology
DOAJ Open Access 2024
Study on Building Business-oriented Resource On-demand Resolution Model

LIU Yao, QIN Xun, LIU Tianji

To address the issue of re-analyzing and repeatedly developing natural language processing tools and resource analysis plugins when new requirements arise during project development, this paper proposes a business-oriented on-demand resource analysis solution. Firstly, a demand-driven resource analysis method from requirement to code is proposed, focusing on the construction of a demand concept indexing model for the requirement text itself. The constructed demand concept indexing model outperforms other classification models in terms of accuracy, recall, and F1 score. Secondly, this paper establishes a mapping mechanism from requirement text to code library categories based on the correlation between requirement text and code. For the mapping results, precision@K is used as an evaluation metric, with an ultimate accuracy rate of 60%, demonstrating practical value. In summary, this paper explores a set of key technologies for on-demand resource analysis with demand parsing capabilities and implements the correlation between requirements and code, covering the entire process from requirement text classification, code library classification, and code library retrieval to plugin generation. The proposed method forms a complete business loop of “requirement-code-plugin-analysis” and is experimentally verified to be effective for on-demand resource analysis. Compared with existing large language models for business requirement analysis and code generation, this method focuses on implementing the full process of plugin code reuse within specific business domains, capturing business characteristics.
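
The precision@K metric used to score the requirement-to-category mapping can be sketched as below; the category names and relevant set are toy placeholders, not the paper's code-library taxonomy.

```python
# Minimal sketch of precision@K for a ranked list of retrieved categories.
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved categories that are relevant."""
    top_k = retrieved[:k]
    return sum(1 for item in top_k if item in relevant) / k

retrieved = ["text-parsing", "file-io", "nlp-tools", "charting", "db-access"]
relevant = {"nlp-tools", "text-parsing"}
print(precision_at_k(retrieved, relevant, k=5))   # 0.4 for this toy example
```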

Computer software, Technology (General)
DOAJ Open Access 2024
Handwritten Hiragana Letter Detection Using CNN

Arya Fernandi, Sofia Sa'idah, Rita Magdalena

Hiragana is one of the primary alphabets used in Japanese. Hiragana is a phonetic script; each letter represents one syllable, and the letters are formed from curved lines and strokes. However, detecting Hiragana letters by eye is error-prone, especially for people encountering the letters for the first time, for whom they can be difficult and unclear to read. Therefore, a Convolutional Neural Network (CNN) method is used to detect handwritten Hiragana letters and to help newcomers when the letters are too complicated for the human eye to distinguish. This research uses the YOLOv8 model as the handwritten Hiragana letter detection algorithm. The Hiragana letters to be detected are the 46 basic characters. The YOLOv8 model is run on Google Colaboratory with the Ultralytics library version 8.0.20 using the Python programming language. The dataset of 4,600 Hiragana letters is collected from the internet and annotated using the Roboflow framework. From the test results, the best model is YOLOv8l using the SGD optimizer and a learning rate of 0.01, with a precision of 98.5%, recall of 95.7%, F1-score of 97.1%, and mAP of 95.5%. In the future, we aim to expand the dataset and explore a broader range of hyperparameter values to optimize the classification precision and accuracy of the Hiragana letter detection system.
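
A configuration matching the reported setup (YOLOv8l, SGD, learning rate 0.01) could be launched roughly as follows with the Ultralytics API the paper mentions; the dataset YAML path is a placeholder for a Roboflow-exported Hiragana dataset, and the other arguments are illustrative defaults rather than the authors' exact settings.

```python
# Sketch of training YOLOv8l with SGD and lr 0.01 via the Ultralytics API.
from ultralytics import YOLO

model = YOLO("yolov8l.pt")                 # YOLOv8-large pretrained weights
model.train(
    data="hiragana.yaml",                  # placeholder dataset config (46 classes)
    epochs=100,
    imgsz=640,
    optimizer="SGD",
    lr0=0.01,                              # learning rate reported in the paper
)
metrics = model.val()                      # precision, recall, mAP on the validation split
print(metrics.results_dict)
```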

Computer software
DOAJ Open Access 2023
Characteristics of Multi-Class Suicide Risks Tweets Through Feature Extraction and Machine Learning Techniques

Yan Qian Lim, Yim Ling Loo

This paper presents a detailed analysis of the linguistic characteristics connected to specific levels of suicide risk, providing insight into the impact of feature extraction techniques on the effectiveness of predictive models of suicide ideation. Much prior research has addressed the detection of suicide ideation from social media posts through feature extraction and machine learning techniques, but little has addressed multiclass classification of suicide risks or the impact of linguistic characteristics on predictability. To address this gap, this paper proposes a machine learning framework capable of multiclass classification of suicide risks from social media posts, with an extended analysis of the linguistic characteristics that contribute to suicide risk detection. A total of 552 samples of a supervised dataset of Twitter posts were manually annotated for suicide risk modeling. Feature extraction combined term frequency-inverse document frequency (TF-IDF), Part-of-Speech (PoS) tagging, and the valence-aware dictionary for sentiment reasoning (VADER). Data training and modeling were conducted with the Random Forest technique. Testing on 138 samples in scenarios of real-time detection yielded 86.23% accuracy, 86.71% precision, and 86.23% recall, an improvement attributable to the combination of feature extraction techniques rather than to the data modeling technique. An extended analysis of linguistic characteristics showed that a sentence's context is the main contributor to suicide risk classification accuracy, while grammatical tags and strong conclusive terms were not.
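
A minimal sketch of the feature-extraction combination described above: TF-IDF vectors concatenated with a VADER compound sentiment score and fed to a Random Forest classifier. The texts and labels are toy placeholders, and the PoS-tag features are omitted for brevity.

```python
# Sketch: TF-IDF + VADER features -> Random Forest multiclass classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

texts = ["I feel completely hopeless", "Had a great day with friends",
         "I can't take this anymore", "Looking forward to the weekend"]
labels = [2, 0, 2, 0]                       # toy multi-class risk levels

tfidf = TfidfVectorizer()
X_tfidf = tfidf.fit_transform(texts).toarray()

analyzer = SentimentIntensityAnalyzer()
X_vader = np.array([[analyzer.polarity_scores(t)["compound"]] for t in texts])

X = np.hstack([X_tfidf, X_vader])           # combined feature matrix
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.predict(X))
```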

Computer software
DOAJ Open Access 2023
Semantic Matching Method Integrating Multi-head Attention Mechanism and Siamese Network

ZANG Jie, ZHOU Wanlin, WANG Yan

Considering the matching problem between enterprise resources and customer requirements, existing methods suffer from resource and requirement encapsulation that is not accurate enough and matching effects that cannot satisfy users' requirements. In order to solve the problem of diversity and ambiguity in enterprise resource and requirement descriptions, this paper proposes dynamic user-defined template encapsulation. Given that most of the encapsulated requirements and resources are Chinese short texts, an interactive text matching model that integrates a multi-head attention mechanism and a Siamese network is proposed. The model considers both the semantic differences and the similarities between sentences. It uses word mixing vectors as input to enhance the semantic information of the text, combines the Siamese network with the multi-head attention mechanism, and extracts the semantic features of the context as an independent unit to interact fully with the semantic features. In order to verify the effectiveness of the model, experiments are conducted on the classical LCQMC dataset and the self-constructed CSMD dataset. The results show that the accuracy and performance of the model are improved to different degrees, providing a more accurate matching method for enterprise resources and requirements.
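
A minimal PyTorch sketch in the spirit of the model above: a shared (Siamese) embedding encodes both short texts, a multi-head attention layer lets each sentence attend to the other, and a small classifier scores the match. Dimensions, pooling, and the feature combination are illustrative choices, not the paper's architecture.

```python
# Sketch: Siamese encoder with multi-head cross-attention for text matching.
import torch
import torch.nn as nn

class SiameseMatcher(nn.Module):
    def __init__(self, vocab_size=10000, dim=128, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)        # shared (Siamese) weights
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(4 * dim, 2)           # match / no match

    def encode(self, a, b):
        ea, eb = self.embed(a), self.embed(b)
        # cross-attention: each sentence attends to the other one
        ia, _ = self.attn(ea, eb, eb)
        ib, _ = self.attn(eb, ea, ea)
        return ia.mean(dim=1), ib.mean(dim=1)             # mean-pooled sentence vectors

    def forward(self, a, b):
        va, vb = self.encode(a, b)
        features = torch.cat([va, vb, torch.abs(va - vb), va * vb], dim=1)
        return self.classifier(features)

model = SiameseMatcher()
a = torch.randint(0, 10000, (2, 12))    # two toy token-ID sequences
b = torch.randint(0, 10000, (2, 12))
print(model(a, b).shape)                # torch.Size([2, 2])
```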

Computer software, Technology (General)
DOAJ Open Access 2023
A computer program to assess the bone scan index for Tc-99m hydroxymethylene diphosphonate: evaluation of jaw pathologies of patients with bone metastases using SPECT/CT

Ruri Ogawa, Ichiro Ogura

PURPOSE: This study aimed to evaluate the jaw pathologies of patients with bone metastases using a computer program to assess the bone scan index (BSI) for Tc-99m hydroxymethylene diphosphonate (HMDP) with single-photon emission computed tomography/computed tomography (SPECT/CT). METHODS: Ninety-seven patients with jaw pathologies (24 with bone metastases and 73 without) were evaluated. High-risk hot spots and BSI in the patients were evaluated using the VSBONE BSI (ver.1.1) analysis software for Tc-99m HMDP that scanned SPECT/CT and automatically defined the data. The two groups were compared using the Pearson chi-square test and Mann–Whitney U test for high-risk hot spots and BSI, respectively. A P value of <0.05 was considered statistically significant. RESULTS: High-risk hot spot occurrence was significantly correlated to bone metastases [sensitivity, 21/24 (87.5%); specificity, 40/73 (54.8%); accuracy, 61/97 (62.9%); P < 0.001]. The number of high-risk hot spots was higher in patients with bone metastases (5.96 ± 10.30) than in those without (0.90 ± 1.50; P < 0.001). Furthermore, the BSI for patients with bone metastases (1.44 ± 2.18%) was significantly higher than for those without (0.22 ± 0.44%; P < 0.001). CONCLUSION: A computer program that assessed BSI for Tc-99m HMDP may be useful in the evaluation of patients with bone metastases using SPECT/CT.
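
The two comparisons reported above can be reproduced in outline as follows: sensitivity, specificity, and accuracy from the stated confusion counts, and a Mann-Whitney U test on BSI values. The BSI arrays below are toy stand-ins, not the study's patient data.

```python
# Sketch of the diagnostic summary statistics and the Mann-Whitney U comparison.
import numpy as np
from scipy.stats import mannwhitneyu

# confusion counts for high-risk hot spots vs. confirmed bone metastases (from the abstract)
tp, fn, tn, fp = 21, 3, 40, 33
print("sensitivity:", tp / (tp + fn))                    # 0.875
print("specificity:", tn / (tn + fp))                    # ~0.548
print("accuracy:   ", (tp + tn) / (tp + fn + tn + fp))   # ~0.629

# toy BSI values standing in for the two patient groups
bsi_metastases = np.array([1.2, 0.8, 3.5, 0.4, 2.1])
bsi_no_metastases = np.array([0.1, 0.0, 0.3, 0.2, 0.1, 0.4])
stat, p = mannwhitneyu(bsi_metastases, bsi_no_metastases, alternative="two-sided")
print("Mann-Whitney U p-value:", p)
```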

Medical physics. Medical radiology. Nuclear medicine
DOAJ Open Access 2023
AnimalTA: A highly flexible and easy‐to‐use program for tracking and analysing animal movement in different environments

Violette Chiara, Sin‐Yeon Kim

Abstract Computer programs for video tracking of animal movement are becoming increasingly efficient, using complex algorithms or artificial intelligence systems. Despite the consequent progress in this field, researchers still face some fundamental problems in the use of such programs. For example, the best-performing programs are often difficult to use, and user-friendly programs require source videos recorded under strict conditions (e.g. homogeneous environments, constant lighting and high resolution), which may be difficult to meet in both laboratory and field studies. We present here AnimalTA, a new program that tracks and analyses animal movement in diverse environments. This program aims to be accessible to everyone, including those without knowledge of coding and image analysis. AnimalTA can rapidly process a large number of videos and manage multi-arena tracking. It is adapted to follow the movement of targets in variable conditions, including heterogeneous and complex environments, or in low-quality videos. AnimalTA provides tools for editing videos and correcting problems caused by camera tremors, light changes or perspective deformation. AnimalTA also allows the user to easily correct tracking errors and repeat the tracking in a subsample of the video. The target's identity can be personalized to facilitate video analysis. The tracking results can be analysed in AnimalTA to obtain many different variables related to the trajectory of each target, such as average speed, total distance travelled, latency to reach defined areas, distance to a defined point, distance to other individuals, number of contacts with others, and explored surface, among others. Users can set and control different parameters for these analyses and directly view the results.

Ecology, Evolution
DOAJ Open Access 2022
Survey of Quantum Computing Simulation and Optimization Methods

YU Zhichao, LI Yangzhong, LIU Lei, FENG Shengzhong

Through superposition and entanglement, quantum computing displays significant advantages over classical computers in dealing with problems that require large-scale parallel processing capabilities. At present, physical quantum computers are limited in scalability, coherence time, and precision of quantum gate operations, so it is practical to simulate quantum computing on a classical computer in order to study quantum advantage and quantum algorithms. However, the computing resources required for quantum computing simulation grow exponentially with the number of qubits. Therefore, it is of great importance to study how to reduce the resources required for large-scale simulation while ensuring computational accuracy, precision, and efficiency. This paper describes the basic principles and background knowledge of quantum computing, including qubits, quantum gates, quantum circuits, and quantum operating systems. Meanwhile, this paper summarizes the classical-computer-based methods for simulating quantum computing and analyzes their design ideas, advantages, and disadvantages. Some commonly used simulators are also listed. On this basis, this paper discusses the communication overhead problem of quantum computing simulation and presents some supercomputer-based methods for optimizing quantum computing simulation from the two aspects of node analysis and communication optimization.
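
A minimal NumPy sketch of state-vector simulation illustrates why resources grow exponentially with qubit count: an n-qubit state needs 2^n complex amplitudes, and each gate application is a contraction over one qubit's axis. The three-qubit Hadamard example below is purely illustrative.

```python
# Sketch: state-vector simulation of single-qubit gates on an n-qubit register.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate

def apply_single_qubit_gate(state, gate, qubit, n_qubits):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    psi = state.reshape([2] * n_qubits)            # one axis per qubit
    psi = np.tensordot(gate, psi, axes=([1], [qubit]))
    psi = np.moveaxis(psi, 0, qubit)               # restore the qubit's axis position
    return psi.reshape(-1)

n = 3
state = np.zeros(2 ** n, dtype=complex)            # 2**n amplitudes: exponential memory cost
state[0] = 1.0                                     # |000>
for q in range(n):
    state = apply_single_qubit_gate(state, H, q, n)
print(np.round(np.abs(state) ** 2, 3))             # uniform superposition: all 0.125
```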

Computer engineering. Computer hardware, Computer software
DOAJ Open Access 2022
Lean Implementation Framework: A Case of Performance Improvement of Casting Process

Muhammad Aslam Khan, Muhammad Khurram Ali, Muhammad Sajid

Globalization breeds increasing competition. In today's dynamic climate, lean thinking has been found to be a promising continuous improvement strategy for improving quality while reducing product cost and delivery time. However, its implementation faces challenges that vary from industry to industry and country to country, necessitating a specific framework that takes all stakeholders on board. This study aims to propose a lean implementation framework to reduce defects and waste and to improve the performance of the metal casting industry. The framework is divided into three phases, namely the lean conception phase, the lean implementation phase, and the lean sustainability phase. The proposed framework integrates the Six Sigma DMAIC methodology with lean tools and techniques to reduce defects and achieve performance improvement. A solid casting simulation software package has been used as a computer-assisted tool to analyze defects within the casting. Further, the proposed framework is demonstrated and validated through a real-time case study of a part manufactured using the sand casting process. The obtained results show remarkable improvements in poured metal weight (33.3%), mold weight including the gating system (40%), casting yield (24.56%), rejection rate (90%), and financial saving (24.63%). Based on the analysis of these percentage improvements, the proposed framework can provide practitioners with a standard roadmap and motivate casting industries to implement lean for performance improvement through organizational change. Through effective application of the lean implementation framework, quality enhancement has been demonstrated.

Electrical engineering. Electronics. Nuclear engineering
DOAJ Open Access 2021
Collaborative Filtering Recommendation Algorithm Based on Semi-Autoencoder

ZHANG Haobo, XUE Feng, LIU Kai

To effectively use the user-item interaction history and auxiliary information in recommendation systems, this paper proposes an improved collaborative filtering recommendation algorithm. Based on a semi-autoencoder, the features of the auxiliary information of users and items are extracted and then mapped into the Matrix Factorization (MF) model. Using the back-propagation algorithm, the semi-autoencoder and the matrix factorization model are jointly updated to improve recommendation performance. Experimental results on the public MovieLens-100K and Book-Crossing datasets show that the proposed algorithm provides better recommendation performance than traditional recommendation algorithms, including the Biased Singular Value Decomposition (Biased SVD) and Probabilistic Matrix Factorization (PMF) algorithms.
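
A minimal NumPy sketch of the matrix-factorization backbone the algorithm builds on: user and item latent factors fitted by gradient descent on observed ratings. The semi-autoencoder that injects auxiliary features into these factors is omitted, and all data and hyperparameters are toy values.

```python
# Sketch: plain matrix factorization on observed ratings via gradient descent.
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[5, 3, 0, 1],        # 0 marks an unobserved rating
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
mask = R > 0
n_users, n_items = R.shape
k = 2                              # number of latent factors

U = 0.1 * rng.standard_normal((n_users, k))
V = 0.1 * rng.standard_normal((n_items, k))
lr, reg = 0.01, 0.02

for _ in range(2000):
    E = mask * (R - U @ V.T)       # error on observed entries only
    U += lr * (E @ V - reg * U)    # gradient step on user factors
    V += lr * (E.T @ U - reg * V)  # gradient step on item factors

print(np.round(U @ V.T, 2))        # reconstructed rating matrix
```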

Computer engineering. Computer hardware, Computer software
DOAJ Open Access 2021
Analisis Kinerja, Disiplin, dan Produktivitas Kerja Karyawan Dalam Mempengaruhi Pemanfaatan Sistem Informasi Sumber Daya Manusia

Gugus Wijonarko

The digital era conceptually demands the use of technology in all aspects of human work, in particular synergy between technology and the human factor, which is regarded as one of a company's assets. The purpose of this study is to measure and identify the factors that lead employees to decide to use a human resource information system (HRIS) application in their work routines, by examining the HR-management research variables of performance, discipline, and work productivity as influences on the decision to use an HRIS application. The sample consists of users from several companies in Surabaya that use an HRIS application in their daily operational work routines. A sample of 55 respondents from various companies in Surabaya was obtained, and their responses were processed using multiple linear regression analysis at a 95% confidence level, with hypotheses tested using t-tests and F-tests. The results show that the employee-performance and discipline factors significantly influence users' decisions to use the human resource information system, because respondents perceived an urgent need for better administrative record-keeping. The work-productivity variable, by contrast, did not influence the decision to use the HRIS application, because the application is viewed only as a tool supporting daily operational work.

Information technology, Computer software
DOAJ Open Access 2017
Peculiarity and harmonization of enterprises’ development in Ukraine

Feschenko O.M., Samokina G.V.

The article comprises a statement of the problem, an analysis of recent research and publications, the purpose of the article, the presentation of the research, conclusions, and perspectives for further studies. The article spans 10 pages and includes 4 tables and 3 figures; the reference list contains 13 sources. Small and medium-sized enterprises are an important and irreplaceable element of market economy regulation. Through the development of small and medium-sized enterprises, the state achieves social and economic development and every citizen improves their well-being as well. The paper examines enterprises from different aspects, such as region and type of economic activity. The author proposes principal methods for fostering the development of small enterprises. The article applies modern software tools and computer technologies for information processing. The necessary scientific and theoretical, statistical and analytical data were partially obtained from the Internet.

DOAJ Open Access 2017
Imputation of missing genotypes within LD-blocks relying on the basic coalescent and beyond: consideration of population growth and structure

Maria Kabisch, Ute Hamann, Justo Lorenzo Bermejo

Abstract Background: Genotypes not directly measured in genetic studies are often imputed to improve statistical power and to increase mapping resolution. The accuracy of standard imputation techniques strongly depends on the similarity of linkage disequilibrium (LD) patterns in the study and reference populations. Here we develop a novel approach for genotype imputation in low-recombination regions that relies on the coalescent and permits explicit accounting for population demographic factors. To test the new method, study and reference haplotypes were simulated and gene trees were inferred under the basic coalescent and also considering population growth and structure. The reference haplotypes that first coalesced with study haplotypes were used as templates for genotype imputation. Computer simulations were complemented with the analysis of real data. Genotype concordance rates were used to compare the accuracies of coalescent-based and standard (IMPUTE2) imputation. Results: Simulations revealed that, in LD-blocks, imputation accuracy relying on the basic coalescent was higher and less variable than with IMPUTE2. Explicit consideration of population growth and structure, even if present, did not practically improve accuracy. The advantage of coalescent-based over standard imputation increased with the minor allele frequency and it decreased with population stratification. Results based on real data indicated that, even in low-recombination regions, further research is needed to incorporate recombination in coalescence inference, in particular for studies with genetically diverse and admixed individuals. Conclusions: To exploit the full potential of coalescent-based methods for the imputation of missing genotypes in genetic studies, further methodological research is needed to reduce computer time, to take into account recombination, and to implement these methods in user-friendly computer programs. Here we provide reproducible code which takes advantage of publicly available software to facilitate further developments in the field.
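
The genotype concordance rate used above to compare imputation methods is simply the fraction of masked genotypes that an imputation reproduces exactly; a toy sketch with 0/1/2-coded genotypes (the arrays are placeholders, not the study's data):

```python
# Sketch: genotype concordance rate between true and imputed genotypes.
import numpy as np

true_genotypes    = np.array([0, 1, 2, 1, 0, 2, 1, 1, 0, 2])
imputed_genotypes = np.array([0, 1, 2, 0, 0, 2, 1, 2, 0, 2])

concordance = np.mean(true_genotypes == imputed_genotypes)
print(f"concordance rate: {concordance:.2f}")   # 0.80 for this toy example
```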

Biotechnology, Genetics

Page 8 of 407,382