Results for "Computer Science"

Showing 20 of ~22,600,543 results · from CrossRef, DOAJ, Semantic Scholar, arXiv

DOAJ Open Access 2026
Analyzing Compost Fermentation Accuracy Through Fuzzy Logic and R-Square Techniques

Reza Firmansyah Putranto, Novita Kurnia Ningrum

The accumulation of unmanaged organic waste remains a critical environmental issue, highlighting the need for technological support to improve composting efficiency and monitoring. This study proposes an Internet of Things (IoT)-based system for monitoring compost fermentation conditions using temperature and humidity sensors, combined with Fuzzy Logic and R-square (R²) analysis to evaluate fermentation quality. The system employs a DHT11 sensor integrated with an ESP8266 microcontroller to collect temperature and humidity data in real time over a 20-day observation period, resulting in 1,008 data points. Fuzzy Logic is applied through fuzzification, rule-based inference, and defuzzification to classify compost conditions into four categories: poor, good, very good, and cooling needed. The model’s performance is further validated using multiple linear regression, with temperature and humidity as independent variables and average temperature as the dependent variable. The results show that compost temperature ranged between 28–32°C and humidity between 50–87%, indicating that the fermentation process was predominantly in the mesophilic or early composting phase. The fuzzy inference results demonstrate that most conditions fell within the “good” category, while the R² value of 0.87 indicates a strong relationship between the observed variables. These findings confirm that the integration of IoT, Fuzzy Logic, and statistical analysis is effective as a real-time monitoring and decision support system for compost management, while also highlighting the need for additional parameters to achieve a more comprehensive compost quality assessment.
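The fuzzification → rule-based inference → defuzzification pipeline described above can be sketched in a few lines. The membership-function breakpoints and the rule base below are illustrative assumptions, not the authors' actual parameters; only three of the paper's four output categories are shown, and a max-rule decision stands in for full centroid defuzzification:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify(temp, hum):
    # Fuzzification: crisp sensor readings -> membership degrees.
    # Breakpoints are assumed, loosely matching the 28-32 C / 50-87% ranges.
    return {
        "temp_low":  tri(temp, 20, 26, 30),
        "temp_mid":  tri(temp, 28, 32, 40),
        "temp_high": tri(temp, 38, 50, 60),
        "hum_dry":   tri(hum, 30, 40, 55),
        "hum_ok":    tri(hum, 50, 65, 90),
    }

def infer(mu):
    # Mamdani-style min/max rules (illustrative, not the paper's rule base).
    return {
        "poor":           max(mu["temp_low"], mu["hum_dry"]),
        "good":           min(mu["temp_mid"], mu["hum_ok"]),
        "cooling_needed": mu["temp_high"],
    }

def classify(temp, hum):
    # Simplified "defuzzification": pick the most activated category.
    scores = infer(fuzzify(temp, hum))
    return max(scores, key=scores.get)
```

For a reading inside the reported mesophilic range (e.g. 31 °C, 70 % humidity), the rules above land in the "good" category, mirroring the paper's dominant classification.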

Electronic computers. Computer science
DOAJ Open Access 2025
Predicting Ship Waiting Times Using Machine Learning for Enhanced Port Operations

Min-Hwa Choi, Woongchang Yoon

Port congestion and prolonged ship waiting times pose challenges for global trade and increase operational costs and inefficiencies. In this study, a novel machine learning-based predictive approach was proposed to improve port operations by accurately forecasting vessel waiting times. Using a dataset of 121,401 voyage records, we evaluated nine regression models, including conventional, ensemble-based, and deep learning models. Shapley additive explanation (SHAP)-based feature selection was applied to enhance interpretability, and its effect was compared with principal component analysis-based dimensionality reduction and with using no feature selection. The XGBoost Regressor (XGBR) was optimized using genetic-algorithm-based hyperparameter tuning, reducing root mean squared error (RMSE) from 20.9531 to 19.6387 and mean absolute error (MAE) from 13.6821 to 12.6753, and improving the coefficient of determination (R²) from 0.2791 to 0.2949. A stacking ensemble model integrating a random forest regressor, XGBR, a LightGBM regressor, and a CatBoost regressor improved performance further, achieving an RMSE of 18.9023, an MAE of 12.3287, and an R² of 0.3265. ANOVA tests confirmed significant differences in model performance and computational complexity. The results demonstrate that tree-based ensemble models outperform deep learning models in this setting. The proposed approach enables proactive scheduling, reduces congestion, and delivers cost savings. The scalability of the model renders it suitable for broader maritime logistics and intelligent transportation systems.
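As a compact illustration of the stacking idea (with toy base regressors, not the paper's XGBR/LightGBM/CatBoost stack), the sketch below blends base models by their out-of-fold error; the inverse-error weighting is a simple stand-in for a trained meta-learner:

```python
def fit_mean(xs, ys):
    """Baseline model: always predict the training mean."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (1-D closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    b = my - a * mx
    return lambda x: a * x + b

def stack(xs, ys, base_fitters, k=5):
    # 1) Out-of-fold predictions for each base model.
    folds = [list(range(i, len(xs), k)) for i in range(k)]
    oof = [[0.0] * len(xs) for _ in base_fitters]
    for fold in folds:
        hold = set(fold)
        tr = [i for i in range(len(xs)) if i not in hold]
        for m, fitter in enumerate(base_fitters):
            model = fitter([xs[i] for i in tr], [ys[i] for i in tr])
            for i in fold:
                oof[m][i] = model(xs[i])
    # 2) Weight each base model by inverse out-of-fold MSE
    #    (a simple blend standing in for a fitted meta-learner).
    weights = []
    for preds in oof:
        mse = sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys)
        weights.append(1.0 / (mse + 1e-9))
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3) Refit base models on all data; blend at prediction time.
    finals = [f(xs, ys) for f in base_fitters]
    return lambda x: sum(w * m(x) for w, m in zip(weights, finals))
```

On data that is actually linear, the blend learns to trust the linear base model almost entirely, which is the behavior a meta-learner is meant to recover.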

Electrical engineering. Electronics. Nuclear engineering
DOAJ Open Access 2025
Image-based cotton leaf disease diagnosis using YOLO and faster R-CNN techniques

S. Chinnadurai, S. Selvakumar

Cotton has in recent years become one of the world's most important cash crops, yet its yield is reduced by leaf diseases that often go unnoticed at an early stage. Existing detection relies on manual inspection, which is slow and prone to human error, while current automated methods suffer from low accuracy, limited scalability, and poor real-time performance. To address this, the study proposes CLD-Net (Cotton Leaf Disease Detection Network), a novel deep learning framework that combines the Faster R-CNN and YOLOv5 algorithms into a single pipeline for accurate real-time disease detection: the combination pairs the high detection speed of YOLOv5 with the region-proposal accuracy of Faster R-CNN. The novelty lies in unifying these two modern object detectors in a design tailored to leaf disease detection under varying environmental conditions. Notable contributions include improved classification accuracy, processing speed, and real-time detection, making the method suitable for farmers, agronomists, and sensor deployments. Experimental validation on a curated dataset of cotton leaf images demonstrates the superiority of CLD-Net, achieving an accuracy of 96.7%, which surpasses that of traditional models. These results confirm the potential of the proposed approach to transform crop disease detection, enabling timely intervention and increased yield.
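Fusing two detectors usually comes down to merging their candidate boxes. As a hedged sketch (standard non-maximum suppression over a pooled box list, not CLD-Net's actual fusion logic), the snippet below merges detections from two sources by IoU:

```python
def iou(a, b):
    """Intersection-over-union of axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def merge_detections(dets_a, dets_b, iou_thresh=0.5):
    """Pool boxes from two detectors and keep the highest-scoring
    non-overlapping ones. Each detection is (box, score, label)."""
    pool = sorted(dets_a + dets_b, key=lambda d: d[1], reverse=True)
    kept = []
    for det in pool:
        if all(iou(det[0], k[0]) < iou_thresh for k in kept):
            kept.append(det)
    return kept
```

When both detectors fire on the same lesion, the higher-confidence box wins and the duplicate is suppressed, while non-overlapping detections from either detector survive.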

Medicine, Science
arXiv Open Access 2025
When Anti-Fraud Laws Become a Barrier to Computer Science Research

Madelyne Xiao, Andrew Sellars, Sarah Scheffler

Computer science research sometimes brushes with the law, from red-team exercises that probe the boundaries of authentication mechanisms, to AI research processing copyrighted material, to platform research measuring the behavior of algorithms and users. U.S.-based computer security research is no stranger to the Computer Fraud and Abuse Act (CFAA) and the Digital Millennium Copyright Act (DMCA) in a relationship that is still evolving through case law, research practices, changing policies, and legislation. Amid the landscape computer scientists, lawyers, and policymakers have learned to navigate, anti-fraud laws are a surprisingly under-examined challenge for computer science research. Fraud brings separate issues that are not addressed by the methods for navigating CFAA, DMCA, and Terms of Service that are more familiar in the computer security literature. Although anti-fraud laws have been discussed to a limited extent in older research on phishing attacks, modern computer science researchers are left with little guidance when it comes to navigating issues of deception outside the context of pure laboratory research. In this paper, we analyze and taxonomize the anti-fraud and deception issues that arise in several areas of computer science research. We find that, despite the lack of attention to these issues in the legal and computer science literature, issues of misrepresented identity or false information that could implicate anti-fraud laws are actually relevant to many methodologies used in computer science research, including penetration testing, web scraping, user studies, sock puppets, social engineering, auditing AI or socio-technical systems, and attacks on artificial intelligence. We especially highlight the importance of anti-fraud laws in two research fields of great policy importance: attacking or auditing AI systems, and research involving legal identification.

en cs.CY
DOAJ Open Access 2024
Developing an Ethical Regulatory Framework for Artificial Intelligence: Integrating Systematic Review, Thematic Analysis, and Multidisciplinary Theories

Jian Wang, Yujia Huo, Jinli Mahe et al.

Artificial intelligence (AI) ethics has emerged as a global discourse within both academic and policy spheres. However, translating these principles into concrete, real-world applications for AI development remains a pressing need and a significant challenge. This study aims to bridge the gap between principles and practice from a regulatory government perspective and promote best practices in AI governance. To this end, we developed the Ethical Regulatory Framework for AI (ERF-AI) to guide regulatory bodies in constructing mechanisms, including role setups, procedural configurations, and strategy design. The framework was developed through a systematic review, thematic analysis, and the integration of interdisciplinary concepts. A comprehensive search was conducted across four electronic databases (PubMed, IEEE Xplore, Web of Science, and Scopus) and four additional sources containing AI standards and guidelines from various countries and international organizations, focusing on studies published from 2014 to 2024. Thematic analysis identified and refined key themes from the included literature and integrated concepts from process control theory, computer science, organizational management, information technology, and behavioral psychology. This study adhered to the PRISMA guidelines and employed NVivo for thematic analysis. The resulting framework encompasses 23 themes, particularly emphasizing three feedback-loop processes: the ethical review process, the incentive and penalty process, and the mechanism improvement process, offering theoretical guidance for the construction of ethical regulatory mechanisms. Based on this framework, a seven-step process and case examples for mechanism design are presented, enhancing the practicality of ERF-AI in developing ethical regulatory mechanisms. Future research is expected to explore customization of the framework to remain responsive to emerging AI trends and challenges, supported by empirical studies and rigorous testing for further refinement and expansion.

Electrical engineering. Electronics. Nuclear engineering
DOAJ Open Access 2024
Adaptive habitat biogeography-based optimizer for optimizing deep CNN hyperparameters in image classification

Jiayun Xin, Mohammad Khishe, Diyar Qader Zeebaree et al.

Deep Convolutional Neural Networks (DCNNs) have shown remarkable success in image classification tasks, but optimizing their hyperparameters can be challenging due to their complex structure. This paper develops the Adaptive Habitat Biogeography-Based Optimizer (AHBBO) for tuning the hyperparameters of DCNNs in image classification tasks. In complex optimization problems, the standard BBO suffers from premature convergence and insufficient exploration. To address this, an adaptive habitat is introduced that permits variable habitat sizes and regulated mutation. This modification increases exploration and population diversity, yielding better optimization performance and a greater chance of finding high-quality solutions across a wide range of problem domains. AHBBO is tested on 53 benchmark optimization functions and demonstrates its effectiveness in improving initial stochastic solutions and converging faster to the optimum. Furthermore, DCNN-AHBBO is compared to 23 well-known image classifiers on nine challenging image classification problems and shows superior performance, reducing the error rate by up to 5.14%. The proposed algorithm outperforms 13 benchmark classifiers in 87 out of 95 evaluations, providing a high-performance and reliable solution for optimizing DCNNs in image classification tasks. This research contributes to the field of deep learning by proposing a new optimization algorithm that improves the efficiency of deep neural networks in image classification.
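The underlying BBO loop (rank-based migration between habitats plus occasional mutation) can be sketched on a toy objective. This is plain BBO on a sphere function, without the paper's adaptive-habitat extensions, and all rates below are assumed values:

```python
import random

def bbo_minimize(f, dim=2, pop=20, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Minimal biogeography-based optimization: each habitat is a
    candidate solution; worse habitats immigrate features from better
    (more "emigrating") ones, with a small mutation rate."""
    rng = random.Random(seed)
    lo, hi = bounds
    habitats = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        habitats.sort(key=f)                       # best habitat first
        new = [habitats[0][:]]                     # elitism: keep the best
        for i in range(1, pop):
            lam = i / (pop - 1)                    # worse rank -> immigrates more
            cand = habitats[i][:]
            for d in range(dim):
                if rng.random() < lam:
                    # roulette-select an emigrating habitat, weighted by rank
                    src = rng.choices(range(pop),
                                      weights=[pop - j for j in range(pop)])[0]
                    cand[d] = habitats[src][d]
                if rng.random() < 0.05:            # mutation (assumed rate)
                    cand[d] = rng.uniform(lo, hi)
            new.append(cand)
        habitats = new
    return min(habitats, key=f)

sphere = lambda x: sum(v * v for v in x)
```

In hyperparameter tuning, `f` would instead train and validate a DCNN for the habitat's hyperparameter vector, which is why cheap surrogates and fast-converging variants of BBO matter in practice.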

Science (General), Social sciences (General)
arXiv Open Access 2024
A guideline for the methodology chapter in computer science dissertations

Marco Araujo

Rather than simply offering suggestions, this guideline for the methodology chapter in computer science dissertations provides thorough insights on how to develop a strong research methodology within the area of computer science. The guideline is structured into several parts, starting with an overview of research strategies, including experiments, surveys, interviews, and case studies. It highlights the significance of defining a research philosophy and reasoning, discussing paradigms such as positivism, constructivism, and pragmatism. It also covers types of research, including deductive and inductive methodologies and basic versus applied research approaches. Moreover, the guideline discusses the intricacies of data collection and analysis, dividing data into quantitative and qualitative typologies and explaining how data can be collected through observation, experimentation, interviews, or surveys. It also addresses ethical considerations in research, emphasizing adherence to academic principles. Overall, this guideline is an essential tool for computer science dissertations, helping researchers structure their work while maintaining ethical standards in their study design.

en cs.GL
arXiv Open Access 2024
Intelligent Computing Social Modeling and Methodological Innovations in Political Science in the Era of Large Language Models

Zhenyu Wang, Dequan Wang, Yi Xu et al.

The recent wave of artificial intelligence, epitomized by large language models (LLMs), has presented opportunities and challenges for methodological innovation in political science, sparking discussions on a potential paradigm shift in the social sciences. However, how can we understand the impact of LLMs on knowledge production and paradigm transformation in the social sciences from a comprehensive perspective that integrates technology and methodology? What are LLMs' specific applications and representative innovative methods in political science research? These questions, particularly from a practical methodological standpoint, remain underexplored. This paper proposes the "Intelligent Computing Social Modeling" (ICSM) method to address these issues by clarifying the critical mechanisms of LLMs. ICSM leverages the strengths of LLMs in idea synthesis and action simulation, advancing intellectual exploration in political science through "simulated social construction" and "simulation validation." By simulating the U.S. presidential election, this study empirically demonstrates the operational pathways and methodological advantages of ICSM. By integrating traditional social science paradigms, ICSM not only enhances the quantitative paradigm's capability to apply big data to assess the impact of factors but also provides qualitative paradigms with evidence for social mechanism discovery at the individual level, offering a powerful tool that balances interpretability and predictability in social science research. The findings suggest that LLMs will drive methodological innovation in political science through integration and improvement rather than direct substitution.

en cs.CY, cs.AI
arXiv Open Access 2024
EmpireDB: Data System to Accelerate Computational Sciences

Daniel Alabi, Eugene Wu

The emerging discipline of Computational Science is concerned with using computers to simulate or solve scientific problems. These problems span the natural, political, and social sciences. The discipline has exploded over the past decade due to the emergence of larger amounts of observational data and large-scale simulations that were previously unavailable or infeasible. However, there are still significant challenges with managing the large amounts of data and simulations. The database management systems community has always been at the forefront of the development of the theory and practice of techniques for formalizing and actualizing systems that access or query large datasets. In this paper, we present EmpireDB, a vision for a data management system to accelerate computational sciences. In addition, we identify challenges and opportunities for the database community to further the fledgling field of computational sciences. Finally, we present preliminary evidence showing that the optimized components in EmpireDB could lead to improvements in performance compared to contemporary implementations.

en cs.DB
DOAJ Open Access 2023
A hybrid anomaly detection method for high dimensional data

Xin Zhang, Pingping Wei, Qingling Wang

Anomaly detection in high-dimensional data is challenging because the sparsity of the data distribution caused by high dimensionality provides little information for distinguishing anomalous instances from normal ones. To address this, this article proposes an anomaly detection method combining an autoencoder with a sparse weighted least squares support vector machine (LS-SVM). First, the autoencoder extracts low-dimensional features from the high-dimensional data, reducing the dimensionality and the complexity of the search space. Then, in the low-dimensional feature space obtained by the autoencoder, the sparse weighted LS-SVM separates anomalous features from normal ones. Finally, the learned class labels distinguishing normal from abnormal instances are output, achieving anomaly detection for high-dimensional data. Experimental results on real high-dimensional datasets show that the proposed method outperforms competing methods in anomaly detection ability. For high-dimensional data, deep methods can reconstruct a layered feature space, which is beneficial for achieving better anomaly detection results.
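As a minimal stand-in for this extract-then-separate pipeline (the LS-SVM stage is replaced here by a simple reconstruction-error threshold, and the deep autoencoder by a one-unit linear one), the sketch below learns a low-dimensional direction and flags points that reconstruct poorly:

```python
def train_autoencoder(data, lr=0.01, epochs=200):
    """One-unit tied-weight linear autoencoder: encode z = w.x,
    decode x_hat = w*z. With w kept at unit norm, gradient descent on
    reconstruction error reduces to an Oja-style update that pulls w
    toward the data's principal direction."""
    w = [1.0, 0.0]                                  # deterministic init
    for _ in range(epochs):
        grad = [0.0, 0.0]
        for x in data:
            z = w[0] * x[0] + w[1] * x[1]           # encode
            r = [x[0] - w[0] * z, x[1] - w[1] * z]  # reconstruction residual
            grad[0] += z * r[0]
            grad[1] += z * r[1]
        w = [w[0] + lr * grad[0], w[1] + lr * grad[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5       # keep w unit length
        w = [w[0] / norm, w[1] / norm]
    return w

def recon_error(w, x):
    """Squared reconstruction error; high values flag anomalies."""
    z = w[0] * x[0] + w[1] * x[1]
    r = [x[0] - w[0] * z, x[1] - w[1] * z]
    return r[0] ** 2 + r[1] ** 2

# Normal data lies near the line y = x; anomalies fall off it.
normal = [(t, t) for t in (-2.0, -1.0, 1.0, 2.0)]
w = train_autoencoder(normal)
```

A point on the learned direction reconstructs almost exactly, while an off-manifold point such as (2, -2) keeps a large residual, which is the signal the downstream classifier separates on.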

Electronic computers. Computer science
DOAJ Open Access 2023
A deep-reinforcement learning approach for optimizing homogeneous droplet routing in digital microfluidic biochips

Basudev Saha, Bidyut Das, Mukta Majumder

Over the past two decades, digital microfluidic biochips have been in high demand for safety-critical and biomedical applications and have become increasingly important in point-of-care analysis, drug discovery, and immunoassays, among other areas. However, for complex bioassays, finding routes for the transportation of droplets in an electrowetting-on-dielectric digital biochip while maintaining their discreteness is a challenging task. In this study, we propose a deep reinforcement learning-based droplet routing technique for digital microfluidic biochips. The technique is implemented on a distributed architecture to optimize the possible paths for predefined source–target pairs of droplets. The actors of the technique calculate the possible routes of the source–target pairs and store the experience in a replay buffer, and the learner fetches the experiences and updates the routing paths. The proposed algorithm was applied to benchmark suites I and III as two different test benches, and it achieved significant improvements over state-of-the-art techniques.
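The core routing idea (a learner improving source→target paths from accumulated experience) can be illustrated with tabular Q-learning on a tiny grid. This is a sketch only: a single droplet, one blocked cell standing in for routing constraints, and no distributed actor-learner setup or EWOD chip model:

```python
import random

# Route a droplet on a 4x4 grid from (0,0) to (3,3), avoiding a
# blocked cell, via tabular Q-learning.
SIZE, GOAL, BLOCKED = 4, (3, 3), {(1, 1)}
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(pos, a):
    nxt = (pos[0] + MOVES[a][0], pos[1] + MOVES[a][1])
    if not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE) or nxt in BLOCKED:
        nxt = pos                          # invalid move: stay put
    reward = 10.0 if nxt == GOAL else -1.0
    return nxt, reward, nxt == GOAL

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        pos = (0, 0)
        for _ in range(50):                # cap episode length
            a = (rng.randrange(4) if rng.random() < eps
                 else max(range(4), key=lambda a: q.get((pos, a), 0.0)))
            nxt, r, done = step(pos, a)
            best_next = max(q.get((nxt, b), 0.0) for b in range(4))
            old = q.get((pos, a), 0.0)
            q[(pos, a)] = old + alpha * (r + gamma * best_next - old)
            pos = nxt
            if done:
                break
    return q

def greedy_path(q, limit=20):
    pos, path = (0, 0), [(0, 0)]
    while pos != GOAL and len(path) <= limit:
        a = max(range(4), key=lambda a: q.get((pos, a), 0.0))
        pos, _, _ = step(pos, a)
        path.append(pos)
    return path
```

After training, the greedy policy traces a shortest obstacle-avoiding route; the paper's contribution is scaling this kind of learned routing to many concurrent droplets with discreteness constraints.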

Technology, Engineering (General). Civil engineering (General)

Page 20 of 1,130,028