Results for "Electronic computers. Computer science"

Showing 20 of ~17,680,537 results · from DOAJ, arXiv, CrossRef

DOAJ Open Access 2026
A review of optimization strategies for deep and machine learning in diabetic macular edema

A. M. Mutawa, Khalid Sabti, Bibin Shalini Sundaram Thankaleela et al.

Diabetic macular edema (DME) is a primary contributor to visual impairment in diabetic patients, necessitating precise and prompt analysis for optimal treatment. Recent breakthroughs in deep learning (DL) and machine learning (ML) have yielded promising outcomes in ophthalmic image analysis. However, researchers often overlook the significance of optimization algorithms in enhancing the efficacy of their models for DME-related tasks. This review aims to identify, assess, and synthesize existing work on the application of DL and ML, with emphasis on the integration and impact of optimization algorithms in enhancing their efficacy, robustness, and performance for DME in the fields of computer science and engineering. The population, intervention, comparison, and outcome framework was employed in this study to facilitate a clear and comprehensive analysis. The methodological quality of the included investigations was evaluated using the Joanna Briggs Institute Critical Appraisal Tools. The Auto-Metric Graph Neural Network achieved the greatest accuracy, 99.57%, for combined diabetic retinopathy-DME grading, illustrating the higher efficacy of hybrid architectures augmented by meta-heuristic optimizers such as Jaya and ant colony optimization. Successful deployment, however, depends on overcoming hurdles such as the low mean average precision (0.1540) of YOLO-based models for lesion identification on the test set, and on improved clinical interpretability to foster clinician trust. A Sankey diagram visually analyzes the flow of quantities between the different entities of the survey. Systematic review registration: B. (2025, November 2). A Review of Optimization Strategies for Deep and Machine Learning in DME. Retrieved from osf.io/qsh4j.

Electronic computers. Computer science
arXiv Open Access 2026
CSLib: The Lean Computer Science Library

Clark Barrett, Swarat Chaudhuri, Fabrizio Montesi et al.

We introduce CSLib, an open-source framework for proving computer-science-related theorems and writing formally verified code in the Lean proof assistant. CSLib aims to be for computer science what Lean's Mathlib is for mathematics. Mathlib has been tremendously impactful: it is a key reason for Lean's popularity within the mathematics research community, and it has also played a critical role in the training of AI systems for mathematical reasoning. However, the base of computer science knowledge in Lean is currently quite limited. CSLib will vastly enhance this knowledge base and provide infrastructure for using this knowledge in real-world verification projects. By doing so, CSLib will (1) enable the broad use of Lean in computer science education and research, and (2) facilitate the manual and AI-aided engineering of large-scale formally verified systems.

en cs.LO, cs.PL
DOAJ Open Access 2025
Assessing patient preferences for medical decision making - a comparison of different methods

Jakub Fusiak, Andreas Wolkenstein, Verena S. Hoffmann

Background: Patient preferences are a critical component of shared decision-making (SDM), particularly when choosing between treatment options with differing risks and outcomes. Many methods exist to elicit these preferences, but their complexity, usability, and acceptance vary. Objective: We aim to gain insight into the acceptance, effort, and preferences of participants regarding five different methods of preference assessment. Additionally, we investigate the influence of health status, experience within the health system, and demographic factors on the results. Methods: We conducted a cross-sectional online survey including five preference elicitation methods: best-worst scaling, direct weighting, PAPRIKA (Potentially All Pairwise Rankings of all Possible Alternatives), time trade-off, and standard gamble. The questionnaire was distributed via academic and patient advocacy mailing lists, reaching both healthy individuals and those with acute or chronic illnesses. Participants rated each method using six standardized statements on a 5-point Likert scale. Additional items assessed general acceptance of algorithm-assisted preference assessments and the clarity of the questionnaire. Results: Of 258 initiated questionnaires, 123 (48%) were completed and included in the analysis. Participants were diverse in age, gender, and health status, but predominantly highly educated and digitally literate. Across all measures, the PAPRIKA method received the highest ratings for clarity, usability, and perceived ability to express preferences. Simpler methods (best-worst scaling, direct weighting) were rated as less useful for capturing nuanced preferences, while abstract utility-based methods (standard gamble, time trade-off) were seen as cognitively demanding. Subgroup analyses showed minimal variation across demographic groups.
Most participants (82%) could imagine using at least one of the presented methods in real clinical settings, but also emphasized the importance of physician involvement in interpreting results. Conclusion: The interactive PAPRIKA method best balanced cognitive demand and expressiveness and was preferred by most participants. Structured methods for preference elicitation may enhance SDM when integrated into clinical workflows and supported by healthcare professionals. Further research is needed to evaluate their use in real-world decisions and among more diverse patient populations.

Medicine, Public aspects of medicine
DOAJ Open Access 2025
Attention-based functional-group coarse-graining: a deep learning framework for molecular prediction and design

Ming Han, Ge Sun, Paul F. Nealey et al.

Machine learning (ML) offers considerable promise for the design of new molecules and materials. In real-world applications, the design problem is often domain-specific, and suffers from insufficient data, particularly labeled data, for ML training. In this study, we report a data-efficient, deep-learning framework for molecular discovery that integrates a coarse-grained functional-group representation with a self-attention mechanism to capture intricate chemical interactions. Our approach exploits group-contribution concepts to create a graph-based intermediate representation of molecules, serving as a low-dimensional embedding that substantially reduces the data demands typically required for training. Using a self-attention mechanism to learn the subtle but highly relevant chemical context of functional groups, the method proposed here consistently outperforms existing approaches for predictions of multiple thermophysical properties. In a case study focused on adhesive polymer monomers, we train on a limited dataset comprising only 6,000 unlabeled and 600 labeled monomers. The resulting chemistry prediction model achieves over 92% accuracy in forecasting properties directly from SMILES strings, exceeding the performance of current state-of-the-art techniques. Furthermore, the latent molecular embedding is invertible, enabling the design pipeline to automatically generate new monomers from the learned chemical subspace. We illustrate this functionality by targeting several properties, including high and low glass transition temperatures (Tg), and demonstrate that our model can identify new candidates with values that surpass those in the training set. The ease with which the proposed framework navigates both chemical diversity and data scarcity offers a promising route to accelerate and broaden the search for functional materials.

Materials of engineering and construction. Mechanics of materials, Computer software
arXiv Open Access 2025
Evaluating LLMs for Career Guidance: Comparative Analysis of Computing Competency Recommendations Across Ten African Countries

Precious Eze, Stephanie Lunn, Bruk Berhane

Employers increasingly expect graduates to utilize large language models (LLMs) in the workplace, yet the competencies needed for computing roles across Africa remain unclear given varying national contexts. This study examined how six LLMs, namely ChatGPT-4, DeepSeek, Gemini, Claude 3.5, Llama 3, and Mistral AI, describe entry-level computing career expectations across ten African countries. Using the Computing Curricula 2020 framework and drawing on Digital Colonialism Theory and Ubuntu Philosophy, content analysis of 60 LLM responses to standardized prompts reveals consistent coverage of technical competencies such as cloud computing and programming, but notable differences in non-technical competencies, particularly ethics and responsible AI use. Models vary considerably in recognizing country-specific factors, including local technology ecosystems, language requirements, and national policies, averaging only 35.4% contextual awareness overall. Open-source models demonstrated stronger contextual awareness and better balance between technical and professional skills, with Llama (4.47/5) and DeepSeek (4.25/5) outperforming the proprietary alternatives ChatGPT-4 (3.90/5) and Claude (3.46/5). However, Mistral's poor contextual performance (0.00/4) despite being open-source indicates that development philosophy alone does not guarantee contextual responsiveness. This first comprehensive comparison of LLM career guidance for African computing students uncovers entrenched infrastructure assumptions and Western-centric biases that create gaps between technical recommendations and local realities. The findings challenge assumptions about AI tool quality in resource-constrained settings and underscore the need for decolonial approaches to AI in education, emphasizing contextual relevance and hybrid human-AI guidance models.

en cs.CY, cs.AI
DOAJ Open Access 2024
Analogue Computation Converter for Nonhomogeneous Second-Order Linear Ordinary Differential Equation

Gabriel Nicolae Popa, Corina Maria Diniș

Among many other applications, electronic converters can be used with sensors with analogue outputs (DC voltage). This article presents an analogue computation converter with two DC voltages at the inputs (one input changes the frequency of the output signal, the other changes its amplitude) that provides a periodic sinusoidal signal with variable frequency and amplitude at the output. At the core of the analogue computation converter is a nonhomogeneous second-order linear ordinary differential equation, which is solved in the analogue domain. The analogue computation converter consists of analogue multipliers and operational amplifiers, composed of seven function circuits: two analogue multiplication circuits, two analogue addition circuits, one non-inverting amplifier, and two integration circuits (with RC time constants). At the output of the oscillator is a sinusoidal signal that depends on the DC voltages applied at the two inputs (0 ÷ 10 V): at one input, a DC voltage is applied to linearly change the sinusoidal output frequency (up to tens of kHz, according to the two time constants), and at the other input, a DC voltage is applied to linearly change the amplitude of the oscillator output signal (up to 10 V). The converter can be used with sensors that have a DC output voltage which must be converted to a sine-wave signal with variable frequency and amplitude, with the aim of transmitting information over longer distances through wires. This article presents the detailed theory of operation, simulations, and experiments of the analogue computation converter.
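
The abstract does not reproduce the converter's governing equation, so as an illustration only (the paper's exact coefficients and forcing term may differ), a nonhomogeneous second-order linear ODE of this kind can be sketched as:

```latex
% Illustrative form only: \omega is set by the RC time constants through one
% DC input; the constant forcing level U_A comes from the other DC input.
y''(t) + \omega^{2}\, y(t) = \omega^{2} U_A ,
\qquad
y(t) = U_A + A \sin(\omega t + \varphi)
```

The oscillatory part of the solution is a sinusoid at angular frequency \omega, so scaling the output path with the second DC input through an analogue multiplier varies the amplitude, consistent with the two-input behaviour the abstract describes.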

Electronic computers. Computer science
DOAJ Open Access 2024
Quantum Criticality Under Imperfect Teleportation

Pablo Sala, Sara Murciano, Yue Liu et al.

Entanglement, measurement, and classical communication together enable teleportation of quantum states between distant parties, in principle, with perfect fidelity. To what extent do correlations and entanglement of a many-body wave function transfer under imperfect teleportation protocols? We address this question for the case of an imperfectly teleported quantum critical wave function, focusing on the ground state of a critical Ising chain. We demonstrate that imperfections, e.g., in the entangling gate adopted for a given protocol, effectively manifest as weak measurements acting on the otherwise pristinely teleported critical state. Armed with this perspective, we leverage and further develop the theory of measurement-altered quantum criticality to quantify the resilience of critical-state teleportation. We identify classes of teleportation protocols for which imperfection (i) preserves both the universal long-range entanglement and correlations of the original quantum critical state, (ii) weakly modifies these quantities away from their universal values, and (iii) obliterates long-range entanglement altogether while preserving power-law correlations, albeit with a new set of exponents. We also show that mixed states describing the average over a series of sequential imperfect teleportation events retain pristine power-law correlations due to a “built-in” decoding algorithm, though their entanglement structure measured by the negativity depends on errors similarly to individual protocol runs. These results may allow one to design teleportation protocols that optimize against errors—highlighting a potential practical application of measurement-altered criticality.

Physics, Computer software
arXiv Open Access 2024
Faculty Perspectives on the Potential of RAG in Computer Science Higher Education

Sagnik Dakshit

The emergence of Large Language Models (LLMs) has significantly impacted the field of Natural Language Processing and has transformed conversational tasks across various domains because of their widespread integration in applications and public access. The discussion surrounding the application of LLMs in education has raised ethical concerns, particularly concerning plagiarism and policy compliance. Despite the prowess of LLMs in conversational tasks, the limitations of reliability and hallucinations exacerbate the need to guardrail conversations, motivating our investigation of RAG in computer science higher education. We developed Retrieval Augmented Generation (RAG) applications for the two tasks of virtual teaching assistants and teaching aids. In our study, we collected the ratings and opinions of faculty members in undergraduate and graduate computer science university courses at various levels, using our personalized RAG systems for each course. This study is the first to gather faculty feedback on the application of LLM-based RAG in education. The investigation revealed that while faculty members acknowledge the potential of RAG systems as virtual teaching assistants and teaching aids, certain barriers and features are suggested for their full-scale deployment. These findings contribute to the ongoing discussion on the integration of advanced language models in educational settings, highlighting the need for careful consideration of ethical implications and the development of appropriate safeguards to ensure responsible and effective implementation.

en cs.CY, cs.ET
arXiv Open Access 2024
Benefits and Risks of Using ChatGPT4 as a Teaching Assistant for Computer Science Students

Yaiza Aragonés-Soria, Julia Kotovich, Chitsutha Soomlek et al.

Upon release, ChatGPT3.5 shocked the software engineering community with its ability to generate answers to specialized questions about coding. Immediately, many educators wondered whether the chatbot could be used as a support tool that helps students answer their programming questions. This article evaluates this possibility at three levels: fundamental Computer Science knowledge (basic algorithms and data structures), core competency (design patterns), and advanced knowledge (quantum computing). In each case, we ask normalized questions of ChatGPT3.5 several times, then examine the correctness of the answers, and finally check whether this creates issues. The main result is that the performance of ChatGPT3.5 degrades drastically as the specialization of the domain increases: for basic algorithms it returns answers that are almost always correct; for design patterns the generated code contains many code smells and is generally of low quality, although the chatbot is still sometimes able to fix them (if asked); and for quantum computing it is often blatantly wrong.

en cs.CY, cs.AI
arXiv Open Access 2024
Adoption and Impact of ChatGPT in Computer Science Education: A Case Study on a Database Administration Course

Daniel López-Fernández, Ricardo Vergaz

Contribution: The combination of ChatGPT with traditional learning resources is very effective in computer science education. High-performing students are the ones who are using ChatGPT the most. A new digital divide could therefore be opening between these students and those with a weaker grasp of fundamentals and poorer prompting skills, who may not take advantage of all the possibilities ChatGPT offers. Background: The irruption of GenAI tools such as ChatGPT has changed the educational landscape. Therefore, methodological guidelines and more empirical experiences in computer science education are needed to better understand these tools and learn how to use them to their fullest potential. Research Questions: This article addresses three questions. The first two explore the degree of use and perceived usefulness of ChatGPT among computer science students learning database administration, whereas the third explores how the utilization of ChatGPT can impact academic performance. Methodology: This contribution presents an exploratory and correlational study conducted with 37 students who used ChatGPT as a support tool to learn database administration. The students' grades and a comprehensive questionnaire were employed as research instruments. Findings: The results indicate that traditional learning resources, such as teacher explanations and student reports, were widely used and correlated positively with student grades. The usage and perceived utility of ChatGPT were moderate, but positive correlations between student grades and ChatGPT usage were found. Indeed, significantly higher use of this tool was identified among the group of outstanding students.

en cs.CY, cs.AI
arXiv Open Access 2024
Leveraging AI to Advance Science and Computing Education across Africa: Challenges, Progress and Opportunities

George Boateng

Across the African continent, students grapple with various educational challenges, including limited access to essential resources such as computers, internet connectivity, reliable electricity, and a shortage of qualified teachers. Despite these challenges, recent advances in AI such as BERT, and GPT-4 have demonstrated their potential for advancing education. Yet, these AI tools tend to be deployed and evaluated predominantly within the context of Western educational settings, with limited attention directed towards the unique needs and challenges faced by students in Africa. In this chapter, we discuss challenges with using AI to advance education across Africa. Then, we describe our work developing and deploying AI in Education tools in Africa for science and computing education: (1) SuaCode, an AI-powered app that enables Africans to learn to code using their smartphones, (2) AutoGrad, an automated grading, and feedback tool for graphical and interactive coding assignments, (3) a tool for code plagiarism detection that shows visual evidence of plagiarism, (4) Kwame, a bilingual AI teaching assistant for coding courses, (5) Kwame for Science, a web-based AI teaching assistant that provides instant answers to students' science questions and (6) Brilla AI, an AI contestant for the National Science and Maths Quiz competition. Finally, we discuss potential opportunities to leverage AI to advance education across Africa.

en cs.CY, cs.CL
DOAJ Open Access 2023
Superconvergence Analysis of Discontinuous Galerkin Methods for Systems of Second-Order Boundary Value Problems

Helmi Temimi

In this paper, we present an innovative approach to solving a system of boundary value problems (BVPs) using the newly developed discontinuous Galerkin (DG) method, which eliminates the need for auxiliary variables. This work is the first in a series of papers on DG methods applied to partial differential equations (PDEs). By consecutively applying the DG method to each space variable of the PDE using the method of lines, we transform the problem into a system of ordinary differential equations (ODEs). We investigate the convergence criteria of the DG method on systems of ODEs and generalize the error analysis to PDEs. Our analysis demonstrates that the leading term of the DG error is determined by a combination of specific Jacobi polynomials in each element. Thus, we prove that DG solutions are superconvergent at the roots of these polynomials, with an order of convergence of O(h^{p+2}).
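
Written out, the superconvergence claim contrasts the usual global rate of a degree-p DG approximation u_h with the rate at the special points (a standard way of stating it; the paper's exact norms may differ):

```latex
% Global accuracy vs. superconvergence at the roots x_* of the particular
% Jacobi polynomials identified in the leading term of the DG error:
\max_{x}\, |u(x) - u_h(x)| = O(h^{p+1}),
\qquad
|u(x_*) - u_h(x_*)| = O(h^{p+2})
```

Sampling the DG solution at those roots thus gains one extra order of accuracy over the global estimate.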

Electronic computers. Computer science
DOAJ Open Access 2023
Predictive digital twin for optimizing patient-specific radiotherapy regimens under uncertainty in high-grade gliomas

Anirban Chaudhuri, Graham Pash, David A. Hormuth et al.

We develop a methodology to create data-driven predictive digital twins for optimal risk-aware clinical decision-making. We illustrate the methodology as an enabler for an anticipatory personalized treatment that accounts for uncertainties in the underlying tumor biology in high-grade gliomas, where heterogeneity in the response to standard-of-care (SOC) radiotherapy contributes to sub-optimal patient outcomes. The digital twin is initialized through prior distributions derived from population-level clinical data in the literature for a mechanistic model's parameters. Then the digital twin is personalized using Bayesian model calibration for assimilating patient-specific magnetic resonance imaging data. The calibrated digital twin is used to propose optimal radiotherapy treatment regimens by solving a multi-objective risk-based optimization under uncertainty problem. The solution leads to a suite of patient-specific optimal radiotherapy treatment regimens exhibiting varying levels of trade-off between the two competing clinical objectives: (i) maximizing tumor control (characterized by minimizing the risk of tumor volume growth) and (ii) minimizing the toxicity from radiotherapy. The proposed digital twin framework is illustrated by generating an in silico cohort of 100 patients with high-grade glioma growth and response properties typically observed in the literature. For the same total radiation dose as the SOC, the personalized treatment regimens lead to median increase in tumor time to progression of around six days. Alternatively, for the same level of tumor control as the SOC, the digital twin provides optimal treatment options that lead to a median reduction in radiation dose by 16.7% (10 Gy) compared to SOC total dose of 60 Gy. The range of optimal solutions also provide options with increased doses for patients with aggressive cancer, where SOC does not lead to sufficient tumor control.

Electronic computers. Computer science
DOAJ Open Access 2023
Logical Reasoning Based on Residual Attention Multi-scale Relation Network

XIONG Zhongmin, ZENG Qi, LU Peng, WANG Zhenhua, ZHENG Zongsheng

Logical reasoning is the ability to perceive patterns and connections between visual elements. Endowing computers with human-like reasoning ability is a critical area of research; state-of-the-art deep neural networks have achieved superhuman performance in image processing and other fields. However, the concept of logical reasoning through images requires further research. To address the insufficient feature extraction and generalization of the Multi-scale Relation Network (MRNet), an improved logical reasoning method, called the Residual Attention Multi-scale Relation Network (ResAMRNet), is proposed. In the backbone network, shallow features are integrated into the deep network training process by utilizing residual structures that combine skip and long-skip connections. This reduces the loss of feature information and improves the feature extraction capability of the model. In the reasoning module, a channel attention mechanism and residuals are combined to detect the relational features between image rows. It can differentiate the significance of each feature channel, learn attention weights adaptively, and extract key features. In this study, a Double-pooled Efficient Channel Attention (DECA) mechanism is proposed that incorporates global maximum pooling to capture further feature information about objects and to improve generalization. Experimental results on representative logical reasoning datasets, Relational and Analogical Visual rEasoNing (RAVEN) and Improved RAVEN (I-RAVEN), show that the accuracy of the proposed method on these datasets is higher by 8.3 and 18.1 percentage points, respectively, than that of MRNet, demonstrating strong logical reasoning capabilities.

Computer engineering. Computer hardware, Computer software
arXiv Open Access 2023
Understanding Practices around Computational News Discovery Tools in the Domain of Science Journalism

Sachita Nishal, Jasmine Sinchai, Nicholas Diakopoulos

Science and technology journalists today face challenges in finding newsworthy leads due to increased workloads, reduced resources, and expanding scientific publishing ecosystems. Given this context, we explore computational methods to aid these journalists' news discovery in terms of time-efficiency and agency. In particular, we prototyped three computational information subsidies into an interactive tool that we used as a probe to better understand how such a tool may offer utility or more broadly shape the practices of professional science journalists. Our findings highlight central considerations around science journalists' agency, context, and responsibilities that such tools can influence and could account for in design. Based on this, we suggest design opportunities for greater and longer-term user agency; incorporating contextual, personal and collaborative notions of newsworthiness; and leveraging flexible interfaces and generative models. Overall, our findings contribute a richer view of the sociotechnical system around computational news discovery tools, and suggest ways to improve such tools to better support the practices of science journalists.

en cs.HC, cs.AI
DOAJ Open Access 2022
Inventory Information System Audit Using Cobit 5 Domain MEA at PT. Telkom Akses Pontianak

Noor Hellyda Hermawati, Susy Rosyida

PT. Telkom Akses Pontianak has an inventory information system that has been in use for some time. During the research, several findings emerged, such as information on material availability, a system that is less effective in handling goods-issue data, which affects the company's periodic reports, and insufficient optimization of the available human resources. These problems formed the basis for auditing the information system in use. The audit follows the COBIT 5 framework using the MEA domain, yielding the capability level of each MEA sub-domain together with a gap analysis. The capability values are 3.83 for sub-domain MEA 01, 3.60 for MEA 02, and 3.69 for MEA 03, with an average of 3.70, corresponding to the Predictable Process level, meaning that the audited processes run within defined limits to achieve their process goals. The gap analysis gives 1.2 for sub-domain MEA 01, 1.4 for MEA 02, and 1.3 for MEA 03, with an average of 1.3, meaning that the company still needs to improve its inventory information system in order to achieve optimal results for all stakeholders.

Electronic computers. Computer science, Management information systems
DOAJ Open Access 2022
Strategic guidelines for the development of enterprises of the construction sector

Nikolay Chepachenko, Marina Yudenko, Anna Gospodinova et al.

The current trend of globalization of the world economy necessitates the use of high-tech developments and innovations that allow achieving strategic goals at the national, regional, and sectoral levels. The premise of the study is the urgency of finding solutions to the problematic issues of forming and implementing priority strategic guidelines for the development of enterprises of the construction sector, designed to ensure an adequate contribution to the strategic vector of advanced industrial, technological, and socio-economic development of the construction industry and the national economy. This determines the need to form and implement priority strategic guidelines for enterprise development mainly by increasing the technological and innovative potentials that make up the economic potential of enterprises in the "Construction" type of activity. The purpose of the study is to identify strategic guidelines for the development of construction-sector enterprises that meet the targets of the fourth scientific and technological revolution and the strategic goals for the development of national economies. The findings reveal the key signs of development inherent in the nature of material objects and economic entities. This allowed us to propose a systematization of priority strategic guidelines for the economic development of construction enterprises, reflecting their relationship with the targets for achieving national goals, the strategic objectives for the development of various countries' economies, and the targets of the fourth scientific and technological revolution, Industry 4.0. The practical implications concern enterprises of the construction sector.

Electronic computers. Computer science, Economics as a science
DOAJ Open Access 2022
Application Characteristics and Innovation of Digital Technology in Visual Communication Design

Jiasui Cai, Jie Su

While China has made major social and economic breakthroughs, it has also raised its level of research, development, and application of science and technology, especially digital technology. Combining digital technology with visual communication design to meet diversified design needs can maximize the level of innovation in visual communication design work. The effective use and continuous innovation of digital technology in visual communication design makes visual information more intuitive and vivid, constantly brings people a fresh and unique visual experience, and has strongly promoted the development of visual communication design. This paper analyzes the advantages of applying digital technology in visual communication design, focusing on its application from different perspectives such as art, space, and tools, where applying and expanding the theory of advanced innovation is critical. From the aspects of artistry, diversification, and science and technology, the application of digital technology in visual communication design is discussed, and application innovation strategies are further examined. We hope that this research can provide useful references for the development of modern visual communication design.

Electronic computers. Computer science
arXiv Open Access 2022
Base rate neglect in computer science education

Koby Mike, Orit Hazzan

Machine learning (ML) algorithms are gaining increased importance in many academic and industrial applications, and such algorithms are, accordingly, becoming common components in computer science curricula. Learning ML is challenging not only due to its complex mathematical and algorithmic aspects, but also due to a) the complexity of using these algorithms correctly in the context of real-life situations and b) the understanding of related social and ethical issues. Cognitive biases are phenomena of the human brain that may cause erroneous perceptions and irrational decision-making processes. As such, they have been researched thoroughly in the context of cognitive psychology and decision making; they do, however, have important implications for computer science education as well. One well-known cognitive bias, first described by Kahneman and Tversky, is the base rate neglect bias, according to which humans fail to consider the base rate of the underlying phenomena when evaluating conditional probabilities. In this paper, we explore the expression of the base rate neglect bias in ML education. Specifically, we show that about one third of students in an Introduction to ML course, from varied backgrounds (computer science students and teachers, data science, engineering, social science, and digital humanities), fail to correctly evaluate ML algorithm performance due to the base rate neglect bias. This failure rate should alert educators and promote the development of new pedagogical methods for teaching ML algorithm performance.
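
The base rate neglect bias described in this abstract can be made concrete with a quick Bayes'-rule calculation (the numbers below are illustrative, not from the paper): a classifier with 99% sensitivity and 99% specificity applied to a condition with a 1% base rate yields a positive predictive value of only 50%.

```python
# Illustrative Bayes'-rule calculation of positive predictive value (PPV),
# showing why ignoring the base rate misleads; all numbers are hypothetical.
def positive_predictive_value(prevalence, sensitivity, specificity):
    true_pos = sensitivity * prevalence               # P(positive and sick)
    false_pos = (1 - specificity) * (1 - prevalence)  # P(positive and healthy)
    return true_pos / (true_pos + false_pos)

# A "99%-accurate" test, but the condition is rare (1% base rate):
ppv = positive_predictive_value(prevalence=0.01, sensitivity=0.99, specificity=0.99)
print(round(ppv, 2))  # 0.5: half of all positive predictions are false alarms
```

The same arithmetic applies to evaluating an ML classifier on imbalanced data: accuracy alone hides how the base rate dilutes the meaning of a positive prediction.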

en cs.CY

Page 7 of 884,027