W. Stallings
Results for "Electronic computers. Computer science"
Showing 20 of ~18,057,749 results · from CrossRef, DOAJ, Semantic Scholar, arXiv
Qiming Zhang, H. Yu, M. Barbiero et al.
The growing demands of brain science and artificial intelligence create an urgent need for the development of artificial neural networks (ANNs) that can mimic the structural, functional and biological features of human neural networks. Nanophotonics, which is the study of the behaviour of light and the light–matter interaction at the nanometre scale, has unveiled new phenomena and led to new applications beyond the diffraction limit of light. These emerging nanophotonic devices have enabled paradigm shifts in ANN research. In the present review, we summarise the recent progress in nanophotonics for emulating the structural, functional and biological features of ANNs, directly or indirectly. Technologies that manipulate light at the nanoscale will help researchers develop ANNs with uses including brain disease research and machine learning. Despite advances in neuroscience, understanding the human brain remains a considerable challenge. Constructing physical or computer-based ANNs can help scientists analyse brain function and harness its power. Min Gu and colleagues at RMIT University in Melbourne, Australia, reviewed research into emerging ANNs enabled by nanophotonics that harness photons’ ability to carry vast amounts of information. Three-dimensional printing and laser writing techniques are allowing researchers to fabricate tiny optical or electronic components for building artificial neurons and ANN scaffolding platforms. Another all-optical ANN design uses multiple layers of diffractive holograms to create a highly efficient machine learning engine. Transplantable ANNs combining nanophotonic technology such as nanosensing with biological tissues could one day help study and treat severe brain disorders or injuries.
Christopher J. Kingsbury, M. Senge
Abstract Porphyrin molecules are a widely exploited biochemical moiety, with uses in medicinal chemistry, sensing and materials science. The shape of porphyrins, as an aromatic unit, is reductively imagined to be approximately flat, with a regular, rigid shape, owing to the popular depiction as a simplified skeletal model. While this regular conformation does exist, the array of substitution patterns in synthetic porphyrins or interactions with the apoprotein in biochemical moieties often induce distortions both in-plane and out-of-plane. Structural deviation reduces symmetry from the ideal D4h and can introduce changes in the physical and electronic structure; physical changes can introduce pockets for favorable intermolecular interactions, and electronic distortion can introduce new electronic transitions and properties. A quantification of porphyrin distortion is presented based on the Normal-coordinate Structural Decomposition method (NSD) pioneered by Shelnutt. NSD transforms crystallographically determined atomic positions of each porphyrin into a summation of common concerted atom vectors, allowing for quantification of porphyrin anisotropy by symmetry. This method has been used previously for comparison of small data sets of synthetic and biological porphyrins. In the twenty-five years since the method was pioneered, the volume and variety of available crystal structure data have ballooned and the available data analysis tools have become more sophisticated, while the method has languished. Using modern data-science methods, clusters of porphyrin distortions are grouped to show the average effect that a substitution pattern has on porphyrin shape. Aiming to provide an overview of the shape and conformation of these key macrocycles, we give context to the strategies employed for introducing porphyrin distortion and provide a quantitative comparative basis for the analysis of novel structures.
This is achieved by demonstrating that porphyrin molecules often have a predictable NSD pattern, and therefore solid-state conformation, based on chemical arguments. This quantification allows for assessment of predicted structures and forms the basis of a symmetry-by-design motif for a range of porphyrinoids. A modernized computer program used in this structural determination is provided for analysis, with this treatise acting as a guide to the interpretation of results in new structure determinations. New features include simple report generation, prediction of symmetry and assessment of cluster behavior for a range of porphyrin moieties, as well as convenient plotting functions and data reductions.
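The core of the NSD idea is a projection of observed atomic displacements onto a basis of symmetry-adapted distortion modes. The sketch below illustrates only that projection step in Python; the two "modes" are random orthonormal placeholders, not the real saddle/ruffle/dome vectors from Shelnutt's method.

```python
import numpy as np

# Hypothetical NSD-style decomposition: express the out-of-plane displacements
# of the 24 macrocycle atoms as coefficients over orthonormal distortion modes.
n_atoms = 24
rng = np.random.default_rng(0)

# Build two orthonormal placeholder modes via QR decomposition.
raw = rng.normal(size=(n_atoms, 2))
q, _ = np.linalg.qr(raw)
modes = q.T                        # shape (2, 24); rows are orthonormal modes

# A displacement constructed from the two modes with known weights.
z = 0.8 * modes[0] - 0.3 * modes[1]

# Decomposition: for an orthonormal basis, least-squares coefficients are
# simple dot products of the displacement with each mode.
coeffs = modes @ z
print(np.round(coeffs, 3))         # recovers the weights 0.8 and -0.3
```

The real method uses the full set of in-plane and out-of-plane symmetry modes, but the recovery of per-mode coefficients follows the same linear algebra.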
Aisha Zulfiqar, Ebroul Izquierdo, Krishna Chandramouli
Jiayin Wang, Weidong Zhao
Objective This study aims to develop and evaluate an autonomous surgical system based on the Toumai laparoscopic surgical robot, focusing on improving the precision and reliability of automated cutting and suturing operations. Methods The proposed system integrates several key components: (1) robotic arms and associated control systems; (2) an endoscopic system supporting advanced visual image algorithms; (3) specialized surgical instruments for cutting and suturing. A binocular stereo matching algorithm is employed to obtain depth information from the binocular camera's field of view. The DarkPose image key-point localization algorithm and the YOLOv5 image detection algorithm are utilized to accurately determine the positions of surgical instruments, suture needles, and target points. Additionally, an image classification discriminator is introduced to assess the success of the surgical tasks. A finite state machine model is used to guide the robotic arm's end-effector through real-time trajectory planning and execution, ensuring precise completion of surgical tasks. Results Experimental evaluation demonstrated that the autonomous system achieves high precision and reliability in both cutting and suturing tasks. Quantitative analysis shows that the system maintains an 85% success rate in automatic cutting, with a mean time of 5.10 s per cutting action. The automatic suturing task achieves a 92% accuracy rate in instrument positioning and a 90% success rate in needle grasping. Conclusion The developed system shows significant promise in automating key laparoscopic surgical tasks, with the potential to enhance surgical efficiency and improve outcomes in clinical practice. Further development and validation of this system could lead to its broader adoption in the field of autonomous surgery.
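The finite-state-machine idea described in the abstract — step through task phases, with a success discriminator gating each transition — can be sketched in a few lines. The state names, transitions, and discriminator below are illustrative placeholders, not the Toumai system's actual controller.

```python
# Minimal FSM sketch: a linear suturing sequence where each state advances
# only when a success check (stand-in for the image classification
# discriminator) approves, and otherwise re-attempts the same action.
TRANSITIONS = {
    "APPROACH": "GRASP_NEEDLE",
    "GRASP_NEEDLE": "INSERT",
    "INSERT": "PULL_THROUGH",
    "PULL_THROUGH": "DONE",
}

def run_task(check_success):
    """Step through states; retry a state if the discriminator reports failure."""
    state, trace = "APPROACH", []
    while state != "DONE":
        trace.append(state)
        if check_success(state):          # discriminator verdict for this step
            state = TRANSITIONS[state]
        # else: stay in the same state and re-attempt the action
    return trace

# A demo discriminator that fails the first grasp attempt, then succeeds.
attempts = {"GRASP_NEEDLE": 0}
def demo_check(state):
    if state == "GRASP_NEEDLE" and attempts[state] == 0:
        attempts[state] += 1
        return False
    return True

print(run_task(demo_check))
# -> ['APPROACH', 'GRASP_NEEDLE', 'GRASP_NEEDLE', 'INSERT', 'PULL_THROUGH']
```

A real controller would attach trajectory planning and perception calls to each state; the FSM itself only sequences and retries them.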
Md Javeed Khan, Mohammed Raahil Ahmed, Mohammed Abdul Aziz Taha et al.
Nanhao Liang, Xiaoyuan Yang, Yingwei Xia et al.
Abstract Panoptic Scene Graph Generation (PSG) aims to segment objects and predict the relation triplets <subject, relation, object> within an image. Despite the impressive achievements in PSG, current methods still struggle to capture fine-grained visual context, eschewing spatial and situational information in favor of visual features related to object identity. This limitation naturally impedes the model’s ability to distinguish subtle visual differences between relation triplets, such as “cat-on-person” and “cat-lying on-person”. To address this challenge, we propose CVCPSG, a novel DETR-based method that uncovers composite visual clues for PSG. Specifically, drawing inspiration from how humans capture visual context using diverse visual clues, we first construct a composite visual clues bank based on three key aspects: object, spatial, and situational. Then, we introduce a multi-level visual extractor to align visual features from objects, interactions, and image levels with the composite visual clues bank. Additionally, we incorporate a cross-modal learning module with a multitower architecture to seamlessly integrate visual clues into the relation decoder, thereby improving PSG detection. Extensive experiments on two PSG benchmarks confirm the effectiveness and interpretability of CVCPSG.
Liza Efriyanti, Ihwana As'ad
The design of curricula in Islamic universities frequently encounters difficulties in addressing the evolving needs of students, industry demands and the distinctive integration of Islamic values. Conventional methodologies are inadequate in their capacity to adapt to the evolving needs of the modern educational landscape. Furthermore, the integration of artificial intelligence (AI) in this domain remains underdeveloped, with many instances overlooking the crucial role of religious principles and institutional characteristics. This study addresses this gap by developing a Decision Support System (DSS) using Mamdani type 1 fuzzy logic, with the objective of assisting in determining an independent curriculum learning model tailored to Islamic higher education. The system incorporates a number of input variables, including student needs, industry requirements, institutional characteristics and data analysis. The output variables include an evaluation of the suitability of the learning model and a recommendation as to the most appropriate model. To illustrate, in situations where student needs are high, industry demands are moderate, institutional characteristics are high, and data analysis is moderate, the recommended model places an emphasis on balancing theoretical knowledge with practical application, while also aligning with Islamic values. The validation of this AI-based model, utilizing 2023 historical data from five Islamic universities in West Sumatra, yielded an average Mean Absolute Error (MAE) of 0.64, thereby demonstrating good predictive accuracy. The integration of AI in this system facilitates data-driven decision-making, thereby enhancing the relevance and adaptability of the curriculum. It has the potential to improve the quality of education, support balanced student learning outcomes, and ensure alignment with Islamic principles, making it a transformative tool for curriculum development in Islamic higher education.
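The abstract's Mamdani type-1 pipeline — fuzzify crisp inputs, fire rules with min, aggregate with max, defuzzify by centroid — can be illustrated with a single input. The membership functions, rule base, and scales below are invented for illustration; the paper's system uses four inputs (student needs, industry requirements, institutional characteristics, data analysis) and its own rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with distinct breakpoints a < b < c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

universe = np.linspace(0, 10, 1001)   # output universe: model suitability, 0-10

def recommend(student_need):          # single crisp input on a 0-10 scale
    # Rule 1: IF need is low  THEN suitability is low.
    # Rule 2: IF need is high THEN suitability is high.
    w1 = tri(student_need, 0, 2, 5)               # firing strength of rule 1
    w2 = tri(student_need, 5, 8, 10)              # firing strength of rule 2
    out1 = np.minimum(w1, tri(universe, 0, 3, 6))  # clip consequents (min)
    out2 = np.minimum(w2, tri(universe, 4, 7, 10))
    agg = np.maximum(out1, out2)                   # aggregate rules (max)
    return float((universe * agg).sum() / agg.sum())  # centroid defuzzification

print(round(recommend(8.0), 2))       # high need -> high suitability, about 7.0
```

With four inputs, each rule's firing strength would be the min over the four antecedent memberships, but the clip-aggregate-centroid steps are unchanged.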
Ye Wang, Yaling Deng, Ge Wang et al.
Modern Large Language Models (LLMs) exhibit complexity and granularity similar to humans in the field of natural language processing, challenging the boundaries between humans and machines in language understanding and creativity. However, whether the semantic network of LLMs is similar to that of humans is still unclear. We examined representative closed-source LLMs (GPT-3.5-Turbo and GPT-4) and open-source LLMs (LLaMA-2-70B, LLaMA-3-8B and LLaMA-3-70B) using semantic fluency tasks widely used to study the structure of semantic networks in humans. To enhance the comparability of semantic networks between humans and LLMs, we innovatively employed role-playing to generate multiple agents, which is equivalent to recruiting multiple LLM participants. The results indicate that the semantic network of LLMs has poorer interconnectivity, local association organization, and flexibility compared to humans, which suggests that LLMs search the semantic space less efficiently and think more rigidly, which may in turn affect their performance in creative writing and reasoning.
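The semantic-fluency analysis described above turns lists of responses into a graph and then compares network statistics such as local association organization. The sketch below shows one common construction — link consecutive responses, then compute the average local clustering coefficient by hand; the word lists are invented, and the paper's actual windowing and metrics may differ.

```python
from collections import defaultdict
from itertools import combinations

# Each "participant" (human or role-played LLM agent) lists category members;
# consecutive responses become linked nodes in the semantic network.
fluency_lists = [
    ["dog", "cat", "wolf", "fox"],
    ["cat", "dog", "horse", "cow"],
    ["fox", "cat", "dog", "wolf"],
]

graph = defaultdict(set)
for lst in fluency_lists:
    for a, b in zip(lst, lst[1:]):
        graph[a].add(b)
        graph[b].add(a)

def clustering(node):
    """Fraction of a node's neighbour pairs that are themselves linked."""
    nbrs = graph[node]
    if len(nbrs) < 2:
        return 0.0
    links = sum(1 for u, v in combinations(nbrs, 2) if v in graph[u])
    return links / (len(nbrs) * (len(nbrs) - 1) / 2)

avg_cc = sum(clustering(n) for n in graph) / len(graph)
print(round(avg_cc, 2))   # higher values indicate tighter local association
```

Comparing this statistic (and connectivity measures) between human-derived and LLM-derived networks is the kind of contrast the study reports.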
Jumiati Usman, Irwan Syarif, Wakhid Yunendar
This study aims to design and implement an event information and ticket booking system. The main focus is to provide complete information about events, facilitate fast ticket booking, and improve the user experience. The study uses the waterfall method, involving data collection through observation and interviews. A requirements analysis was conducted to obtain in-depth information about the Android-based event information and ticket booking application. The analysis identified key kinds of information: location data, event dates, event prices, and registration form data. The result is the successful implementation of the event information and ticket booking system in an Android application. It provides complete event information, an intuitive user interface, and easy online ticket booking. The results show that the application makes both the interface and the ticket booking process easy for users, as evidenced by a survey of 21 respondents, 87% of whom found the application very easy to use.
Jonathan Álvarez Ariza
Active Learning (AL) is a well-known teaching method in engineering because it fosters students' learning and critical thinking by employing debate, hands-on activities, and experimentation. However, most educational results of this instructional method have been achieved in face-to-face settings, and less has been said about how to promote AL and experimentation for online engineering education. Thus, the main aim of this study was to create an AL methodology to learn electronics, physical computing (PhyC), programming, and basic robotics in engineering through hands-on activities and active experimentation in online environments. N=56 students of two engineering programs (Technology in Electronics and Industrial Engineering) participated in the methodology, which was conceived using the guidelines of the Integrated Course Design Model (ICDM) and, in some courses, combined mobile and online learning with an Android app. The methodology gathered three main components: (1) in-home laboratories performed through low-cost hardware devices, (2) student-created videos and blogs to evidence the development of skills, and (3) teacher support and feedback. Data in the courses were collected through surveys, evaluation rubrics, semi-structured interviews, and students' grades, and were analyzed through a mixed approach. The outcomes indicate a good perception of the PhyC and programming activities by the students and suggest that these activities influence motivation, self-efficacy, reduction of anxiety, and improvement of academic performance in the courses. The methodology and these results can be useful for researchers and practitioners interested in developing AL methodologies or strategies in engineering with online, mobile, or blended learning modalities.
Xiaoshuai Song, Muxi Diao, Guanting Dong et al.
Large language models (LLMs) have demonstrated significant potential in advancing various fields of research and society. However, the current community of LLMs overly focuses on benchmarks for analyzing specific foundational skills (e.g. mathematics and code generation), neglecting an all-round evaluation of the computer science field. To bridge this gap, we introduce CS-Bench, the first multilingual (English, Chinese, French, German) benchmark dedicated to evaluating the performance of LLMs in computer science. CS-Bench comprises approximately 10K meticulously curated test samples, covering 26 subfields across 4 key areas of computer science, encompassing various task forms and divisions of knowledge and reasoning. Utilizing CS-Bench, we conduct a comprehensive evaluation of over 30 mainstream LLMs, revealing the relationship between CS performance and model scales. We also quantitatively analyze the reasons for failures in existing LLMs and highlight directions for improvements, including knowledge supplementation and CS-specific reasoning. Further cross-capability experiments show a high correlation between LLMs' capabilities in computer science and their abilities in mathematics and coding. Moreover, expert LLMs specialized in mathematics and coding also demonstrate strong performances in several CS subfields. Looking ahead, we envision CS-Bench serving as a cornerstone for LLM applications in the CS field and paving new avenues in assessing LLMs' diverse reasoning capabilities. The CS-Bench data and evaluation code are available at https://github.com/csbench/csbench.
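A benchmark structured like CS-Bench — samples labeled by area and subfield, scored per model — is typically summarized by rolling accuracy up over those labels. The sketch below shows that aggregation pattern; the records and area names are invented and only mirror the structure the abstract describes, not the actual CS-Bench data format.

```python
from collections import defaultdict

# Per-sample results: each record carries an area label and a correctness flag.
records = [
    {"area": "Data Structures and Algorithms", "correct": True},
    {"area": "Data Structures and Algorithms", "correct": False},
    {"area": "Computer Organization", "correct": True},
    {"area": "Computer Network", "correct": True},
    {"area": "Computer Network", "correct": True},
]

totals = defaultdict(lambda: [0, 0])      # area -> [correct count, total count]
for r in records:
    totals[r["area"]][0] += r["correct"]  # bool counts as 0/1
    totals[r["area"]][1] += 1

accuracy = {area: c / n for area, (c, n) in totals.items()}
print(accuracy["Data Structures and Algorithms"])   # -> 0.5
```

The same roll-up applied at the subfield level, and split by knowledge vs. reasoning task forms, yields the fine-grained comparisons the paper reports.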
Md Sakib Hasan, Catherine D. Schuman, Zhongyang Zhang et al.
Neuromorphic Computing promises orders of magnitude improvement in energy efficiency compared to the traditional von Neumann computing paradigm. The goal is to develop an adaptive, fault-tolerant, low-footprint, fast, low-energy intelligent system by learning and emulating brain functionality, which can be realized through innovation in different abstraction layers including material, device, circuit, architecture and algorithm. As the energy consumption of complex vision tasks keeps increasing exponentially due to larger data sets, and resource-constrained edge devices become increasingly ubiquitous, spike-based neuromorphic computing approaches can be a viable alternative to the deep convolutional neural networks that dominate the vision field today. In this book chapter, we introduce neuromorphic computing, outline a few representative examples from different layers of the design stack (devices, circuits and algorithms) and conclude with a few exciting applications and future research directions that seem promising for computer vision in the near future.
A. D. Santos, Naoto Suzuki, F. Medola et al.
Wearable devices have been developed to improve the navigation of blind and visually impaired people. With technological advancements, the application of wearable devices has been increasing. This systematic review aimed to explore existing literature on technologies used in wearable devices to provide independent and safe mobility for visually impaired people. Searches were conducted in six electronic databases (PubMed, Web of Science, Scopus, Cochrane, ACM Digital Library and SciELO). Our systematic review included 61 studies. The results show that the majority of studies used audio information as a feedback interface and a combination of technologies for obstacle detection - especially the integration of sensor-based and computer vision-based technologies. The findings also showed the importance of including visually impaired individuals during prototype evaluation and the need for including safety evaluation which is currently lacking. These results have important implications for developing wearable devices for the safe mobility of visually impaired people.
Shijie Wei, Hang Li, Guilu Long
Quantum simulation of quantum chemistry is one of the most compelling applications of quantum computing. It is of particular importance in areas ranging from materials science and biochemistry to condensed matter physics. Here, we propose a full quantum eigensolver (FQE) algorithm to calculate the molecular ground-state energies and electronic structures using quantum gradient descent. Compared to existing classical-quantum hybrid methods such as the variational quantum eigensolver (VQE), our method removes the classical optimizer and performs all the calculations on a quantum computer with faster convergence. The gradient descent iteration depth has a favorable complexity that is logarithmically dependent on the system size and the inverse of the precision. Moreover, the FQE can be further simplified by exploiting perturbation theory for the calculations of intermediate matrix elements, obtaining results with a precision that satisfies the requirements of chemistry applications. The full quantum eigensolver can be implemented on a near-term quantum computer. With the rapid development of quantum computing hardware, the FQE provides an efficient and powerful tool to solve quantum chemistry problems.
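The objective the FQE iterates — driving a state toward the ground state by descending the energy expectation value — can be illustrated classically. The sketch below does gradient descent on the Rayleigh quotient of a small random symmetric "Hamiltonian"; it shows only the mathematics being iterated, not the quantum-circuit implementation the paper proposes.

```python
import numpy as np

# Toy Hamiltonian: a random 4x4 symmetric matrix.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
H = (A + A.T) / 2

# Random normalized starting "state".
psi = rng.normal(size=4)
psi /= np.linalg.norm(psi)

eta = 0.05
for _ in range(2000):
    e = psi @ H @ psi                 # current energy expectation <psi|H|psi>
    grad = 2 * (H @ psi - e * psi)    # gradient of the Rayleigh quotient
    psi -= eta * grad                 # descend in energy
    psi /= np.linalg.norm(psi)        # re-normalize the state

ground = np.linalg.eigvalsh(H)[0]     # exact ground energy for comparison
print(abs(psi @ H @ psi - ground) < 1e-6)
```

The quantum algorithm performs the analogous update with operators applied to a quantum register, which is where its claimed complexity advantage over classical optimization arises.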
D. Button, A. Harrington, I. Belan
David John Lemay, Paul Bazelais, Tenzin Doleck
Background: With the new pandemic reality that has beset us, teaching and learning activities have been thrust online. While much research has explored student perceptions of online and distance learning, none has had a social laboratory to study the effects of an enforced transition on student perceptions of online learning. Purpose: We surveyed students about their perceptions of online learning before and after the transition to online learning. As student perceptions are influenced by a range of contextual and institutional factors beyond the classroom, we expected students to be broadly sanguine about the transition, given that access, technology integration, and family and government support during the pandemic shutdown would mitigate the negative consequences. Results: Students overall reported positive academic outcomes. However, students reported increased stress and anxiety and difficulties concentrating, suggesting that the obstacles to fully online learning were not only technological and instructional challenges but also the social and affective challenges of isolation and social distancing. Conclusion: Our analysis shows that the specific context of the pandemic disrupted more than normal teaching and learning activities. Whereas students generally responded positively to the transition, their reluctance to continue learning online and the added stress and workload show the limits of this large-scale social experiment. In addition to the technical and pedagogical dimensions, successfully supporting students in online learning environments will require that teachers and educational technologists attend to the social and affective dimensions of online learning as well.
Jorge Enrique Lana Cisneros, Carlos López Barrionuevo, Elsy Labrada González et al.
Currently, humanity has made significant progress in telecommunications and in the economic, social, and health sectors; at the same time, a series of pathogenic organisms has evolved considerably, causing harm to humanity. That is why the Health Sciences have turned to the technological advances offered by the industrial and telecommunications era. Among the tools of great help in combating infectious agents are statistical tools, which constitute a decisive step in advancing scientific studies aimed at communities and society. The application of Statistics in the Health Sciences is essential for preventive activities, health promotion, and clinical studies. This knowledge allows students to face more complex courses and content and to formulate better scientific criteria for analyzing and developing healthcare and research activities. Although a level of evidence has been achieved in the recommendations for tracking the health problems faced by communities and the possible treatments to be applied to patients, there remain certain levels of indeterminacy in the analyzed data that generate arbitrary or discretionary opinions outside the scope of classical statistics, which can be better covered if processed with neutrosophic statistics.
Long Zhang, Chuang Zhu, YueWei Wu et al.
Abstract Ischemic stroke is the most common stroke and the leading cause of disability and death in the world. Computed tomography (CT) is a popular and economical diagnostic modality for stroke. However, ischemic stroke lesions are not evident on CT images, and the diagnostic result relies on the visual observation of neurologists, which may vary from doctor to doctor. To facilitate treatment, a computer-aided detection algorithm on CT images is proposed to help clinicians with ischemic stroke screening. In order to obtain accurate lesion annotation on CT images, novel automatic algorithms are developed to achieve image pairing, calibration, and registration. Then, a new framework with symmetric feature extraction and comparison is proposed to identify and locate the ischemic stroke lesion. Experimental results show that this method achieves a DICE score of 75% in the detection of ischemic stroke lesions, which is 4% higher than other methods. Competitive results against seven recent methods are shown through extensive qualitative and quantitative evaluation. This method can accurately detect the lesion in CT images through the comparison of symmetric regional features, which contributes to the clinical diagnosis of ischemic stroke.
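The DICE score quoted in the abstract measures overlap between a predicted lesion mask and the ground-truth annotation: twice the intersection divided by the sum of the two mask sizes. A minimal implementation on binary masks (the arrays here are invented examples, not CT data):

```python
import numpy as np

def dice(pred, truth):
    """DICE coefficient of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0   # both empty: perfect match

truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1                    # 16-pixel ground-truth "lesion"
pred = np.zeros((8, 8), dtype=int)
pred[3:7, 2:6] = 1                     # prediction shifted down by one row

print(dice(pred, truth))               # -> 0.75, i.e. 2*12 / (16 + 16)
```

A score of 1.0 means perfect overlap and 0.0 means none, so the paper's 75% indicates substantial but imperfect agreement with the annotations.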
Rushit Dave, Naeem Seliya, Nyle Siddiqui
Recent advancements in technology now allow for the generation of massive quantities of data. There is a growing need to transmit this data faster and more securely such that it cannot be accessed by malicious individuals. Edge computing has emerged in previous research as a method capable of improving data transmission times and security before the data ends up in the cloud. Edge computing has an impressive transmission speed based on fifth generation (5G) communication which transmits data with low latency and high bandwidth. While edge computing is sufficient to extract important features from the raw data to prevent large amounts of data requiring excessive bandwidth to be transmitted, cloud computing is used for the computational processes required for developing algorithms and modeling the data. Edge computing also improves the quality of the user experience by saving time and integrating quality of life (QoL) features. QoL features are important for the healthcare sector by helping to provide real-time feedback of data produced by healthcare devices back to patients for a faster recovery. Edge computing has better energy efficiency, can reduce the electricity cost, and in turn help people reduce their living expenses. This paper will take a detailed look into edge computing applications around Internet of Things (IoT) devices, smart city infrastructure, and benefits to healthcare.
Page 21 of 902888