Results for "Computer engineering. Computer hardware"

Showing 19 of ~8,513,910 results · from DOAJ, arXiv, CrossRef, Semantic Scholar

S2 Open Access 2021
Materials challenges and opportunities for quantum computing hardware

Nathalie P. de Leon, K. Itoh, Dohun Kim et al.

Combating noise on the platform: The potential of quantum computers to solve problems that are intractable for classical computers has driven advances in hardware fabrication. In practice, the main challenge in realizing quantum computers is that general, many-particle quantum states are highly sensitive to noise, which inevitably causes errors in quantum algorithms. Some noise sources are inherent to the current materials platforms. de Leon et al. review some of the materials challenges for five platforms for quantum computers and propose directions for their solution. Science, this issue p. eabb2823

BACKGROUND: The past two decades have seen intense efforts aimed at building quantum computing hardware with the potential to solve problems that are intractable on classical computers. Several hardware platforms for quantum information processing (QIP) are under active development. To realize large-scale systems based on these technologies, we must achieve error rates much lower than have been demonstrated thus far in a scalable platform, or devise a new platform entirely. These activities will require major advances in materials science and engineering, new fabrication and synthesis techniques, and new measurement and materials analysis techniques. We identify key materials challenges that currently limit progress in five quantum computing hardware platforms, propose how to tackle these problems, and discuss some new areas for exploration. Addressing these materials challenges will necessitate interdisciplinary approaches from scientists and engineers beyond the current boundaries of the quantum computing field.

ADVANCES: This Review constitutes a roadmap of the current challenges and opportunities for materials science in quantum information processing. We provide a comprehensive review of materials issues in each physical platform by describing the evidence that has led to the current understanding of each problem.
For each platform, we present reasons for particular material choices, survey the current understanding of sources of noise and dissipation, describe materials limitations to scaling, and discuss potential new material platforms. Despite major differences among physical implementations in each hardware technology, there are several common themes: material selection is driven by heterogeneity, impurities, and defects in available materials; poorly controlled and characterized surfaces lead to noise and dissipation beyond limits imposed by bulk properties; and scaling to larger systems gives rise to new materials problems that are not evident in single-qubit measurements.

OUTLOOK: We identify three principal materials research frontiers of interest in this context. First, understanding the microscopic mechanisms that lead to noise, loss, and decoherence is crucial. This would be accelerated by developing high-throughput methods for correlating qubit measurement with direct materials spectroscopy and characterization. Second, relatively few material platforms for solid-state QIP have been explored thus far, and the discovery of a new platform is often serendipitous. It is thus important to develop materials discovery pipelines that exploit directed, rational material searches in concert with high-throughput characterization approaches aimed at rapid screening for properties relevant to QIP. Third, there are several materials issues that do not affect single-qubit operations but appear as limitations in scaling to larger systems. Many problems faced by these platforms are reminiscent of some that have been addressed over the past five decades for complementary metal-oxide semiconductor electronics and other areas of the semiconductor industry, and approaches and solutions adopted by that industry may be applicable to QIP platforms.
Materials issues will be critical to address in the coming years as we transition from noisy intermediate-scale systems to large-scale, fault-tolerant systems. Quantum computing began as a fundamentally interdisciplinary effort involving computer science, information science, and quantum physics; the time is now ripe for expanding the field by including new collaborations and partnerships with materials science.

Five quantum computing hardware platforms. From top left: optical image of an IBM superconducting qubit processor (inset: cartoon of a Josephson junction); SEM image of gate-defined semiconductor quantum dots (inset: cartoon depicting the confining potential); ultraviolet photoluminescence image showing emission from color centers in diamond (inset: atomistic model of defects); picture of a surface-electrode ion trap (inset: cartoon of ions confined above the surface); false-colored SEM image of a hybrid semiconductor/superconductor [inset: cartoon of an epitaxial superconducting Al shell (blue) on a faceted semiconducting InAs nanowire (orange)]. CREDIT: IBM image, CC BY-ND 2.0; SEM image courtesy of S. Neyens and M. A. Eriksson; photoluminescence image courtesy of N. P. de Leon; false-colored SEM image courtesy of C. Marcus, P. Krogstrup, and D. Razmadze

Quantum computing hardware technologies have advanced during the past two decades, with the goal of building systems that can solve problems that are intractable on classical computers. The ability to realize large-scale systems depends on major advances in materials science, materials engineering, and new fabrication techniques. We identify key materials challenges that currently limit progress in five quantum computing hardware platforms, propose how to tackle these problems, and discuss some new areas for exploration.
Addressing these materials challenges will require scientists and engineers to work together to create new, interdisciplinary approaches beyond the current boundaries of the quantum computing field.

606 citations · en · Medicine
S2 Open Access 2020
There’s plenty of room at the Top: What will drive computer performance after Moore’s law?

C. Leiserson, Neil C. Thompson, J. Emer et al.

From bottom to top: The doubling of the number of transistors on a chip every 2 years, a seemingly inevitable trend that has been called Moore's law, has contributed immensely to improvements in computer performance. However, silicon-based transistors cannot get much smaller than they are today, and other approaches should be explored to keep performance growing. Leiserson et al. review recent examples and argue that the most promising place to look is at the top of the computing stack, where improvements in software, algorithms, and hardware architecture can bring the much-needed boost. Science, this issue p. eaam9744

BACKGROUND: Improvements in computing power can claim a large share of the credit for many of the things that we take for granted in our modern lives: cellphones that are more powerful than room-sized computers from 25 years ago, internet access for nearly half the world, and drug discoveries enabled by powerful supercomputers. Society has come to rely on computers whose performance increases exponentially over time. Much of the improvement in computer performance comes from decades of miniaturization of computer components, a trend that was foreseen by the Nobel Prize–winning physicist Richard Feynman in his 1959 address, “There’s Plenty of Room at the Bottom,” to the American Physical Society. In 1975, Intel founder Gordon Moore predicted the regularity of this miniaturization trend, now called Moore’s law, which, until recently, doubled the number of transistors on computer chips every 2 years. Unfortunately, semiconductor miniaturization is running out of steam as a viable way to grow computer performance—there isn’t much more room at the “Bottom.” If growth in computing power stalls, practically all industries will face challenges to their productivity. Nevertheless, opportunities for growth in computing performance will still be available, especially at the “Top” of the computing-technology stack: software, algorithms, and hardware architecture.
ADVANCES: Software can be made more efficient by performance engineering: restructuring software to make it run faster. Performance engineering can remove inefficiencies in programs, known as software bloat, arising from traditional software-development strategies that aim to minimize an application’s development time rather than the time it takes to run. Performance engineering can also tailor software to the hardware on which it runs, for example, to take advantage of parallel processors and vector units.

Algorithms offer more-efficient ways to solve problems. Indeed, since the late 1970s, the time to solve the maximum-flow problem improved nearly as much from algorithmic advances as from hardware speedups. But progress on a given algorithmic problem occurs unevenly and sporadically and must ultimately face diminishing returns. As such, we see the biggest benefits coming from algorithms for new problem domains (e.g., machine learning) and from developing new theoretical machine models that better reflect emerging hardware.

Hardware architectures can be streamlined—for instance, through processor simplification, where a complex processing core is replaced with a simpler core that requires fewer transistors. The freed-up transistor budget can then be redeployed in other ways—for example, by increasing the number of processor cores running in parallel, which can lead to large efficiency gains for problems that can exploit parallelism. Another form of streamlining is domain specialization, where hardware is customized for a particular application domain. This type of specialization jettisons processor functionality that is not needed for the domain. It can also allow more customization to the specific characteristics of the domain, for instance, by decreasing floating-point precision for machine-learning applications.
In the post-Moore era, performance improvements from software, algorithms, and hardware architecture will increasingly require concurrent changes across other levels of the stack. These changes will be easier to implement, from engineering-management and economic points of view, if they occur within big system components: reusable software with typically more than a million lines of code or hardware of comparable complexity. When a single organization or company controls a big component, modularity can be more easily reengineered to obtain performance gains. Moreover, costs and benefits can be pooled so that important but costly changes in one part of the big component can be justified by benefits elsewhere in the same component.

OUTLOOK: As miniaturization wanes, the silicon-fabrication improvements at the Bottom will no longer provide the predictable, broad-based gains in computer performance that society has enjoyed for more than 50 years. Software performance engineering, development of algorithms, and hardware streamlining at the Top can continue to make computer applications faster in the post-Moore era. Unlike the historical gains at the Bottom, however, gains at the Top will be opportunistic, uneven, and sporadic. Moreover, they will be subject to diminishing returns as specific computations become better explored.

Performance gains after Moore’s law ends. In the post-Moore era, improvements in computing power will increasingly come from technologies at the “Top” of the computing stack, not from those at the “Bottom”, reversing the historical trend. CREDIT: N. Cary/Science

The miniaturization of semiconductor transistors has driven the growth in computer performance for more than 50 years. As miniaturization approaches its limits, bringing an end to Moore’s law, performance gains will need to come from software, algorithms, and hardware.
We refer to these technologies as the “Top” of the computing stack to distinguish them from the traditional technologies at the “Bottom”: semiconductor physics and silicon-fabrication technology. In the post-Moore era, the Top will provide substantial performance gains, but these gains will be opportunistic, uneven, and sporadic, and they will suffer from the law of diminishing returns. Big system components offer a promising context for tackling the challenges of working at the Top.
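The algorithmic gains the review credits with rivaling hardware speedups can be illustrated with a toy sketch (not from the paper): the same question answered by a quadratic-time pairwise scan and by a linear-time hash-set pass.

```python
def has_duplicate_naive(xs):
    # O(n^2): compare every pair of elements
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return True
    return False

def has_duplicate_fast(xs):
    # O(n): a hash set trades memory for time, an algorithmic
    # improvement independent of any transistor shrink
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False

data = list(range(5000)) + [4999]  # contains exactly one duplicate
assert has_duplicate_naive(data) == has_duplicate_fast(data) == True
```

On large inputs the second version wins by orders of magnitude, which is the kind of "Top of the stack" gain the authors have in mind.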

392 citations · en · Computer Science, Medicine
DOAJ Open Access 2025
Practices, Process Stages and Examples of an Extreme Programming Proposal in a Playable Mode

Victor Travassos Sarinho

Background: There are several studies focused on identifying and defining gamification strategies in software development processes. These strategies are also applied by agile methods, which can create a context of recognition and reward for the completion of activities in a software project. Purpose: This paper presents a reinterpretation of the Extreme Programming (XP) practices and process stages in order to provide a “playable mode” for XP development. Methods: XP practices and process stages are linked to terms and activities applied in digital games, enabling a reinterpretation from a playable and gamified perspective. Results: Gamified XP practices and process stages are explained and exemplified, demonstrating the feasibility of the proposed gamified reinterpretation for XP software development. Conclusion: A software development methodology based on agile gameplays, obtained by the XP reinterpretation, was proposed, offering a possible way to improve the flow state of XP developers.

Computer software, Computer engineering. Computer hardware
arXiv Open Access 2025
Lightweight Social Computing Tools for Undergraduate Research Community Building

Noel Chacko, Hannah Vy Nguyen, Sophie Chen et al.

Many barriers exist when new members join a research community, including impostor syndrome. These barriers can be especially challenging for undergraduate students who are new to research. In our work, we explore how the use of social computing tools in the form of spontaneous online social networks (SOSNs) can be used in small research communities to improve sense of belonging, peripheral awareness, and feelings of togetherness within an existing CS research community. Inspired by SOSNs such as BeReal, we integrated a Wizard-of-Oz photo sharing bot into a computing research lab to foster community building among members. Through a small sample of lab members (N = 17) over the course of 2 weeks, we observed an increase in participants' sense of togetherness based on pre- and post-study surveys. Our surveys and semi-structured interviews revealed that this approach has the potential to increase awareness of peers' personal lives, increase feelings of community, and reduce feelings of disconnectedness.

arXiv Open Access 2025
Performance of a high-order MPI-Kokkos accelerated fluid solver

Filipp Sporykhin, Holger Homann

This work discusses the performance of a modern numerical scheme for fluid dynamical problems on modern high-performance computing architectures. Our code implements a spatial nodal discontinuous Galerkin scheme that we test up to an order of convergence of eight. It is temporally coupled to a set of Runge-Kutta methods of orders up to six. The code integrates the linear advection equations as well as the isothermal Euler equations in one, two, and three dimensions. In order to target modern hardware involving many-core Central Processing Units and accelerators such as Graphics Processing Units, we use the Kokkos library in conjunction with the Message Passing Interface to run our single source code on various GPU systems. We find that the higher the order, the faster the code runs: eighth-order simulations attain a given global error with much less computing time than third- or fourth-order simulations. The RK scheme has a smaller impact on the code performance, and a classical fourth-order scheme seems to generally be a good choice. The code performs very well on all considered GPUs. The many-CPU performance is also very good, and perfect weak scaling is observed up to many hundreds of CPU cores using MPI. We note that small grid-size simulations are faster on CPUs than on GPUs, while GPUs win significantly over CPUs for simulations involving more than $10^7$ degrees of freedom ($\approx 3100^2$ grid points). When it comes to the environmental impact of numerical simulations, we estimate that GPUs consume less energy than CPUs for large grid-size simulations but more energy on small grids. We observe a tendency that the more modern the GPU, the larger the grid needs to be in order to use it efficiently. This yields a rebound effect, because larger simulations need longer computing times and in turn more energy, which is not compensated by the energy-efficiency gain of the newer GPUs.
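The order-of-convergence claims above rest on a standard diagnostic: for a scheme of order p, the global error scales as h^p, so two errors at different resolutions determine p. A generic sketch (not the authors' code, errors hypothetical):

```python
import math

def observed_order(err_coarse, err_fine, refinement=2.0):
    # err ~ C * h**p  =>  p = log(err_coarse / err_fine) / log(refinement)
    return math.log(err_coarse / err_fine) / math.log(refinement)

# Hypothetical errors: halving h with an 8th-order scheme should
# shrink the error by roughly 2**8 = 256.
p = observed_order(1.0e-4, 1.0e-4 / 256.0)
print(round(p, 3))  # -> 8.0
```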

arXiv Open Access 2025
Multivariate Time Series Forecasting with Gate-Based Quantum Reservoir Computing on NISQ Hardware

Wissal Hamhoum, Soumaya Cherkaoui, Jean-Frederic Laprade et al.

Quantum reservoir computing (QRC) offers a hardware-friendly approach to temporal learning, yet most studies target univariate signals and overlook near-term hardware constraints. This work introduces a gate-based QRC for multivariate time series (MTS-QRC) that pairs injection and memory qubits and uses a Trotterized nearest-neighbor transverse-field Ising evolution optimized for current device connectivity and depth. On Lorenz-63 and ENSO, the method achieves a mean square error (MSE) of 0.0087 and 0.0036, respectively, performing on par with classical reservoir computing on Lorenz and above learned RNNs on both, while NVAR and clustered ESN remain stronger on some settings. On IBM Heron R2, MTS-QRC sustains accuracy with realistic depths and, interestingly, outperforms a noiseless simulator on ENSO; singular value analysis indicates that device noise can concentrate variance in feature directions, acting as an implicit regularizer for linear readout in this regime. These findings support the practicality of gate-based QRC for MTS forecasting on NISQ hardware and motivate systematic studies on when and how hardware noise benefits QRC readouts.

en cs.LG, cs.ET
DOAJ Open Access 2024
Grape Instance De-Overlapping Occlusion Algorithm Based on Self-Supervised Learning

Mei ZENG, Yihan WANG, Zhiwei LEI, Xueyin LIU, Bailin LI

Conventional random occlusion algorithms used in generating synthetic occluded grape images often lead to data distortion, potentially rendering grape occlusion prediction ineffective. Therefore, this study proposes an occlusion data synthesis method suitable for grape occlusion prediction and further introduces a self-supervised grape instance de-occlusion prediction algorithm. During data synthesis, the proposed algorithm employs a proximity-based occlusion strategy to replace random occlusion methods for synthesizing different occluded instances from complete grape instances. Prior to the synthesis process, various preprocessing mechanisms are employed to control the sizes of mutually occluding grape instances, ensuring that the synthesized occluded grapes align with real-world conditions without distortion. Subsequently, the proposed approach splits occlusion prediction into mask reconstruction and semantic inpainting components. The study selects the corresponding synthetic data to train a generic Unet-based mask reconstruction network and a semantic inpainting network. To address the inability to predict complete instances owing to the limitations of instance segmentation cropping sizes, our algorithm fully considers both the occluded and occluder instances during data synthesis. The study introduces corresponding reconstruction and inpainting functions. In the occlusion prediction phase, an instance segmentation network, Pointrend, trained on an open-source architecture, the proposed mask reconstruction network, and a semantic inpainting network are sequentially applied to predict occluded grapes. When applied to the collected occlusion estimation dataset, the proposed algorithm achieves an Intersection-over-Union (IoU) value of 81.16% between the predicted occluded grape masks and ground truth annotations, outperforming other comparative methods. 
Experimental results demonstrate that the proposed synthesis algorithm and reconstruction framework are effective for grape occlusion prediction.
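The Intersection-over-Union figure reported above is a standard mask-overlap metric; a minimal sketch over flat binary masks (an illustration, not the authors' implementation):

```python
def iou(mask_a, mask_b):
    # Intersection-over-Union of two binary masks given as flat 0/1 lists
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0

pred  = [1, 1, 1, 0, 0]
truth = [0, 1, 1, 1, 0]
print(iou(pred, truth))  # -> 0.5 (2 shared pixels / 4 covered pixels)
```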

Computer engineering. Computer hardware, Computer software
DOAJ Open Access 2024
Machine learning inspired models for Hall effects in non-collinear magnets

Jonathan Kipp, Fabian R Lux, Thorben Pürling et al.

The anomalous Hall effect has been front and center in solid-state research and material science for over a century now, and the complex transport phenomena in nontrivial magnetic textures have gained an increasing amount of attention, both in theoretical and experimental studies. However, a clear path forward to capturing the influence of magnetization dynamics on the anomalous Hall effect even in the smallest frustrated magnets or spatially extended magnetic textures is still intensively sought after. In this work, we present an expansion of the anomalous Hall tensor into symmetrically invariant objects, encoding the magnetic configuration up to arbitrary power of spin. We show that these symmetric invariants can be utilized in conjunction with advanced regularization techniques in order to build models for the electric transport in magnetic textures which are, on one hand, complete with respect to the point group symmetry of the underlying lattice, and on the other hand, depend on a minimal number of order parameters only. Here, using a four-band tight-binding model on a honeycomb lattice, we demonstrate that the developed method can be used to address the importance and properties of higher-order contributions to transverse transport. The efficiency and breadth enabled by this method provides an ideal systematic approach to tackle the inherent complexity of response properties of noncollinear magnets, paving the way to the exploration of electric transport in intrinsically frustrated magnets as well as large-scale magnetic textures.

Computer engineering. Computer hardware, Electronic computers. Computer science
DOAJ Open Access 2024
Human Errors in the Inspection of Hydrogen Refueling Stations: a Bayesian Network Approach

Alessandro Campari, Antonio Javier Nakhal Akel, Leonardo Giannini et al.

The widespread use of hydrogen as an energy carrier for road transport and industrial applications was indicated as a promising solution for reducing pollutant emissions. The high flammability of this substance and its tendency to permeate and embrittle most structural materials make hydrogen handling and storage inherently challenging. Hence, inspection and maintenance activities are essential to guarantee the components' integrity and fitness for service. However, guidelines for inspecting and maintaining hydrogen refueling stations are still under development. The manufacturer is responsible for indicating the optimal inspection procedures for each facility. The lack of a unified regulatory framework and the limited operational experience with these technologies make human errors a potential cause of undesired events. In this context, the study evaluates the probability of human error during the high-pressure storage system inspection procedures in hydrogen refueling stations. The Petro-HRA methodology has been used to quantify the likelihood of unsafe or inappropriate actions. In addition, a Bayesian Network approach is proposed to investigate the conditional dependencies among human errors and performance shaping factors. The critical analysis of the results allowed the authors to provide recommendations regarding safety procedures that operators can adopt to reduce the likelihood of accidents in the hydrogen industry.

Chemical engineering, Computer engineering. Computer hardware
DOAJ Open Access 2024
Extending Multi-Output Methods for Long-Term Aboveground Biomass Time Series Forecasting Using Convolutional Neural Networks

Efrain Noa-Yarasca, Javier M. Osorio Leyton, Jay P. Angerer

Accurate aboveground vegetation biomass forecasting is essential for livestock management, climate impact assessments, and ecosystem health. While artificial intelligence (AI) techniques have advanced time series forecasting, a research gap in predicting aboveground biomass time series beyond single values persists. This study introduces RECMO and DirRecMO, two multi-output methods for forecasting aboveground vegetation biomass. Using convolutional neural networks, their efficacy is evaluated across short-, medium-, and long-term horizons on six Kenyan grassland biomass datasets, and compared with that of existing single-output methods (Recursive, Direct, and DirRec) and multi-output methods (MIMO and DIRMO). The results indicate that single-output methods are superior for short-term predictions, while both single-output and multi-output methods exhibit a comparable effectiveness in long-term forecasts. RECMO and DirRecMO outperform established multi-output methods, demonstrating a promising potential for biomass forecasting. This study underscores the significant impact of multi-output size on forecast accuracy, highlighting the need for optimal size adjustments and showcasing the proposed methods’ flexibility in long-term forecasts. Short-term predictions show less significant differences among methods, complicating the identification of the best performer. However, clear distinctions emerge in medium- and long-term forecasts, underscoring the greater importance of method choice for long-term predictions. Moreover, as the forecast horizon extends, errors escalate across all methods, reflecting the challenges of predicting distant future periods. This study suggests advancing hybrid models (e.g., RECMO and DirRecMO) to improve extended horizon forecasting. Future research should enhance adaptability, investigate multi-output impacts, and conduct comparative studies across diverse domains, datasets, and AI algorithms for robust insights.
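The single-output strategies compared above differ in how a one-step model is reused across the forecast horizon; a minimal sketch of the Recursive strategy, with a hypothetical toy model standing in for the trained network:

```python
def recursive_forecast(history, one_step_model, horizon):
    # Recursive strategy: apply a one-step-ahead model repeatedly,
    # feeding each prediction back in as the newest observation.
    window = list(history)
    preds = []
    for _ in range(horizon):
        yhat = one_step_model(window)
        preds.append(yhat)
        window = window[1:] + [yhat]  # slide the input window forward
    return preds

# Hypothetical toy model: continue the last observed difference.
trend = lambda w: w[-1] + (w[-1] - w[-2])
print(recursive_forecast([10, 20, 30], trend, 3))  # -> [40, 50, 60]
```

By contrast, the Direct strategy trains a separate model per lead time, and multi-output methods such as MIMO (and the proposed RECMO/DirRecMO hybrids) emit a vector of future values at once, avoiding the error feedback visible in the loop above.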

Computer engineering. Computer hardware
arXiv Open Access 2024
Efficient and Green Large Language Models for Software Engineering: Literature Review, Vision, and the Road Ahead

Jieke Shi, Zhou Yang, David Lo

Large Language Models (LLMs) have recently shown remarkable capabilities in various software engineering tasks, spurring the rapid growth of the Large Language Models for Software Engineering (LLM4SE) area. However, limited attention has been paid to developing efficient LLM4SE techniques that demand minimal computational cost, time, and memory resources, as well as green LLM4SE solutions that reduce energy consumption, water usage, and carbon emissions. This paper aims to redirect the focus of the research community towards the efficiency and greenness of LLM4SE, while also sharing potential research directions to achieve this goal. It commences with a brief overview of the significance of LLM4SE and highlights the need for efficient and green LLM4SE solutions. Subsequently, the paper presents a vision for a future where efficient and green LLM4SE revolutionizes the LLM-based software engineering tool landscape, benefiting various stakeholders, including industry, individual practitioners, and society. The paper then delineates a roadmap for future research, outlining specific research paths and potential solutions for the research community to pursue. While not intended to be a definitive guide, the paper aims to inspire further progress, with the ultimate goal of establishing efficient and green LLM4SE as a central element in the future of software engineering.

en cs.SE
arXiv Open Access 2024
Diffusion-Enhanced Test-time Adaptation with Text and Image Augmentation

Chun-Mei Feng, Yuanyang He, Jian Zou et al.

Existing test-time prompt tuning (TPT) methods focus on single-modality data, primarily enhancing images and using confidence ratings to filter out inaccurate images. However, while image generation models can produce visually diverse images, single-modality data enhancement techniques still fail to capture the comprehensive knowledge provided by different modalities. Additionally, we note that the performance of TPT-based methods drops significantly when the number of augmented images is limited, which is not unusual given the computational expense of generative augmentation. To address these issues, we introduce IT3A, a novel test-time adaptation method that utilizes a pre-trained generative model for multi-modal augmentation of each test sample from unknown new domains. By combining augmented data from pre-trained vision and language models, we enhance the ability of the model to adapt to unknown new test data. Additionally, to ensure that key semantics are accurately retained when generating various visual and text enhancements, we employ cosine similarity filtering between the logits of the enhanced images and text with the original test data. This process allows us to filter out some spurious augmentations and inadequate combinations. To leverage the diverse enhancements provided by the generation model across different modalities, we have replaced prompt tuning with an adapter for greater flexibility in utilizing text templates. Our experiments on the test datasets with distribution shifts and domain gaps show that in a zero-shot setting, IT3A outperforms state-of-the-art test-time prompt tuning methods with a 5.50% increase in accuracy.
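The cosine-similarity filtering step described above can be sketched generically (the threshold and logit values here are hypothetical, not from the paper):

```python
import math

def cosine(u, v):
    # cosine similarity between two logit vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def keep_augmentations(orig_logits, augmented, threshold=0.8):
    # Discard augmented views whose logits drift too far from the
    # original test sample's logits (spurious augmentations).
    return [a for a in augmented if cosine(orig_logits, a) >= threshold]

orig = [2.0, 0.5, 0.1]
augs = [[1.9, 0.6, 0.2],   # close to the original: kept
        [0.1, 2.0, 0.1]]   # semantics flipped: dropped
print(len(keep_augmentations(orig, augs)))  # -> 1
```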

en cs.CV
arXiv Open Access 2024
Quantum Software Engineering: Roadmap and Challenges Ahead

Juan M. Murillo, Jose Garcia-Alonso, Enrique Moguel et al.

As quantum computers advance, the complexity of the software they can execute increases as well. To ensure this software is efficient, maintainable, reusable, and cost-effective (key qualities of any industry-grade software), mature software engineering practices must be applied throughout its design, development, and operation. However, the significant differences between classical and quantum software make it challenging to directly apply classical software engineering methods to quantum systems. This challenge has led to the emergence of Quantum Software Engineering as a distinct field within the broader software engineering landscape. In this work, a group of active researchers analyse in depth the current state of quantum software engineering research. From this analysis, the key areas of quantum software engineering are identified and explored in order to determine the most relevant open challenges that should be addressed in the next years. These challenges help identify necessary breakthroughs and future research directions for advancing Quantum Software Engineering.

S2 Open Access 2020
Software tools for quantum control: improving quantum computer performance through noise and error suppression

H. Ball, M. Biercuk, A. Carvalho et al.

Effectively manipulating quantum computing (QC) hardware in the presence of imperfect devices and control systems is a central challenge in realizing useful quantum computers. Susceptibility to noise critically limits the performance and capabilities of today’s so-called noisy intermediate-scale quantum devices, as well as any future QC technologies. Fortunately, quantum control enables efficient execution of quantum logic operations and quantum algorithms with built-in robustness to errors, and without the need for complex logical encoding. In this manuscript we introduce software tools for the application and integration of quantum control in QC research, serving the needs of hardware R&D teams, algorithm developers, and end users. We provide an overview of a set of Python-based classical software tools for creating and deploying optimized quantum control solutions at various layers of the QC software stack. We describe a software architecture leveraging both high-performance distributed cloud computation and local custom integration into hardware systems, and explain how key functionality is integrable with other software packages and quantum programming languages. Our presentation includes a detailed mathematical overview of key features including a flexible optimization toolkit, engineering-inspired filter functions for analyzing noise susceptibility in high-dimensional Hilbert spaces, and new approaches to noise and hardware characterization. Pseudocode is presented in order to elucidate common programming workflows for these tasks, and performance benchmarking is reported for numerically intensive tasks, highlighting the benefits of the selected cloud-compute architecture. Finally, we present a series of case studies demonstrating the application of quantum control solutions derived from these tools in real experimental settings using both trapped-ion and superconducting quantum computer hardware.

110 citations en Computer Science, Physics
DOAJ Open Access 2023
Forms of organization of the teaching-learning process used in higher education

Raquel Vera Velázquez, Washington Narváez Campana, Kirenia Maldonado Zúñiga et al.

The research was carried out at the Universidad Estatal del Sur de Manabí, km 1,5 vía Novoa, in the Agricultural Sciences program, during the Scientific-Methodological Seminar. The objective of the research was to conduct a bibliographic study of research works, articles, books, undergraduate theses, and bibliographies on the forms of organization of the teaching-learning process in higher education, in order to improve the quality of the educational process. Bibliographic review methods, important for the theoretical design of the research, were used. A summary was compiled of the forms of organization of the teaching-learning process most used by instructors, the result of a departmental exchange held during the scientific-methodological seminar of the PII 2022 academic period, in which interdisciplinary relationships were developed between the different subjects and important criteria were contributed to the development of the educational process in general. It was concluded that the success of these forms of organizing instruction will depend on the creativity of students and teachers, as well as on their preparation and self-preparation, which will foster the quality of the teaching-learning process and the consistent application of the knowledge, habits, and skills of future professionals.

Computer engineering. Computer hardware
arXiv Open Access 2023
LGBTQIA+ (In)Visibility in Computer Science and Software Engineering Education

Ronnie de Souza Santos, Brody Stuart-Verner, Cleyton de Magalhaes

Modern society is diverse, multicultural, and multifaceted. Because of these characteristics, we are currently observing an increase in debates about equity, diversity, and inclusion in different areas, especially because several groups of individuals are underrepresented in many environments. In computer science and software engineering, it seems counterintuitive that these areas, which are responsible for creating technological solutions and systems for billions of users around the world, do not reflect the diversity of the society they serve. In trying to solve this diversity crisis in the software industry, researchers have started to investigate strategies that can be applied to increase diversity and improve inclusion in academia and the software industry. However, the lack of diversity in computer science and related courses, including software engineering, is still a problem, in particular when some specific groups are considered. LGBTQIA+ students, for instance, face several challenges fitting into technology courses, even though most students in universities right now belong to Generation Z, which is described as open-minded about aspects of gender and sexuality. In this study, we aimed to discuss the state of the art of publications about the inclusion of LGBTQIA+ students in computer science education. Using a mapping study, we identified eight studies published in the past six years that focused on this group. We present strategies developed to adapt curricula and lectures to be more inclusive of LGBTQIA+ students and discuss challenges and opportunities for future research.

en cs.SE

Page 37 of 425696