As classical computing approaches its limits, quantum computing has emerged as a technology that has captured the imagination of the scientific world. For many years the execution of quantum algorithms was only a theoretical possibility, but recent hardware advances mean that devices now exist that can carry out quantum computation on a limited scale. Assessing the potential impact of quantum computers on real problems of interest is therefore both newly feasible and of central importance. One of the earliest and most compelling applications of quantum computers is Feynman's idea of simulating quantum systems with many degrees of freedom. Such systems are found across chemistry, physics, and materials science. The particular way in which quantum computing extends classical computing means that one cannot expect arbitrary simulations to be sped up by a quantum computer; one must therefore carefully identify areas where quantum advantage may be achieved. In this review, we briefly describe central problems in chemistry and materials science, in the areas of electronic structure, quantum statistical mechanics, and quantum dynamics, that are of potential interest for solution on a quantum computer. We then take a detailed snapshot of current progress in quantum algorithms for ground-state, dynamics, and thermal-state simulation and analyze their strengths and weaknesses for future developments.
Daniyaal Farooqi, Gavin Pu, Shreyasha Paudel
et al.
The rapid spread of AI has driven industry adoption aimed at improving efficiency and increasing earnings. A major consequence, however, is that AI is displacing employees from their jobs, fueling feelings of job insecurity and uncertainty. This is especially true for computer science students preparing to enter the workforce. To investigate this, we conducted semi-structured interviews with students (n = 25) across computer science undergraduate and graduate programs at the University of Toronto to determine the extent of job replacement anxiety. Thematic analysis showed that computer science students indeed experience stress and anxiety over the displacement of jobs by AI and adopt different strategies for managing this pressure. Subfields such as software engineering and web development are widely believed to be vulnerable to displacement, while specialized subfields such as quantum computing and AI research are deemed more secure. Many students feel compelled to upskill by using more AI technologies, taking AI courses, and specializing in AI through graduate school. Some students also reskill by pursuing other fields of study seen as less vulnerable to AI displacement. Finally, international students experience additional job replacement anxiety because of pressure to secure permanent residence. Implications of these findings include perceptions of low security in computer science careers, oversaturation of computer science students pursuing AI, and the potential dissuasion of future university students from pursuing computer science.
Abdulrahman M. Abdulghani, Azizol Abdullah, A. R. Rahiman
et al.
Modern Software-Defined Wide Area Networks (SD-WANs) require adaptive controller placement that simultaneously optimizes latency, load balancing, and fault tolerance. Traditional static approaches fail under dynamic network conditions with evolving traffic patterns and topology changes. This paper presents a novel hybrid framework integrating Gaussian Mixture Model (GMM) clustering with Multi-Agent Reinforcement Learning (MARL) for dynamic controller placement. The approach leverages probabilistic clustering for intelligent MARL initialization, reducing exploration requirements. Centralized Training with Decentralized Execution (CTDE) enables distributed optimization through cooperative agents. Experimental evaluation on real-world topologies demonstrates a noticeable reduction in latency, improved network balance, and significant computational efficiency gains over existing methods. Dynamic adaptation experiments confirm superior scalability during network changes. The hybrid architecture achieves linear scalability through problem decomposition while maintaining real-time responsiveness, establishing its practical viability.
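The clustering-for-initialization idea can be illustrated in a few lines. The sketch below uses plain k-means as a hard-assignment simplification of GMM (no covariances or mixing weights); the switch coordinates, seed, and the `kmeans` helper are purely illustrative. A real deployment would fit a full Gaussian mixture and feed the component means to the MARL agents as starting controller placements.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Cluster switch coordinates; centroids seed candidate controller sites."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # initial guesses from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # hard assignment to the nearest centroid (GMM would use soft weights)
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        centroids = [
            tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated groups of switches -> two candidate controller sites.
switches = [(0, 0), (0, 1), (1, 0), (1, 1),
            (10, 10), (10, 11), (11, 10), (11, 11)]
sites, _ = kmeans(switches, 2)
```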
This paper proposes a controlled signal technique for visible light non-orthogonal multiple access (VL-NOMA) communication in an interference-controlled environment with intelligent reflecting surfaces (IRS) for beyond-5G (B5G) and 6G communication networks. A light-emitting diode (LED) generates the carrier signal transmitted to two users (photodiodes, PDs), chosen for advantages such as its programmable nature and flexibility. The central challenge, which prompted this research, is how the signals can be controlled with an IRS approach. We use an IRS, a cutting-edge enabling technology that modifies signal reflection through numerous inexpensive passive reflecting elements to improve signal performance. Deep reinforcement learning (DRL) is deployed to control the reflected signals, simulate, make decisions, and link the LED, IRS, and PDs, redirecting the signals. After synchronizing the entire system, we investigate the bit error rate (BER) under line-of-sight (LOS) and non-line-of-sight (NLOS) conditions, placing a blocker at the center of the model to create an NLOS condition and examine how the transmitted signals perform. We observed that the propagated signal improved the BER under LOS conditions, whereas the NLOS blocker degraded the signal's performance. Finally, we optimized the signals and again investigated BER, LOS, and NLOS performance, finding that LOS signals performed better than NLOS signals.
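The power-domain NOMA part of such a system can be sketched without the IRS and DRL machinery. The noise-free toy below superposes two users' BPSK symbols with a fixed power split and decodes the near user via successive interference cancellation (SIC); the function names and power values are illustrative assumptions, not the paper's model.

```python
def noma_superpose(bit_far, bit_near, p_far=0.8, p_near=0.2):
    """Power-domain superposition: the far user gets the larger power share."""
    s_far = p_far ** 0.5 * (1 if bit_far else -1)
    s_near = p_near ** 0.5 * (1 if bit_near else -1)
    return s_far + s_near

def sic_decode_near(y, p_far=0.8, p_near=0.2):
    """Near user: detect and cancel the strong far-user symbol, then decode its own."""
    bit_far = y > 0                                   # strong component dominates the sign
    residual = y - p_far ** 0.5 * (1 if bit_far else -1)
    return bit_far, residual > 0
```

In the noise-free case the near user recovers both bits exactly; a BER study like the paper's would add channel gains and noise before the SIC stage.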
To ensure effective management of medical devices, it is imperative that they be safe and harmless and that their management be based on evidence. To help enhance the safety of medical devices, a new mechanism for their periodic compliance assessment has been developed. The mechanism involves assessing general safety, electrical safety, and performance parameters in line with international best practice. At the same time, effective management requires data and information on medical devices and their lifecycle events, which can be obtained through a medical device management information system. Establishing and implementing efficient management involves strengthening management capacities so as to respond to the current requirements of the health system and to ensure the functionality of medical devices and their safe and efficient use. Accordingly, implementing efficient management of medical devices is fundamental for providing high-quality, safe, and efficient devices, which in turn contributes to increasing the quality of medical services.
With the rapid development of information technology, new educational models using virtual reality technology have received widespread attention from researchers. In vocational education, colleges and training institutions can effectively mobilize students' learning initiative and improve their learning efficiency by using virtual reality technology. This study details the development process and evaluation of a bespoke virtual reality system that addresses the uncertain hazards, high teaching expenses, and spatial constraints inherent in practical training for elevator maintenance. By establishing a highly reproducible virtual environment and designing abundant interaction methods, the system helps students master the structural make-up of elevators, the principles of their operation, and the techniques involved in calibrating elevator governors. The system was tested by multiple users; satisfaction was ascertained through a questionnaire study, while effectiveness was evaluated with an independent-samples t-test on students' performance data. The results indicate that the system was widely praised by users and notably enhanced students' learning drive, practical abilities, and on-site adaptability.
Educational robots offer a platform for training aspiring engineers and building trust in technology that is envisioned to shape how we work and live. In education, accessibility and modularity weigh heavily in the choice of such a technological platform. To foster continuous development of the robots and to improve student engagement in the design and fabrication process, safe production methods with low accessibility barriers should be chosen. In this paper, we present Robotont 3, an open-source mobile robot that leverages Fused Deposition Modeling (FDM) 3D-printing for manufacturing the chassis and a single dedicated system board that can be ordered from online printed circuit board (PCB) assembly services. To promote accessibility, the project follows open hardware practices such as design transparency, permissive licensing, accessible manufacturing methods, and comprehensive documentation. Semantic Versioning was adopted to improve maintainability during development. Compared to earlier versions, Robotont 3 retains all of their technical capabilities while featuring an improved hardware setup that enhances ease of fabrication, assembly, and modularity. These improvements increase the accessibility, scalability, and flexibility of the platform in an educational setting.
Automated verification has become an essential part of the security evaluation of cryptographic protocols. In this context, privacy-type properties are often modelled by indistinguishability statements, expressed as behavioural equivalences in a process calculus. In this paper we contribute to both the theory and practice of this verification problem. We establish new complexity results for static equivalence, trace equivalence, and labelled bisimilarity, and provide a decision procedure for these equivalences in the case of a bounded number of protocol sessions. Our procedure is the first to decide trace equivalence and labelled bisimilarity exactly for a large variety of cryptographic primitives -- those that can be represented by a subterm convergent destructor rewrite system. We have also implemented the procedure in a new tool, DeepSec, and show through extensive experiments that it is significantly more efficient than similar tools while broadening the scope of the protocols that can be analysed.
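A subterm convergent destructor rule can be demonstrated with a toy rewriter. The sketch below normalizes terms under the classic symmetric-decryption rule dec(enc(m, k), k) → m; the tuple-based term encoding and function names are illustrative and unrelated to DeepSec's internal machinery.

```python
def normalize(term):
    """Reduce a term to normal form under dec(enc(m, k), k) -> m."""
    if isinstance(term, str):                     # atoms: names, keys, nonces
        return term
    head, *args = term
    args = [normalize(a) for a in args]           # normalize subterms first
    if head == "dec" and isinstance(args[0], tuple) and args[0][0] == "enc":
        cipher, key = args
        if cipher[2] == key:                      # decryption key matches encryption key
            return cipher[1]                      # the rule fires: return the plaintext
    return (head, *args)                          # no rule applies: rebuild the term
```

The rule's right-hand side is a strict subterm of its left-hand side, which is exactly the "subterm convergent" shape the decision procedure requires.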
We present a comprehensive study on the emergence of Computational Social Science (CSS) - an interdisciplinary field leveraging computational methods to address social science questions - and its impact on adjacent social sciences. We trained a robust CSS classifier using papers from CSS-focused venues and applied it to 11 million papers spanning 1990 to 2021. Our analysis yielded three key findings. First, there were two critical inflections in the rise of CSS. The first occurred around 2005 when psychology, politics, and sociology began engaging with CSS. The second emerged in approximately 2014 when economics finally joined the trend. Sociology is currently the most engaged with CSS. Second, using the density of yearly knowledge embeddings constructed by advanced transformer models, we observed that CSS initially lacked a cohesive identity. From the early 2000s to 2014, however, it began to form a distinct cluster, creating boundaries between CSS and other social sciences, particularly in politics and sociology. After 2014, these boundaries faded, and CSS increasingly blended with the social sciences. Third, shared data-driven methods homogenized CSS papers across disciplines, with politics and economics showing the most alignment due to the combined influence of CSS and causal identification. Nevertheless, non-CSS papers in sociology, psychology, and politics became more divergent. Taken together, these findings highlight the dynamics of division and unity as new disciplines emerge within existing knowledge landscapes. A live demo of CSS evolution can be found at https://evolution-css.netlify.app/
Lev Raskin, Larysa Sukhomlyn, Dmytro Sokolov
et al.
The object of research is the technical state of deteriorating systems whose operating conditions depend on a large number of interacting factors. The resulting inhomogeneity of the initial data sample on the technical state makes it impossible to correctly apply traditional methods of assessing the state of a system (methods based on the mathematical tools of regression analysis). The subject of research is the development of a method for constructing a regression polynomial from the results of processing a set of controlled system parameters. The non-linearity of the polynomial describing the evolution of the technical state of real systems increases the number of regression coefficients to be estimated, and the problem is further complicated by the growing number of factors affecting the technical state of the system. In these circumstances, the so-called "small sample effect" occurs. The goal of the research is to develop a method for constructing an approximation polynomial that describes the evolution of the system state when the volume of the initial data sample is insufficient for correctly estimating the polynomial's coefficients. The paper proposes a method for solving this problem based on a two-stage procedure. At the first stage, a functional description of the approximation polynomial's coefficients is constructed, which radically reduces the number of regression parameters to be estimated; this polynomial is used for a preliminary estimation of its coefficients in order to filter out insignificant factors and their interactions. At the second stage, the parameters of the truncated polynomial are estimated using standard methods of mathematical statistics. Two approaches to constructing the modified polynomial have been studied: an additive one and a multiplicative one.
It has been shown that the additive approach is, on average, an order of magnitude more effective than the multiplicative one.
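The two-stage idea can be sketched on a toy additive model: stage one fits the full polynomial and screens out terms with negligible coefficients, stage two refits only the retained terms. The magnitude threshold below is a crude stand-in for a proper significance test, and the model, data, and helper names are all illustrative assumptions.

```python
def lstsq(X, y):
    """Solve the normal equations (X^T X) b = X^T y by Gauss-Jordan elimination."""
    n = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(n)]
         + [sum(X[r][i] * y[r] for r in range(len(X)))] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))  # partial pivoting
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

def two_stage_fit(rows, y, threshold=0.05):
    # Stage 1: fit the full model y ~ b0 + b1*x1 + b2*x2 + b3*x1*x2.
    X = [[1.0, x1, x2, x1 * x2] for x1, x2 in rows]
    b = lstsq(X, y)
    # Screen out terms with negligible coefficients (crude significance proxy).
    keep = [i for i, c in enumerate(b) if abs(c) > threshold]
    # Stage 2: refit only the retained terms on the same small sample.
    Xt = [[row[i] for i in keep] for row in X]
    return keep, lstsq(Xt, y)
```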
The Partitioned Global Address Space (PGAS) library DASH provides C++ container classes for distributed N-dimensional structured grids. This article presents enhancements on top of the DASH library that support stencil operations and halo areas, allowing structured grids to be parallelized conveniently and efficiently. The improvements include definitions of multiple stencil operators, automatic derivation of halo sizes, efficient halo data exchanges, and communication-hiding optimizations. The main contributions of this article are two-fold. First, the halo abstraction concept and the halo wrapper software components are explained. Second, the code complexity and runtime of an example code implemented in DASH and in pure Message Passing Interface (MPI) are compared.
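The halo pattern that such a wrapper hides can be sketched without MPI or C++. Below, a 1-D grid is decomposed into blocks with one-cell ghost layers; halos are filled from neighbouring blocks (zero at the physical boundary) before a 3-point stencil update. The decomposition and all names are illustrative, not DASH's actual API.

```python
def exchange_halos(subgrids):
    """Fill the one-cell halos of each block from its neighbours (1-D decomposition)."""
    for i, g in enumerate(subgrids):
        g[0] = subgrids[i - 1][-2] if i > 0 else 0.0                      # left halo
        g[-1] = subgrids[i + 1][1] if i < len(subgrids) - 1 else 0.0      # right halo

def stencil_step(subgrids):
    """One Jacobi-style 3-point average over the interior cells of every block."""
    exchange_halos(subgrids)                       # communicate, then compute
    for g in subgrids:
        interior = [(g[j - 1] + g[j] + g[j + 1]) / 3 for j in range(1, len(g) - 1)]
        g[1:-1] = interior                         # halos stay untouched until next exchange
```

With real DASH or MPI, `exchange_halos` becomes asynchronous messages, which is what makes the communication-hiding optimizations mentioned above possible.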
Navaneethakrishna Makaram, Sarvagya Gupta, Matthew Pesce
et al.
In drug-resistant epilepsy, visual inspection of intracranial electroencephalography (iEEG) signals is often needed to localize the epileptogenic zone (EZ) and guide neurosurgery. Visual assessment of iEEG time-frequency (TF) images is an alternative to signal inspection, but subtle variations may escape the human eye. Here, we propose a deep learning-based metric of visual complexity to interpret TF images extracted from iEEG data and assess its ability to identify the EZ in the brain. We analyzed interictal iEEG data from 1928 contacts recorded in 20 children with drug-resistant epilepsy who became seizure-free after neurosurgery. We localized each iEEG contact in the MRI, created TF images (1–70 Hz) for each contact, and used a pre-trained VGG16 network to measure their visual complexity by extracting unsupervised activation energy (UAE) from 13 convolutional layers. We identified points of interest in the brain from the UAE values via patient- and layer-specific thresholds (based on an extreme value distribution) and using a support vector machine classifier. Results show that contacts inside the seizure onset zone exhibit lower UAE than those outside, with larger differences in the deep layers (L10, L12, and L13: p < 0.001). Furthermore, the points of interest identified by the support vector machine localized the EZ with 7 mm accuracy. In conclusion, we present a pre-surgical computerized tool that facilitates EZ localization in the patient's MRI without requiring long-term iEEG inspection.
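The paper's exact UAE definition is not reproduced here, but the general idea of a per-layer activation energy can be sketched: treat one convolutional layer's output as a stack of 2-D channel maps and reduce it to a single scalar per layer. The sketch below assumes mean squared activation as the energy; in practice the maps would come from VGG16's 13 convolutional layers applied to a TF image, and the `activation_energy` name is illustrative.

```python
def activation_energy(feature_maps):
    """Mean squared activation over all channels of one conv layer's output.

    feature_maps: list of 2-D channel maps (nested lists of floats),
    standing in for one layer's activation tensor.
    """
    total, count = 0.0, 0
    for channel in feature_maps:
        for row in channel:
            total += sum(v * v for v in row)   # accumulate squared activations
            count += len(row)
    return total / count                        # one scalar per layer
```

Repeating this for every layer yields a 13-value complexity profile per contact, to which patient- and layer-specific thresholds can then be applied.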
GeoAI, or geospatial artificial intelligence, is an exciting new area that leverages artificial intelligence (AI), geospatial big data, and massive computing power to solve problems with high automation and intelligence. This paper reviews the progress of AI in social science research, highlighting important advances in using GeoAI to fill critical data and knowledge gaps. It also discusses the importance of breaking down data silos, accelerating convergence among GeoAI research methods, and extending GeoAI's benefits beyond the geospatial domain.
The advantages of quantum computers are believed to significantly change the research paradigm of chemical and materials sciences, where computational characterization and theoretical design play an increasingly important role. It is especially desirable to solve the electronic structure problem, a central problem in chemistry and materials science, efficiently and accurately with well-designed quantum algorithms. Various quantum electronic-structure algorithms have been proposed in the literature. In this article, we briefly review recent progress in this direction with a special emphasis on the basis sets and boundary conditions. Compared to classical electronic structure calculations, there are new considerations in choosing a basis set in quantum algorithms. For example, the effect of the basis set on the circuit complexity is very important in quantum algorithm design. Electronic structure calculations should be performed with an appropriate boundary condition. Simply using a wave function ansatz designed for molecular systems in a material system with a periodic boundary condition may lead to significant errors. Artificial boundary conditions can be used to partition a large system into smaller fragments to save quantum resources. The basis sets and boundary conditions are expected to play a crucial role in electronic structure calculations on future quantum computers, especially for realistic systems.
Human-computer interaction and computer-mediated behavioral psychology research studies often rely on capturing user interaction data to characterize online behaviors. IRB considerations, site policies, and/or security and privacy concerns may force researchers to use screenshots or offline copies of pages of interest, instead of live websites, in their study designs. These interaction modalities reduce the fidelity and contextual realism of web content and often affect interface aesthetic quality – due to broken links, missing images, and/or malfunctioning scripts. StudySandboxx is a tool that allows websites to be saved exactly as they appear online. The tool sandboxes websites in a way that removes dangerous scripts that threaten privacy and security. Saved websites are encapsulated into a single portable file that contains all related website resources. Finally, the tool also supports certain types of permutations commonly used in research – such as changing links in a page. The project is housed within a GitHub repository at https://github.com/gewethor/study-sandbox.