Classical supply chain risk models treat node failures as statistically independent events, systematically underestimating cascade probabilities when supplier dependencies are strongly correlated. At n = 40 nodes, the full correlated failure distribution requires O(2^n) classical samples, a regime where exact simulation demands 17.6 TB of memory and over 369,000 hours of computation on a standard workstation. We present QR-SPPS (Quantum-Native Retail Shock Propagation and Policy Stress Simulator), a pipeline of three quantum algorithms implemented with the Qiskit framework and the Aer statevector_simulator backend. A 40-node, four-tier retail supply network is encoded as a 40-qubit Ising Hamiltonian using the OpenFermion QubitOperator, whose ZZ coupling terms capture correlated cascade probabilities that are structurally absent from classical Monte Carlo. First, a hardware-efficient VQE circuit finds the ground-state stress distribution with zero error, detecting entangled cascade failures in 14 of 40 nodes with max|ΔP| = 0.637 relative to classical Monte Carlo. Second, we introduce the first application of ADAPT-VQE gradient screening to counterfactual macroeconomic policy evaluation: six crisis interventions are ranked in O(1) Qiskit operator evaluations per policy, a 287x speedup over sequential VQE re-optimisation. Third, Density-of-States QPE (DOS-QPE) reconstructs the full eigenspectrum via 32-step Trotter evolution and introduces a novel mapping of the Boltzmann catastrophe probability P_cat(T) to a VIX-equivalent market volatility temperature, enabling direct integration into regulatory Value-at-Risk frameworks. Qiskit Aer scaling benchmarks confirm exponential classical intractability at 40 qubits.
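As a concrete illustration of the encoding step, the sketch below builds a toy four-node version of such an Ising Hamiltonian. It is a minimal sketch only, not the authors' code: it uses Qiskit's SparsePauliOp rather than the OpenFermion QubitOperator named above, and the failure biases h_i and dependency couplings J_ij are hypothetical placeholder values.

```python
# Minimal sketch (not the authors' code): a toy 4-node supply network encoded as
# an Ising Hamiltonian with Z terms for single-node failure biases and ZZ terms
# for correlated-failure couplings. All coefficient values are illustrative.
from qiskit.quantum_info import SparsePauliOp

n = 4                                        # toy network; the paper uses n = 40
h = [0.3, 0.1, 0.2, 0.4]                     # single-node failure biases (Z terms)
J = {(0, 1): 0.5, (1, 2): 0.7, (2, 3): 0.4}  # dependency couplings (ZZ terms)

terms = [("Z", [i], h[i]) for i in range(n)]
terms += [("ZZ", [i, j], Jij) for (i, j), Jij in J.items()]

H = SparsePauliOp.from_sparse_list(terms, num_qubits=n)
print(H)  # Hamiltonian whose ground state encodes the joint stress distribution
```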
Oscillator-based Ising machines are non-von Neumann machines ideally suited to solving combinatorial problems that are otherwise intractable on classical stored-program digital computers due to their run-time complexity. Possible future applications are manifold, ranging from quantum simulation to protein folding, and are of high academic and commercial interest. Described in the following is a very simple such machine aimed at educational and research applications.
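For readers unfamiliar with how such a machine settles onto a solution, the following is a minimal numerical sketch of the commonly used Kuramoto-type phase model with second-harmonic injection locking, applied to a toy MAX-CUT instance. It is an illustrative assumption about the dynamics, not a model of the specific hardware described here.

```python
# Minimal sketch: Kuramoto-type oscillator Ising machine with second-harmonic
# injection locking. Phases settle near 0 or pi; sign(cos(phi)) gives the spin.
import numpy as np

J = np.array([[0, 1, 1, 0],       # toy MAX-CUT coupling graph (edge weights)
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
phi = rng.uniform(0, 2 * np.pi, len(J))
K, K_shil, dt = 1.0, 2.0, 0.01

for _ in range(5000):             # simple Euler integration of the phase dynamics
    coupling = np.sum(J * np.sin(phi[:, None] - phi[None, :]), axis=1)
    phi += dt * (K * coupling - K_shil * np.sin(2 * phi))

spins = np.sign(np.cos(phi))
cut = 0.25 * np.sum(J * (1 - np.outer(spins, spins)))
print("spins:", spins, " cut value:", cut)
```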
The purpose of this study is to explore the performance of Informed OCR (iOCR). iOCR was developed with a spell-correction algorithm that fixes errors introduced by conventional OCR during vote tabulation. The results show that the iOCR system outperforms conventional OCR techniques.
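As an illustration of what an "informed" post-OCR correction step can look like, the sketch below snaps noisy OCR tokens to a known list of ballot candidate names using Python's difflib. This is a hypothetical example, not the published iOCR algorithm, and the candidate names are invented.

```python
# Minimal sketch of dictionary-based post-OCR correction (assumed approach,
# not the published iOCR algorithm). get_close_matches scores tokens against
# a known vocabulary and returns the closest match above a similarity cutoff.
from difflib import get_close_matches

BALLOT_CANDIDATES = ["JOHNSON", "SMITH", "GARCIA", "WILLIAMS"]  # hypothetical names

def correct_ocr_token(token: str, vocabulary=BALLOT_CANDIDATES) -> str:
    """Snap a noisy OCR token to the closest known candidate name, if any."""
    matches = get_close_matches(token.upper(), vocabulary, n=1, cutoff=0.6)
    return matches[0] if matches else token

print(correct_ocr_token("J0HNS0N"))   # OCR confused O/0 -> "JOHNSON"
print(correct_ocr_token("SM1TH"))     # OCR confused I/1 -> "SMITH"
```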
We propose an optimization method to improve power efficiency and robustness in silicon-photonic-based coherent integrated photonic neural networks. Our method reduces the network power consumption by 15.3% and the accuracy loss under uncertainties by 16.1%.
D. Perez-Lopez, A. López-Hernandez, A. Macho et al.
We review some of the basic principles, fundamentals, technologies, architectures, and recent advances leading to the implementation of Field-Programmable Photonic Gate Arrays (FPPGAs).
In this report, reversible Toffoli and quantum Deutsch gates are extended to the p-valued domain. Their structural parameters are determined and their behavior is proven. Both conjunctive and disjunctive control strategies with positive and mixed polarities are introduced for the first time in a p-valued domain. The design is based on elementary Muthukrishnan–Stroud quantum gates; hence the extended gates should be realizable in the context of ion traps.
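A minimal sketch of the idea for p = 3 follows, with two conjunctive positive-polarity controls and a cyclic-increment target. It is illustrative only and does not reproduce the paper's construction from Muthukrishnan–Stroud gates.

```python
# Minimal sketch (illustrative, not the paper's construction): a p-valued Toffoli
# gate with conjunctive positive-polarity controls, built as a permutation matrix.
# Both controls must equal p-1 for the target to be cyclically incremented (+1 mod p).
import numpy as np

def p_valued_toffoli(p: int = 3) -> np.ndarray:
    dim = p ** 3                                  # two controls + one target
    U = np.zeros((dim, dim))
    for c1 in range(p):
        for c2 in range(p):
            for t in range(p):
                t_out = (t + 1) % p if (c1 == p - 1 and c2 == p - 1) else t
                U[(c1 * p + c2) * p + t_out, (c1 * p + c2) * p + t] = 1
    return U

U = p_valued_toffoli(3)
assert np.allclose(U @ U.conj().T, np.eye(U.shape[0]))   # reversible: U is unitary
```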
There is one, and only one way, consistent with fundamental physics, that the efficiency of general digital computation can continue increasing indefinitely, and that is to apply the principles of reversible computing. We need to begin intensive development work on this technology soon if we want to maintain advances in computing and the attendant economic growth. NOTE: This paper is an extended author's preprint of the feature article titled "Throwing Computing Into Reverse" (print) or "The Future of Computing Depends on Making it Reversible" (online), published by IEEE Spectrum in Aug.-Sep. 2017. This preprint is based on the original draft manuscript that the author submitted to Spectrum, prior to IEEE edits and feedback from external readers. Since the dawn of the transistor, technologists, and the world at large, have grown accustomed to a steady trend of exponentially improving performance for information technologies at any given cost level. This performance growth has been enabled by the underlying trend, described by Moore's Law, of the exponentially increasing number of electronic devices (such as transistors) that can be fabricated on an integrated circuit. According to the classic rules of semiconductor scaling, as transistors were made smaller, they became simultaneously cheaper, faster, and more energy-efficient, a massive win-win-win scenario, which resulted in concordantly massive investments in the ongoing push to advance semiconductor fabrication technology to ever-smaller length scales. Unfortunately, there is today a growing consensus within industry, academia, and government labs that semiconductor scaling does not have very much life left; maybe 10 years or so, at best. Multiple issues that come into play as we dive deeper into the nanoscale mean that the classic scaling trends are losing steam. Already, the decreasing logic voltages required due to various short-channel effects resulted in the plateauing of clock speeds more than a decade ago, driving the shift towards today's multi-core architectures. But now, even multi-core architectures face the looming threat of increasing amounts of "dark silicon," as heat dissipation constraints prevent us from being able to cram any more operations per second into each unit of chip area, due to the energy that is converted to heat in each operation. Fundamentally, achieving higher performance within a system of any given size, cost, and power budget requires that individual operations become more energy-efficient, and the energy efficiency of conventional digital semiconductor technology is beginning to plateau for a variety of reasons, all of which can ultimately be traced back to fundamental physical issues.
Looking forward, as transistors become smaller, their per-area leakage current and standby power increase; meanwhile, as signal energies are decreased, thermal fluctuations become more significant, eventually preventing any further progress within the traditional computing paradigm. Heroic efforts are being made within the semiconductor industry to try to allay and forestall these problems, but the solutions are becoming ever more expensive to deploy, with new leading-edge chip fabrication plants ("fabs") now costing on the order of $10 billion each. But it's worth pointing out that no level of spending can ever defeat the laws of physics. Beyond some point that is now not very far away, a new conventionally-designed computer that simply has smaller transistors would no longer be any cheaper, faster, or more energy-efficient than its predecessors, and at that point the progress of conventional semiconductor technology will stop, being no longer economically justifiable. The writing is on the wall. Obviously, however, we would prefer if the progress in the cost-efficiency of information technology were not to stop, since a large portion of our potential future economic progress would be empowered by the continuing advancement of this technology. So the question arises: can we perhaps keep progress in computing going by transitioning over to some new technology base that is not "conventional semiconductor technology"? Unfortunately, some of the most crucial fundamental physical barriers that will prevent conventional complementary metal-oxide-semiconductor (CMOS) technology from advancing very much further will also still apply, in a more or less comparable way, to any alternative technology, as long as we insist on maintaining the present-day computing paradigm, namely irreversible computing. No other irreversible "beyond CMOS" technology can ever be very much better than end-of-the-line CMOS; at most, it will be better only by some relatively modest, limited factor. However, for several decades now, we have known that there exists a theoretically possible alternative computing paradigm, called reversible computing. Developing reversible computing (and then continuing to improve it) is in fact the only possible way, within the laws of physics, that we might be able to keep computer energy-efficiency and cost-efficiency for general applications increasing indefinitely, far into the future. So far, the concept of reversible computing has not received very much attention, which has perhaps made sense up until now, since it is indeed highly challenging to implement effectively, and the alternative of advancing conventional technology was much easier. Nevertheless, significant conceptual progress on reversible computing has been made over the decades by the small number of researchers pursuing it. Still, many difficult problems remain to be solved, and it is going to require a much larger effort, looking forward, to address them. But this effort will be highly worthwhile, because the potential upside that reversible computing offers is many orders of magnitude of information technology efficiency improvements, with associated economic advancements, compared to all possible irreversible computing technologies.
With the end of conventional technology now in sight, it is time that the world's best physics and engineering minds turn committed attention towards reversible computing, and begin an all-out effort to tackle its remaining engineering challenges, so as to bring this idea to practical fruition. The first person to describe the energy-efficiency implications of the conventional irreversible computing paradigm was Rolf Landauer of IBM, who wrote a paper in 1961 called "Irreversibility and Heat Generation in the Computing Process." This paper has generated controversy in some circles, but Landauer's key insight really does follow directly as an immediate logical consequence of our most thorough, battle-tested understanding of fundamental physics. All of our most fundamental laws of low-level physical dynamics are reversible, meaning that if you were to have complete knowledge of the state of any given closed system at some time, and of the values of all of the relevant physical constants, you could always, conceptually, run the laws of physics backwards, and determine the system's past state at any previous time exactly. (This is even true in quantum mechanics, if you knew the exact quantum state of the system.) As a consequence, it is impossible to have a situation wherein two different possible detailed states at some earlier time could both evolve to become the exact same detailed state as each other at some later time, since this would mean that the earlier state couldn't be uniquely determined from the later one. In other words, at the lowest level in physics, information cannot be destroyed. It's important to realize how absolutely essential to our most basic understanding of physics this principle is. If it were not true, then the Second Law of Thermodynamics (which says that entropy cannot decrease) could not be true, since entropy is just unknown information. If physics were not reversible, then entropy could simply vanish, and the Second Law would not hold. How does the indestructibility of information relate to the energy efficiency of irreversible computing? The point is that, since physics is reversible, whenever we think that we are destroying some information in a computer, we actually are not. Putatively "irreversible" operations (such as erasing a bit of information, or destructively overwriting it with a newly-computed value) are, in some sense, really just a convenient fiction. What's actually happening, at the most fundamental level, is that the physical information that is embodied within the systems whose state we think we are "erasing" or "overwriting" (e.g., a circuit node charged to a particular voltage) is simply getting pushed out into the machine's thermal environment, where it effectively becomes entropy (in essence, randomized information), and is manifested as heat. To increase the entropy of a thermal environment at temperature T by an increment ∆S requires adding an increment of heat ∆Q = T∆S to that environment; that is simply the thermodynamic definition of temperature.
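The quantitative consequence this argument is building toward follows directly from ∆Q = T∆S together with Boltzmann's constant; the bound below is the standard Landauer limit, stated here for reference (the numerical value assumes room temperature, T = 300 K).

```latex
% Erasing one bit pushes at least one bit of physical information into the
% environment, so the environment's entropy rises by at least k_B ln 2:
\Delta S \;\ge\; k_B \ln 2
\qquad\Longrightarrow\qquad
\Delta Q \;=\; T\,\Delta S \;\ge\; k_B T \ln 2 \;\approx\; 2.9\times 10^{-21}\ \mathrm{J}
\quad \text{per erased bit at } T = 300\ \mathrm{K}.
```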
The D-Wave hardware only guarantees support for coefficients with 4 to 5 bits of resolution or precision. This paper describes a method to extend the functionality of the D-Wave to solve problems that require support for higher-precision coefficients.
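To illustrate what 4 to 5 bits of coefficient resolution implies, the sketch below quantizes hypothetical Ising biases to a 4-bit grid and reports the resulting distortion. It demonstrates the limitation only; it is not the extension method described in this paper.

```python
# Illustration only (not the paper's extension method): what 4-bit coefficient
# resolution means. Coefficients are rescaled to the programmable range and
# rounded to one of 2^4 levels, distorting the intended energy landscape.
import numpy as np

def quantize(coeffs, bits=4, coeff_range=(-1.0, 1.0)):
    lo, hi = coeff_range
    levels = 2 ** bits - 1
    scaled = (np.clip(coeffs, lo, hi) - lo) / (hi - lo)         # map to [0, 1]
    return lo + np.round(scaled * levels) / levels * (hi - lo)  # snap to grid

h = np.array([0.3173, -0.0521, 0.7718])   # hypothetical high-precision biases
print(quantize(h))                         # e.g. [ 0.3333 -0.0667  0.7333]
print(np.abs(h - quantize(h)))             # per-coefficient quantization error
```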
A liquid can be used to represent signals, to actuate mechanical computing devices, and to modify signals via chemical reactions. We give a brief overview of liquid-based computing devices developed over hundreds of years. These include hydraulic calculators, fluidic computers, microfluidic devices, droplets, liquid marbles, and reaction-diffusion chemical computers.
We propose the chemlambda artificial chemistry, whose behavior strongly suggests that real molecules which embed Interaction Nets patterns and real chemical reactions which resemble Interaction Nets graph rewrites could be a realistic path towards molecular computers, in the sense explained in the article.
Polymorphic circuits are a special kind of circuit that possesses multiple built-in functions, which are activated by environmental parameters such as temperature, light, and VDD. The behavior of a polymorphic circuit can be described by a polymorphic Boolean function. For the first time, this brief presents a simplification method for polymorphic Boolean functions.
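As a minimal illustration of what a polymorphic Boolean function is (a hypothetical two-input cell, not an example taken from the brief), the function below computes AND in one environment mode and OR in the other, so its truth table is indexed by the mode as well as by the inputs.

```python
# Minimal sketch of a polymorphic Boolean function (illustrative, not the paper's
# simplification method): one two-input cell that computes AND in mode 0
# (e.g., low VDD) and OR in mode 1 (e.g., high VDD).
def poly_cell(a: int, b: int, mode: int) -> int:
    return (a & b) if mode == 0 else (a | b)

# Full polymorphic truth table over (mode, a, b):
for mode in (0, 1):
    for a in (0, 1):
        for b in (0, 1):
            print(f"mode={mode} a={a} b={b} -> {poly_cell(a, b, mode)}")
```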
Chaos is a phenomenon that has attracted much attention over the past ten years. In this paper, we analyze chaos-based signal processing and propose a chaos processor that takes advantage of chaotic phenomena. We also analyze and demonstrate two of its practical applications, in communication and in sound synthesis.
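As a minimal illustration of the kind of signal such a processor exploits (an assumption made for exposition, not the proposed chaos processor), the logistic map generates a broadband, noise-like but fully deterministic sequence that is extremely sensitive to initial conditions.

```python
# Minimal sketch: the logistic map x_{n+1} = r * x_n * (1 - x_n) at r = 3.99
# produces a chaotic sequence usable, e.g., as a spreading signal in
# chaos-based communication or as a sound-synthesis source.
import numpy as np

def logistic_sequence(x0: float = 0.123, r: float = 3.99, n: int = 1000) -> np.ndarray:
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1 - x[i - 1])
    return x

seq = logistic_sequence()
# Sensitivity to initial conditions: a 1e-9 perturbation decorrelates the signal
# within a few dozen iterations.
seq2 = logistic_sequence(x0=0.123 + 1e-9)
print(np.max(np.abs(seq - seq2)))   # grows to order 1
```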
Isotopic purification of group IV elements leads to a substantial increase in thermal conductivity due to reduced phonon scattering. Based on this concept, a simulation study demonstrates a reduction of at least 25 °C in the average LDMOS temperature.
In his 2003 paper "Towards an algebraic theory of Boolean circuits", Lafont notes that the class of reversible circuits over a set of k truth values is finitely generated when k is odd. He cites a private communication for the proof. The purpose of this short note is to make the content of that communication available.
Reversible Peres gates with more than two binary-valued control signals are discussed. Methods are disclosed for the low-cost realization of this kind of Peres gate without requiring ancillary lines. Proper distribution of the controlled gates and their inverses allows driving the reversible Peres gate with control signals of different polarities.
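For reference, the standard two-control Peres gate, which the multi-control, mixed-polarity variants discussed here generalize, can be written as a Toffoli followed by a CNOT with no ancillary lines. The following is a minimal Qiskit sketch, not the realization method of this paper.

```python
# Minimal sketch of the standard two-control Peres gate: a Toffoli followed by a
# CNOT realizes (a, b, c) -> (a, a XOR b, c XOR ab) without ancillary lines.
from qiskit import QuantumCircuit

peres = QuantumCircuit(3, name="Peres")
peres.ccx(0, 1, 2)   # c <- c XOR (a AND b)
peres.cx(0, 1)       # b <- b XOR a
print(peres.draw())
```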
Complement and infinitive clause structures in Polish and French: The comparative analysis of complement clauses and infinitive clauses in Polish and French throws light on the sequences to, że(by) P / ce que P introducing complement clauses in the two languages. The comparison demonstrates that, whereas French «ce» is deleted or grammaticalised in complement clauses, Polish has an attested introducer to1 which changes its nature when it is accented. To1 becomes the demonstrative pronoun to2 ('this'), which can cause a dislocation of the sequence to, że(by) P. The comparison also shows that these structures are affected by deletions which differ according to the language (particularly the infinitive structures).

Structures of complement and infinitive clauses in Polish and French: The comparison of the complement-clause and infinitive structures of Polish and French sheds some light on the nature of the correlate introducing complement clauses in the two languages (to, że(by) P / ce que P). It shows in particular that, where French «ce» is deleted (in subject complement clauses and non-prepositional verbal complement constructions) or grammaticalised (in prepositional complement constructions), Polish has the attested introducer to1, which changes status when it is stressed. It then becomes the demonstrative pronoun to2 (= 'that'), which brings about the dislocation of the correlate. Examination of these structures also reveals that they are affected by deletions that differ according to the language (in particular the infinitive structures).
Aspectual complements and verbs expressing manner of motion in French: between boundary marking and telicity. Considered synonymous by some and carefully distinguished by others, the notions of telicity and boundary marking are of crucial importance for studies on aspect. This article offers a reflection on the relation between these aspectual properties and certain verbal complements. Through a detailed analysis of the verb phrases formed with courir and nager (courir cent mètres, nager le deux-cents-mètres), it is shown that, despite their apparent formal similarity, complements such as cent mètres and le deux-cents-mètres perform different functions in relation to the predicate: in the first case they mark boundaries, while in the second they are markers of the property [+ telicity]. The study of a particular problem thus allows us to take part in a more general discussion, providing arguments in favour of distinguishing between telicity and boundary marking.