Results for "Speculative philosophy"
Showing 20 of ~1,426,245 results · from arXiv, DOAJ, CrossRef, Semantic Scholar
Johan van Benthem
This paper is a historical tour of occurrences of the Craig interpolation theorem and the Beth definability theorem in philosophy since the 1950s. We identify the notion of dependence as one major red thread behind these, and include some new technical results, in particular on logical system translations and generalized definability.
Selin Yildirim, Deming Chen
Recent advancements in speculative decoding have demonstrated considerable speedup across a wide array of large language model (LLM) tasks. Speculative decoding inherently relies on sacrificing extra memory allocations to generate several candidate tokens, whose acceptance rate drives the speedup. However, deploying speculative decoding on memory-constrained devices, such as mobile GPUs, remains a significant challenge in real-world scenarios. In this work, we present a device-aware inference engine named SpecMemo that can smartly control memory allocations at finer levels to enable multi-turn chatbots with speculative decoding on such limited-memory devices. Our methodology stems from theoretically modeling the memory footprint of speculative decoding to determine a lower bound on the required memory budget while retaining speedup. SpecMemo empirically strikes a careful balance between minimizing redundant memory allocations for rejected candidate tokens and maintaining competitive performance gains from speculation. Notably, with SpecMemo's memory management, we maintain 96% of the overall throughput of speculative decoding on MT-Bench while reducing generation memory by 65% on a single Nvidia Titan RTX. Given multiple constrained GPUs, we build on top of previous speculative decoding architectures to facilitate big-model inference by distributing the Llama-2-70B-Chat model, on which we provide novel batched speculative decoding to increase the usability of multiple small server GPUs. This novel framework demonstrates a 2x speedup over distributed and batched vanilla decoding with the base model on eight AMD MI250 GPUs. Moreover, inference throughput increases by a remarkable 8x at batch size 10. Our work contributes to democratized LLM applications in resource-constrained environments, providing a pathway for faster and cheaper deployment of real-world LLM applications with robust performance.
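The draft-then-verify loop that engines like SpecMemo manage can be sketched with the standard speculative sampling acceptance rule (a generic illustration of the technique, not SpecMemo itself; the toy distributions and all names below are invented for the sketch):

```python
import random

random.seed(0)

VOCAB = [0, 1, 2, 3]

def sample(dist):
    """Sample an index from a probability distribution given as a list."""
    r, acc = random.random(), 0.0
    for tok, p in enumerate(dist):
        acc += p
        if r < acc:
            return tok
    return len(dist) - 1

def speculative_step(p_target, q_draft, k):
    """One round: draft k candidate tokens, then accept/reject each.

    A token x drafted from q is kept with probability min(1, p(x)/q(x));
    at the first rejection we resample once from the residual
    max(0, p - q), which keeps the output distributed exactly as p.
    """
    out = []
    for _ in range(k):
        x = sample(q_draft)
        if random.random() < min(1.0, p_target[x] / q_draft[x]):
            out.append(x)          # accepted: candidate matches the target
        else:
            residual = [max(0.0, p - q) for p, q in zip(p_target, q_draft)]
            z = sum(residual)
            out.append(sample([r / z for r in residual]))
            break                  # stop the round at the first rejection
    return out

p = [0.7, 0.1, 0.1, 0.1]   # toy target-model distribution
q = [0.4, 0.3, 0.2, 0.1]   # toy draft-model distribution
print(speculative_step(p, q, k=4))
```

Every rejected candidate is memory spent on a token that made no progress, which is exactly the redundancy the abstract describes SpecMemo trading off against speedup.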
MZ Naser
Philosophy-informed machine learning (PhIML) directly infuses core ideas from analytic philosophy into ML model architectures, objectives, and evaluation protocols. Therefore, PhIML promises new capabilities through models that respect philosophical concepts and values by design. From this lens, this paper reviews conceptual foundations to demonstrate philosophical gains and alignment. In addition, we present case studies on how ML users/designers can adopt PhIML as an agnostic post-hoc tool or intrinsically build it into ML model architectures. Finally, this paper sheds light on open technical barriers alongside philosophical, practical, and governance challenges and outlines a research roadmap toward safe, philosophy-aware, and ethically responsible PhIML.
Hanti Lin
The integration of the history and philosophy of statistics was initiated at least by Hacking (1975) and advanced by Hacking (1990), Mayo (1996), and Zabell (2005), but it has not received sustained follow-up. Yet such integration is more urgent than ever, as the recent success of artificial intelligence has been driven largely by machine learning -- a field historically developed alongside statistics. Today, the boundary between statistics and machine learning is increasingly blurred. What we now need is integration, twice over: of history and philosophy, and of two fields they engage -- statistics and machine learning. I present a case study of a philosophical idea in machine learning (and in formal epistemology) whose root can be traced back to an often under-appreciated insight in Neyman and Pearson's 1936 work (a follow-up to their 1933 classic). This leads to the articulation of an epistemological principle -- largely implicit in, but shared by, the practices of frequentist statistics and machine learning -- which I call achievabilism: the thesis that the correct standard for assessing non-deductive inference methods should not be fixed, but should instead be sensitive to what is achievable in specific problem contexts. Another integration also emerges at the level of methodology, combining two ends of the philosophy of science spectrum: history and philosophy of science on the one hand, and formal epistemology on the other hand.
Szymon Kobus, Deniz Gündüz
Speculative decoding accelerates large language model inference using a smaller draft model. In this paper, we establish a surprising connection between speculative decoding and channel simulation, which aims at simulating a noisy channel using as few bits as possible. This connection allows us to provide an information-theoretic analysis of the speed-up that can be achieved by speculative decoding. Leveraging this link, we derive an explicit relation between generation speed-up and the number of tokens $k$ generated by the draft model for large $k$, which serves as an upper bound for all $k$. We also propose ERSD, a novel speculative decoding method based on an exponential race, which matches state-of-the-art performance.
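The paper's information-theoretic bound is not reproduced here, but the commonly cited expected-tokens formula for speculative decoding, under the simplifying assumption (not made by this paper) that each drafted token is accepted i.i.d. with probability alpha, already shows how the speed-up from $k$ drafted tokens saturates toward an upper bound:

```python
def expected_tokens(alpha: float, k: int) -> float:
    """Expected tokens emitted per target-model call when each of the k
    drafted tokens is accepted independently with probability alpha:
    E = (1 - alpha**(k+1)) / (1 - alpha), a geometric series that
    saturates at 1 / (1 - alpha) as k grows."""
    return (1 - alpha ** (k + 1)) / (1 - alpha)

# Diminishing returns in k: most of the speed-up comes from small k.
for k in (1, 2, 4, 8, 16):
    print(k, round(expected_tokens(0.8, k), 3))
```

With alpha = 0.8 the limit is 1/(1 - 0.8) = 5 tokens per target call, so in this toy model no choice of k can exceed a 5x generation speed-up, mirroring the role of the large-$k$ relation as an upper bound for all $k$.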
Paloma Puente-Lozano
This report offers an interpretation of recent scholarship that articulates pasts and futures of geographical thought and praxis. By focussing on growing concerns about speculative, abyssal, and analytical styles of thinking in Geography, I argue that a more cogent philosophical take on geographic theory-making is needed. Drawing upon ongoing discussions on the role of geographic theory, I use the occasion of the various history and philosophy of geography-related anniversaries to reflect on why we are where we are today. I therefore claim that practitioners of history and philosophy of geography need to address some structural difficulties to navigate tensions between recurring calls for endogenous forms of geographic theory and relentless deconstruction of epistemic and ontological arrays as a way forward for Geography to merge with critical thinking.
L. Hedrick
One way of characterizing the ontological turn in anthropology is the effort to transform philosophical anthropology into anthropological philosophy—or anthropology into philosophy. This effort proceeds upon the premise that to critique philosophical representationalism is to critique the entire rationalist enterprise. It is as a result of this coupling that some OTers suggest that a permanently decolonized philosophy becomes indistinguishable from post-representationalist anthropology. Curiously, it is by thinking with Gilles Deleuze that they conclude, on the one hand, that to decolonize anthropology is to de-representationalize it, and, on the other, that to decolonize philosophy is therefore to anthropologize it (since, on their logic, de-representationalizing means de-philosophizing). To turn to ontology means "thinking immanence" not, pace Deleuze, as philosophy, but beyond philosophy. This article argues that the supersessionist claim is incoherent precisely insofar as this coupling is unwarranted. To provide a counterinstance, the article suggests how thinking with Alfred North Whitehead decouples the critique of representationalism from the critique of rationalism. One implication of this work is to model a speculative philosophy that can be responsive to postcolonial concerns about representationalism without thereby becoming functionally indistinct from anthropological method.
Raphaël Millière
Deep learning has enabled major advances across most areas of artificial intelligence research. This remarkable progress extends beyond mere engineering achievements and holds significant relevance for the philosophy of cognitive science. Deep neural networks have made significant strides in overcoming the limitations of older connectionist models that once occupied the centre stage of philosophical debates about cognition. This development is directly relevant to long-standing theoretical debates in the philosophy of cognitive science. Furthermore, ongoing methodological challenges related to the comparative evaluation of deep neural networks stand to benefit greatly from interdisciplinary collaboration with philosophy and cognitive science. The time is ripe for philosophers to explore foundational issues related to deep learning and cognition; this perspective paper surveys key areas where their contributions can be especially fruitful.
Mohamadreza Rostami, Shaza Zeitouni, Rahul Kande et al.
Microarchitectural attacks represent a challenging and persistent threat to modern processors, exploiting inherent design vulnerabilities to leak sensitive information or compromise systems. Of particular concern is the susceptibility of speculative execution, a fundamental performance-enhancement technique, to such attacks. We introduce Specure, a novel pre-silicon verification method that combines hardware fuzzing with Information Flow Tracking (IFT) to address speculative execution leakages. Integrating IFT enables two significant and non-trivial enhancements over existing fuzzing approaches: i) automatic detection of microarchitectural information-leakage vulnerabilities without a golden model and ii) a novel Leakage Path coverage metric for efficient vulnerability detection. Specure identifies previously overlooked speculative execution vulnerabilities on the RISC-V BOOM processor and explores the vulnerability search space 6.45x faster than existing fuzzing techniques. Moreover, Specure detects known vulnerabilities 20x faster.
Zilin Xiao, Hongming Zhang, Tao Ge et al.
Speculative decoding has proven to be an efficient solution to large language model (LLM) inference, where the small drafter predicts future tokens at a low cost, and the target model is leveraged to verify them in parallel. However, most existing works still draft tokens auto-regressively to maintain sequential dependency in language modeling, which we consider a huge computational burden in speculative decoding. We present ParallelSpec, an alternative to the auto-regressive drafting strategies used in state-of-the-art speculative decoding approaches. In contrast to auto-regressive drafting in the speculative stage, we train a parallel drafter to serve as an efficient speculative model. ParallelSpec learns to efficiently predict multiple future tokens in parallel using a single model, and it can be integrated into any speculative decoding framework that requires aligning the output distributions of the drafter and the target model, with minimal training cost. Experimental results show that ParallelSpec reduces the latency of baseline methods by up to 62% on text generation benchmarks from different domains, and it achieves a 2.84x overall speedup on the Llama-2-13B model under third-party evaluation criteria.
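The latency argument can be made concrete with a toy cost model (all numbers and names here are illustrative, not measurements from the paper): auto-regressive drafting pays k sequential draft-model passes per speculation round, while a parallel drafter pays one.

```python
def drafting_cost(k: int, t_draft: float, parallel: bool) -> float:
    """Wall-clock drafting cost per speculation round.

    Auto-regressive drafting runs the draft model k times sequentially;
    a parallel drafter predicts all k tokens in a single forward pass.
    t_draft is the cost of one draft forward pass (toy units).
    """
    return t_draft if parallel else k * t_draft

def round_latency(k: int, t_draft: float, t_target: float,
                  parallel: bool) -> float:
    # One round = drafting + one parallel verification pass of the target.
    return drafting_cost(k, t_draft, parallel) + t_target

k, t_draft, t_target = 5, 1.0, 10.0
print(round_latency(k, t_draft, t_target, parallel=False))  # 15.0
print(round_latency(k, t_draft, t_target, parallel=True))   # 11.0
```

Under these toy costs, parallel drafting removes k - 1 draft passes per round, so the saving grows with the speculation length k.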
Adriano Rodrigues de Oliveira
Over the last ten years, the world has faced numerous political and social upheavals, largely related to the crises and dilemmas confronting democracies. This paper aims to analyze the political meaning of neoliberalism, its effects on contemporary society, and its relation to the main political crises of the democratic system, and finally to present some of the challenges and alternatives to be faced in the medium and long term. To this end, it reviews the recent literature on these topics, drawing on different theoretical approaches and on information relevant to understanding neoliberalism and its relation to the problem of democracy in the twenty-first century. The analysis of these approaches shows that neoliberal ideology ratifies the exploitative and authoritarian practices that ultimately drive its functioning and its success in various respects. Notably, this process takes place within democratic countries, which leads to the conclusion that neoliberalism, or its harmful effects, contributes to the erosion and corrosion of democracies from within.
Hans Reichenbach
This book represents a new approach to philosophy. It treats philosophy not as a collection of systems, but as a study of problems. It recognizes in traditional philosophical systems the historical function of having asked questions rather than having given solutions. Professor Reichenbach traces the failures of the systems to psychological causes. Speculative philosophers offered answers at a time when science had not yet provided the means to give true answers. Their search for certainty and for moral directives led them to accept pseudo-solutions. Plato, Descartes, Spinoza, Kant, and many others are cited to illustrate the rationalist fallacy: reason, unaided by observation, was regarded as a source of knowledge, revealing the physical world and 'moral truth'. The empiricists could not disprove this thesis, for they could not give a valid account of mathematical knowledge. Mathematical discoveries in the early nineteenth century cleared the way for modern scientific philosophy. Its advance was furthered by discoveries in modern physics, chemistry, biology, and psychology. These findings have made possible a new conception of the universe and of the atom. The work of scientists thus altered philosophy completely and brought into being a philosopher with a new attitude and training. Instead of dictating so-called laws of reason to the scientist, this modern philosopher proceeds by analyzing scientific methods and results. He finds answers to the age-old questions of space, time, causality, and life; of the human observer and the external world. He tells us how to find our way through this world without resorting to unjustifiable beliefs or assuming a supernatural origin for moral standards. Philosophy thus is no longer a battleground of contradictory opinions, but a science discovering truth step by step. Professor Reichenbach, known for his many contributions to logic and the philosophy of science, addresses this book to a wider audience.
He writes for those who do not have the leisure or preparation to read in the fields of mathematics, symbolic logic, or physics. Besides showing the principal foundations of the new philosophy, he has been careful to provide the necessary factual background. He has written a philosophical study, not a mere popularization. It contains within its chapters all the necessary scientific material in an understandable form - and, therefore, conveys all the information indispensable to a modern world-view. The late Hans Reichenbach was Professor of Philosophy at the University of California, Los Angeles.
Gabriel Trop
In Friedrich Schelling’s philosophy of nature, the attempt to think the unconditioned absolute of nature performs unconditioning, thereby transforming the present into a field of experimentation. Schelling’s nature-philosophy produces a series of interventions into cultural fields of consistency, drawing on material operations to reconceptualize forms of collective organization. In Schelling’s First Outline, beings have a specific signature: to be is to resist. In the Deities of Samothrace, philosophy performs a “magic singing” that gathers initiates together by continually exorcising—and preserving—the unruly obstinacy of pre-socialized drives. This conception of philosophy coheres with a gesture from his earlier lectures on the philosophy of art in which music forms the basis of inorganic communities, implicitly cultivating collective forms called upon to navigate the dangers of overly cohesive (harmonic) and overly transgressive (rhythmic) forms of life, while directing an unconditioning power to the conditions of the present.
J. Chase, Jack Reynolds
In this paper, we consider the implications of Grace Andrus de Laguna and Joel Katzav's work for the charge of conservatism against the analytic tradition. We differentiate that conservatism into three kinds: starting place; path dependency; and modesty. We also think again about gender in philosophy, consider the positive account of speculative philosophy presented by de Laguna and Katzav in comparison to some other naturalist trajectories, and conclude with a brief Australian addendum that reflects on a similar period in our own country which was also associated with the professional institutionalisation of analytic philosophy.
J. Katzav
Katzav and Vaesen have argued that control by analytic philosophers of key journals, philosophy departments and at least one funding body plays a substantial role in explaining the rise of analytic philosophy to dominance in the Anglophone world and the corresponding decline of speculative philosophy. They also argued that this use of control suggests a characterisation of analytic philosophy as, at the institutional level, a sectarian form of critical philosophy. I test these hypotheses against data about philosophy job hires at key philosophy departments in the USA during the period 1930–1979 and against data about PhD completions during the period 1956–1965. I argue, further, that Katzav and Vaesen’s hypotheses can fully explain the data, and can do so more fully than some other key accounts of the emergence of analytic philosophy in the USA.
A. V. Muratov
Introduction. The article examines social justice through the prism of the philosophy of Islam, along with the problems of its study and analysis from the perspective of the explanation and interpretation of sacred texts. The theoretical social base laid down in the Koran and the Sunnah, as well as the social model built during the reign of the four “righteous caliphs”, are analyzed. To illustrate the application of the basic methods of Islamic theology to questions of social philosophy, some concrete examples are given. Theoretical analysis. The comparative analysis conducted here supports the conclusion that Islam contains a carefully developed theoretical base regulating questions of social justice, and that a legitimate methodology and tools based on it must be used in deriving social norms in order to avoid speculative judgments. Conclusion. The study concludes that the principles of social justice in the philosophy of Islam are universal and that studying its social function is important. In this regard, the original model laid down in its primary sources is taken as a standard. To guard against the risks of transformation caused by the influence on Muslim societies of various socio-philosophical, political and other ideas that did not arise from its foundations, a definite methodology for analyzing Islamic doctrines must be applied consistently.
Dagnachew Desta
This article attempts to offer a critical account of the genealogy of ancient Greek philosophy in its bid to transcend the old ruling mythopoeic culture. With this in mind, emphasis is given more to the speculative character of Greek thought rather than its technical and detailed aspects. In my account of the origin of Greek philosophy, I use Plato’s famous pronouncement (Plato, The Republic, Tenth Book) about the great quarrel between philosophy and poetry as a context to provide my analysis. In dealing with the question at hand, I develop the following interrelated claims. First, Greek philosophy made its appearance in the struggle against the mythical background. Here, even though early philosophy tried to move beyond myth, it did not completely transcend the world of mythology. Second, in dealing directly with the quarrel, I identify two issues (problems) as the basis of the conflict: A) the essence of the divine and B) the nature of the universe. Third, I sum up my article by making the following claims. 1) Greek philosophy took the crucial step in trying to explain the cosmos (world) by introducing a single fundamental principle. 2) The transition from traditional mythology to a rational account of the origin and nature of the universe is not the work of a single thinker but the effort of many philosophers over the generations. 3) A proper account of the transition is best explained if we approach it as a result of the process of “continuity in discontinuity”. 4) Early philosophy is not so much about the triumph of reason and science, but the conceptualization and differentiation of mythic cultures. Thus, in a way, Greek philosophy emerged along with mythic culture against ‘mythic culture’ at the same time. Keywords: physis, nomos, arche, physiology, apeiron, mythology, anthropomorphic
J. Katzav
Grace A. de Laguna was an American philosopher of exceptional originality. Many of the arguments and positions she developed during the early decades of the twentieth century later came to be central to analytic philosophy. These arguments and positions included, even before 1930, a critique of the analytic-synthetic distinction, a private language argument, a critique of type physicalism, a functionalist theory of mind, a critique of scientific reductionism, a methodology of research programs in science and more. Nevertheless, de Laguna identified herself as a defender of the speculative vision of philosophy, a vision which, in her words, ‘analytic philosophy condemns’. I outline her speculative vision of philosophy as well as what is, in effect, an argument she offers against analytic philosophy. This is an argument against the view that key parts of established opinion, e.g. our best theoretical physics or most certain common sense, should be assumed to be true in order to answer philosophical questions. I go on to bring out the implications of her argument for the approaches to philosophy of Bertrand Russell, Willard V. Quine and David Lewis, and I also compare the argument to recent, related arguments against analytic philosophy. I will suggest that de Laguna offers a viable critique of analytic philosophy and an alternative approach to philosophy that meets this critique.
Rutvik Choudhary, Alan Wang, Zirui Neil Zhao et al.
Speculative execution attacks undermine the security of constant-time programming, the standard technique used to prevent microarchitectural side channels in security-sensitive software such as cryptographic code. Constant-time code must therefore also deploy a defense against speculative execution attacks to prevent leakage of secret data stored in memory or the processor registers. Unfortunately, contemporary defenses, such as speculative load hardening (SLH), can only satisfy this strong security guarantee at a very high performance cost. This paper proposes DECLASSIFLOW, a static program analysis and protection framework to efficiently protect constant-time code from speculative leakage. DECLASSIFLOW models "attacker knowledge" -- data which is inherently transmitted (or, implicitly declassified) by the code's non-speculative execution -- and statically removes protection on such data from points in the program where it is already guaranteed to leak non-speculatively. Overall, DECLASSIFLOW ensures that data which never leaks during the non-speculative execution does not leak during speculative execution, but with lower overhead than conservative protections like SLH.
Page 5 of 71313