A cornerstone of the multiple testing literature is the Benjamini-Hochberg (BH) procedure, which guarantees control of the FDR when $p$-values are independent or positively dependent. While BH controls the average quality of rejections, it does not provide guarantees for individual discoveries, particularly those near the rejection threshold, which are more likely to be false than the average rejection. For independent $p$-values with a Uniform$(0,1)$ null distribution, the Support Line procedure (SL; arXiv:2207.07299) provably controls the error probability for the rejection at the edge of the discovery set (i.e., the one with the largest $p$-value) at level $q m_0/m$, where $m_0$ is the number of true null hypotheses and $q$ is a tuning parameter. In this work, we study adaptive versions of the SL procedure that operate in two steps: the first step estimates $m_0$ from non-significant statistics, and the second step runs the SL procedure at an adjusted level $q m / \hat{m}_0$. The adaptive procedures are shown to control the false discovery probability for the "boundary" rejection under an independence assumption. Simulation studies suggest that some but not all of the two-stage procedures maintain error control under positive dependence, and that substantial power is gained relative to the original SL procedure. We illustrate differences between the procedures on meta-data from the recent literature in behavioral psychology on growth mindset and nudge interventions.
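The two-step structure described in the abstract can be sketched as follows. This is a minimal illustration only: it uses a Storey-style estimator of $m_0$ and a BH-type step-up rule at the adjusted level as a stand-in for the actual Support Line rejection rule, which differs; the function names and the choice $\lambda = 0.5$ are my own.

```python
import numpy as np

def estimate_m0(p, lam=0.5):
    # Storey-style estimate: p-values above lam come mostly from true
    # nulls (Uniform(0,1)), so their count scaled by 1/(1-lam) estimates m0.
    m = len(p)
    return min(m, (np.sum(p > lam) + 1) / (1 - lam))

def two_stage_procedure(p, q=0.1, lam=0.5):
    m = len(p)
    m0_hat = estimate_m0(p, lam)          # step 1: estimate m0
    q_adj = q * m / m0_hat                # step 2: adjusted level q*m/m0_hat
    order = np.argsort(p)
    thresholds = q_adj * np.arange(1, m + 1) / m
    passed = np.nonzero(np.sort(p) <= thresholds)[0]
    k = passed.max() + 1 if passed.size else 0
    return order[:k]                      # indices of rejected hypotheses

p_values = np.array([0.001, 0.002, 0.3, 0.5, 0.7, 0.9])
rejected = two_stage_procedure(p_values, q=0.1)
```

With this toy input, only the two smallest $p$-values are rejected; replacing the step-up rule with the SL rule would give the boundary-rejection guarantee the paper studies.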
Natural disasters are increasing in frequency and severity, causing hundreds of billions of dollars in damage annually and posing growing threats to infrastructure and human livelihoods. Accurate data on roofing materials are critical for modeling building vulnerability to natural hazards such as earthquakes, floods, wildfires, and hurricanes, yet such data remain unavailable. To address this gap, we introduce RoofNet, the largest and most geographically diverse multimodal roof-material dataset to date, comprising over 51,500 samples from 184 sites that pair high-resolution Earth Observation (EO) imagery with curated text annotations for global roof material classification. RoofNet includes satellite imagery labeled with 14 key roofing types -- such as asphalt shingles, clay tiles, and metal sheets -- and is designed to enhance the fidelity of global exposure datasets through vision-language models (VLMs). We sample EO tiles from climatically and architecturally distinct regions to construct a representative dataset. A subset of 6,000 images was annotated in collaboration with domain experts and used to fine-tune a VLM, with geographic- and material-aware prompt tuning to enhance class separability. The fine-tuned model was then applied to the remaining EO tiles, with predictions refined through rule-based and human-in-the-loop verification. In addition to material labels, RoofNet provides rich metadata including roof shape, footprint area, solar panel presence, and indicators of mixed roofing materials and rooftop equipment (e.g., HVAC systems). RoofNet supports scalable, AI-driven risk assessment and serves as a downstream benchmark for evaluating model generalization across regions -- offering actionable insights for insurance underwriting, disaster preparedness, and infrastructure policy planning.
The rapid growth of Zero-Knowledge (ZK) program development has led to numerous tools designed to support developers. Popular options include writing in general-purpose programming languages such as Rust with RISC Zero, alongside dedicated languages and libraries such as Circom, libsnark, and Cairo. However, developers entering the ZK space face many different ZK backends to choose from, leading to a steep learning curve and a fragmented developer experience across platforms. As a result, many developers tend to select a single ZK backend and remain tied to it. This thesis introduces zkSDK, a modular framework that streamlines ZK application development by abstracting away backend complexities. At the core of zkSDK is Presto, a custom Python-like programming language that enables profiling and analysis of a program to assess the intensity of its computational workload. Combined with user-defined criteria, zkSDK employs a dynamic selection algorithm to automatically choose the optimal ZK-proving backend. Through an in-depth analysis and evaluation of real-world workloads, we demonstrate that zkSDK effectively selects the best-suited backend from a set of supported ZK backends, delivering a seamless and user-friendly development experience.
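The kind of dynamic selection the abstract describes can be sketched as a weighted-cost ranking over profiled workload metrics. This is a hypothetical illustration, not zkSDK's actual API: the backend names, cost models, and criteria weights below are invented for the example.

```python
# Score each candidate ZK backend against profiled workload metrics
# (e.g. arithmetic operation count) and user-defined criteria weights
# (e.g. proving time vs. proof size); pick the lowest weighted cost.

def select_backend(profile, criteria, backends):
    def score(b):
        # weighted sum of each cost dimension evaluated on the profile
        return sum(criteria[k] * b["cost"][k](profile) for k in criteria)
    return min(backends, key=score)

backends = [
    {"name": "risc0",  "cost": {"time": lambda p: 3 * p["ops"],   # toy model
                                "size": lambda p: 200}},
    {"name": "circom", "cost": {"time": lambda p: 5 * p["ops"],
                                "size": lambda p: 1}},
]

# A user who only cares about proving time gets the faster toy backend.
best = select_backend({"ops": 10}, {"time": 1.0, "size": 0.0}, backends)
```

Shifting the criteria weights toward proof size would flip the choice, which is the behaviour a user-criteria-driven selector needs.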
Vesna Nowack, Dalal Alrajeh, Carolina Gutierrez Muñoz
et al.
Artificial Intelligence (AI) has become an important part of our everyday lives, yet user requirements for designing AI-assisted systems in law enforcement remain unclear. To address this gap, we conducted qualitative research on decision-making within a law enforcement agency. Our study aimed to identify limitations of existing practices, explore user requirements and understand the responsibilities that humans expect to undertake in these systems. Participants in our study highlighted the need for a system capable of processing and analysing large volumes of data efficiently to help in crime detection and prevention. Additionally, the system should satisfy requirements for scalability, accuracy, justification, trustworthiness and adaptability to be adopted in this domain. Participants also emphasised the importance of having end users review the input data that might be challenging for AI to interpret, and validate the generated output to ensure the system's accuracy. To keep up with the evolving nature of the law enforcement domain, end users need to help the system adapt to the changes in criminal behaviour and government guidance, and technical experts need to regularly oversee and monitor the system. Furthermore, user-friendly human interaction with the system is essential for its adoption and some of the participants confirmed they would be happy to be in the loop and provide necessary feedback that the system can learn from. Finally, we argue that it is very unlikely that the system will ever achieve full automation due to the dynamic and complex nature of the law enforcement domain.
This study is legal research with a descriptive-analytical specification. The form of internal oversight by the Regional Supervision Inspectorate (Inspektorat Pengawasan Daerah, Itwasda) of the West Sumatra Regional Police (Kepolisian Daerah Sumatera Barat) over the restorative resolution of criminal offences consists in ensuring that the process is carried out with transparency and professionalism and in accordance with the applicable legal provisions. This oversight covers the requirements determining which offences are eligible for restorative justice, and ensures that the offender, the victim, and the community (especially the families) consent. The oversight is conducted to prevent irregularities: Itwasda is tasked with ensuring that police officers do not abuse their position to force or manipulate the outcome of the restorative justice process. Irregularities to be avoided include alleged extortion of, or bribery by, the offender to escape the formal judicial process. Itwasda also monitors the implementation of the agreements reached. The obstacles to internal oversight by the Inspectorate over the restorative resolution of criminal offences are the absence of a standardized process: without clear rules, performance is measured against inconsistent standards, and it is difficult to ensure that mediation and settlement are conducted properly and fairly. Restorative justice then risks becoming a merely administrative procedure that disregards genuine restorative principles, so the process fails to deliver the outcome expected by all parties involved. Further obstacles are limited resources, both in the number of supervisory personnel and in the technological support needed for regular monitoring of criminal cases resolved through restorative justice.
This article discusses the problem of using the results of administrative activities by the fire investigation authorities. This problem has been little studied and requires further research, even though the use of non-procedural information has been repeatedly examined by criminal procedure scholars. The article clarifies how administrative records-management materials are used at the stage of initiating a criminal case and at the stage of preliminary investigation. The author analyses the views of scholars who have studied this problem and examines the characteristic features of evidence necessary for converting the above materials into procedural form. Considerable attention is paid to the use of the results of administrative activities by the bodies of inquiry in fire cases during pre-investigation checks and, in particular, at the stage of preliminary investigation. In conclusion, the author finds that materials from the administrative activities of State fire supervision authorities can be used in the investigation of fire cases as material evidence and other documents, subject to their verification through investigative actions. In addition, the author proposes amendments to the Criminal Procedure Law that would eliminate the conditions preventing materials of administrative activity from being recognized as reliable and admissible evidence. These proposals make both a scientific and a practical contribution to the development of the domestic criminal process.
Large language models struggle to synthesize disparate pieces of information into a coherent plan when approaching a complex procedural task. In this work, we introduce a novel formalism and structure for such procedural knowledge. Based on this formalism, we present a novel procedural knowledge dataset called LCStep, which we created from LangChain tutorials. To leverage this procedural knowledge to solve new tasks, we propose analogy-augmented generation (AAG), which draws inspiration from the human ability to assimilate past experiences to solve unfamiliar problems. AAG uses a custom procedure memory store to retrieve and adapt specialized domain knowledge to answer new procedural tasks. We demonstrate that AAG outperforms few-shot and RAG baselines on LCStep, RecipeNLG, and CHAMP datasets under a pairwise LLM-based evaluation, corroborated by human evaluation in the case of RecipeNLG.
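The retrieve-and-adapt loop behind analogy-augmented generation can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the bag-of-letters `embed` function is a toy substitute for a real sentence-embedding model, and the memory entries are invented.

```python
import numpy as np

def embed(text):
    # Toy bag-of-letters embedding, normalized for cosine similarity;
    # a real AAG system would use a learned sentence-embedding model.
    v = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            v[ord(ch) - ord('a')] += 1
    n = np.linalg.norm(v)
    return v / n if n else v

def retrieve(query, memory, k=2):
    # Rank stored procedures by similarity to the new task and return
    # the top-k as analogies for the generator to adapt.
    q = embed(query)
    return sorted(memory, key=lambda item: -float(embed(item["task"]) @ q))[:k]

memory = [
    {"task": "load a CSV file", "steps": ["open file", "parse rows"]},
    {"task": "train a model",   "steps": ["load data", "fit"]},
]
analogies = retrieve("load a JSON file", memory)
```

In the full pipeline, the retrieved `steps` would be handed to an LLM along with the new task, and the adapted output written back to the procedure memory store.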
Tools for fighting cyber-criminal activities using new technologies are promoted and deployed every day. Too often, however, they are unnecessarily complex and hard to use, requiring deep domain and technical knowledge. These characteristics often limit the engagement of law enforcement and end-users with technologies that, despite their potential, remain misunderstood. For this reason, in this study we describe our experience in combining learning and training methods, and the potential benefits of gamification for enhancing technology transfer and increasing adult learning. The participants in this case are experienced practitioners in professions and industries exposed to terrorism financing (such as law enforcement officers, financial investigation officers, and private investigators). We define training activities at different levels to increase the exchange of information about new trends and criminal modus operandi among and within law enforcement agencies, intensify cross-border cooperation, and support efforts to combat and prevent terrorism funding activities. In parallel, a game (hackathon) is designed to address realistic challenges related to the dark net, crypto assets, new payment systems, and dark web marketplaces that could be used for terrorist activities. The entire methodology was evaluated using quizzes, contest results, and engagement metrics. About 60% of participants completed the 11-week training course, while the hackathon results, gathered in two pilot studies (Madrid and The Hague), show growing expertise among the participants (an average progression in points achieved). At the same time, more than 70% of participants evaluated the gamification approach positively, and more than 85% considered the implemented use cases suitable for their investigations.
Europa's surface exhibits many regions of complex topography termed 'chaos terrains'. One set of hypotheses for chaos terrain formation requires upward migration of liquid water from perched water bodies within the icy shell formed by convection and tidal heating. However, consideration of the behavior of terrestrial ice sheets suggests the upwards movement of water from englacial water bodies is uncommon. Instead, rapid downwards hydrofracture from supraglacial lakes - unbounded given a sufficient volume of water - can occur in relatively low tensile stress states given a sufficiently deep initial fracture due to the negative relative buoyancy of water. I suggest that downwards, not upwards, fracture may be more reasonable for perched water bodies but show that full hydrofracture is unlikely if the perched water body is located beneath a mechanically strong icy lid. However, full hydrofracture is possible in the event of lid break up over a perched water body and likely in the event of a meteor impact that generates sufficient meltwater and a tensile shock. This provides a possible mechanism for the transfer of biologically important nutrients to the subsurface ocean and the formation of chaos terrains.
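The role of water's negative relative buoyancy can be made explicit with the standard terrestrial crevasse-hydrofracture scaling the abstract invokes. This is a sketch in my own notation ($\rho_w$, $\rho_i$, $g$, $\sigma_t$), not an equation taken from the paper:

```latex
% Net opening stress at the tip of a fully water-filled fracture of
% depth d below the water body, under a far-field tensile stress sigma_t:
% water pressure grows with depth as rho_w g d, while the ice
% overburden closing the crack grows only as rho_i g d.
\sigma_{\mathrm{net}}(d) \;=\; \sigma_t \;+\; \left(\rho_w - \rho_i\right) g\, d
% Since rho_w > rho_i, the net stress increases with depth, so once an
% initial fracture is deep enough for the tip stress to exceed the
% fracture toughness of ice, downward propagation is self-sustaining --
% hence only a "sufficiently deep initial fracture" is required even in
% relatively low tensile stress states.
```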
This article explores the trend of increasing automation in law enforcement and criminal justice settings through three use cases: predictive policing, machine evidence and recidivism algorithms. The focus lies on artificial-intelligence-driven tools and technologies employed, whether at pre-investigation stages or within criminal proceedings, in order to decode human behaviour and facilitate decision-making as to whom to investigate, arrest, prosecute, and eventually punish. In this context, this article first underlines the existence of a persistent dilemma between the goal of increasing the operational efficiency of police and judicial authorities and that of safeguarding fundamental rights of the affected individuals. Subsequently, it shifts the focus onto key principles of criminal procedure and the presumption of innocence in particular. Using Article 6 ECHR and the Directive (EU) 2016/343 as a starting point, it discusses challenges relating to the protective scope of presumption of innocence, the burden of proof rule and the in dubio pro reo principle as core elements of it. Given the transformations law enforcement and criminal proceedings go through in the era of algorithms, big data and artificial intelligence, this article advocates the adoption of specific procedural safeguards that will uphold rule of law requirements, and particularly transparency, fairness and explainability. In doing so, it also takes into account EU legislative initiatives, including the reform of the EU data protection acquis, the E-evidence Proposal, and the Proposal for an EU AI Act. Additionally, it argues in favour of revisiting the protective scope of key fundamental rights, considering, inter alia, the new dimensions suspicion has acquired.
This chapter explains the procedure followed in criminal cases and civil cases. It explores the role of the Crown Prosecution Service in criminal prosecutions. It also considers the important role that expert witnesses can play in civil proceedings and what the expectations are of those who put themselves forward for that role.
Introduction: the article analyzes the possibilities of the penitentiary system for implementing the goals of criminal punishment in the execution of a penalty in the form of deprivation of liberty against those convicted of extremism-related crimes. The emphasis is placed on the possibilities of correcting persons pursuing extremist ideology and preventing the commission of new crimes (both by the convicts themselves, by isolating them from society, and by other citizens following their example). The article analyzes domestic and foreign experience in the field of countering prison radicalization. Based on statistical data on the terms of imprisonment and types of correctional institutions, the authors propose the implementation of various resocialization schemes when correcting convicted extremists. Recommendations for preventing the spread of the relevant ideology among convicts are presented. Purpose: to identify key current trends and problems associated with the spread of the terrorism ideology in correctional institutions; to consider them in the context of achieving criminal punishment goals; to develop sound proposals and recommendations for the effective correction of convicted extremists; and to prevent the expansion of extremist ideology in correctional institutions. Methods: the research is based on a combination of general and specific scientific methods: analysis and synthesis, and systematic, statistical, logical, formal-logical, sociological, comparative-legal, and hermeneutic methods. Results: the generalized statistical data on the total number of persons sentenced to imprisonment for extremism-related crimes over the past three years, the level of recidivism among them, the terms of imprisonment, and the types of penitentiary institutions in which these convicts serve their sentences show an upward trend in the number of such persons in correctional facilities of general and strict regimes, as well as of persons with unexpunged and outstanding convictions.
Trends increasing the risks of religious radicalization are identified, as are problems associated with the insufficient readiness of the penitentiary system to correct convicted extremists holding various mental, and especially radical religious, attitudes and to prevent the expansion of extremist radicalism. Drawing on domestic and foreign experience and on the actual and potential capabilities of the domestic penitentiary system, recommendations are developed to overcome the identified problems. Conclusions: the author substantiates the need to improve the professionalism of penitentiary institution employees and of third-party specialists involved in work with extremist convicts; the necessity of separating extremist convicts from others by creating separate sections of penitentiary institutions; the expediency of creating a specialized progressive system for the execution of punishments in relation to extremist convicts according to the degree of their correction; and the need to introduce and implement comprehensive programs to counter the spread of extremist ideology in correctional institutions, covering the closest circle of communication of the extremist criminal before his/her conviction.
The Hermite-Taylor method, introduced in 2005 by Goodrich, Hagstrom and Lorenz, is highly efficient and accurate when applied to linear hyperbolic systems on periodic domains. Unfortunately, its widespread use has been prevented by the lack of a systematic approach to implementing boundary conditions. In this paper we present the Hermite-Taylor Correction Function method, which provides exactly such a systematic approach for handling boundary conditions. Here we focus on Maxwell's equations but note that the method is easily extended to other hyperbolic problems.
Mengjiao Yang, Dale Schuurmans, Pieter Abbeel
et al.
Imitation learning aims to extract high-performance policies from logged demonstrations of expert behavior. It is common to frame imitation learning as a supervised learning problem in which one fits a function approximator to the input-output mapping exhibited by the logged demonstrations (input observations to output actions). While the framing of imitation learning as a supervised input-output learning problem allows for applicability in a wide variety of settings, it is also an overly simplistic view of the problem in situations where the expert demonstrations provide much richer insight into expert behavior. For example, applications such as path navigation, robot manipulation, and strategy games acquire expert demonstrations via planning, search, or some other multi-step algorithm, revealing not just the output action to be imitated but also the procedure for how to determine this action. While these intermediate computations may use tools not available to the agent during inference (e.g., environment simulators), they are nevertheless informative as a way to explain an expert's mapping of state to actions. To properly leverage expert procedure information without relying on the privileged tools the expert may have used to perform the procedure, we propose procedure cloning, which applies supervised sequence prediction to imitate the series of expert computations. This way, procedure cloning learns not only what to do (i.e., the output action), but how and why to do it (i.e., the procedure). Through empirical analysis on navigation, simulated robotic manipulation, and game-playing environments, we show that imitating the intermediate computations of an expert's behavior enables procedure cloning to learn policies exhibiting significant generalization to unseen environment configurations, including those configurations for which running the expert's procedure directly is infeasible.
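The contrast between the two supervision targets can be sketched with a toy planner standing in for the expert. Under the paper's framing, behavioral cloning supervises only the final action, while procedure cloning supervises the expert's intermediate computations (here, the states expanded by a BFS search, a privileged tool unavailable at inference) followed by the action. The expert, environment, and target construction below are all illustrative.

```python
from collections import deque

def bfs_expert(start, goal, neighbors):
    # Toy expert: plans with BFS, returning both its expansion order
    # (the intermediate computation) and the first action on a
    # shortest path to the goal.
    parent, frontier, expanded = {start: None}, deque([start]), []
    while frontier:
        s = frontier.popleft()
        expanded.append(s)
        if s == goal:
            break
        for n in neighbors(s):
            if n not in parent:
                parent[n] = s
                frontier.append(n)
    step = goal
    while parent[step] != start:   # walk back to recover the first step
        step = parent[step]
    return expanded, step

neighbors = lambda s: [s - 1, s + 1]   # 1-D chain environment
expanded, action = bfs_expert(0, 2, neighbors)

bc_target = action                # behavioral cloning: action only
pc_target = expanded + [action]   # procedure cloning: computation, then action
```

A sequence model trained on `pc_target` must reproduce the search trace before emitting the action, which is the mechanism the paper credits for generalization to unseen configurations.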
Hammurabi’s code reflects the social relations of its time, although most of these relations were regulated by the law of contract. The Code covers a variety of legal matters: it regulates very complex property, family, obligatory and criminal-legal relations, including judiciary provisions. The Code expresses the class character of the society, because it primarily protects the interests of the ruling class and punishes members of the ruling and subordinate classes differently for the same crimes. The Code was carved on a stone stele and was found by J. de Morgan's expedition in 1901. This masterpiece of human thought, almost four millennia old, was engraved in stone under Hammurabi for the temple of Sippar (now the ruins of Abu Habba near Baghdad). The well-preserved inscribed stele of the Code is kept in the Louvre.
We present the first verified implementation of a decision procedure for the quantifier-free theory of partial and linear orders. We formalise the procedure in Isabelle/HOL and provide a specification that is made executable using Isabelle's code generator. The procedure is already part of the development version of Isabelle as a sub-procedure of the simplifier.
Let $A$ be a finite-dimensional algebra over a field of characteristic $p>0$. We use a functorial approach involving torsion pairs to construct embeddings of endomorphism algebras of basic projective $A$--modules $P$ into those of the torsion submodules of $P$. As an application, we show that blocks of both the classical and quantum Schur algebras $S(2,r)$ and $S_q(2,r)$ are Morita equivalent as quasi-hereditary algebras to their Ringel duals if they contain $2p^k$ simple modules for some $k$.