Public sector agencies implement the redistributive role of the State by acting as the leading providers of the critical public services that many rely on. In recent years, public agencies have increasingly adopted algorithmic prioritization tools to determine which individuals should be allocated scarce public resources. Prior work on these tools has largely focused on assessing and improving their fairness, accuracy, and validity. What remains understudied, however, is how the structural design of prioritization itself shapes both the effectiveness of these tools and the experiences of those subject to them under realistic public sector conditions. In this study, we demonstrate the fallibility of adopting a prioritization approach in the public sector by showing how the underlying mechanisms of prioritization generate significant relative disparities between groups of intersectional identities as resources become increasingly scarce. We argue that despite prevailing arguments that prioritization of resources can lead to efficient allocation outcomes, prioritization can intensify perceptions of inequality among impacted individuals. We contend that efficiencies generated by algorithmic tools should not be conflated with the dominant rhetoric that efficiency necessarily entails "doing more with less", and we highlight the risks of overlooking the resource constraints present in real-world implementation contexts.
Jonathan Rystrøm, Chris Schmitz, Karolina Korgul
et al.
Deploying Large Language Model-based agents (LLM agents) in the public sector requires assurance that they meet the stringent legal, procedural, and structural requirements of public-sector institutions. Practitioners and researchers often turn to benchmarks for such assessments. However, it remains unclear what criteria benchmarks must meet to adequately reflect public-sector requirements, or how many existing benchmarks meet them. In this paper, we first define such criteria based on a first-principles survey of the public administration literature: benchmarks must be process-based, realistic, and public-sector-specific, and must report metrics that reflect the unique requirements of the public sector. We analyse more than 1,300 benchmark papers against these criteria using an expert-validated, LLM-assisted pipeline. Our results show that no single benchmark meets all of the criteria. Our findings are a call to action both for researchers to develop public-sector-relevant benchmarks and for public-sector officials to apply these criteria when evaluating their own agentic use cases.
Industrial Control Systems (ICSs) are complex interconnected systems used to manage process control within industrial environments, such as chemical processing plants and water treatment facilities. As the modern industrial environment moves towards Internet-facing services, ICSs face an increased risk of attack, which necessitates ICS-specific Intrusion Detection Systems (IDSs). The development of such IDSs relies heavily on simulated testbeds, as it is unrealistic and sometimes hazardous to use an operational control system. While some testbeds have been proposed, they often use a limited selection of virtual ICS simulations to test and verify cyber security solutions, and little investigation has gone into systems that can efficiently simulate multiple ICS architectures. The current trend in research is to develop security solutions on a single ICS simulation, which can bias them towards its specific architecture. We present ICS-SimLab, an end-to-end software suite that uses Docker containerization to create a highly configurable ICS simulation environment. This framework enables researchers to rapidly build and customize different ICS environments that adhere to the Purdue Enterprise Reference Architecture, facilitating the development of security solutions across different systems. To demonstrate its capability, we present three virtual ICS simulations: a solar panel smart grid, a water bottle filling facility, and a system of intelligent electronic devices. Furthermore, we run cyber-attacks on these simulations and construct a dataset of recorded malicious and benign network traffic for use in IDS development.
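As a concrete illustration of the containerized approach, the sketch below uses the Docker SDK for Python to stand up a two-component testbed on a shared network. The image names (`ics-simlab/plc`, `ics-simlab/hmi`) and the `PLC_HOST` variable are hypothetical placeholders, not ICS-SimLab's actual artifacts.

```python
import docker  # pip install docker

# A minimal sketch of composing a containerized ICS testbed: one PLC and
# one HMI container on a shared bridge network, mirroring the lower
# levels of the Purdue reference architecture. Image names are
# hypothetical placeholders, not ICS-SimLab's actual images.
client = docker.from_env()
client.networks.create("ics_net", driver="bridge")

plc = client.containers.run("ics-simlab/plc", name="plc1",
                            network="ics_net", detach=True)
hmi = client.containers.run("ics-simlab/hmi", name="hmi1",
                            network="ics_net", detach=True,
                            environment={"PLC_HOST": "plc1"})
print(plc.name, hmi.name)
```

Because each component is just a container, swapping in a different ICS architecture amounts to changing the set of images and networks, which is what makes rapid reconfiguration plausible.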
If public trust in a new technology is lost early in its life cycle, it can take much longer for the benefits of that technology to be realised. Eventually, tens of millions of people will collectively have the power to determine the success or failure of self-driving technology, driven by their perceptions of risk, data handling, safety, governance, accountability, benefits to their lives and more. This paper reviews the evidence on safety-critical technology covering trust, engagement, and acceptance. The paper takes a narrative review approach, concluding with a scalable model for self-driving technology education and engagement. The paper finds that if a mismatch emerges between the public's perceptions and expectations of self-driving systems, it can lead to misuse, disuse, or abuse of the system. Furthermore, we find from the evidence that industry experts often misunderstand what matters to the public, users, and stakeholders. However, we find that engagement programmes that develop approaches to delivering the right information at the right time, in the right format, oriented around what matters to the public create the potential for ever more sophisticated conversations, greater trust, and a progressively more active public role of critique and advocacy. This work has been undertaken as part of the Partners for Automated Vehicle Education (PAVE) United Kingdom programme.
This paper provides a novel summary measure of ideological polarization in the American public based on the joint distribution of survey responses. Intuitively, polarization is maximized when views are concentrated at opposing extremes with little mass in between and when opinions are highly correlated across many issues. Using this measure, I show that public polarization has been increasing for the past three decades and that these changes are mostly due to increases in general disagreement, not dimensional collapse. Furthermore, these increases are not explained by the diverging opinions of Democrats and Republicans, nor by the divergence of opinions across gender, geography, education, or any other demographic divide.
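The abstract does not spell out the measure, but its two stated properties (mass at opposing extremes, correlation across issues) can be illustrated with a simple proxy: the largest eigenvalue of the covariance of respondents' issue-position vectors. This is only an illustrative stand-in, not the article's actual measure.

```python
import numpy as np

def polarization_proxy(responses):
    """Illustrative proxy: largest eigenvalue of the covariance of
    issue-position vectors (positions scaled to [-1, 1]). It grows when
    mass concentrates at opposing extremes and when positions align
    across issues; the article's actual measure is not given here."""
    X = np.asarray(responses, dtype=float)   # respondents x issues
    Xc = X - X.mean(axis=0)                  # center: consensus is not polarization
    return np.linalg.eigvalsh(Xc.T @ Xc / len(Xc))[-1]

# Opposing extremes, correlated across issues -> maximal (2.0):
print(polarization_proxy([[-1, -1], [-1, -1], [1, 1], [1, 1]]))
# Same extremes, uncorrelated issues -> lower (1.0):
print(polarization_proxy([[-1, 1], [1, -1], [-1, -1], [1, 1]]))
# Moderate, clustered views -> near zero:
print(polarization_proxy([[0, 0], [0.2, -0.1], [-0.2, 0.1], [0, 0]]))
```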
Public models offer predictions for a variety of downstream tasks and have played a crucial role in various AI applications, showcasing their proficiency in accurate prediction. However, an exclusive emphasis on prediction accuracy may not align with the diverse end objectives of downstream agents. Recognizing the public model's predictions as a service, we advocate for integrating the objectives of downstream agents into the optimization process. Concretely, to address performance disparities and foster fairness among heterogeneous agents during training, we propose a novel Equitable Objective. This objective, coupled with a policy gradient algorithm, trains the public model to produce a more equitable/uniform performance distribution across downstream agents, each with their own unique concerns. Both theoretical analysis and empirical case studies demonstrate the effectiveness of our method in advancing performance equity across diverse downstream agents that use the public model for decision-making. Code and datasets are released at https://github.com/Ren-Research/Socially-Equitable-Public-Models.
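The abstract names the ingredients (an Equitable Objective plus a policy gradient algorithm) without giving formulas, so the following is a hedged sketch under assumptions: a q-norm over per-agent losses as the equity-oriented objective, optimized with a simple perturbation-based gradient estimator. The paper's exact objective and algorithm may differ; the linked repository has the real implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def equitable_objective(agent_losses, q=2.0):
    # q = 1 recovers the usual average loss; larger q weights the
    # worst-off downstream agents more, pushing performance toward
    # a more uniform distribution across agents.
    losses = np.asarray(agent_losses, dtype=float)
    return (losses ** q).mean() ** (1.0 / q)

def agent_losses(theta):
    # Stand-in for running each downstream agent's task on the public
    # model's predictions; the agents' "ideal points" are hypothetical.
    return np.abs(theta - np.array([0.0, 0.5, 2.0]))

theta, lr, sigma = 1.0, 0.05, 0.3
for _ in range(500):
    eps = rng.normal(0.0, sigma)
    delta = (equitable_objective(agent_losses(theta + eps))
             - equitable_objective(agent_losses(theta)))
    theta -= lr * delta * eps / sigma**2  # REINFORCE-style gradient estimate
print(round(theta, 2))  # settles between the agents' ideal points
```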
AI is increasingly being used in the public sector, including in public security. In this context, the use of AI-powered remote biometric identification (RBI) systems is a much-discussed technology. RBI systems are used to identify criminal activity in public spaces, but are criticised for inheriting biases and violating fundamental human rights. It is therefore important to ensure that such systems are developed in the public interest, which means that any technology deployed for public use needs to be scrutinised. While there is a consensus among business leaders, policymakers and scientists that AI must be developed in an ethical and trustworthy manner, scholars have argued that ethical guidelines do not guarantee ethical AI but rather prevent stronger regulation of AI. As a possible counterweight, public opinion can have a decisive influence on policymakers in establishing the boundaries and conditions under which AI systems should be used -- if at all. However, we know little about the conditions that lead to regulatory demand for AI systems. In this study, we focus on the role of trust in AI, as well as trust in law enforcement, as potential factors that may lead to demands for the regulation of AI technology. In addition, we explore the mediating effects of perceptions of discrimination regarding RBI. We test these effects on four different RBI use cases, varying the temporal aspect (real-time vs. post hoc analysis) and the purpose of use (prosecution of criminals vs. safeguarding public events), in a survey among German citizens. We find that German citizens do not differentiate between the different modes of application in their demand for RBI regulation. Furthermore, we show that perceptions of discrimination lead to demands for stronger regulation, while trust in AI and trust in law enforcement have the opposite effect, reducing demand for a ban on RBI systems.
Sepideh Bahadoripour, Ethan MacDonald, Hadis Karimipour
The growing number of cyber-attacks against Industrial Control Systems (ICS) in recent years has elevated security concerns due to their potentially catastrophic impact. Given the complex nature of ICS, detecting a cyber-attack in them is extremely challenging and requires advanced methods that can harness multiple data modalities. This research presents a deep multi-modal cyber-attack detection model that jointly processes network and sensor modality data from ICS. Results on the Secure Water Treatment (SWaT) system show that the proposed model outperforms existing single-modality models and recent works in the literature, achieving 0.99 precision, 0.98 recall, and 0.98 F-measure, demonstrating the effectiveness of combining both modalities in a single model for detecting cyber-attacks.
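As an illustration of what combining both modalities can look like, here is a minimal late-fusion sketch in PyTorch: separate encoders for network-traffic and sensor features, concatenated before a binary attack/benign head. The feature dimensions are hypothetical (51 loosely echoes SWaT's sensor/actuator count), and the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn

class MultiModalDetector(nn.Module):
    """Late fusion of two modalities: encode each separately, then
    classify on the concatenated representations."""
    def __init__(self, net_dim=64, sensor_dim=51, hidden=32):
        super().__init__()
        self.net_enc = nn.Sequential(nn.Linear(net_dim, hidden), nn.ReLU())
        self.sensor_enc = nn.Sequential(nn.Linear(sensor_dim, hidden), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, net_x, sensor_x):
        z = torch.cat([self.net_enc(net_x), self.sensor_enc(sensor_x)], dim=-1)
        return torch.sigmoid(self.head(z))  # probability of attack

net_x, sensor_x = torch.randn(8, 64), torch.randn(8, 51)
print(MultiModalDetector()(net_x, sensor_x).shape)  # torch.Size([8, 1])
```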
The spread of digital information technologies can significantly increase opportunities for the openness and growth of financial relations and reduce abuse and corruption, which in turn contributes to financial security at all levels of the economic system. The aim of the article is to create a methodological basis for the formation of a fundamentally digital model of transparency in financial and economic relations at the level of public finance, one that minimizes threats to financial security and maximizes the development opportunities arising from the digitalization of the economy and society. Presentation of the main material. The article analyzes current scientific approaches to determining the impact of digital transformations on the transparency of financial relations and identifies the most relevant areas of research on this topic. A bibliographic analysis carried out with the VOSviewer software revealed cluster relationships between the categories of "digital transformations", "national security" and other economic categories, confirming significant global scientific interest in this topic and its interdisciplinary nature. Given that the provision of budget security indicators plays a key role in shaping the financial security of the state, considerable attention is paid to the specifics of the Open Budget Index, which is formed by calculating indicators that comprehensively characterize the transparency of the budget process. Ukraine's rating positions according to this index are compared with those of other countries. The possibilities of the Transparent Budget system, which is part of the open government of Ukraine, are considered. The article also analyzes the Open Budget and Open Spending web portals, which provide informational support to the budget process and give citizens access to information on public funds at all stages of planning and use. Existing technologies and informational opportunities to ensure the transparency of public debt policy and the foreign exchange and monetary markets are considered. Conclusions. The study shows that the synergistic combination of digitalization and the integrated development of financial transparency is an effective means of improving financial security and reducing information barriers, and will act as a catalyst for positive changes in the economy.
Chowdhury Mohammad Sakib Anwar, Alexander Matros, Sonali SenGupta
We study a public good game with N citizens and a Governor who allocates resources from a common fund. Citizens may voluntarily contribute or be compelled to do so if audited, in which case shirkers face a penalty. The Governor decides how much of the fund to devote to public good provision, embezzling the remainder. Crucially, the Governor's utility combines material payoffs from embezzlement with belief-dependent reputational concerns. We fully characterize the symmetric subgame perfect equilibria (SSPE) of the game. The model always admits at least one pure-strategy equilibrium, ranging from universal free-riding with complete embezzlement to full contribution with efficient provision. Mixed-strategy equilibria exist only in a narrow region of parameter values, where multiple equilibria may coexist. Our analysis highlights the roles of penalties, audits, and reputational incentives in sustaining contribution and provision, thereby linking public good provision with the broader literature on corruption, embezzlement, and psychological game theory.
With the global spread of the COVID-19 pandemic, scientists from various disciplines responded quickly to this historic public health emergency. The sudden boom of COVID-19 related papers in a short period of time may exert unexpected influence on some commonly used bibliometric indicators. Through a large-scale investigation using the Science Citation Index Expanded and the Social Sciences Citation Index, this brief communication empirically confirms the citation advantage of COVID-19 related papers through the lens of Essential Science Indicators' highly cited papers. More than 8% of COVID-19 related papers published in 2020 and 2021 were selected as Essential Science Indicators highly cited papers, far above the global benchmark of 1%. The citation advantage of COVID-19 related papers across different Web of Science categories, countries, and journal impact factor quartiles is also demonstrated. The distortions that this citation advantage introduces into bibliometric indicators such as the journal impact factor are discussed at the end of this brief communication.
Noga H. Rotman, Yaniv Ben-Itzhak, Aran Bergman
et al.
Public clouds are one of the most thriving technologies of the past decade. Major applications over public clouds require world-wide distribution and large amounts of data exchange between their distributed servers. To that end, major cloud providers have invested tens of billions of dollars in building world-wide inter-region networking infrastructure that can support high-performance communication into, out of, and across public cloud geographic regions. In this paper, we lay the foundation for a comprehensive study and real-time monitoring of various characteristics of networking within and between public clouds. We start by presenting CloudCast, a world-wide and expandable measurement and analysis system, currently (January 2019) collecting data from three major public clouds (AWS, GCP and Azure), 59 regions, and 1184 intra-cloud and 2238 cross-cloud links (each link representing a direct connection between a pair of regions), for a total of 3422 continuously monitored links with active measurements every minute. CloudCast is composed of measurement agents automatically installed in each public cloud region, centralized control, a measurement database, an analysis engine and visualization tools. We then analyze the latency measurement data collected over almost a year. Our analysis yields surprising results. First, each public cloud exhibits a unique set of link latency behaviors over time. Second, using a novel, fair evaluation methodology, termed similar links, we compare the three clouds. Third, using a triangle methodology, we show that more than 50% of all links do not provide the optimal RTT. Triangles also provide a framework for getting around bottlenecks, improving not only the majority (53%-70%) of cross-cloud links by 30% to 70%, but also a significant portion (29%-45%) of intra-cloud links by 14%-33%.
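The triangle methodology can be made concrete with a small sketch: for every directed region pair, check whether relaying through a third region beats the direct RTT. The data layout below is hypothetical; CloudCast's internal formats are not described in the abstract.

```python
import itertools

def triangle_improvements(rtt):
    """For each directed pair (a, c), find the best one-hop relay b with
    rtt[a][b] + rtt[b][c] < rtt[a][c]. Returns {(a, c): (relayed_rtt, b)}."""
    better = {}
    regions = list(rtt)
    for a, c in itertools.permutations(regions, 2):
        candidates = [(rtt[a][b] + rtt[b][c], b)
                      for b in regions if b not in (a, c)]
        if candidates and min(candidates)[0] < rtt[a][c]:
            better[(a, c)] = min(candidates)
    return better

# Made-up RTTs (ms): relaying us-east <-> ap-south via eu-west wins.
rtt = {"us-east":  {"us-east": 0,   "eu-west": 80,  "ap-south": 220},
       "eu-west":  {"us-east": 80,  "eu-west": 0,   "ap-south": 120},
       "ap-south": {"us-east": 220, "eu-west": 120, "ap-south": 0}}
print(triangle_improvements(rtt))
```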
Andrea Castellani, Sebastian Schmitt, Stefano Squartini
The continuously growing amount of monitored data in the Industry 4.0 context requires strong and reliable anomaly detection techniques. Advances in Digital Twin technologies allow for realistic simulations of complex machinery, making them ideally suited to generating synthetic datasets for use in anomaly detection approaches, as an alternative to actual measurement data. In this paper, we present novel weakly-supervised approaches to anomaly detection for industrial settings. The approaches use a Digital Twin to generate a training dataset that simulates the normal operation of the machinery, along with a small set of labeled anomalous measurements from the real machinery. In particular, we introduce a clustering-based approach, called Cluster Centers (CC), and a neural architecture based on Siamese Autoencoders (SAE), both tailored to weakly-supervised settings with very few labeled data samples. The performance of the proposed methods is compared against various state-of-the-art anomaly detection algorithms on a real-world dataset from a facility monitoring system, using a range of performance measures. The influence of hyper-parameters related to feature extraction and network architecture is also investigated. We find that the proposed SAE-based solutions outperform state-of-the-art anomaly detection approaches very robustly across many different hyper-parameter settings and on all performance measures.
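The abstract does not detail the Cluster Centers (CC) method, so the following is a sketch under one plausible reading: fit cluster centers on the Digital Twin's normal-operation data, score samples by distance to the nearest center, and use the few labeled anomalies only to place the decision threshold.

```python
import numpy as np
from sklearn.cluster import KMeans

class ClusterCenterDetector:
    """Weakly-supervised sketch: centers describe normal operation; the
    handful of labeled anomalies only calibrate the threshold. The
    paper's exact CC formulation may differ."""
    def __init__(self, n_clusters=10):
        self.km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)

    def fit(self, X_normal, X_anomalous):
        self.km.fit(X_normal)
        d_norm = self.km.transform(X_normal).min(axis=1)
        d_anom = self.km.transform(X_anomalous).min(axis=1)
        # Place the threshold between typical normal and anomalous distances.
        self.threshold = 0.5 * (d_norm.mean() + d_anom.mean())
        return self

    def predict(self, X):
        # 1 = anomaly: farther from every "normal" center than the threshold.
        return (self.km.transform(X).min(axis=1) > self.threshold).astype(int)
```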
In recent months, COVID-19 has become a global pandemic and has had a huge impact on the world. People under different conditions have very different attitudes toward the epidemic. Due to the real-time and large-scale nature of social media, we can continuously obtain a massive amount of public opinion information related to the epidemic from social media. In particular, researchers may ask questions such as "how is the public reacting to COVID-19 in China during different stages of the pandemic?", "what factors affect public opinion orientation in China?", and so on. To answer such questions, we analyze pandemic-related public opinion information on Weibo, China's largest social media platform. Specifically, we first collect a large number of COVID-19-related public opinion microblogs. We then use a sentiment classifier to recognize and analyze the opinions of different groups of users. Using the collected sentiment-oriented microblogs, we track public opinion through different stages of the COVID-19 pandemic. Furthermore, we analyze key factors that might impact public opinion on COVID-19 (e.g., users in different provinces or users with different education levels). Empirical results show that public opinion varies with these key factors. We also analyze public attitudes on different topics of public concern, such as staying at home and quarantine.
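For readers unfamiliar with the sentiment-classification step, a minimal stand-in looks like the snippet below, using an off-the-shelf classifier from the `transformers` library. The paper's own classifier (and any Chinese-language model it uses for Weibo text) is not specified in the abstract.

```python
from transformers import pipeline

# Off-the-shelf sentiment classifier as a stand-in; the paper's actual
# model and preprocessing for Chinese microblogs are not specified.
classifier = pipeline("sentiment-analysis")
posts = ["Staying at home is hard but necessary.",
         "Grateful to the medical workers during quarantine."]
for post, result in zip(posts, classifier(posts)):
    print(result["label"], round(result["score"], 2), "-", post)
```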
A public decision-making problem consists of a set of issues, each with multiple possible alternatives, and a set of competing agents, each with a preferred alternative for each issue. We study adaptations of market economies to this setting, focusing on binary issues. Issues have prices, and each agent is endowed with artificial currency that she can use to purchase probability for her preferred alternatives (we allow randomized outcomes). We first show that when each issue has a single price that is common to all agents, market equilibria can be arbitrarily bad. This negative result motivates a different approach. We present a novel technique called "pairwise issue expansion", which transforms any public decision-making instance into an equivalent Fisher market, the simplest type of private goods market. This is done by expanding each issue into many goods: one for each pair of agents who disagree on that issue. We show that the equilibrium prices in the constructed Fisher market yield a "pairwise pricing equilibrium" in the original public decision-making problem which maximizes Nash welfare. More broadly, pairwise issue expansion uncovers a powerful connection between the public decision-making and private goods settings; this immediately yields several interesting results about public decisions markets, and furthers the hope that we will be able to find a simple iterative voting protocol that leads to near-optimum decisions.
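The expansion itself is mechanical and easy to state in code: each binary issue yields one Fisher-market good per pair of agents who disagree on it. The data layout below is hypothetical.

```python
from itertools import combinations

def pairwise_issue_expansion(issues, preferences):
    """One Fisher-market good per (issue, disagreeing agent pair).
    preferences[agent][issue] is the agent's preferred alternative (0/1)."""
    agents = list(preferences)
    return [(issue, a, b)
            for issue in issues
            for a, b in combinations(agents, 2)
            if preferences[a][issue] != preferences[b][issue]]

prefs = {"alice": {"park": 1, "road": 0},
         "bob":   {"park": 0, "road": 0},
         "carol": {"park": 1, "road": 1}}
print(pairwise_issue_expansion(["park", "road"], prefs))
# [('park', 'alice', 'bob'), ('park', 'bob', 'carol'),
#  ('road', 'alice', 'carol'), ('road', 'bob', 'carol')]
```

The equilibrium prices of these pairwise goods are what the paper interprets as a "pairwise pricing equilibrium" on the original instance.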
We propose a design for philanthropic or publicly-funded seeding to allow (near) optimal provision of a decentralized, self-organizing ecosystem of public goods. The concept extends ideas from Quadratic Voting to a funding mechanism for endogenous community formation. Individuals make public goods contributions to projects of value to them. The amount received by the project is (proportional to) the square of the sum of the square roots of contributions received. Under the "standard model" this yields first best public goods provision. Variations can limit the cost, help protect against collusion and aid coordination. We discuss applications to campaign finance, open source software ecosystems, news media finance and urban public projects. More broadly, we offer a resolution to the classic liberal-communitarian debate in political philosophy by providing neutral and non-authoritarian rules that nonetheless support collective organization.
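The funding rule quoted above translates directly into code; the proportionality constant is taken to be 1 here.

```python
from math import sqrt

def quadratic_funding(contributions):
    """Amount received by a project: the square of the sum of the square
    roots of its contributions (proportionality constant set to 1)."""
    return sum(sqrt(c) for c in contributions) ** 2

# The rule favors broad support: 100 donors of 1 unit attract far more
# funding than a single donor giving the same 100 units.
print(quadratic_funding([1.0] * 100))  # 10000.0
print(quadratic_funding([100.0]))      # 100.0
```

This is exactly what lets many small contributors collectively outweigh a single large donor, the property the mechanism relies on for decentralized public goods provision.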
Citation networks of scientific publications offer fundamental insights into the structure and development of scientific knowledge. We propose a new measure, called intermediacy, for tracing the historical development of scientific knowledge. Given two publications, an older and a more recent one, intermediacy identifies publications that seem to play a major role in the historical development from the older to the more recent publication. The identified publications are important in connecting the older and the more recent publication in the citation network. After providing a formal definition of intermediacy, we study its mathematical properties. We then present two empirical case studies, one tracing historical developments at the interface between the community detection literature and the scientometric literature and one examining the development of the literature on peer review. We show both conceptually and empirically how intermediacy differs from main path analysis, which is the most popular approach for tracing historical developments in citation networks. Main path analysis tends to favor longer paths over shorter ones, whereas intermediacy has the opposite tendency. Compared to main path analysis, we conclude that intermediacy offers a more principled approach for tracing the historical development of scientific knowledge.
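The abstract gives no formal definition, but in the published formulation each citation link is "active" independently with some probability p, and a publication's intermediacy is the probability that it lies on an active path from the older publication to the more recent one. Treating that reading as an assumption, a Monte Carlo sketch is:

```python
import random
import networkx as nx

def intermediacy_mc(G, older, recent, v, p=0.5, samples=2000, seed=0):
    """Estimate the probability that v lies on a path of 'active' edges
    from `older` to `recent` (edges assumed oriented from the older
    toward the more recent publication; each edge is active
    independently with probability p)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        H = nx.DiGraph((u, w) for u, w in G.edges if rng.random() < p)
        H.add_nodes_from(G.nodes)
        if nx.has_path(H, older, v) and nx.has_path(H, v, recent):
            hits += 1
    return hits / samples
```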
This article looks at the difficulties of adapting a very centralised employment relations system in a country characterised by a deep regional economic divide. In particular, by looking at the Italian public health sector, it is contended that the organised decentralisation of employment relations, implemented against wide regional differences, led to uneven outcomes in second-level (organisation) collective bargaining.
Public involvement is critical in sustainable contaminated site management. It is important for China to improve public knowledge and participation, foster dialogue between urban managers and laypeople, and accelerate remediation and redevelopment processes in contaminated site management. In this study, we collected 1812 questionnaires from nine cities across China through face-to-face interviews and statistically analyzed residents' perceptions of contaminated sites. The results show that respondents' concern about soil pollution was lower than for other environmental issues and that their knowledge of soil contamination was limited. The risks posed by contaminated industrial sites were well recognized by respondents, but they were dissatisfied with the performance of local agencies regarding information disclosure, publicity and education, and public participation. Respondents believed that local governments and polluters should take primary responsibility for contaminated site remediation. Most were unwilling to pay for remediation and preferred recreational or public-service redevelopment. Moreover, our research indicates that public perception varied among cities. This variation was mainly determined by the implementation of policy instruments and was additionally affected by remediation technology, pollutant type, regional policy response, and distance of residence from the sites.