This article focuses on the use of artificial intelligence (AI) in the Australian Public Service (APS) and its impact on the APS workforce. Though several policies and governance arrangements regarding AI use in the APS have been adopted since 2023, a survey of nearly 2,000 APS employees by the Community and Public Sector Union (CPSU) from late 2024 identified a range of ongoing issues. While it found that AI is used across the APS and viewed positively, concerns were raised about a lack of consultation and knowledge about AI governance arrangements, inadequate training for staff, and the use of AI in recruitment. In addition, it identified fears about AI's impact on public trust in government.
This perspective posits that gene-environment interplay (GxE) studies should be developed both theoretically and empirically to be of relevance to policymakers. On the theoretical front, this development is essential because the current literature lacks the integration of a clear framework capturing the various goals of public policies. Empirically, GxE models need to be further developed because the common way of modelling GxE effects fails to adequately capture the heterogeneous effects public policies may have along the distribution of genetic propensities (as captured by polygenic indices). We fill these gaps by proposing a policy classification for GxE research and by offering guidance on advancing the empirical modelling of policy-informative GxE interplay. In doing so, we provide a review of existing GxE studies on educational outcomes exploiting policy reforms or environments that could be targeted by public policy.
The electric power sector is a leading source of air pollutant emissions, impacting the public health of nearly every community. Although regulatory measures have reduced air pollutants, fossil fuels remain a significant component of the energy supply, highlighting the need for more advanced demand-side approaches to reduce the public health impacts. To enable health-informed demand-side management, we introduce HealthPredictor, a domain-specific AI model that provides an end-to-end pipeline linking electricity use to public health outcomes. The model comprises three components: a fuel mix predictor that estimates the contribution of different generation sources, an air quality converter that models pollutant emissions and atmospheric dispersion, and a health impact assessor that translates resulting pollutant changes into monetized health damages. Across multiple regions in the United States, our health-driven optimization framework yields substantially lower prediction errors in terms of public health impacts than fuel mix-driven baselines. A case study on electric vehicle charging schedules illustrates the public health gains enabled by our method and the actionable guidance it can offer for health-informed energy management. Overall, this work shows how AI models can be explicitly designed to enable health-informed energy management for advancing public health and broader societal well-being. Our datasets and code are released at: https://github.com/Ren-Research/Health-Impact-Predictor.
Paula Fraga-Lamas, Tiago M Fernandez-Carames, Oscar Blanco-Novoa
et al.
Shipbuilding companies are upgrading their inner workings in order to create Shipyards 4.0, where the principles of Industry 4.0 are paving the way to further digitalized and optimized processes in an integrated network. Among the different Industry 4.0 technologies, this article focuses on Augmented Reality, whose application in the industrial field has led to the concept of Industrial Augmented Reality (IAR). This article first describes the basics of IAR and then carries out a thorough analysis of the latest IAR systems for industrial and shipbuilding applications. Then, in order to build a practical IAR system for shipyard workers, the main hardware and software solutions are compared. Finally, as a conclusion after reviewing all the aspects related to IAR for shipbuilding, an IAR system architecture is proposed that combines Cloudlets and Fog Computing, which reduces latency and accelerates rendering tasks while offloading compute-intensive tasks from the Cloud.
This paper investigates differences in characteristics across publication types for aging-related genetic research. We utilized bibliometric data for five model species retrieved from authoritative databases including PubMed. Publications are classified into types according to PubMed. Results indicate substantial divergence across publication types in attention paid to aging-related research, scopes of studied genes, and topical preferences. For instance, comparative studies and meta-analyses show a greater focus on aging than validation studies. Reviews concentrate more on cell biology while clinical studies emphasize translational topics. Publication types also manifest variations in highly studied genes, like APOE for reviews versus GH1 for clinical studies. Despite differences, top genes like insulin are universally emphasized. Publication types demonstrate similar levels of imbalance in the distribution of research effort across genes. Differences also exist in bibliometric measures such as authorship counts and citation counts. Publication types show distinct preferences for journals of certain topical specialties and scope of readership. Overall, findings showcase distinct characteristics of publication types in studying aging-related genetics, owing to their unique nature and objectives. This study is the first endeavor to systematically depict the inherent structure of a biomedical research field from the perspective of publication types and provides insights into knowledge production and evaluation patterns across biomedical communities.
Beth Goldberg, Diana Acosta-Navas, Michiel Bakker
et al.
Two substantial technological advances have reshaped the public square in recent decades: first with the advent of the internet and second with the recent introduction of large language models (LLMs). LLMs offer opportunities for a paradigm shift towards more decentralized, participatory online spaces that can be used to facilitate deliberative dialogues at scale, but also create risks of exacerbating societal schisms. Here, we explore four applications of LLMs to improve digital public squares: collective dialogue systems, bridging systems, community moderation, and proof-of-humanity systems. Building on the input from over 70 civil society experts and technologists, we argue that LLMs both afford promising opportunities to shift the paradigm for conversations at scale and pose distinct risks for digital public squares. We lay out an agenda for future research and investments in AI that will strengthen digital public squares and safeguard against potential misuses of AI.
As public sector agencies rapidly introduce new AI tools in high-stakes domains like social services, it becomes critical to understand how decisions to adopt these tools are made in practice. We borrow from the anthropological practice to ``study up'' those in positions of power, and reorient our study of public sector AI around those who have the power and responsibility to make decisions about the role that AI tools will play in their agency. Through semi-structured interviews and design activities with 16 agency decision-makers, we examine how decisions about AI design and adoption are influenced by their interactions with and assumptions about other actors within these agencies (e.g., frontline workers and agency leaders), as well as those above (legal systems and contracted companies), and below (impacted communities). By centering these networks of power relations, our findings shed light on how infrastructural, legal, and social factors create barriers and disincentives to the involvement of a broader range of stakeholders in decisions about AI design and adoption. Agency decision-makers desired more practical support for stakeholder involvement around public sector AI to help overcome the knowledge and power differentials they perceived between them and other stakeholders (e.g., frontline workers and impacted community members). Building on these findings, we discuss implications for future research and policy around actualizing participatory AI approaches in public sector contexts.
The performance of medical research can be viewed and evaluated not only from the perspective of publication output, but also from the perspective of economic exploitability. Patents can represent the exploitation of research results and thus the transfer of knowledge from research to industry. In this study, we set out to identify publication-patent pairs in order to use patents as a proxy for the economic impact of research. To identify these pairs, we matched scholarly publications and patents by comparing the names of authors and inventors. To resolve the ambiguities that arise in this name-matching process, we expanded our approach with two additional filter features, one used to assess the similarity of text content, the other to identify common references in the two document types. To evaluate text similarity, we extracted and transformed technical terms from a medical ontology (MeSH) into numerical vectors using word embeddings. We then calculated the results of the two supporting features over an example five-year period. Furthermore, we developed a statistical procedure which can be used to determine valid patent classes for the domain of medicine. Our complete data processing pipeline is freely available, from the raw data of the two document types right through to the validated publication-patent pairs.
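The matching strategy described in this abstract (a hard author/inventor name match, filtered by an embedding-based text-similarity score) can be sketched roughly as follows. This is an illustrative assumption of how such a pipeline could look, not the authors' implementation: the record layout, the toy embedding table, the "surname, first" name format, and the 0.5 threshold are all hypothetical.

```python
from math import sqrt

def normalize_name(name):
    """Reduce a name to lowercase 'surname initial'; assumes the
    'Surname, First' ordering used in many bibliographic sources."""
    parts = name.lower().replace(",", " ").split()
    if len(parts) < 2:
        return name.lower()
    return f"{parts[0]} {parts[1][0]}"

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def term_vector(terms, embeddings):
    """Average the embedding vectors of the terms present in the table."""
    vecs = [embeddings[t] for t in terms if t in embeddings]
    if not vecs:
        return None
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def candidate_pairs(publications, patents, embeddings, threshold=0.5):
    """Pair documents sharing a normalized author/inventor name whose
    averaged term vectors are sufficiently similar."""
    pairs = []
    for pub in publications:
        for pat in patents:
            names_pub = {normalize_name(n) for n in pub["authors"]}
            names_pat = {normalize_name(n) for n in pat["inventors"]}
            if not names_pub & names_pat:
                continue
            u = term_vector(pub["terms"], embeddings)
            v = term_vector(pat["terms"], embeddings)
            if u and v and cosine(u, v) >= threshold:
                pairs.append((pub["id"], pat["id"]))
    return pairs
```

Combining the hard name match with a soft similarity threshold is what makes the filter useful: common surnames alone generate many false positives, and the content filter discards pairs whose subject matter diverges.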
We consider the problem of training private recommendation models with access to public item features. Training with Differential Privacy (DP) offers strong privacy guarantees, at the expense of loss in recommendation quality. We show that incorporating public item features during training can help mitigate this loss in quality. We propose a general approach based on collective matrix factorization (CMF), that works by simultaneously factorizing two matrices: the user feedback matrix (representing sensitive data) and an item feature matrix that encodes publicly available (non-sensitive) item information. The method is conceptually simple, easy to tune, and highly scalable. It can be applied to different types of public item data, including: (1) categorical item features; (2) item-item similarities learned from public sources; and (3) publicly available user feedback. Furthermore, these data modalities can be collectively utilized to fully leverage public data. Evaluating our method on a standard DP recommendation benchmark, we find that using public item features significantly narrows the quality gap between private models and their non-private counterparts. As privacy constraints become more stringent, models rely more heavily on public side features for recommendation. This results in a smooth transition from collaborative filtering to item-based contextual recommendations.
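As a rough illustration of the collective matrix factorization idea behind this abstract, the two matrices can be factorized jointly through a shared item factor. This sketch shows only the non-private core; the paper's DP training (noise addition, clipping) is deliberately omitted, and the hyperparameters are illustrative assumptions.

```python
import numpy as np

def cmf(R, F, rank=4, alpha=0.5, lr=0.02, iters=2000, seed=0):
    """Collective matrix factorization: jointly fit the user feedback
    matrix R ~ U @ V.T and the public item-feature matrix F ~ V @ W.T,
    sharing the item factor V. alpha weights the public-feature loss."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    n_feats = F.shape[1]
    U = 0.1 * rng.standard_normal((n_users, rank))
    V = 0.1 * rng.standard_normal((n_items, rank))
    W = 0.1 * rng.standard_normal((n_feats, rank))
    for _ in range(iters):
        E_r = R - U @ V.T  # feedback reconstruction error (sensitive data)
        E_f = F - V @ W.T  # feature reconstruction error (public data)
        U += lr * (E_r @ V)
        V += lr * (E_r.T @ U + alpha * E_f @ W)
        W += lr * (alpha * E_f.T @ V)
    return U, V, W
```

Because V must reconstruct both matrices, public item information regularizes the item embeddings, which is the mechanism the abstract credits for narrowing the private/non-private quality gap.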
Nicholas Mirin, Heather Mattie, Latifa Jackson
et al.
Rapidly evolving technology, data and analytic landscapes are permeating many fields and professions. In public health, the need for data science skills including data literacy is particularly prominent given both the potential of novel data types and analysis methods to fill gaps in existing public health research and intervention practices, as well as the potential of such data or methods to perpetuate or augment health disparities. Through a review of public health courses and programs at the top 10 U.S. and globally ranked schools of public health, this article summarizes existing educational efforts in public health data science. These existing practices serve to inform efforts for broadening such curricula to further schools and populations. Data science ethics course offerings are also examined in context of assessing how population health principles can be blended into training across levels of data involvement to augment the traditional core of public health curricula. Parallel findings from domestic and international 'outside the classroom' training programs are also synthesized to advance approaches for increasing diversity in public health data science. Based on these program reviews and their synthesis, a four-point formula is distilled for furthering public health data science education efforts, toward development of a critical and inclusive mass of practitioners with fluency to leverage data to advance goals of public health and improve quality of life in the digital age.
The study considers the system of mass communication channels of an educational organization in information and telecommunications networks in general, and in social networks in particular; an urgent task for the university administration in this regard is to manage the social and public communications initiated by the university and its affiliated structures. The authors treat the information and communication support of the university's activities, including its web presence, as a particular task of public relations, and regard the distributed nature of the managing subject as the main characteristic of the management of the university's media communication. In their view, the pressing task belongs to the field of communication management: consolidating the efforts of a distributed subject of management of the university's media communication. Two special types of capital, social capital and publicity capital, are considered in this study as effects of systematic activity in managing the university's media communication. Focused management of the social and publicity capital of an educational organization in the new digital environment allows a systematic approach, first, to forming corporate identity in the most important segments of the internal university community (students, teachers, and employees) and, second, to building loyalty to the university in key segments of the external public (parents of students, school students and school teachers, and representatives of business and government). Managing the university's publicity capital ensures the systematic development of its intangible assets: image, brand, publicity, positive public opinion, and reputation.
This paper aims to further understand the main factors influencing the behavioural intentions (BI) of private vehicle users towards public transport, to provide policymakers and public transport operators with the tools they need to attract more private vehicle users. As service quality, satisfaction and attitudes towards public transport are considered the main motivational forces behind the BI of public transport users, this research analyses 26 indicators frequently associated with these constructs for both public transport users and private vehicle users. Non-parametric tests and ordinal logit models have been applied to an online survey conducted in Madrid's metropolitan area with a sample size of 1,025 respondents (525 regular public transport users and 500 regular private vehicle users). In order to achieve a comprehensive analysis and to deal with heterogeneity in perceptions, 338 models have been developed for the entire sample and for 12 user segments. The results led to the identification of indicators with no significant differences between public transport and private vehicle users in any of the segments being considered (punctuality, information and low-income), as well as those that did show significant differences in all the segments (proximity, intermodality, save time and money, and lifestyle). The main differences between public transport and private vehicle users were found in the attitudes towards public transport and for certain user segments (residents in the city centre, males, young, with a university qualification and with incomes above 2,700 EUR/month). Findings from this study can be used to develop policies and recommendations for persuading more private vehicle users to use public transport services.
This paper studies an optimal Public-Private Partnership contract between a public entity and a consortium, in continuous time and with a continuous payment, with the possibility for the public entity to stop the contract. The public ("she") pays a continuous rent to the consortium ("he"), while the latter gives a best response characterized by his effort. This effort impacts the drift of the social welfare until a terminal date decided by the public, when she stops the contract and compensates the consortium. Usually, the public cannot observe the effort made by the consortium, leading to a principal-agent problem with moral hazard. We solve this mixed optimal stochastic control and optimal stopping problem in the context of moral hazard. The public value function is characterized as the solution of an associated Hamilton-Jacobi-Bellman variational inequality. The public value function and the optimal effort and rent processes are computed numerically using Howard's algorithm. In particular, the impact of the social welfare's volatility on the optimal contract is studied.
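Howard's algorithm is policy iteration. A discrete-state, discrete-action analogue (a strong simplification of the continuous HJB variational-inequality setting in the paper, with the stopping decision omitted) alternates exact policy evaluation with greedy policy improvement:

```python
import numpy as np

def howard(P, r, gamma=0.95):
    """Howard's policy iteration for a finite-state, finite-action problem.
    P[a] is the transition matrix under action a, r[a] the reward vector.
    Returns the optimal value function and the optimal policy."""
    n_actions, n_states, _ = P.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = P[policy, np.arange(n_states)]
        r_pi = r[policy, np.arange(n_states)]
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: pick the greedy action in every state.
        q = r + gamma * np.einsum("asj,j->as", P, v)
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return v, policy
        policy = new_policy
```

The exact linear solve in the evaluation step is what distinguishes Howard's method from value iteration and gives it its fast (typically few-iteration) convergence.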
Giovanni Abramo, Ciriaco Andrea D'Angelo, Marco Solazzi
It is widely recognized that collaboration between the public and private research sectors should be stimulated and supported, as a means of favoring innovation and regional development. This work takes a bibliometric approach, based on co-authorship of scientific publications, to propose a model for comparative measurement of the performance of public research institutions in collaboration with domestic industry, i.e., the private sector. The model relies on an identification and disambiguation algorithm developed by the authors to link each publication to its real authors. An example of application of the model is given, for the case of the academic system and private enterprises in Italy. The study demonstrates that for each scientific discipline and each national administrative region, it is possible to measure the performance of individual universities in both intra-regional and extra-regional collaboration, normalized with respect to advantages of location. Such results may be useful in informing regional policies and merit-based public funding of research organizations.
Outlier and noise detection processes are highly useful in the quality assessment of any kind of database. Such processes may have novel civic and public applications in the detection of anomalies in public data. The purpose of this work is to explore the possibilities of experimentation with, validation and application of hybrid outlier and noise detection procedures in public officials' affidavit systems currently available in Argentina.
We consider the problem of fairly allocating indivisible public goods. We model the public goods as elements with feasibility constraints on what subsets of elements can be chosen, and assume that agents have additive utilities across elements. Our model generalizes existing frameworks such as fair public decision making and participatory budgeting. We study a groupwise fairness notion called the core, which generalizes well-studied notions of proportionality and Pareto efficiency, and requires that each subset of agents must receive an outcome that is fair relative to its size. In contrast to the case of divisible public goods (where fractional allocations are permitted), the core is not guaranteed to exist when allocating indivisible public goods. Our primary contributions are the notion of an additive approximation to the core (with a tiny multiplicative loss), and polynomial time algorithms that achieve a small additive approximation, where the additive factor is relative to the largest utility of an agent for an element. If the feasibility constraints define a matroid, we show an additive approximation of 2. A similar approach yields a constant additive bound when the feasibility constraints define a matching. More generally, if the feasibility constraints define an arbitrary packing polytope with mild restrictions, we show an additive guarantee that is logarithmic in the width of the polytope. Our algorithms are based on variants of the convex program for maximizing the Nash social welfare, but differ significantly from previous work in how they are used. Our guarantees are meaningful even when there are fewer elements than the number of agents. As far as we are aware, our work is the first to approximate the core in indivisible settings.
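The Nash social welfare program underlying such algorithms can be illustrated on a tiny fractional instance. The projected-gradient routine below is a crude sketch under a single cardinality constraint (choose k elements fractionally); the paper's algorithms handle general packing constraints and rounding, which this omits, and the step size and iteration count are illustrative assumptions.

```python
import numpy as np

def max_nash_welfare(U, k, lr=0.05, iters=2000):
    """Fractionally maximize sum_i log u_i(x) subject to sum(x) = k and
    0 <= x <= 1, by projected gradient ascent. U[i, j] is agent i's
    additive utility for element j; x[j] is the fraction of element j."""
    n_agents, n_elems = U.shape
    x = np.full(n_elems, k / n_elems)  # start from the uniform point
    for _ in range(iters):
        utils = U @ x + 1e-9                        # agent utilities under x
        grad = (U / utils[:, None]).sum(axis=0)     # gradient of sum_i log u_i
        x = np.clip(x + lr * grad, 0.0, 1.0)
        x *= k / x.sum()                            # crude projection onto budget
        x = np.clip(x, 0.0, 1.0)
    return x
```

The log objective is what produces proportionality-like guarantees: an agent whose utility is near zero contributes an enormous gradient, so the optimizer cannot ignore any individual or group.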
The British postal service, Royal Mail, was privatised in 2013, following failed attempts at divestiture in 1994 and 2009. This article analyses processes of marketisation, liberalisation and privatisation, highlighting how strong workplace‐centred union presence allowed for considerable influence and bargaining gains within such highly sensitive political projects of restructuring.
Ishak Hajjej, Caroline Hillairet, Mohamed Mnif
et al.
Public-Private Partnership (PPP) is a contract between a public entity and a consortium, in which the public outsources the construction and maintenance of a facility (hospital, university, prison...). One drawback of this contract is that the public may not be able to observe the effort of the consortium but only its impact on the social welfare of the project. We aim to characterize the optimal contract for a PPP in this setting of asymmetric information between the two parties. This leads to a stochastic control problem under partial information, and it is also related to principal-agent problems with moral hazard. Considering a wider set of information for the public and using martingale arguments in the spirit of Sannikov, the optimization problem can be reduced to a standard stochastic control problem, which is solved numerically. We then prove that, for the optimal contract, the effort of the consortium is explicitly characterized. In particular, it is shown that the optimal rent is not a linear function of the effort, contrary to some models in the economic literature on PPP contracts.
Contemporary debates on "open science" mostly focus on the public accessibility of the products of scientific and academic work. In contrast, this paper presents arguments for "opening" the ongoing work of science. That is, this paper is an invitation to rethink the university with an eye toward engaging the public in the dynamic, conceptual and representational work involved in creating scientific knowledge. To this end, we posit that public computing spaces, a genre of open-ended, public learning environment where visitors interact with open source computing platforms to directly access, modify and create complex and authentic scientific work, can serve as a possible model of "open science" in the university.