S. Fienberg, F. Mosteller, D. L. Wallace
Results for "Political science"
Showing 20 of ~22,201,620 results · from CrossRef, DOAJ, arXiv, Semantic Scholar
W. Riker
K. Dowding
J. Cressica Brazier, Sharon Klein, Jasmine Lamb et al.
Mario Villagran
The choice of protest tactics in a social movement has often been analyzed in terms of the movement's demands, participants, and internal characteristics. However, recent evidence highlights the context or setting in which a demonstration takes place as another key element in the process: using structural equation modeling, studies have shown a link between high perceived injustice in the treatment received from authorities and greater acceptance of non-normative and/or violent methods of protest. In line with this approach, this article examines the extent to which another form of authority legitimacy, political trust, affects the overall justification of the use of violence by both protesters and the police. Using longitudinal data from Chile (2016-2019), which capture the collective protests of the "Social Outbreak", three analytical approaches (fixed effects, cross-lagged, and multilevel models) demonstrate that declining political trust not only weakened public acceptance of police violence but also increased tolerance of protesters' use of violent tactics. This relationship adds a new dimension to the analysis of violent protest, suggesting that low political trust in many modern states may be a contributing factor in the increasing radicalization of demonstrations in recent years.
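The fixed-effects approach named in this abstract can be illustrated with a toy within-estimator: demeaning each unit's observations over time removes stable unit traits before estimating the slope. This is a minimal sketch on hypothetical panel data, not the study's actual specification or variables:

```python
# Minimal unit fixed-effects (within) estimator sketch.
# panel maps unit -> list of (x, y) observations over time;
# all data and variable names here are hypothetical.

def within_transform(panel):
    """Demean each unit's observations over time (unit fixed effects)."""
    out = {}
    for unit, obs in panel.items():
        xs = [x for x, _ in obs]
        ys = [y for _, y in obs]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        out[unit] = [(x - mx, y - my) for x, y in obs]
    return out

def fe_slope(panel):
    """Pooled OLS slope on within-transformed data: beta = Sxy / Sxx."""
    sxy = sxx = 0.0
    for obs in within_transform(panel).values():
        for x, y in obs:
            sxy += x * y
            sxx += x * x
    return sxy / sxx
```

Because each unit is demeaned against its own average, any time-invariant difference between units (for example, a unit-specific baseline level of trust) drops out of the slope estimate.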
Nathan Junzi Chen
Amidst the rapid normalization of generative artificial intelligence (GAI), intelligent systems have come to dominate political discourse across information media. However, internalized political biases stemming from training data skews, human prejudice, and algorithmic flaws continue to plague this novel technology. This study employs a zero-shot classification approach to evaluate algorithmic political partisanship through a methodical combination of ideological alignment, topicality, response sentiment, and objectivity. A total of 1800 model responses across six mainstream large language models (LLMs) were individually input into four distinct fine-tuned classification algorithms, each responsible for computing one of the aforementioned metrics. The results show an amplified liberal-authoritarian alignment across the six LLMs evaluated, with notable instances of reasoning supersessions and canned refusals. The study subsequently highlights the psychological influences underpinning human-computer interactions and how intrinsic biases can permeate public discourse. The resulting distortion of the political landscape can ultimately manifest as conformity or polarization, depending on the region's pre-existing socio-political structures.
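The zero-shot classification idea used in this abstract can be sketched without the paper's fine-tuned models: score a model response against textual label descriptions and pick the best match. The bag-of-words cosine similarity and the labels below are simplified stand-ins, not the study's actual classifiers or taxonomy:

```python
# Toy zero-shot-style labeling: compare a response's word counts against
# each label's description by cosine similarity; highest score wins.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def zero_shot(text: str, label_descriptions: dict) -> str:
    words = Counter(text.lower().split())
    scores = {label: cosine(words, Counter(desc.lower().split()))
              for label, desc in label_descriptions.items()}
    return max(scores, key=scores.get)
```

In practice a pretrained entailment model would replace the cosine step, but the control flow (no task-specific training, labels supplied at inference time) is the same.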
Natalia Ożegalska-Łukasik, Szymon Łukasik
Political beliefs vary significantly across countries, reflecting distinct historical, cultural, and institutional contexts. These ideologies, ranging from liberal democracies to rigid autocracies, shape human societies as well as the digital systems constructed within them. The advent of generative artificial intelligence, particularly Large Language Models (LLMs), introduces new agents into the political space: agents trained on massive corpora that replicate and proliferate socio-political assumptions. This paper analyses whether LLMs display propensities consistent with democratic or autocratic world-views. We test this through experiments with leading LLMs developed in disparate political contexts, using several existing psychometric and political orientation measures. The analysis is based on both numerical scoring and qualitative analysis of the models' responses. Findings indicate high model-to-model variability and a strong association with the political culture of the country in which the model was developed. These findings highlight the need for more detailed examination of the socio-political dimensions embedded within AI systems.
Estelle Ferrarese
Jonas O. Meckling, Nina Kelsey, E. Biber et al.
Amanda Clayton, Diana Z. O’Brien, Jennifer M. Piscopo
What does women’s presence in political decision-making bodies signal to citizens? Do these signals differ based on the body’s policy decisions? And do women and men respond to women’s presence similarly? Though scholars have demonstrated the substantive and symbolic benefits of women’s representation, little work has examined how women’s presence affects citizens’ perceptions of democratic legitimacy. We test the relationship between representation and legitimacy beliefs through survey experiments on a nationally representative sample of U.S. citizens. First, we find that women’s equal presence legitimizes decisions that go against women’s interests. We show suggestive evidence that this effect is particularly pronounced among men, who tend to hold less certain views on women’s rights. Second, across decision outcomes and issue areas, women’s equal presence legitimizes decision-making processes and confers institutional trust and acquiescence. These findings add new theoretical insights into how, when, and for whom inclusive representation increases perceptions of democratic legitimacy.

Replication Materials: The data, code, and any additional materials required to replicate all analyses in this article are available on the American Journal of Political Science Dataverse within the Harvard Dataverse Network, at: https://doi.org/10.7910/DVN/7190MT

In 2017, newly inaugurated President Donald Trump sparked public outrage when he reinstated the global gag order on abortion funding while surrounded by only men. Opprobrium against groups of men making decisions concerning women is not a new phenomenon. Famously, protests erupted in 1991 when an all-male, all-white congressional committee interrogated Anita Hill—a black woman—about being sexually harassed. Nor is public outcry limited to cases that restrict women’s rights. PayPal endured public shaming via social media in April 2016, when it organized a panel of “senior male leaders” to discuss pay equity.
That all-male panels confront scorn, especially when their topic addresses matters connected to women’s experiences, suggests that women’s presence can affect how citizens view policy decisions and the institutions and processes that guide them.

Amanda Clayton is Assistant Professor, Department of Political Science, Vanderbilt University; Diana Z. O’Brien is Associate Professor, Department of Political Science, Texas A&M University; Jennifer M. Piscopo is Assistant Professor, Department of Politics, Occidental College.
The backlash against all-male panels thus raises a central question for the study of democratic politics: Does the inclusion of representatives from historically underrepresented groups (typically called descriptive representation) legitimize decisions and decision-making procedures in the eyes of the general public? Democratic theorists argue that legislative outcomes, processes, and institutions cannot be legitimate when certain social groups remain systematically excluded from elected office (Dovi 2007; Mansbridge 1999; Phillips 1995). Despite these strong normative expectations, most research on symbolic representation—that is, the link

American Journal of Political Science, Vol. 63, No. 1, January 2019, pp. 113–129. © 2018, Midwest Political Science Association. DOI: 10.1111/ajps.12391
Olga S. Chikrizova
The relevance of studying Islamist terrorism stems from its destructive impact on national and global security, as well as on the dialogue between Western and Eastern, particularly Muslim, nations since the early 2000s. Islamist terrorism reinforces entrenched prejudices against Islam and Muslims, leading to their demonization and preventing constructive interaction between communities professing different religions, thus hindering the establishment of relations based on mutual trust. This study examines the number of terrorist attacks committed by Islamist groups, and their victims, between 2000 and 2020, and tests a methodology for scoring their terrorist activities. Based on the Global Terrorism Database and the author’s sample of 155 groups broadcasting Islamist ideology, three stages in the development of Islamist terrorism were identified, a directly proportional relationship between the number of terrorist attacks and the number of victims was demonstrated, and the geography of Islamist terrorist activity was analyzed. Methodologically, this study combines the analysis of terrorism as both a political phenomenon and a religious manifestation; Islamist terrorist groups themselves are seen as political projects masquerading as religiously motivated communities. The terrorist attacks of September 11, 2001, had little impact on Islamist terrorism, in contrast to the destabilization of Iraq, which, along with Afghanistan, became another platform for training terrorists. Quantitative analysis revealed that the Middle East and North Africa were mistakenly perceived as the “epicenter” of Islamist terrorism in 2000-2020: Southeast Asia led in terrorist attacks in 2000, while South Asia ranked first in 2003, 2005-2013, and 2018-2020. It was confirmed that instability at the local and national levels serves as fertile ground for Islamist terrorism.
The possibilities and limitations of the proposed methodology are outlined, and the prospects for its further application in scientific studies of Islamist terrorism are described.
Pietro Bernardelle, Leon Fröhling, Stefano Civelli et al.
The analysis of political biases in large language models (LLMs) has primarily examined these systems as single entities with fixed viewpoints. While various methods exist for measuring such biases, the impact of persona-based prompting on LLMs' political orientation remains unexplored. In this work we leverage PersonaHub, a collection of synthetic persona descriptions, to map the political distribution of persona-based prompted LLMs using the Political Compass Test (PCT). We then examine whether these initial compass distributions can be manipulated through explicit ideological prompting towards diametrically opposed political orientations: right-authoritarian and left-libertarian. Our experiments reveal that synthetic personas predominantly cluster in the left-libertarian quadrant, with models demonstrating varying degrees of responsiveness when prompted with explicit ideological descriptors. While all models demonstrate significant shifts towards right-authoritarian positions, they exhibit more limited shifts towards left-libertarian positions, suggesting an asymmetric response to ideological manipulation that may reflect inherent biases in model training.
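The Political Compass Test used in this abstract places a respondent on a two-dimensional (economic, social) plane from Likert-scale answers. The official PCT scoring weights are not public, so the items, direction flags, and the scaling to the familiar [-10, 10] range below are all assumptions for illustration:

```python
# Simplified sketch of scoring Political Compass Test style answers into a
# 2-D (economic, social) coordinate. Items and weights are hypothetical.

LIKERT = {"strongly disagree": -2, "disagree": -1,
          "agree": 1, "strongly agree": 2}

def compass_point(answers, items):
    """answers: Likert strings; items: (axis, direction) per question,
    where direction is +1, or -1 for reverse-coded items."""
    econ = soc = 0.0
    n_econ = n_soc = 0
    for ans, (axis, direction) in zip(answers, items):
        v = LIKERT[ans] * direction
        if axis == "economic":
            econ += v; n_econ += 1
        else:
            soc += v; n_soc += 1
    # Scale each axis mean (in [-2, 2]) to the [-10, 10] compass range.
    return (5 * econ / max(n_econ, 1), 5 * soc / max(n_soc, 1))
```

Running such a scorer over many persona-prompted completions yields the kind of distribution over compass quadrants the abstract describes.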
Tobias Rohrbach, Mykola Makhortykh, Maryna Sydorova
Search engines like Google have become major information gatekeepers that use artificial intelligence (AI) to determine who and what voters find when searching for political information. This article proposes and tests a framework of algorithmic representation of minoritized groups in a series of four studies. First, two algorithm audits of political image searches delineate how search engines reflect and uphold structural inequalities by under- and misrepresenting women and non-white politicians. Second, two online experiments show that these biases in algorithmic representation in turn distort perceptions of the political reality and actively reinforce a white and masculinized view of politics. Together, the results have substantive implications for the scientific understanding of how AI technology amplifies biases in political perceptions and decision-making. The article contributes to ongoing public debates and cross-disciplinary research on algorithmic fairness and injustice.
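A basic audit metric of the kind this abstract describes compares a group's share among retrieved results with its share in a reference population. The function and sample values below are a hypothetical sketch, not the article's actual measurement:

```python
# Sketch of a simple algorithm-audit metric: a group's share among
# retrieved results minus its baseline share in a reference population.
# Negative values indicate under-representation.

def representation_gap(results, group, baseline_share):
    """results: list of group labels for retrieved items."""
    share = sum(1 for r in results if r == group) / len(results)
    return share - baseline_share
```

An audit would compute this gap per query and group, then test whether it differs systematically from zero across queries.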
Hanyuan Jiang
In their study of the Political Resource Curse, Brollo et al. (2013) identified a new channel for investigating whether resource windfalls are unambiguously beneficial to society, with both theory and empirical evidence. This paper revisits that framework with a new dataset. Specifically, we implement a regression discontinuity design and a difference-in-differences specification
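The difference-in-differences idea named in this abstract reduces, in the classic 2x2 case, to subtracting the control group's over-time change from the treated group's. This is a minimal sketch with hypothetical group outcomes, not the paper's actual specification:

```python
# Classic 2x2 difference-in-differences estimate:
# (treated post - treated pre) minus (control post - control pre).
# All outcome values passed in are hypothetical.

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    def mean(xs):
        return sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))
```

The control group's change estimates the counterfactual trend the treated group would have followed absent treatment, which is why the design rests on a parallel-trends assumption.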
Campos Neutrais Revista Latino-Americana de Relações Internacionais
Di Zhou, Yinxian Zhang
The rising popularity of ChatGPT and other AI-powered large language models (LLMs) has led to increasing studies highlighting their susceptibility to mistakes and biases. However, most of these studies focus on models trained on English texts. Taking an innovative approach, this study investigates political biases in GPT's multilingual models. We posed the same questions about high-profile political issues in the United States and China to GPT in both English and Simplified Chinese, and our analysis of the bilingual responses revealed that the political "knowledge" (content) and political "attitude" (sentiment) of GPT's bilingual models are significantly more inconsistent on political issues in China. The Simplified Chinese GPT models not only tended to provide pro-China information but also presented the least negative sentiment towards China's problems, whereas the English GPT was significantly more negative towards China. This disparity may stem from Chinese state censorship and US-China geopolitical tensions, which influence the training corpora of GPT's bilingual models. Moreover, both the Chinese and English models tended to be less critical towards the issues of "their own" side, as represented by the language used, than towards the issues of "the other." This suggests that GPT's multilingual models could develop a "political identity" and an associated sentiment bias based on their training language. We discuss the implications of our findings for information transmission and communication in an increasingly divided world.
Sadia Kamal, Brenner Little, Jade Gullic et al.
Developing machine learning models to characterize political polarization on online social media presents significant challenges, stemming from factors such as the lack of annotated data, the presence of noise in social media datasets, and the sheer volume of data. Common research practice typically examines the biased structure of online user communities for a given topic or qualitatively measures the impact of polarized topics on social media. However, there is limited work analyzing polarization at the ground level, that is, in the social media posts themselves. Existing analysis of this kind relies heavily on annotated data, which often requires laborious human labeling, offers labels only for specific problems, and cannot determine the near-future bias state of a social media conversation. Understanding the degree of political orientation conveyed in social media posts is crucial for quantifying the bias of online user communities and investigating the spread of polarized content. In this work, we first introduce two heuristic methods that leverage news media bias and post content to label social media posts. Next, we compare the efficacy and quality of the heuristically labeled dataset with a randomly sampled human-annotated dataset. Additionally, we demonstrate that current machine learning models can achieve improved performance in predicting the political orientation of social media posts, in both traditional supervised learning and few-shot learning setups. We conduct experiments using the proposed heuristic methods and machine learning approaches to predict the political orientation of posts collected from two social media forums with diverse political ideologies: Gab and Twitter.
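The news-media-bias heuristic described above can be sketched as a lookup: a post inherits the known leaning of the outlet it links to. The outlet ratings below are hypothetical placeholders, not the paper's actual source lexicon:

```python
# Sketch of a source-bias labeling heuristic: a post is labeled with the
# known leaning of the first outlet it mentions. Ratings are hypothetical.

SOURCE_BIAS = {"example-left.com": "left", "example-right.com": "right"}

def heuristic_label(post: str):
    """Return the bias of the first known outlet found in the post, else None."""
    for outlet, bias in SOURCE_BIAS.items():
        if outlet in post:
            return bias
    return None
```

Posts matching no known outlet stay unlabeled (None), which is why such heuristics are typically paired with a second, content-based method, as in this paper.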
Vera Sosnovik, Romaissa Kessi, Maximin Coavoux et al.
Online political advertising has become the cornerstone of political campaigns. The budget spent solely on political advertising in the U.S. has increased by more than 100%, from $700 million during the 2017-2018 U.S. election cycle to $1.6 billion during the 2020 U.S. presidential elections. Naturally, the capacity offered by online platforms to micro-target ads with political content has been worrying lawmakers, journalists, and online platforms, especially after the 2016 U.S. presidential election, in which Cambridge Analytica targeted voters with political ads congruent with their personality. To curb such risks, both online platforms and regulators (through the Digital Services Act proposed by the European Commission) have agreed that researchers, journalists, and civil society need to be able to scrutinize the political ads running on large online platforms. Consequently, online platforms such as Meta and Google have implemented Ad Libraries that contain information about all political ads running on their platforms. This is the first step on a long path: due to the volume of available data, it is impossible to go through these ads manually, and we now need automated methods and tools to assist in the scrutiny of political ads. In this paper, we focus on political ads that are related to policy. Understanding which policies politicians or organizations promote, and to whom, is essential in determining dishonest representations. This paper proposes automated methods based on pre-trained models to classify ads into the 14 main policy groups identified by the Comparative Agendas Project (CAP). We discuss several inherent challenges that arise. Finally, we analyze policy-related ads featured on Meta platforms during the 2022 French presidential election period.
Ben Stobaugh, Dhiraj Murthy
We explore the understudied area of social payments to evaluate whether we can predict the gender and political affiliation of Venmo users based on the content of their Venmo transactions. Latent attribute detection has been successfully applied in the study of social media, but there remains a dearth of work using data other than Twitter, as well as a continued need for studies of mobile payment spaces like Venmo, which remain understudied due to the lack of data access. We hypothesize that, using methods similar to latent attribute analysis of Twitter data, machine learning algorithms will be able to predict the gender and political affiliation of Venmo users with a moderate degree of accuracy. We collected crowdsourced training data that correlates participants' political views with their public Venmo transaction history through the paid Prolific service. Additionally, we collected 21 million public Venmo transactions from recently active users to use for gender classification. We then ran the collected data through a TF-IDF vectorizer and used the result to train a support vector machine (SVM). After hyperparameter tuning and additional feature engineering, we were able to predict users' gender with a high level of accuracy (0.91) and had modest success predicting users' political orientation (0.63).
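The TF-IDF + SVM pipeline this abstract describes is a standard scikit-learn pattern. The toy transaction strings and labels below are invented for illustration; the study's actual features, hyperparameters, and data are not reproduced here:

```python
# Minimal TF-IDF + linear SVM pipeline of the kind described above.
# Training examples here are hypothetical stand-ins for Venmo captions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def train_classifier(transactions, labels):
    """Fit TF-IDF features into a linear SVM and return the pipeline."""
    model = make_pipeline(TfidfVectorizer(), LinearSVC())
    model.fit(transactions, labels)
    return model
```

Once fitted, `model.predict(["some caption"])` returns a label; hyperparameter tuning (e.g. the SVM's `C`, the vectorizer's n-gram range) would be layered on with a grid search.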
M. Palmer, E. Bernhardt, E. Chornesky et al.
Page 31 of 1110081