T. Abdullah, Tamara L. Brown
Results for "History America"
Showing 20 of ~10,645,235 results · from DOAJ, arXiv, Semantic Scholar, CrossRef
Gilmária Salviano Ramos, Silvia Maria Fávero Arend
Feminicide is a crime under the Brazilian Penal Code (BRASIL, 1940), item VI, §§ 2 and 2-A, which amended Art. 121. It was introduced into Brazilian legislation by Law No. 13,104 of 2015, with a penalty of 12 to 30 years of imprisonment, increased by one third if the victim is pregnant or within three months after childbirth. This article describes and analyzes the discourses of justice officials, based on police inquiries and incident reports concerning cases of feminicide and attempted feminicide registered by the Police Station for the Protection of Children, Adolescents, Women, and the Elderly of São José (DPCAMI), in Santa Catarina, between 2012 and 2020. To that end, discussions of feminist movements and of legislation addressing violence against women, read in light of the concept of intersectionality, were fundamental to the investigation. Regarding the relationship between victims and accused, the deaths of women in Santa Catarina occurred in the context of intimate and/or affective relationships. This is what is called "direct feminicide": deaths by physical aggression, deaths involving sexual violence, deaths involving conjugal, domestic, or family violence, and deaths involving psychological torture or violence that degrades the woman's physical body.
Benjamin Mako Hill, Aaron Shaw
Wikipedia's founders could not have dreamed they were creating the most important laboratory for social scientific and computing research in history, but that is exactly what happened. Hill and Shaw take account of Wikipedia's enormous effect on academic scholarship.
Serin Kim, Sangam Lee, Dongha Lee
Large language models have advanced web agents, yet current agents lack personalization capabilities. Since users rarely specify every detail of their intent, practical web agents must be able to interpret ambiguous queries by inferring user preferences and contexts. To address this challenge, we present Persona2Web, the first benchmark for evaluating personalized web agents on the real open web, built upon the clarify-to-personalize principle, which requires agents to resolve ambiguity based on user history rather than relying on explicit instructions. Persona2Web consists of: (1) user histories that reveal preferences implicitly over long time spans, (2) ambiguous queries that require agents to infer implicit user preferences, and (3) a reasoning-aware evaluation framework that enables fine-grained assessment of personalization. We conduct extensive experiments across various agent architectures, backbone models, history access schemes, and queries with varying ambiguity levels, revealing key challenges in personalized web agent behavior. For reproducibility, our code and datasets are publicly available at https://anonymous.4open.science/r/Persona2Web-73E8.
Nina Régis
Gustavo Moisés Bortolameoti, Denis Fernando Radun
KaiWen Wei, Kejun He, Xiaomian Kang et al.
Generative recommendation, which directly generates item identifiers, has emerged as a promising paradigm for recommendation systems. However, its potential is fundamentally constrained by the reliance on purely autoregressive training. This approach focuses solely on predicting the next item while ignoring the rich internal structure of a user's interaction history, thus failing to grasp the underlying intent. To address this limitation, we propose Masked History Learning (MHL), a novel training framework that shifts the objective from simple next-step prediction to deep comprehension of history. MHL augments the standard autoregressive objective with an auxiliary task of reconstructing masked historical items, compelling the model to understand ``why'' an item path is formed from the user's past behaviors, rather than just ``what'' item comes next. We introduce two key contributions to enhance this framework: (1) an entropy-guided masking policy that intelligently targets the most informative historical items for reconstruction, and (2) a curriculum learning scheduler that progressively transitions from history reconstruction to future prediction. Experiments on three public datasets show that our method significantly outperforms state-of-the-art generative models, highlighting that a comprehensive understanding of the past is crucial for accurately predicting a user's future path. The code will be released to the public.
Zijian Hu, Zhenjie Zheng, Monica Menendez et al.
Network-wide traffic flow, which captures dynamic traffic volume on each link of a general network, is fundamental to smart mobility applications. However, the traffic flow observed by sensors is usually limited across the entire network due to the associated high installation and maintenance costs. To address this issue, existing research uses various supplementary data sources to compensate for insufficient sensor coverage and estimate the unobserved traffic flow. Although these studies have shown promising results, the inconsistent availability and quality of supplementary data across cities mean their methods typically face a trade-off between accuracy and generality. In this research, we are the first to advocate using Global Open Multi-Source (GOMS) data within an advanced deep learning framework to break this trade-off. The GOMS data primarily encompass geographical and demographic information, including road topology, building footprints, and population density, which can be consistently collected across cities. More importantly, these GOMS data are either causes or consequences of transportation activities, thereby creating opportunities for accurate network-wide flow estimation. Furthermore, we use map images to represent GOMS data, instead of traditional tabular formats, to capture richer and more comprehensive geographical and demographic information. To address multi-source data fusion, we develop an attention-based graph neural network that effectively extracts and synthesizes information from GOMS maps while simultaneously capturing spatiotemporal traffic dynamics from observed traffic data. A large-scale case study across 15 cities in Europe and North America was conducted. The results demonstrate stable and satisfactory estimation accuracy across these cities, which suggests that the trade-off can be successfully addressed using our approach.
Qiuju Wu, Weichuang Fang
Abstract: Most Latin American countries reached middle-income status almost simultaneously with Japan and South Korea. However, while Japan and South Korea embarked on a path of economic development through technological innovation and industrial upgrading, Latin America fell into the middle-income trap. By examining the history of industrial and technological development in Brazil, Argentina, and Mexico, three representative Latin American countries, this paper finds that the fundamental reasons for Latin America falling into the middle-income trap lie in the tension between the choice of industrialization path and primitive accumulation, as well as in institutional deficiencies in technology and industrial development. Latin America, having fallen into the middle-income trap, finds it hard to upgrade its technology and is caught in premature industrialization and the middle-technology trap. It can be said that Latin America has fallen into a double trap, whose two components reinforce each other and form a solid trap pattern. The warning from the Latin American experience is that late-developing countries need to maintain an open attitude during the process of technological catching up, build a complete technological innovation system, enhance industrial and technological governance capabilities, and seize the window of opportunity for technological and industrial upgrading. Otherwise, there is a significant risk of falling into the middle-income trap and even the double trap.
Patricia López
English Abstract: Corazón de Maíz gathers the stories and recipes of traditional Maya cooking in the regions of Huehuetenango, and is designed to promote the survival of the culinary system of the grandmothers, and the food that they prepared with love for the whole family. This "culinary novel" contains myths, legends, stories of the origin of some foods, and cooking techniques and methods. The foods prepared in the altiplano region developed from millenary traditions, the Hispanic invasion, the ravages of climate change bringing droughts and plagues that destroy life, and the various consequences and influences of migration. For the Maya, corn has always been the main and most sacred food. The stories and recipes given in this factual and artist's view of Q'anjob'al Maya food have continuing connections to the worldview embodied in the Popol Wuj, which states that everything that exists on earth has life, if it has life it has a spirit, and if it has a spirit then it is sacred; thus the grandparents say that corn is a sacred food that has its own spirit and energy. Document is in Spanish.
Lekshmi Thulasidharan, Elena D'Onghia, Robert Benjamin et al.
The prevailing model of galaxy formation proposes that galaxies like the Milky Way are built through a series of mergers with smaller galaxies over time. However, the exact details of the Milky Way's assembly history remain uncertain. In this study, we show that the Milky Way's merger history is uniquely encoded in the vertical thickness of its stellar disk. By leveraging age estimates from the value-added LAMOST DR8 catalog and the StarHorse ages from SDSS-IV DR12 data, we investigate the relationship between disk thickness and stellar ages in the Milky Way using a sample comprising Red Giants (RG), Red Clump Giants (RCG), and metal-poor stars (MPS). Guided by the IllustrisTNG50 simulations, we show that an increase in the dispersion of the vertical displacement of stars in the disk traces its merger history. This analysis reveals the epoch of a major merger event that assembled the Milky Way approximately 11.13 billion years ago, as indicated by the abrupt increase in disk thickness among stars of that age, likely corresponding to the Gaia-Sausage Enceladus (GSE) event. The data do not exclude an earlier major merger, which may have occurred about 1.3 billion years after the Big Bang. Furthermore, the analysis suggests that the geometric thick disk of the Milky Way was formed around 11.13 billion years ago, followed by a transition period of approximately 2.6 billion years leading to the formation of the geometric thin disk, illustrating the galaxy's structural evolution. Additionally, we identified three more recent events -- 5.20 billion, 2.02 billion, and 0.22 billion years ago -- potentially linked to multiple passages of the Sagittarius dwarf galaxy. Our study not only elucidates the complex mass assembly history of the Milky Way and highlights its past interactions but also introduces a refined method for examining the merger histories of external galaxies.
Christoph Draxler, Henk van den Heuvel, Arjan van Hessen et al.
Oral history concerns oral sources: witnesses to and commentators on historical events. Speech technology is an important instrument for processing such recordings in order to obtain transcriptions and further enhancements that structure the oral account. In this contribution we address the transcription portal and the web services associated with speech processing at BAS, speech solutions developed at LINDAT, how to do it yourself with Whisper, remaining challenges, and future developments.
Luis María Caterina
Seong Jin Kim, Tomotsugu Goto, Chih-Teng Ling et al.
With the advent of the James Webb Space Telescope (JWST), extra-galactic source count studies were conducted down to sub-microJy in the mid-infrared (MIR), which is several tens of times fainter than what the previous-generation infrared (IR) telescopes achieved in the MIR. In this work, we aim to interpret the JWST source counts and constrain cosmic star-formation history (CSFH) and black hole accretion history (BHAH). We employ the backward evolution of local luminosity functions (LLFs) of galaxies to reproduce the observed source counts from sub-microJy to a few tens of mJy in the MIR bands of the JWST. The shapes of the LLFs at the MIR bands are determined using the model templates of the spectral energy distributions (SEDs) for five representative galaxy types (star-forming galaxies, starbursts, composite, AGN type 2 and 1). By simultaneously fitting our model to all the source counts in the six MIR bands, along with the previous results, we determine the best-fit evolutions of MIR LFs for each of the five galaxy types, and subsequently estimate the CSFH and BHAH. Thanks to the JWST, our estimates are based on several tens of times fainter MIR sources, the existence of which was merely an extrapolation in previous studies.
Alfonso F. Dingemans
Unlike the common historiographical approach, this article does not assume doctrinal similarity as the principal explanation of trade policy. Instead, it proposes viewing trade policy as the outcome of a political process that combines multiple interests. In particular, Chilean trade policy (1850-1914), through its Customs Ordinances, has been considered a faithful reflection of the economic doctrines of the era. This article, however, proposes broadening the empirical analysis to the various policy measures, such as laws and decrees, emphasizing their ex ante (de iure) tariff structure rather than the ex post (effective) one. The results suggest a compensatory use of tariffs that led to a persistent reduction in them, independent of the doctrine prevailing in the ordinance. Their "designs" were, therefore, more pragmatic than doctrinal.
Luiz César de Sá, Camila Condilo
The theme of this dossier raises a series of questions concerning processes of writing and the role of those who write, above all questions embedded in the relations between authorship and authority. Examples include the discussion of a possible transcendental, legitimating, and incontestable dimension of authority, or the debate over the existence of an innate creative genius, just two of the many problems that allow one to explore the perspective that attributing a text to an individual identity is not sufficient to constitute authorship; nor is it possible to admit the category "author" in an essentialist key, for many semantic residues have been deposited in it.
Aditya Prakash, K. S. Thejaswini
We consider the model of history-deterministic one-counter nets (OCNs). History-determinism is a property of transition systems that allows for a limited kind of non-determinism which can be resolved 'on-the-fly'. Token games, which have been used to characterise history-determinism over various models, also characterise history-determinism over OCNs. By reducing 1-token games to simulation games, we are able to show that checking for history-determinism of OCNs is decidable. Moreover, we prove that this problem is PSPACE-complete for a unary encoding of transitions, and EXPSPACE-complete for a binary encoding. We then study the language properties of history-deterministic OCNs. We show that the resolvers of non-determinism for history-deterministic OCNs are eventually periodic. As a consequence, for a given history-deterministic OCN, we construct a language equivalent deterministic one-counter automaton. We also show the decidability of comparing languages of history-deterministic OCNs, such as language inclusion and language universality.
María Florencia Antequera
<p>This paper explores the cultural intervention of the Santa Fe intellectual Alcides Greca (1889-1956), which took place in 1948 in the lands of Brazil. We propose to focus principally on the singular perspective that Greca inscribes in Bahianos y bandeirantes (1950), drawing on a catalogue of sensory references (Ette, 2004) characteristic of the cultural traveler (Aguilar and Siskind, 2002). We understand this account to be a singular written vehicle for translating new social experiences (Antequera, 2020a), whose most luminous topic of interest is the city. The paper consists of three parts: in the first, we discuss the urban archetypes and artifacts that Greca surveys; in the second, we analyze the marks of autobiographical writing that contribute to the joint legitimation of project and subject (Antelo, 1998). Finally, we examine the credentials Greca carried on his trip to Brazil, namely his contributions in the fields of urbanism and municipal law, and review the hosts who, generous in their attentions, accompanied him.</p>
Lang Liu, Zong-Kuan Guo, Rong-Gen Cai
We develop a formalism to calculate the merger rate density of primordial black hole binaries with a general mass function, taking into account the merger history of primordial black holes. We apply the formalism to three specific mass functions: monochromatic, power-law, and log-normal. In the first case, the merger rate density is dominated by single-merger events, while in the latter two cases the contribution of multiple-merger events to the merger rate density cannot be ignored. The effects of the merger history on the merger rate density depend on the mass function.
Dennis Dieks
According to what has become a standard history of quantum mechanics, in 1932 von Neumann persuaded the physics community that hidden variables are impossible as a matter of principle, after which leading proponents of the Copenhagen interpretation put the situation to good use by arguing that the completeness of quantum mechanics was undeniable. This state of affairs lasted, so the story continues, until Bell in 1966 exposed von Neumann's proof as obviously wrong. The realization that von Neumann's proof was fallacious then rehabilitated hidden variables and made serious foundational research possible again. It is often added in recent accounts that von Neumann's error had been spotted almost immediately by Grete Hermann, but that her discovery was of no effect due to the dominant Copenhagen Zeitgeist. We shall attempt to tell a story that is more historically accurate and less ideologically charged. Most importantly, von Neumann never claimed to have shown the impossibility of hidden variables tout court, but argued that hidden-variable theories must possess a structure that deviates fundamentally from that of quantum mechanics. Both Hermann and Bell appear to have missed this point, moreover, both raised unjustified technical objections to the proof. Von Neumann's argument was basically that hidden-variables schemes must violate the "quantum principle" that physical quantities are to be represented by operators in a Hilbert space. As a consequence, hidden-variables schemes, though possible in principle, necessarily exhibit a certain kind of contextuality.