"I Might be Using His... But It is Also Mine!": Ownership and Control in Accounts Designed for Sharing
Ji Eun Song, Jaeyoun You, Joongseek Lee
A user's ownership perception of virtual objects, such as cloud files, is generally uncertain. Is this valid for streaming platforms featuring accounts designed for sharing (DS)? We observe sharing practices within DS accounts of streaming platforms and identify their ownership characteristics and unexpected complications through two mixed-method studies. Casual and Cost-splitting are the two sharing practices identified. The owner is the sole payer for the account in the former, whereas profile holders split the cost in the latter. We distinguish two types of ownership in each practice -- Primary and Dual. In Primary ownership, the account owner has the power to allow others to use the account; in Dual ownership, Primary ownership appears in conjunction with joint ownership, notably displaying asymmetric ownership perceptions among users. Conflicts arise when the sharing agreements collapse. Therefore, we propose design recommendations that bridge ownership differences based on sharing practices of DS accounts.
Simultaneously accounting for winner's curse and sample structure in Mendelian randomization: bivariate rerandomized inverse variance weighted estimator
Xin Liu, Ping Yin, Peng Wang
The recently developed rerandomized inverse variance weighted (RIVW) estimator provides a simple and efficient framework to break the winner's curse in two-sample Mendelian randomization (MR). However, this method ignores the possible presence of sample structure (e.g., residual population stratification and sample overlap), a common confounding factor in MR studies. Sample structure can not only distort SNP-exposure and SNP-outcome association estimates but also induce correlation between them, leading exposure-side instrument selection to propagate bias to the outcome side. To address this challenge, we propose the bivariate RIVW (BRIVW) estimator that can simultaneously account for the winner's curse and sample structure. The BRIVW estimator extends the RIVW framework by modeling the joint distribution of SNP-exposure and SNP-outcome associations, first adjusting their covariance matrix via linkage disequilibrium score regression to account for sample structure, and then applying randomized instrument selection and Rao-Blackwellization to obtain unbiased post-selection association estimates as well as their covariance matrix. Under appropriate conditions, we show that the BRIVW estimator is consistent and asymptotically normal. Extensive simulations and real data analyses demonstrate that the BRIVW estimator provides more accurate causal effect estimates than existing methods.
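As background for the estimators discussed in this abstract, the sketch below shows the standard (non-rerandomized, non-bivariate) IVW estimator computed from per-SNP summary statistics; the function name, variable names, and toy numbers are illustrative, not taken from the paper.

```python
import numpy as np

def ivw_estimate(beta_x, beta_y, se_y):
    """Standard IVW causal-effect estimate from per-SNP summary statistics.

    beta_x : SNP-exposure association estimates
    beta_y : SNP-outcome association estimates
    se_y   : standard errors of beta_y
    """
    beta_x, beta_y, se_y = map(np.asarray, (beta_x, beta_y, se_y))
    w = beta_x**2 / se_y**2        # inverse-variance weights
    ratio = beta_y / beta_x        # per-SNP Wald ratios
    est = np.sum(w * ratio) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return float(est), float(se)

# Toy example: outcome effects are exactly 0.5 * exposure effects,
# so the IVW estimate recovers a causal effect of 0.5
bx = np.array([0.1, 0.2, 0.3])
by = 0.5 * bx
est, se = ivw_estimate(bx, by, np.full(3, 0.05))
```

BRIVW replaces the scalar weighting above with a joint (bivariate) model of the two association estimates and a corrected covariance matrix.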
Accounting for Tidal Deformability in Binary Neutron Star Template Banks
Lorenzo Piccari, Francesco Pannarale
Modelled searches for gravitational waves emitted by compact binary coalescences currently filter the data with template signals that ignore all effects related to the physics of dense matter in neutron star interiors, even when the masses in the template are compatible with a binary neutron star or a neutron star-black hole binary source. The leading neutron star finite-size effect is an additional phase contribution due to tidal deformations induced by the gravitational coupling between the two inspiralling objects in the binary. We show how neglecting this effect in the templates reduces the search sensitivity close to the detection threshold. This is particularly true for binary neutron star systems, where tidal effects are larger. In this work we therefore propose a new technique for the construction of binary neutron star template banks that accounts for neutron star tidal deformabilities as degrees of freedom of the parameter space to be searched over. A first attempt in this direction was carried out by Harry & Lundgren [Physical Review D 104, 043008 (2021)], who proposed randomly drawing the tidal deformabilities of the stars from a uniform interval, regardless of the binary neutron star component masses. We show that this approach yields 33% additional templates with respect to the equivalent point-like template bank. Our proposed approach, instead, adopts a more physically motivated tidal deformability prior with a support that is informed by the value of the neutron star mass and compatible with the neutron star equation of state constraint provided by the observation of GW170817. This method significantly reduces the needed additional templates to 8.2%.
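The tidal degree of freedom added to such a bank is commonly summarized by the combined dimensionless tidal deformability of the binary; the sketch below implements its conventional leading-order definition (a standard textbook formula, not one stated in this abstract):

```python
def lambda_tilde(m1, m2, lam1, lam2):
    """Combined dimensionless tidal deformability (leading order).

    m1, m2     : component masses (any consistent units)
    lam1, lam2 : dimensionless tidal deformabilities of each star
    """
    num = (m1 + 12.0 * m2) * m1**4 * lam1 + (m2 + 12.0 * m1) * m2**4 * lam2
    return (16.0 / 13.0) * num / (m1 + m2) ** 5

# For an equal-mass, equal-deformability binary, Lambda-tilde reduces
# to the common deformability of the two stars
lt = lambda_tilde(1.4, 1.4, 400.0, 400.0)
```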
Accountability of Generative AI: Exploring a Precautionary Approach for "Artificially Created Nature"
Yuri Nakao
The rapid development of generative artificial intelligence (AI) technologies raises concerns about the accountability of sociotechnical systems. Current generative AI systems rely on complex mechanisms that make it difficult for even experts to fully trace the reasons behind the outputs. This paper first examines existing research on AI transparency and accountability and argues that transparency is not a sufficient condition for accountability but can contribute to its improvement. We then argue that, if it is not possible to make generative AI transparent, generative AI technology becomes "artificially created nature" in a metaphorical sense, and suggest using the precautionary principle approach to consider AI risks. Finally, we propose that a platform for citizen participation is needed to address the risks of generative AI.
Towards Proxy Staking Accounts Based on NFTs in Ethereum
Viktor Valaštín, Roman Bitarovský, Kristián Košťál
et al.
Blockchain is a technology that is often used to share data and assets. However, in the decentralized ecosystem, blockchain-based systems can be used to share information and assets without the traditional barriers associated with sole responsibility, e.g., multi-sig wallets. This paper describes an innovative approach to blockchain networks based on a non-fungible token that behaves as an account (NFTAA). The key novelty of this article is using NFTAAs to leverage the unique properties of NFTs to manage ownership better and to isolate assets effectively, improving security, transparency, and even interoperability. Additionally, the account-based solution gives us the ability and flexibility to cover regular use cases such as staking and liquid equities, as well as practical composability. This article offers a simple implementation that allows developers and researchers to choose the solution best suited to their needs whenever an abstract account representation is required.
Improving the quality of life for sustainable development in the context of globalization and modernization of Kazakhstan's economy
R.U. Unerbayeva, G.Zh. Alibekova, J. Grabara
et al.
Quality of life is a multifaceted concept that reflects the level of well-being and life satisfaction of the population. President K.K. Tokayev noted that ‘in order to improve the quality of life of every Kazakhstani citizen, infrastructural issues that directly affect the quality of life will be in the centre of attention’. Quality-of-life indicators are important for assessing the specific socio-economic consequences of the ongoing transformations and the degree of social tension in society. The quality of life of the population has therefore become a central concern of the state leadership.
The purpose of the article is to determine appropriate assessment criteria for Kazakhstan and to propose measures to improve the quality of life under conditions of globalisation and modernisation. An analysis is conducted based on existing approaches to assessing the quality of life and related concepts of sustainable development. The article considers how improving the quality of life can contribute to the development of regions in the context of globalisation and economic modernisation, paying special attention to promoting sustainable development.
This article examines the relationship between economic growth and population well-being in Kazakhstan. It traces the dynamics of gross domestic product (GDP) and the state of education, and proposes measures to improve the country's education system. The importance of cooperation between the government, business and the public in implementing these strategies is emphasised. Promoting and implementing these strategies requires open dialogue, support for project development efforts and attention to every aspect of development. These efforts will contribute to a stable and uniform improvement in the quality of life of the population of Kazakhstan and the region as a whole.
Key words: quality of life, factors, sustainable development, welfare, income.
Economics as a science, Marketing. Distribution of products
Accounting For Informative Sampling When Learning to Forecast Treatment Outcomes Over Time
Toon Vanderschueren, Alicia Curth, Wouter Verbeke
et al.
Machine learning (ML) holds great potential for accurately forecasting treatment outcomes over time, which could ultimately enable the adoption of more individualized treatment strategies in many practical applications. However, a significant challenge that has been largely overlooked by the ML literature on this topic is the presence of informative sampling in observational data. When instances are observed irregularly over time, sampling times are typically not random, but rather informative -- depending on the instance's characteristics, past outcomes, and administered treatments. In this work, we formalize informative sampling as a covariate shift problem and show that it can prohibit accurate estimation of treatment outcomes if not properly accounted for. To overcome this challenge, we present a general framework for learning treatment outcomes in the presence of informative sampling using inverse intensity-weighting, and propose a novel method, TESAR-CDE, that instantiates this framework using Neural CDEs. Using a simulation environment based on a clinical use case, we demonstrate the effectiveness of our approach in learning under informative sampling.
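As a rough illustration of the inverse intensity-weighting idea underlying the framework described above, the sketch below reweights per-observation losses by the inverse of their (assumed known) sampling intensities; the function and all values are illustrative only, not the TESAR-CDE implementation:

```python
import numpy as np

def iiw_loss(losses, intensities, eps=1e-8):
    """Inverse-intensity-weighted training loss.

    Observations that were sampled with high intensity are down-weighted,
    so the objective mimics what would be seen under regular sampling.
    """
    w = 1.0 / (np.asarray(intensities, dtype=float) + eps)
    w = w / w.sum()  # normalize weights to sum to one
    return float(np.sum(w * np.asarray(losses, dtype=float)))

# Two observations with equal loss but unequal sampling intensity:
# the weighted loss still averages to the common loss value
val = iiw_loss([1.0, 1.0], [2.0, 0.5])
```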
Assessment of Large Eddy Simulation (LES) Sub-grid Scale Models Accounting for Compressible Homogeneous Isotropic Turbulence
Jhon Cordova, Cesar Celis, Andres Mendiburu
et al.
Most sub-grid scale (SGS) models employed in LES (large eddy simulation) formulations were originally developed for incompressible, single-phase, inert flows and assume energy transfer based on the classical energy cascade mechanism. Although they have been extended to numerically study compressible and reactive flows involving deflagrations and detonations, their accuracy in such sensitive and challenging flows remains an open question. There is therefore a need both to assess these existing SGS models and to identify opportunities for proposing new ones that properly characterize reacting flows in complex engine configurations, such as those of rotating detonation engines (RDEs). Accordingly, accounting for the decay of free homogeneous isotropic turbulence (HIT), this work compares four SGS models in the presence of compressibility effects: (i) the classical Smagorinsky model, (ii) the dynamic Smagorinsky model, (iii) the wall-adapting local eddy-viscosity (WALE) model, and (iv) the Vreman model. More specifically, the SGS models are first implemented in the open-source computational tool PeleC, a high-fidelity finite-volume solver for compressible flows, and numerical simulations are then carried out with them. Turbulent spectra and the decay of physical quantities such as kinetic energy, enstrophy, temperature, and dilatation are computed for each SGS LES model and compared with direct numerical simulation (DNS) results available in the literature. The LES results obtained here show that the studied SGS models capture the overall trends of all physical quantities considered. However, they also emphasize the need for improved SGS models capable of adequately describing turbulence dynamics in compressible flows.
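For reference, the first of the four models compared above can be sketched in a few lines; this is the textbook classical Smagorinsky closure with an illustrative model constant, not the PeleC implementation:

```python
import numpy as np

def smagorinsky_nu_t(grad_u, delta, c_s=0.17):
    """Classical Smagorinsky SGS eddy viscosity: nu_t = (Cs*Delta)^2 |S|.

    grad_u : 3x3 array of resolved velocity gradients du_i/dx_j
    delta  : filter width
    c_s    : Smagorinsky constant (illustrative default)
    """
    grad_u = np.asarray(grad_u, dtype=float)
    S = 0.5 * (grad_u + grad_u.T)           # resolved strain-rate tensor
    S_mag = np.sqrt(2.0 * np.sum(S * S))    # |S| = sqrt(2 S_ij S_ij)
    return (c_s * delta) ** 2 * S_mag

# Pure shear du/dy = 1 gives |S| = 1, so nu_t = (Cs*Delta)^2
g = np.zeros((3, 3))
g[0, 1] = 1.0
nu = smagorinsky_nu_t(g, delta=0.1, c_s=0.1)
```

The dynamic variant computes c_s locally from the resolved field instead of fixing it, which is one reason the four models behave differently in compressible HIT decay.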
physics.flu-dyn, cs.DC
Distributional Preference Learning: Understanding and Accounting for Hidden Context in RLHF
Anand Siththaranjan, Cassidy Laidlaw, Dylan Hadfield-Menell
In practice, preference learning from human feedback depends on incomplete data with hidden context. Hidden context refers to data that affects the feedback received, but which is not represented in the data used to train a preference model. This captures common issues of data collection, such as having human annotators with varied preferences, cognitive processes that result in seemingly irrational behavior, and combining data labeled according to different criteria. We prove that standard applications of preference learning, including reinforcement learning from human feedback (RLHF), implicitly aggregate over hidden contexts according to a well-known voting rule called Borda count. We show this can produce counter-intuitive results that are very different from other methods which implicitly aggregate via expected utility. Furthermore, our analysis formalizes the way that preference learning from users with diverse values tacitly implements a social choice function. A key implication of this result is that annotators have an incentive to misreport their preferences in order to influence the learned model, leading to vulnerabilities in the deployment of RLHF. As a step towards mitigating these problems, we introduce a class of methods called distributional preference learning (DPL). DPL methods estimate a distribution of possible score values for each alternative in order to better account for hidden context. Experimental results indicate that applying DPL to RLHF for LLM chatbots identifies hidden context in the data and significantly reduces subsequent jailbreak vulnerability. Our code and data are available at https://github.com/cassidylaidlaw/hidden-context
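The Borda-count aggregation that the paper shows standard preference learning implicitly performs can be illustrated directly; the sketch below is a plain Borda count over full rankings with toy data, not the authors' code:

```python
def borda_scores(rankings):
    """Borda count over full rankings: each alternative earns one point
    for every alternative ranked below it, summed across annotators."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, alt in enumerate(ranking):
            scores[alt] = scores.get(alt, 0) + (n - 1 - pos)
    return scores

# Two annotators with conflicting top choices: Borda aggregation ties
# "a" and "b", hiding the disagreement -- the hidden context DPL targets
s = borda_scores([["a", "b", "c"], ["b", "a", "c"]])
```

DPL replaces the single aggregated score with a distribution over scores per alternative, so disagreement like the tie above becomes visible rather than averaged away.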
Accounting for localized deformation: a simple computation of true stress in micropillar compression experiments
Jalal Smiri, Oguz Umut Salman, Matteo Ghidelli
et al.
Compression experiments are widely used to study the mechanical properties of materials at the micro- and nanoscale. However, the conventional engineering stress measurement method used in these experiments neglects the alterations in the material's shape during loading. This can lead to inaccurate stress values and potentially misleading conclusions about the material's mechanical behavior, especially in the case of localized deformation. To address this issue, we present a method for calculating true stress in cases of localized plastic deformation commonly encountered in experimental settings: (i) a single band and (ii) two bands oriented in arbitrary directions with respect to the vertical axis of the pillar (either in the same or opposite directions). Our simple analytic formulas can be applied to homogeneous and isotropic materials and crystals, requiring only standard data (displacement-force curve, aspect ratio, shear-band angle, and elastic strain limit) obtained from experimental results and eliminating the need for finite element computations. Our approach provides a more precise interpretation of experimental results and can serve as a valuable and simple tool in material design and characterization.
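For contrast with the engineering-stress convention critiqued above, the sketch below applies the textbook uniform-deformation (volume-conserving) true-stress correction; it is not the paper's shear-band formulas, which are designed precisely for the localized case this simple correction cannot handle:

```python
def true_stress_uniform(force, area0, eng_strain):
    """True stress for uniform, volume-conserving compression.

    Under volume conservation the cross-section grows during compression
    as A = A0 / (1 - eng_strain), so true stress falls below the
    engineering stress force / area0.
    """
    area = area0 / (1.0 - eng_strain)
    return force / area

# At 10% compressive strain, true stress is 90% of engineering stress
sigma = true_stress_uniform(force=100.0, area0=1.0, eng_strain=0.10)
```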
cond-mat.mtrl-sci, cond-mat.mes-hall
Population synthesis of exoplanets accounting for orbital variations due to stellar evolution
A. S. Andriushin, S. B. Popov
In this paper, the evolution of exoplanet orbits at the late stages of stellar evolution is studied by the method of population synthesis. The evolution of stars is traced from the Main Sequence stage to the white dwarf stage. The MESA package is used to calculate evolutionary tracks. The statistics of absorbed, ejected, and surviving planets by the time of the transformation of parent stars into white dwarfs are calculated taking into account the change in the rate of star formation in the Galaxy over the entire time of its existence. Planets around stars in the range of initial masses 1-8 $M_\odot$ are considered since less massive stars do not have time to leave the Main Sequence during the lifetime of the Galaxy, and more massive ones do not lead to the formation of white dwarfs. It is shown that with the initial $a$~--~$M_\mathrm{pl}$ distribution of planets adopted in this work, most (about 60\%) of the planets born from stars in the mass range under study are absorbed by their parent stars at the giant stage. A small fraction of the planets (less than one percent) are ejected from their systems because of the mass loss due to the stellar wind. The estimated number of ejected planets with masses ranging from 0.04 Earth masses to 13 Jupiter masses in the Milky Way is approximately 300 million.
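A key ingredient of such population syntheses is the orbital response to stellar mass loss; under slow, isotropic mass loss the semi-major axis obeys a simple adiabatic relation, sketched below with illustrative numbers (not the paper's MESA-based tracks):

```python
def orbit_expansion(a_initial, m_initial, m_final):
    """Adiabatic orbital expansion under slow, isotropic stellar mass loss.

    For mass loss slow compared to the orbital period, a * M is conserved,
    so a_final = a_initial * (M_initial / M_final).
    """
    return a_initial * m_initial / m_final

# A 3 Msun star shedding mass down to a 0.6 Msun white dwarf:
# a surviving planet's orbit widens by a factor of 5
a_f = orbit_expansion(a_initial=5.0, m_initial=3.0, m_final=0.6)
```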
astro-ph.EP, astro-ph.SR
Impact Of Annuity on The Economic Development of Nigeria
Alli Noah Gbenga, Afolabi Mutiu Adeniyi
An annuity is a way of transferring longevity risk from an individual to an insurance company, which can pool risk among many individuals and achieve greater diversification of risk than any individual can accomplish alone. Annuity premiums build capital for insurance companies and encourage technical innovation and progress, while the benefits paid to annuitants support them and, in turn, help with the economies of large-scale production and increased specialization, which accelerates labour productivity and increases GDP. This paper examines the impact of annuities on economic development in Nigeria. Annuities provide longevity risk protection and retirement income security. The study aims to determine the impact of annuity premiums paid by workers and annuity benefits paid to retirees on Nigeria's economic development, as measured by real GDP. An ex-post facto research design is adopted using secondary data from 2014-2020. Two hypotheses are tested using regression analysis.
The first hypothesis postulates that annuity premiums have no significant impact on economic development; the second, that annuity benefits have no significant effect on economic development. The results reject both null hypotheses at the 5% significance level. Annuity premiums positively and significantly (p = 0.013) affect economic development, as premiums create capital formation that insurance firms can channel into productivity-enhancing investments. However, annuity benefits negatively and significantly (p = 0.0027) affect development, likely indicating funds withdrawn from the economy. Recommendations include establishing structures to develop annuity products and markets to address the problems of the decumulation phase.
In addition, it is advisable to increase investment in annuity premiums and to monitor the payout ratio of annuities. The empirical analysis provides useful insights, but future research should explore long-term macroeconomic effects.
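The hypothesis tests above rest on regressing real GDP on annuity flows; a minimal OLS sketch with purely illustrative numbers (not the paper's 2014-2020 series):

```python
import numpy as np

def ols_slope(x, y):
    """Simple OLS regression of y on x: returns (intercept, slope)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    return float(y.mean() - slope * x.mean()), float(slope)

# Hypothetical premium and GDP series (NOT the study's data):
# a positive slope would correspond to the paper's first finding
premiums = [1.0, 2.0, 3.0, 4.0]
gdp = [10.0, 12.0, 14.0, 16.0]
intercept, slope = ols_slope(premiums, gdp)
```

A full replication would also compute standard errors and p-values for the slope, which this two-line sketch omits.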
Databases and Data Mining as Tools against Corruption in Local Government
Jaime Clemente Martínez
Given the rise in corruption cases in public administrations, especially local ones, the use of advanced techniques to prevent them is increasingly advisable. The use of the databases to which local entities have access, together with data mining techniques, offers municipal councils a new opportunity to prevent fraud in their proceedings. This article therefore addresses the great utility of such databases and of data mining for detecting conflicts of interest and municipal corruption that would otherwise go unnoticed. Particular emphasis is placed on the room for manoeuvre that data protection regulations grant in this area and on the challenges that must be addressed to exploit the full potential of these databases for anti-fraud purposes, so as to fight effectively against the scourge of local corruption.
Political institutions and public administration (General), Accounting. Bookkeeping
Towards an Accountable and Reproducible Federated Learning: A FactSheets Approach
Nathalie Baracaldo, Ali Anwar, Mark Purcell
et al.
Federated Learning (FL) is a novel paradigm for the shared training of models based on decentralized and private data. With respect to ethical guidelines, FL is promising regarding privacy, but it must also excel in transparency and trustworthiness. In particular, FL has to address the accountability of the parties involved and their adherence to rules, law and principles. We introduce the AF^2 Framework, which instruments FL with accountability by fusing verifiable claims with tamper-evident facts into reproducible arguments. We build on AI FactSheets to instill transparency and trustworthiness into the AI lifecycle, and we expand them to incorporate dynamic and nested facts, as well as complex model compositions in FL. Based on our approach, an auditor can validate, reproduce and certify an FL process. This can be directly applied in practice to address the challenges of AI engineering and ethics.
A General Bayesian Framework to Account for Foreground Map Errors in Global 21-cm Experiments
Michael Pagano, Peter Sims, Adrian Liu
et al.
Measurement of the global 21-cm signal during Cosmic Dawn (CD) and the Epoch of Reionization (EoR) is made difficult by bright foreground emission which is 2-5 orders of magnitude larger than the expected signal. Fitting for a physics-motivated parametric forward model of the data within a Bayesian framework provides a robust means to separate the signal from the foregrounds, given sufficient information about the instrument and sky. It has previously been demonstrated that, within such a modelling framework, a foreground model of sufficient fidelity can be generated by dividing the sky into $N$ regions and scaling a base map assuming a distinct uniform spectral index in each region. Using the Radio Experiment for the Analysis of Cosmic Hydrogen (REACH) as our fiducial instrument, we show that, if unaccounted-for, amplitude errors in low-frequency radio maps used for our base map model will prevent recovery of the 21-cm signal within this framework, and that the level of bias in the recovered 21-cm signal is proportional to the amplitude and the correlation length of the base-map errors in the region. We introduce an updated foreground model that is capable of accounting for these measurement errors by fitting for a monopole offset and a set of spatially-dependent scale factors describing the ratio of the true and model sky temperatures, with the size of the set determined by Bayesian evidence-based model comparison. We show that our model is flexible enough to account for multiple foreground error scenarios allowing the 21-cm sky-averaged signal to be detected without bias from simulated observations with a smooth conical log spiral antenna.
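The per-region foreground model described above amounts to scaling a base-map temperature by a power law in frequency, optionally with the monopole offset and amplitude scale factors introduced to absorb map errors; the sketch below uses illustrative parameter values, not REACH pipeline code:

```python
import numpy as np

def foreground_model(t_base, beta, nu, nu0=408.0, monopole=0.0, scale=1.0):
    """Per-region foreground brightness temperature (illustrative sketch).

    t_base   : base-map temperature of the region at reference frequency nu0
    beta     : assumed uniform spectral index for the region
    nu       : frequency (or array of frequencies) in the same units as nu0
    monopole : fitted monopole offset absorbing base-map errors
    scale    : fitted amplitude scale factor (true / model sky temperature)
    """
    return monopole + scale * t_base * (np.asarray(nu) / nu0) ** (-beta)

# A region with 20 K at the 408 MHz reference frequency and index 2.5,
# evaluated at the reference frequency itself
t = foreground_model(t_base=20.0, beta=2.5, nu=408.0)
```

The Bayesian fit then compares models with different numbers of regions and scale factors via the evidence, as described in the abstract.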
astro-ph.CO, astro-ph.IM
The Impact of Psychological Dimensions of Financial Managers on Financial Reporting Quality
Mohamadreza Khodabakhshian Naeni, Mehdi Arab Salehi, Hasan Khoshakhlagh
et al.
One of the main and most important sources of information for decision makers, especially external users, is companies' reports and financial statements. The purpose of this study is therefore to determine the personality traits affecting the financial reporting of managers and of companies listed on the Tehran Stock Exchange. To this end, a sample of companies listed on the Tehran Stock Exchange over the period from 2014 to the end of 2018 was selected. After studying the theoretical foundations of the research topics and formulating the research hypotheses, the data sets were collected and prepared, and the hypotheses were then tested and analyzed using a structural equation modeling approach. The results show that personality traits have a significant effect on the financial reporting of managers and companies listed on the Tehran Stock Exchange. Investors, as well as companies' boards of directors, are therefore advised to ensure that the personality traits and financial intelligence components of candidates are at an acceptable level when selecting financial managers.
Accounting. Bookkeeping, Finance
The need to understand innovative activity as a capital-forming process in the accounting system
V.K.
The compliance of the current accounting system with the needs of interested parties regarding the innovative activity of the enterprise has been analyzed. Dissatisfaction of interested users with information about research and development, formed in the accounting system, has been considered. The requirements of international and national accounting standards (IAS/IFRS, GAAP US, S(S)A) regarding the capitalization of research and development costs have been analyzed. The use of a cost-forming approach to understanding innovative activity in the accounting system has been considered. The inconsistency of the accounting and economic approaches to understanding the innovative activity of the enterprise has been established. The need to use a capital-forming approach to understanding the essence of innovative activity in the accounting system to meet the growing needs of users in the conditions of the knowledge economy has been substantiated. The use of the integrated concept of «innovation capital» in accounting has been proposed for developing a new system of accounting and reporting on innovative initiatives, prospects, and risks to the enterprise's innovation policy and strategy.
Thinking of peace when rich: The effect of industry growth on corporate risk-taking
Xiangting Kong, Jinsong Tan, Jingxin Zhang
We investigate the unique role and mechanisms of industry growth in firms’ risk-taking policies. We find that industry growth is negatively associated with corporate risk-taking, consistent with the prospect theory that a high-growth industry gives firms a superior external environment, which may cause them to refrain from corporate risk-taking as in the saying “thinking of peace when rich.” This correlation is stronger for product market leaders, industries encouraged by industry policies and industries that receive more government support. Firms reduce risk-taking through various corporate policies, including long-term, high-value investments, operational efficiency and cash holdings in response to high industry growth. Overall, our results are consistent with industry growth negatively affecting corporate risk-taking.
Characterizing Retweet Bots: The Case of Black Market Accounts
Tuğrulcan Elmas, Rebekah Overdorf, Karl Aberer
Malicious Twitter bots are detrimental to public discourse on social media. Past studies have looked at spammers, fake followers, and astroturfing bots, but retweet bots, which artificially inflate content, are not well understood. In this study, we characterize retweet bots that have been uncovered by purchasing retweets from the black market. We detect whether they are fake or genuine accounts involved in inauthentic activities and what they do in order to appear legitimate. We also analyze their differences from human-controlled accounts. From our findings on the nature and life-cycle of retweet bots, we also point out several inconsistencies between the retweet bots used in this work and bots studied in prior works. Our findings challenge some of the fundamental assumptions related to bots and in particular how to detect them.
Detecting Malicious Accounts showing Adversarial Behavior in Permissionless Blockchains
Rachit Agarwal, Tanmay Thapliyal, Sandeep K. Shukla
Different types of malicious activities have been flagged in multiple permissionless blockchains such as bitcoin, Ethereum etc. While some malicious activities exploit vulnerabilities in the infrastructure of the blockchain, some target its users through social engineering techniques. To address these problems, we aim at automatically flagging blockchain accounts that originate such malicious exploitation of accounts of other participants. To that end, we identify a robust supervised machine learning (ML) algorithm that is resistant to any bias induced by an over representation of certain malicious activity in the available dataset, as well as is robust against adversarial attacks. We find that most of the malicious activities reported thus far, for example, in Ethereum blockchain ecosystem, behaves statistically similar. Further, the previously used ML algorithms for identifying malicious accounts show bias towards a particular malicious activity which is over-represented. In the sequel, we identify that Neural Networks (NN) holds up the best in the face of such bias inducing dataset at the same time being robust against certain adversarial attacks.