Objective: This study examines the impact of civil status on family relations in Uzbekistan, focusing on the legal framework, historical evolution, and socio-legal implications. It explores how civil registration influences marital, parental, and inheritance rights within the broader context of civil and family law. Method: The research employs a qualitative approach, utilizing doctrinal legal analysis of legislative texts, judicial decisions, and academic literature. Comparative and historical methods are applied to assess the development and application of family law in Uzbekistan. Results: The findings reveal that civil status significantly affects family rights and obligations, with legal recognition playing a crucial role in defining marital legitimacy, parental authority, and inheritance distribution. Moreover, inconsistencies in registration procedures and legal interpretation create challenges for individuals navigating family law disputes. Novelty: This study contributes to the discourse on civil and family law in post-Soviet legal systems by highlighting the unique interplay between civil status and family relations in Uzbekistan. It offers insights into the legal complexities individuals face and suggests potential reforms to enhance legal clarity and protect family rights.
Neural scaling laws have driven the field's exponential growth in parameters, data, and compute. While scaling behaviors for pretraining losses and discriminative benchmarks are well established, generative benchmarks such as mathematical problem-solving or software engineering remain under-explored. We propose and evaluate three pretraining scaling laws for fitting pass-at-$k$ on generative evaluations and for predicting the pass-at-$k$ of the most expensive model using cheaper models. The three scaling laws differ in the covariates used: (1) pretraining compute, (2) model parameters and pretraining tokens, (3) log-likelihoods of gold reference solutions. First, we demonstrate that generative evaluations introduce new hyperparameters (in our setting, $k$) that act as a control lever for scaling behavior, modulating both the scaling law parameters and the predictability of performance. Second, we identify a stark difference in parameter stability: while the compute and parameters+tokens laws stabilize only over the last $\mathord{\sim}1.5\mathord{-}2.5$ orders of magnitude, the gold reference likelihood law is uniquely stable, converging across $\mathord{\sim}5$ orders. Third, in terms of predictive performance, all three scaling laws perform comparably, although the compute law predicts slightly worse for small $k$ and the gold reference law slightly worse for large $k$. Finally, we establish a theoretical connection, proving that the compute scaling law emerges as the compute-optimal envelope of the parameters-and-tokens law. Our framework provides researchers and practitioners with insights and methodologies to forecast generative performance, accelerating progress toward models that can reason, solve, and create.
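The two ingredients of this pipeline, estimating pass-at-$k$ from sampled generations and fitting a compute scaling law on cheap models to forecast an expensive one, can be sketched as follows. This is a minimal illustration: the unbiased pass@k estimator is the standard one, but the saturating functional form, coefficients, and compute values below are assumptions for illustration, not the paper's fitted law.

```python
import numpy as np
from scipy.optimize import curve_fit

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn from n generations (c of them correct) is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

def compute_law(log10_C, x0, b):
    """Assumed saturating form for pass@k vs. log10 pretraining compute
    (illustrative only, not the paper's fitted functional form)."""
    return 1.0 - np.exp(-np.exp(b * (log10_C - x0)))

# Fit on cheaper models, then extrapolate to the most expensive one.
x = np.array([18.0, 19.0, 20.0, 21.0])   # log10 compute of small models
y = compute_law(x, 20.7, 0.9)            # stand-in for measured pass@k
(x0_hat, b_hat), _ = curve_fit(compute_law, x, y, p0=[20.0, 1.0])
forecast = compute_law(22.0, x0_hat, b_hat)  # predicted pass@k at 1e22 FLOP
```

Treating $k$ as a control lever then amounts to repeating the fit for each value of $k$ and examining how the fitted parameters shift.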
Zero-one laws state that probabilistic events of a certain type must occur with probability either $0$ or $1$, and nothing in between. We formulate a syntactic zero-one law, which enjoys good logical properties while being broadly applicable in probability theory. Then, inspired by Gödel's Dialectica interpretation, we finitise it: the result is an approximate zero-one law which states that events with a particular finite structure occur with probability close to $0$ or $1$, up to an arbitrary degree of precision. This approximate zero-one law is equivalent, over classical logic, to the original zero-one law but, in contrast to the latter, is formulated entirely in terms of finite unions and intersections of events. Furthermore, in line with recent logical metatheorems for probability, it admits a computational interpretation, which in turn facilitates a quantitative analysis of theorems whose proofs make use of zero-one laws. Concrete applications in this spirit, over a variety of different settings, are discussed.
Charles J. Law, Romane Le Gal, Karin I. Öberg, et al.
The sulfur chemistry in protoplanetary disks influences the properties of nascent planets, including potential habitability. Although the inventory of sulfur molecules in disks has gradually increased over the last decade, CS is still the most commonly observed sulfur-bearing species, and it is expected to be the dominant gas-phase sulfur carrier beyond the water snowline. Despite this, few dedicated multi-line observations exist, and thus the typical disk CS chemistry is not well constrained. Moreover, it is unclear how that chemistry, and in turn the bulk volatile sulfur reservoir, varies with stellar and disk properties. Here, we present the largest survey of CS to date, combining both new and archival observations from ALMA, SMA, and NOEMA of 12 planet-forming disks, covering a range of stellar spectral types and dust morphologies. Using these data, we derived disk-integrated CS gas excitation conditions in each source. Overall, CS chemistry appears similar across our sample, with rotational temperatures of ${\approx}$10-40 K and column densities of 10$^{12}$-10$^{13}$ cm$^{-2}$. CS column densities do not show strong trends with most source properties, which broadly suggests that CS chemistry is not highly sensitive to disk structure or stellar characteristics. We do, however, identify a positive correlation between stellar X-ray luminosity and CS column density, which indicates that the dominant CS formation pathway is likely via ion-neutral reactions in the upper disk layers, where X-ray-enhanced S$^+$ and C$^+$ drive abundant CS production. Thus, using CS as a tracer of gas-phase sulfur abundance requires a nuanced approach that accounts for its emitting region and dependence on X-ray luminosity.
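Disk-integrated excitation conditions of this kind are commonly derived with a rotational diagram: under LTE, $\ln(N_u/g_u) = \ln(N/Q) - E_u/T_{\mathrm{rot}}$, so a straight-line fit of $\ln(N_u/g_u)$ against upper-level energy $E_u$ yields the rotational temperature from the slope. A minimal sketch with purely illustrative numbers (the survey's actual data and fitting procedure are not reproduced here):

```python
import numpy as np

# Rotational-diagram fit: slope of ln(N_u/g_u) vs E_u is -1/T_rot.
E_u = np.array([7.1, 23.9, 35.8])        # upper-level energies [K] (illustrative)
ln_Nu_gu = np.array([25.0, 24.3, 23.8])  # from measured line fluxes (illustrative)
slope, intercept = np.polyfit(E_u, ln_Nu_gu, 1)
T_rot = -1.0 / slope                     # rotational temperature [K]
```

With real CS line fluxes, the intercept additionally gives the total column density through the partition function $Q(T_{\mathrm{rot}})$.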
Large Language Models (LLMs) have emerged as a milestone in artificial intelligence, and their performance can improve as the model size increases. However, this scaling brings great challenges to training and inference efficiency, particularly for deploying LLMs in resource-constrained environments, and the scaling trend is becoming increasingly unsustainable. This paper introduces the concept of ``\textit{capacity density}'' as a new metric to evaluate the quality of LLMs across different scales and to describe the trend of LLMs in terms of both effectiveness and efficiency. To calculate the capacity density of a given target LLM, we first introduce a set of reference models and develop a scaling law to predict the downstream performance of these reference models based on their parameter sizes. We then define the \textit{effective parameter size} of the target LLM as the parameter size required by a reference model to achieve equivalent performance, and formalize the capacity density as the ratio of the effective parameter size to the actual parameter size of the target LLM. Capacity density provides a unified framework for assessing both model effectiveness and efficiency. Our further analysis of recent open-source base LLMs reveals an empirical law (the densing law) that the capacity density of LLMs grows exponentially over time. More specifically, using some widely used benchmarks for evaluation, the capacity density of LLMs doubles approximately every three months. This law provides new perspectives to guide future LLM development, emphasizing the importance of improving capacity density to achieve optimal results with minimal computational overhead.
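The capacity-density computation reduces to inverting the reference scaling law. A minimal sketch, assuming a hypothetical saturating power law with made-up coefficients (the paper's actual fitted law and benchmarks differ):

```python
def ref_performance(n_params, a=0.5, b=0.2):
    """Hypothetical reference scaling law: downstream score as a
    saturating power law of parameter count (coefficients made up)."""
    return 1.0 - a * n_params ** (-b)

def effective_params(score, a=0.5, b=0.2):
    """Invert the reference law: the parameter count a reference model
    would need to match the given score."""
    return (a / (1.0 - score)) ** (1.0 / b)

def capacity_density(actual_params, score):
    """Capacity density = effective parameter size / actual size."""
    return effective_params(score) / actual_params

# A model that matches the reference law at its own size has density ~1;
# a 1B model matching a 2B reference model has density ~2.
```

The densing law then amounts to plotting this ratio for released models against their release dates and fitting an exponential trend.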
The four laws of black hole mechanics were put forward long ago. However, the zeroth law, which states that the surface gravity of a stationary black hole is a constant on the event horizon, still lacks a universal proof in various modified gravitational theories. In this paper, we study the zeroth law in a special Horndeski gravity, an interesting gravitational theory with a nonminimally coupled scalar field. After assuming that the nonminimally coupled scalar field has the same symmetries as the spacetime, that the minimally coupled matter fields satisfy the dominant energy condition, and that the Horndeski gravity has a smooth limit to Einstein gravity when the coupling constant approaches zero, we prove the zeroth law based on the gravitational equation in Horndeski gravity without any assumption on the spacetime symmetries.
Macroscopic cyclic heat engines have been a major motivation for the emergence of thermodynamics. In the last decade, cyclic heat engines that have large fluctuations and operate in finite time were studied within the more modern framework of stochastic thermodynamics. The second law for such heat engines states that the efficiency cannot be larger than the Carnot efficiency. The concept of cyclic active heat engines for a system in the presence of hidden dissipative degrees of freedom, also known as a nonequilibrium or active reservoir, has also been studied in theory and experiment. Such active engines show rather interesting behavior, such as an ``efficiency'' larger than the Carnot bound. They are also likely to play an important role in future developments, given the ubiquitous presence of active media. However, a general second law for cyclic active heat engines has been lacking so far. Here, using a known inequality in stochastic thermodynamics for the excess entropy, we obtain a general second law for active heat engines, which does not involve the energy dissipation of the hidden degrees of freedom and is expressed in terms of quantities that can be measured directly from the observable degrees of freedom. Besides heat and work, our second law contains an information-theoretic term, which allows an active heat engine to extract work beyond the limits valid for a passive heat engine. To obtain a second law expressed in terms of observable variables in the presence of hidden degrees of freedom, we introduce a coarse-grained excess entropy and prove a fluctuation theorem for this quantity.
Spinning black holes create electromagnetic storms when immersed in ambient magnetic fields, illuminating the otherwise epically dark terrain. In an electromagnetic extension of the Penrose process, tremendous energy can be extracted, boosting the energy of radiating particles far more efficiently than the mechanical Penrose process. We locate the regions from which energy can be mined and demonstrate explicitly that they are no longer restricted to the ergosphere. We also show that there can be toroidal regions that trap negative-energy particles in orbit around the black hole. We find that the effective charge coupling between the black hole and the super-radiant particles decreases as energy is extracted, much as the spin of a black hole decreases in the mechanical analogue. While the effective coupling decreases, the actual charge of the black hole increases in magnitude, reaching the energetically favored Wald value, at which point energy extraction is impeded. We demonstrate the array of orbits for products of the electromagnetic Penrose process.
A locale, being a complete Heyting algebra, satisfies the De Morgan law $(a\vee b)^*=a^*\wedge b^*$ for pseudocomplements. The dual De Morgan law $(a\wedge b)^*=a^*\vee b^*$ (here referred to as the second De Morgan law) is equivalent to, among other conditions, $(a\vee b)^{**}=a^{**}\vee b^{**}$, and characterizes the class of extremally disconnected locales. This paper presents a study of the subclasses of extremally disconnected locales determined by the infinite versions of the second De Morgan law and its equivalents.
Proof is the most important stage in the settlement of a case in court because it aims to establish that a particular legal event or relationship occurred as the basis for a lawsuit. Through the burden-of-proof stage, the judge obtains the grounds on which to decide the case. Nevertheless, the regulation of the burden of proof remains plural: some regulations govern not only the material law but also the formal law. This situation undermines order and legal certainty in law enforcement efforts. As is known, procedural law is by nature formal law, that is, the law governing the rules of the game in the settlement of disputes through the courts; it is binding on all parties and cannot be deviated from. That is why procedural law has a public character. For legal certainty, therefore, procedural law must take a codified, unified form so that it applies generally to, and is binding on, all parties. It is thus necessary to reform the civil procedural law into a codified and nationally applicable body of law.
Kirchhoff's current law is thought to describe the translational movement of charged particles through resistors. Yet Kirchhoff's law is widely used to describe the movement of current through resistors in high-speed devices. Current at high frequencies and short times involves much more than the translation of particles: transients abound. Augmenting the resistors with ad hoc 'stray' capacitances is often used to introduce into models transients like those in real resistors. But augmentation hides the underlying problem rather than solving it: the location, value, and dielectric properties of the stray capacitances are not well determined. Here, we suggest a more general approach that is well determined. If current is redefined as in Maxwell's equations, independent of the properties of dielectrics, Kirchhoff's law is exact and transients arise automatically without ambiguity. The transients in a particular real circuit, a high-density integrated circuit for example, can then be described by measured constitutive equations together with Maxwell's equations, without the introduction of arbitrary circuit elements.
A mere hyperbolic law, like the power function of Zipf's law, is often inadequate to describe rank-size relationships. An alternative theoretical distribution is proposed, based on theoretical physics arguments starting from the Yule-Simon distribution. A model is proposed that leads to a universal form. A theoretical suggestion for the "best (or optimal) distribution" is provided through an entropy argument. The ranking of areas through the number of cities in various countries, and some sport competition rankings, serve as the present illustrations.
Homes' law, $\rho_s = C \sigma_{\mathrm{DC}} T_c$, is an empirical law satisfied by various superconductors with a material-independent universal constant $C$, where $\rho_s$ is the superfluid density at zero temperature, $T_c$ is the critical temperature, and $\sigma_{\mathrm{DC}}$ is the electric DC conductivity in the normal state close to $T_c$. We study Homes' law in a holographic superconductor with Q-lattices and find that it is realized in some parameter regime of the insulating phase near the metal-insulator transition boundary, where momentum relaxation is strong. In computing the superfluid density, we employ two methods: one related to the infinite DC conductivity and the other related to the magnetic penetration depth. With finite momentum relaxation both yield the same results, while without momentum relaxation only the latter gives the superfluid density correctly, because the former picks up a spurious contribution from the infinite DC conductivity due to translation invariance.
Iafrate, Miller, and Strauch ["Equipartition and a Distribution for Numbers: A Statistical Model for Benford's Law," arXiv:1503.08259] construct and test a statistical model for partitioning a conserved quantity. One consequence of their model is Benford's law. This Comment amplifies their work by exploring its thermodynamic consequences.
One of the first steps to understand and forecast economic downturns is identifying their frequency distribution, but it remains uncertain. This problem is common in phenomena displaying power-law-like distributions. Power laws play a central role in complex systems theory; therefore, the current limitations in the identification of this distribution in empirical data are a major obstacle to pursue the insights that the complexity approach offers in many fields. This paper addresses this issue by introducing a reliable methodology with a solid theoretical foundation, the Taylor Series-Based Power Law Range Identification Method. When applied to time series from 39 countries, this method reveals a well-defined power law in the relative per capita GDP contractions that span from 5.53% to 50%, comprising 263 events. However, this observation does not suffice to attribute recessions to some specific mechanism, such as self-organized criticality. The paper highlights a set of points requiring more study so as to discriminate among models compatible with the power law, as needed to develop sound tools for the management of recessions.
It is well known that a canonical scalar field with an exponential potential can drive power law inflation (PLI). However, the tensor-to-scalar ratio in such models turns out to be larger than the stringent limit set by recent Planck results. We propose a new model of power law inflation for which the scalar spectral index, the tensor-to-scalar ratio and the non-gaussianity parameter $f_{_{\mathbf{NL}}}^{\mathrm{equil}}$ are in excellent agreement with Planck results. Inflation, in this model, is driven by a non-canonical scalar field with an inverse power law potential. The Lagrangian for our model is structurally similar to that of a canonical scalar field and has a power law form for the kinetic term. A simple extension of our model resolves the graceful exit problem which usually afflicts models of power law inflation.
In the first article of this series we have presented the history of auxiliary primes from Legendre's proof of the quadratic reciprocity law up to Artin's reciprocity law. We have also seen that the proof of Artin's reciprocity law consists of several steps, the first of which is the verification of the reciprocity law for cyclotomic extensions. In this article we will show that this step can be identified with one of Dedekind's proofs of the irreducibility of the cyclotomic polynomial.
We completely determine the free infinite divisibility of the Boolean stable law, which is parametrized by a stability index $\alpha$ and an asymmetry coefficient $\rho$. We prove that the Boolean stable law is freely infinitely divisible if and only if one of the following conditions holds: $0<\alpha\leq\frac{1}{2}$; $\frac{1}{2}<\alpha\leq\frac{2}{3}$ and $2-\frac{1}{\alpha}\leq\rho\leq\frac{1}{\alpha}-1$; $\alpha=1,~\rho=\frac{1}{2}$. Positive Boolean stable laws, corresponding to $\rho=1$ and $\alpha\leq\frac{1}{2}$, have completely monotonic densities and are both freely and classically infinitely divisible. We also show that continuous Boolean convolutions of positive Boolean stable laws with different stability indices are also freely and classically infinitely divisible. Boolean stable laws, free stable laws, and continuous Boolean convolutions of positive Boolean stable laws are non-trivial examples whose free divisibility indicators are infinite. We also find that the free multiplicative convolution of Boolean stable laws is again a Boolean stable law.
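The characterization above is a finite case analysis on $(\alpha, \rho)$ and can be transcribed directly as a predicate. A small sketch (the function name is ours; exact rational arithmetic avoids floating-point errors at the boundary cases):

```python
from fractions import Fraction

def boolean_stable_fid(alpha, rho):
    """Free infinite divisibility of the Boolean stable law with
    stability index alpha and asymmetry coefficient rho, following
    the stated characterization."""
    a, r = Fraction(alpha), Fraction(rho)
    if 0 < a <= Fraction(1, 2):
        return True
    if Fraction(1, 2) < a <= Fraction(2, 3):
        return 2 - 1 / a <= r <= 1 / a - 1
    return a == 1 and r == Fraction(1, 2)
```

For instance, every positive Boolean stable law with $\rho=1$ and $\alpha\leq\frac{1}{2}$ falls in the first branch, consistent with the abstract's remark on complete monotonicity.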
We prove an analogue of the Sato-Tate conjecture for Drinfeld modules. Using ideas of Drinfeld, J.-K. Yu showed that Drinfeld modules satisfy some Sato-Tate law, but did not describe the actual law. More precisely, for a Drinfeld module $\phi$ defined over a field $L$, he constructs a continuous representation $\rho_\infty : W_L \to D^*$ of the Weil group of $L$ into a certain division algebra, which encodes the Sato-Tate law. When the Drinfeld module has generic characteristic and $L$ is finitely generated, we shall describe the image of this representation up to commensurability. As an application, we give improved upper bounds for the Drinfeld module analogue of the Lang-Trotter conjecture.