In this paper, we introduce the concept of the generalized $(m, \psi, \delta)$-capacity in the complex space $\mathbb{C}^n$, within the class of $m$-subharmonic functions. We give a relation between the $(m, \psi, \delta)$-capacity and the $(m, \psi, \delta)$-subharmonic measure. Moreover, we prove that this capacity vanishes on $m$-polar sets and vice versa.
As a fundamental problem in transportation and operations research, the bilevel capacity expansion problem (BCEP) has been extensively studied for decades. In practice, BCEPs are commonly addressed in two stages: first, pre-select a small set of links for expansion; then, optimize their capacities. However, this sequential and separable approach can lead to suboptimal solutions, as it neglects the critical interdependence between link selection and capacity allocation. In this paper, we propose to introduce a cardinality constraint into the BCEP to limit the number of expansion locations rather than fixing such locations beforehand. This allows us to search over all possible link combinations within the prescribed limit, thereby enabling the joint optimization of both expansion locations and capacity levels. The resulting cardinality-constrained BCEP (CCBCEP) is computationally challenging due to the combination of a nonconvex equilibrium constraint and a nonconvex, discontinuous cardinality constraint. To address this challenge, we develop a penalized difference-of-convex (DC) approach that transforms the original problem into a sequence of tractable subproblems by exploiting its inherent DC structure and the special properties of the cardinality constraint. We prove that the method converges to approximate Karush-Kuhn-Tucker (KKT) solutions with arbitrarily prescribed accuracy. Numerical experiments further show that the proposed approach consistently outperforms alternative methods in identifying practically feasible expansion plans that invest in only a few links, in both solution quality and computational efficiency.
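One way to see why a cardinality constraint admits a DC treatment is the standard identity that $\|x\|_0 \le k$ holds exactly when the $\ell_1$ norm equals the largest-$k$ norm (the sum of the $k$ largest absolute entries), and both of those functions are convex. The sketch below illustrates this identity numerically; whether the paper's penalty uses this particular decomposition is an assumption on our part, and the function names are ours.

```python
# Minimal sketch of the DC view of a cardinality constraint:
# ||x||_0 <= k  holds exactly when  ||x||_1 - T_k(x) = 0,
# where T_k(x) = sum of the k largest |x_i| (a convex function).
# Both ||.||_1 and T_k are convex, so their difference is a DC function.

def largest_k_norm(x, k):
    """Sum of the k largest absolute entries of x (convex in x)."""
    return sum(sorted((abs(v) for v in x), reverse=True)[:k])

def dc_gap(x, k):
    """Nonnegative gap ||x||_1 - T_k(x); zero iff x has at most k nonzeros."""
    return sum(abs(v) for v in x) - largest_k_norm(x, k)

sparse = [0.0, 2.0, 0.0, -1.5, 0.0]   # 2 nonzero entries
dense  = [0.5, 2.0, 0.3, -1.5, 0.1]   # 5 nonzero entries

print(dc_gap(sparse, 2))  # 0.0 -> cardinality constraint satisfied
print(dc_gap(dense, 2))   # > 0 -> constraint violated
```

A penalty method can then drive the (continuous, DC) gap to zero instead of handling the discontinuous $\ell_0$ constraint directly.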
We develop bounds on the capacity of Poisson-repeat channels (PRCs), in which each input bit is independently repeated according to a Poisson distribution. The upper bounds are obtained by considering an auxiliary channel where the output lengths corresponding to input blocks of a given length are provided as side information at the receiver. Numerical results show that the resulting upper bounds are significantly tighter than the best known one for a large range of the PRC parameter $\lambda$ (specifically, for $\lambda \ge 0.35$). We also describe a way of obtaining capacity lower bounds using information rates of the auxiliary channel and the entropy rate of the provided side information.
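As a concrete illustration of the channel model (not of the bounding technique itself), the following sketch simulates a PRC: each input bit is replaced by a Poisson($\lambda$)-distributed number of copies of itself, with zero copies acting as a deletion. The sampler and function names are our own.

```python
import math
import random

def poisson_repeat_channel(bits, lam, rng=random.Random(0)):
    """Pass `bits` through a Poisson-repeat channel: each input bit is
    independently emitted Poisson(lam) times (zero times = a deletion)."""
    out = []
    for b in bits:
        # Knuth's method for sampling Poisson(lam); adequate for moderate lam.
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                break
            k += 1
        out.extend([b] * k)
    return out

received = poisson_repeat_channel([1, 0, 1, 1, 0], lam=2.0)
print(received)  # output length is random, with mean 5 * lam = 10
```

The receiver sees only the concatenated runs, which is what makes synchronization, and hence the capacity, hard to pin down.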
The low coding rate of quantum stabilizer codes results in formidable physical-qubit overhead when realizing quantum error correction in practice. In this letter, we propose a new class of hypergraph-product codes called TGRE-hypergraph-product codes. These codes have a constant coding rate of 0.2, which, to the best of our knowledge, is the highest constant coding rate among quantum stabilizer codes. We perform simulations to test the error-correcting capability of TGRE-hypergraph-product codes and find that their code-capacity noise threshold under the depolarizing noise channel is around 0.096.
Quantum neural networks form one pillar of the emergent field of quantum machine learning. Here, quantum generalisations of classical networks realizing associative memories - capable of retrieving patterns, or memories, from corrupted initial states - have been proposed. It is a challenging open problem to analyze quantum associative memories with an extensive number of patterns, and to determine the maximal number of patterns the quantum networks can reliably store, i.e. their storage capacity. In this work, we propose and explore a general method for evaluating the maximal storage capacity of quantum neural network models. By generalizing what is known as Gardner's approach in the classical realm, we exploit the theory of classical spin glasses for deriving the optimal storage capacity of quantum networks with quenched pattern variables. As an example, we apply our method to an open-system quantum associative memory formed of interacting spin-1/2 particles realizing coupled artificial neurons. The system undergoes a Markovian time evolution resulting from a dissipative retrieval dynamics that competes with a coherent quantum dynamics. We map out the non-equilibrium phase diagram and study the effect of temperature and Hamiltonian dynamics on the storage capacity. Our method opens an avenue for a systematic characterization of the storage capacity of quantum associative memories.
Antenna selection is capable of handling the cost and complexity issues in massive multiple-input multiple-output (MIMO) channels. The sum-rate capacity of a multiuser massive MIMO uplink channel is characterized under Nakagami fading. A mathematically tractable upper bound on the sum-rate capacity is derived for the considered system. Moreover, for a sufficiently large number of base station (BS) antennas, a deterministic equivalent (DE) of the sum-rate bound is derived. Based on this DE, the sum-rate capacity is shown to grow double-logarithmically with the number of BS antennas. The validity of the analytical results is confirmed by numerical experiments.
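The double-logarithmic scaling can be illustrated with a toy Monte Carlo experiment: the maximum of $M$ i.i.d. Gamma-distributed Nakagami power gains grows roughly like $\log M$, so the rate of the best link grows like $\log\log M$. This single-link sketch is our own simplification, not the paper's multiuser sum-rate analysis; the Nakagami parameter m = 2, unit average power gain, and SNR are assumed values.

```python
import math
import random

def best_antenna_rate(num_antennas, m=2.0, snr=10.0, rng=random.Random(1)):
    """Rate when using the strongest of `num_antennas` i.i.d. Nakagami-m
    fading links: the power gain |h|^2 is Gamma(m, scale=1/m), mean 1."""
    best_gain = max(rng.gammavariate(m, 1.0 / m) for _ in range(num_antennas))
    return math.log2(1.0 + snr * best_gain)

# The max of M i.i.d. Gamma gains grows like log M, so the rate above grows
# like log log M, the double-logarithmic scaling stated in the abstract.
for M in (10, 100, 1000, 10000):
    avg = sum(best_antenna_rate(M) for _ in range(200)) / 200
    print(M, round(avg, 2))
```

Each tenfold increase in antennas buys only a fraction of a bit, which is the practical content of the double-log law.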
Bjørn Ivar Teigen, Neil Davies, Kai Olav Ellefsen, et al.
Most Internet traffic is carried by capacity-seeking protocols such as TCP and QUIC. Capacity-seeking protocols probe to find the maximum available throughput from sender to receiver, and, once they converge, attempt to keep sending traffic at this maximum rate. Achieving reliable low latency with capacity-seeking end-to-end methods is not yet entirely solved. We contribute a theoretical analysis to this ongoing discussion. In this work, we derive an expression for the minimum size of the spike in latency caused by a sudden drop in network capacity. Our results highlight a quantifiable and fundamental constraint on capacity-seeking network traffic. When end-to-end capacity is suddenly reduced, capacity-seeking traffic inevitably produces a latency spike. A lower bound on this latency spike can be calculated by multiplying the round-trip delay from the network bottleneck to the source of capacity-seeking traffic by the magnitude of the end-to-end capacity reduction. Testbed experiments show that this bound holds for the DCTCP, BBR, and Cubic congestion control algorithms. Our results have implications for the design of low-latency PHY and MAC-layer technologies because we quantify an important transport-layer consequence of unstable traffic rates.
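The stated lower bound is easy to evaluate numerically. In the sketch below, the numbers are hypothetical, and the conversion of the excess queued bits into seconds of latency (draining them at the new, reduced rate) is our reading of the bound; the function name is ours.

```python
def min_latency_spike(rtt_bottleneck_to_source_s, old_capacity_bps, new_capacity_bps):
    """Lower bound from the abstract: during the feedback delay the sender keeps
    transmitting at the old rate, so at least delay * (rate drop) excess bits
    queue at the bottleneck; draining them at the new rate takes the time below."""
    excess_bits = rtt_bottleneck_to_source_s * (old_capacity_bps - new_capacity_bps)
    return excess_bits / new_capacity_bps  # seconds of added latency

# Hypothetical numbers: 20 ms feedback loop, capacity halves from 100 to 50 Mbit/s.
spike = min_latency_spike(0.020, 100e6, 50e6)
print(round(spike * 1000, 1), "ms")  # 20.0 ms
```

Note that when capacity halves, the spike equals the full feedback delay regardless of the absolute rates, since the ratio of the drop to the new capacity is 1.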
Sergei Bezrodnykh, Andrei Bogatyrev, Sergei Goreinov, et al.
Making use of two different analytical-numerical methods for capacity computation, we obtain numerical values for the capacities of a wide family of planar condensers that agree to very high precision. The two methods are based on the Lauricella function and on Riemann theta functions, respectively. We apply these results to benchmark the performance of numerical algorithms based on the adaptive $hp$-finite element method and the boundary integral method.
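For readers who want a self-contained benchmark of their own, the concentric circular condenser has the classical closed-form capacity $2\pi/\ln(R/r)$. The sketch below is our illustrative example, unrelated to the Lauricella and theta-function machinery of the paper; it checks the closed form against a crude radial-resistance quadrature.

```python
import math

def annulus_capacity_exact(r_inner, r_outer):
    """Closed-form capacity of the concentric circular condenser."""
    return 2.0 * math.pi / math.log(r_outer / r_inner)

def annulus_capacity_numeric(r_inner, r_outer, n=100000):
    """Crude numeric check: capacity = 1 / integral of dr / (2*pi*r),
    evaluated with the midpoint rule on the radial resistance."""
    h = (r_outer - r_inner) / n
    total = 0.0
    for i in range(n):
        r = r_inner + (i + 0.5) * h
        total += h / (2.0 * math.pi * r)
    return 1.0 / total

print(annulus_capacity_exact(1.0, 2.0))   # ~9.0647
print(annulus_capacity_numeric(1.0, 2.0)) # agrees to several digits
```

Simple geometries with known answers like this are the usual first sanity check before benchmarking against nontrivial condenser families.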
This paper derives upper and lower bounds on the capacity of the multiple-input single-output free-space optical intensity channel with signal-independent additive Gaussian noise subject to both an average-intensity and a peak-intensity constraint. In the limit where the signal-to-noise ratio (SNR) tends to infinity, the asymptotic capacity is specified, while in the limit where the SNR tends to zero, the exact slope of the capacity is also given.
In this paper, we study the capacity regions of two-way diamond channels. We show that for a linear deterministic model, the capacity of the diamond channel in each direction can be achieved simultaneously for all values of the channel parameters, where the forward and backward channel parameters are not necessarily the same. We divide the achievability scheme into three cases, depending on the forward and backward channel parameters. For the first case, we use a reverse amplify-and-forward strategy in the relays. For the second case, we use four relay strategies based on reverse amplify-and-forward, with some modifications in terms of replacement and repetition of some stream levels. For the third case, we use two relay strategies based on performing two rounds of repetition in a relay. The proposed schemes for deterministic channels are then used to find the capacity regions to within constant gaps for two special cases of the Gaussian two-way diamond channel. First, for the general Gaussian two-way relay channel, a simple coding scheme achieves the smallest gap reported in prior work. Then, a special symmetric Gaussian two-way diamond model is considered, and its capacity region is achieved to within four bits.
In this work we consider the large-coalition asymptotics of various fingerprinting and group testing games, and derive explicit expressions for the capacities of each of these models. We do this both for simple decoders (fast but suboptimal) and for joint decoders (slow but optimal). For fingerprinting, we show that if the pirate strategy is known, the capacity often decreases linearly with the number of colluders, instead of quadratically as in the uninformed fingerprinting game. For many attacks the joint capacity is further shown to be strictly higher than the simple capacity. For group testing, we improve upon known results about the joint capacities, and derive new explicit asymptotics for the simple capacities. These show that existing simple group testing algorithms are suboptimal, and that simple decoders cannot asymptotically be as efficient as joint decoders. For the traditional group testing model, we show that the gap between the simple and joint capacities is a factor of 1.44 for large numbers of defectives.
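The factor 1.44 matches the classical asymptotic constants for traditional group testing: a joint decoder can attain 1 bit per test, while simple decoders are limited to $\ln 2$ bits per test, giving a ratio of $1/\ln 2 \approx 1.4427$. These specific constants are our reading of the standard results; consult the paper for the precise model and regime.

```python
import math

# Known asymptotic rates for traditional group testing (bits per test):
# the joint-decoder capacity is 1, while the simple-decoder capacity is ln 2.
joint_capacity = 1.0
simple_capacity = math.log(2)          # ~0.6931 bits per test
gap = joint_capacity / simple_capacity # ~1.4427
print(round(gap, 2))  # 1.44
```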
The wiretap channel models secure communication between two users in the presence of an eavesdropper who must be kept ignorant of transmitted messages. The performance of such a system is usually characterized by its secrecy capacity, which determines the maximum transmission rate of secure communication. In this paper, the issue of whether or not the secrecy capacity is a continuous function of the system parameters is examined. In particular, this is done for channel uncertainty modeled via compound channels and arbitrarily varying channels, in which the legitimate users know only that the true channel realization is from a pre-specified uncertainty set. In the former model, this realization remains constant for the entire duration of transmission, while in the latter the realization varies from channel use to channel use in an unknown and arbitrary manner. These models not only capture the case of channel uncertainty, but are also suitable for modeling scenarios in which a malicious adversary jams or otherwise influences the legitimate transmission. The secrecy capacity of the compound wiretap channel is shown to be robust in the sense that it is a continuous function of the uncertainty set. Thus, small variations in the uncertainty set lead to small variations in secrecy capacity. On the other hand, the deterministic secrecy capacity of the \emph{arbitrarily varying wiretap channel} is shown to be discontinuous in the uncertainty set, meaning that small variations can lead to dramatic losses in capacity.
The so-called `TV white spaces' (TVWS), the unused TV channels in any given location resulting from the transition to digital broadcasting, have been designated by the U.S. Federal Communications Commission (FCC) for unlicensed use. Within the context of emerging 4G networks, they present significant new opportunities for developing wireless access technologies that meet the goals of the US National Broadband Plan (notably, true broadband access for an increasing fraction of the population). There are multiple challenges in realizing this goal, the most fundamental being that the available WS capacity is currently not accurately known, since it depends on a multiplicity of factors, including the system parameters of existing incumbents (broadcasters), the propagation characteristics of the local terrain, and FCC rules. In this paper, we explore the capacity of white space networks by developing a detailed model that includes all the major variables and accounts for the FCC regulations constraining incumbent protection. Real terrain information and propagation models for the primary broadcaster, together with adjacent-channel interference from TV transmitters, are included to estimate their impact on achievable WS capacity. The model is then used to explore trade-offs between network capacity and system parameters, and to suggest possible amendments to the FCC's incumbent-protection rules in favor of furthering white space capacity.
Georg Böcherer, Fabian Altenbach, Alex Alvarado, et al.
Bit-interleaved coded modulation (BICM) is a practical approach for reliable communication over the AWGN channel in the bandwidth-limited regime. For a signal constellation with 2^m points, BICM labels the signal points with bit strings of length m and then treats these m bits separately at both the transmitter and the receiver. BICM capacity is defined as the maximum of a certain achievable rate, where the maximization is over the probability mass functions (pmfs) of the bits. This is a non-convex optimization problem. So far, the optimal bit pmfs were determined via exhaustive search, which is of exponential complexity in m. In this work, an algorithm called the bit-alternating convex concave method (Bacm) is developed. This algorithm calculates BICM capacity with a complexity that scales approximately as m^3, by iteratively applying convex optimization techniques. Bacm is used to calculate the BICM capacity of 4-, 8-, 16-, 32-, and 64-PAM in AWGN. For PAM constellations with more than 8 points, the presented values are the first results known in the literature.
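To make the quantity being optimized concrete: with uniform (unoptimized) bit pmfs, the BICM achievable rate is the sum over bit levels of the mutual informations I(B_k; Y). The Monte Carlo sketch below estimates this sum for Gray-labeled 4-PAM on AWGN. It is our illustration of the objective, not the Bacm algorithm itself, which additionally optimizes over the bit pmfs; the noise level and labeling are assumed for the example.

```python
import math
import random

def bicm_rate_4pam(sigma=1.0, n_samples=100000, rng=random.Random(7)):
    """Monte Carlo estimate of the BICM rate sum_k I(B_k; Y) for Gray-labeled
    4-PAM with uniform bits on an AWGN channel with noise std `sigma`."""
    labels = {-3.0: (0, 0), -1.0: (0, 1), 1.0: (1, 1), 3.0: (1, 0)}
    points = list(labels)
    # Unnormalized Gaussian likelihood; the constant cancels in the ratios.
    phi = lambda y, x: math.exp(-(y - x) ** 2 / (2.0 * sigma ** 2))
    rate = 0.0
    for _ in range(n_samples):
        x = rng.choice(points)
        y = x + rng.gauss(0.0, sigma)
        p_y = sum(phi(y, s) for s in points) / 4.0
        for k in (0, 1):
            b = labels[x][k]
            p_y_b = sum(phi(y, s) for s in points if labels[s][k] == b) / 2.0
            rate += math.log2(p_y_b / p_y)
    return rate / n_samples

print(round(bicm_rate_4pam(), 2))  # between 0 and 2 bits per channel use
```

Evaluating this rate for every candidate bit pmf is what makes exhaustive search exponential in m, and what a structured method such as Bacm avoids.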
There are only two known kinds of zero-capacity quantum channels. The first kind produces entangled states that have positive partial transpose, and the second kind produces states that are cloneable. We consider the family of 'hybrid' quantum channels, which lies in the intersection of the above classes, and investigate its properties. This family gives rise to the first explicit examples of channels that create bound entangled states cloneable by an arbitrary finite number of parties. Hybrid channels thus provide the first example of highly cloneable binding-entanglement channels for which known superactivation protocols must fail (superactivation is the effect whereby two channels, each with zero quantum capacity, have positive capacity when used together). We give two methods to construct a hybrid channel from any binding-entanglement channel. We also find the low-dimensional counterparts of hybrid states: bipartite qubit states that are extendible and possess two-way key.