This paper considers wireless device-to-device (D2D) coded caching in a multiaccess network, where the users communicate with each other and each user can access multiple cache nodes. Access topologies derived from two combinatorial designs known as the $t$-design and the $t$-group divisible design ($t$-GDD), referred to as the $t$-design and $t$-GDD topologies respectively, which subsume several other known topologies, have been studied for the multiaccess coded caching (MACC) network by Cheng \textit{et al.} in \cite{MACC_des}. The multiaccess D2D coded caching (MADCC) network, in contrast, has so far been studied only for the cyclic wrap-around topology. In this paper, the $t$-design and $t$-GDD access topologies are extended to the MADCC network and novel MADCC schemes are proposed. Apart from the proposed novel schemes, MADCC schemes are also derived from the existing MACC schemes in \cite{MACC_des}. To compare the performance of the different MADCC schemes, the metrics of load per user and subpacketization level are used while keeping the number of caches and the cache memory size the same. The proposed MADCC scheme with the $t$-design topology achieves the same load per user as the MADCC scheme derived from the MACC scheme with the $t$-design topology in \cite{MACC_des}, with a lower subpacketization level. The proposed MADCC scheme with the $t$-GDD topology, in some cases, achieves a lower load per user than the MADCC scheme derived from the MACC scheme with the $t$-GDD topology in \cite{MACC_des} at the same subpacketization level. Compared to the existing MADCC scheme with the cyclic wrap-around topology, the proposed MADCC scheme with the $t$-design topology achieves a lower load per user, and the proposed MADCC scheme with the $t$-GDD topology achieves a lower subpacketization level at the expense of an increase in load per user.
This paper investigates a unification of distributed source coding, multiple description coding, and source coding with side information at decoders. The equivalence between the multiple-decoder extension of distributed source coding with decoder side information and the multiple-source extension of multiple description coding with decoder side information is clarified. Their multi-letter rate-distortion region for arbitrary general correlated sources is characterized in terms of entropy functions. We construct a code based on constrained-random number generators and show its achievability.
In this paper, we prove a lower bound on the soundness of quantum locally testable codes under the distance balancing construction of Evra et al. arXiv:2004.07935 [quant-ph]. Our technical contribution is that the new soundness of the quantum code is at least the old soundness divided by the classical code length (up to a constant factor). This allows us to use any classical code with independent checks when distance balancing, where previously only the repetition code had been considered for qLTCs. By using a good classical LDPC code, we are able to grow the dimension of the hypersphere product codes arXiv:1608.05089 [quant-ph] and the hemicubic codes arXiv:1911.03069 [quant-ph] while maintaining their distance and locality, but at the expense of soundness. From this, and also by distance balancing a chain complex of Cross et al. arXiv:2209.11405 [cs.IT], we obtain quantum locally testable codes of new parameters.
Resolving a linear combination of point sources from their band-limited Fourier data is a fundamental problem in imaging and signal processing. With incomplete Fourier data and inevitable measurement noise, there is a fundamental limit on the separation distance between point sources that can be resolved; this is the so-called resolution limit problem. Characterizing this resolution limit remains a long-standing puzzle despite the prevalent use of the classic Rayleigh limit. It is well known that the Rayleigh limit is heuristic, and its drawbacks become prominent when dealing with data that is subjected to delicate processing, as modern computational imaging methods do. Therefore, a more precise characterization of the resolution limit becomes increasingly necessary as data processing methods develop. For this purpose, we developed a theory of "computational resolution limit" for both number detection and support recovery in one dimension in [arXiv:2003.02917 [cs.IT], arXiv:1912.05430 [eess.IV]]. In this paper, we extend the one-dimensional theory to multiple dimensions. More precisely, we define and quantitatively characterize the "computational resolution limit" for the number detection and support recovery problems in a general k-dimensional space. Our results indicate that there exists a phase transition phenomenon with respect to the super-resolution factor and the signal-to-noise ratio in each of the two recovery problems. Our main results are derived using a subspace projection strategy. Finally, to verify the theory, we propose deterministic subspace-projection-based algorithms for the number detection and support recovery problems in dimensions two and three. The numerical results confirm the phase transition phenomenon predicted by the theory.
In many current data processing systems, the objective is often not the reproduction of data, but the computation of answers to queries based on the data. The similarity identification task is to identify the items in a database that are similar to a given query item under a given metric. The problem of compression for similarity identification has been studied in arXiv:1307.6609 [cs.IT]. Unlike classical compression problems, the focus is not on reconstructing the original data. Instead, the compression rate is determined by the desired reliability of the answers. Specifically, the information measure identification rate characterizes the minimum rate achievable among all schemes which guarantee reliable answers with respect to a given similarity threshold. In this paper, we propose a component-based model for computing correlated similarity queries. The correlated signals are first decorrelated by the KLT transform. Then, each component of the decorrelated signal is processed by a distinct D-admissible system. We show that the component-based model equipped with the KLT can perfectly represent multivariate Gaussian similarity queries when the optimal rate-similarity allocation applies. Hence, we can derive the identification rate of multivariate Gaussian signals based on the component-based model. We then extend the result to general Gaussian sources with memory. We also study models equipped with practical component systems. We use TC-$\triangle$ schemes, which use type covering signatures and triangle-inequality decision rules, as our component systems. We propose an iterative method to numerically approximate the minimum achievable rate of the TC-$\triangle$ scheme. We show that our component-based model equipped with TC-$\triangle$ schemes can achieve better performance than the unaided TC-$\triangle$ scheme in handling multivariate Gaussian sources.
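The decorrelation step above, a Karhunen-Loève transform realized by projecting the correlated Gaussian signal onto the eigenvectors of its covariance matrix, can be sketched numerically. The covariance matrix below is purely illustrative and is not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical correlated 2-D Gaussian source (illustrative covariance).
cov = np.array([[2.0, 1.2],
                [1.2, 1.0]])
x = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=100_000)

# KLT: project onto the eigenvectors of the source covariance matrix.
eigvals, eigvecs = np.linalg.eigh(cov)
y = x @ eigvecs  # decorrelated components

# The empirical covariance of y should be (nearly) diagonal, with the
# eigenvalues of `cov` on the diagonal; each component can then be fed
# to its own per-component system.
emp_cov = np.cov(y, rowvar=False)
print(np.round(emp_cov, 2))
```

After this step, each column of `y` is (approximately) an independent Gaussian component, which is what allows a distinct per-component system to be applied to each one.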
Hsuan-Yin Lin, Siddhartha Kumar, Eirik Rosnes
We consider private information retrieval (PIR) for distributed storage systems (DSSs) with noncolluding nodes where data is stored using a linear code that is not maximum distance separable (MDS). It was recently shown that if data is stored using a particular class of non-MDS linear codes, the MDS-PIR capacity, i.e., the maximum possible PIR rate for MDS-coded DSSs, can be achieved. For this class of codes, we prove that the PIR capacity is indeed equal to the MDS-PIR capacity, giving the first family of non-MDS codes for which the PIR capacity is known. For other codes, we provide asymmetric PIR protocols that achieve a strictly larger PIR rate compared to existing symmetric PIR protocols.
We show how to exactly sample discrete probability distributions whose defining parameters are distributed among remote parties. For this purpose, von Neumann's rejection algorithm is turned into a distributed sampling communication protocol. We study the expected number of bits communicated among the parties and also exhibit a trade-off between the number of rounds of the rejection algorithm and the number of bits transmitted in the initial phase. Finally, we apply remote sampling to the simulation of quantum entanglement in its most general form possible, when an arbitrary number of parties share systems of arbitrary dimensions on which they apply arbitrary measurements (not restricted to being projective measurements). When the dimension of the systems and the number of possible outcomes per party are bounded by a constant, it suffices to communicate an expected O(m^2) bits in order to simulate exactly the outcomes that these measurements would have produced on those systems, where m is the number of participants.
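The building block of the protocol, von Neumann's rejection algorithm for discrete distributions, can be sketched in its basic, non-distributed form. The target and proposal distributions below are illustrative only and not taken from the paper; in the distributed setting the accept/reject test would be driven by communicated bits rather than local knowledge of p.

```python
import random

def von_neumann_sample(p, q, c, rng):
    """Sample index i with probability p[i] by rejection against proposal q,
    assuming p[i] <= c * q[i] for all i.  Returns (sample, rounds used)."""
    rounds = 0
    while True:
        rounds += 1
        # Draw a candidate from the proposal distribution q.
        i = rng.choices(range(len(q)), weights=q)[0]
        # Accept with probability p[i] / (c * q[i]).
        if rng.random() < p[i] / (c * q[i]):
            return i, rounds

# Illustrative target distribution and uniform proposal.
p = [0.5, 0.3, 0.2]
q = [1 / 3, 1 / 3, 1 / 3]
c = max(pi / qi for pi, qi in zip(p, q))  # smallest valid constant

rng = random.Random(0)
counts = [0, 0, 0]
for _ in range(30_000):
    i, _ = von_neumann_sample(p, q, c, rng)
    counts[i] += 1
print([round(n / 30_000, 2) for n in counts])
```

The expected number of rounds is c, which is why the trade-off between rounds and initially transmitted bits mentioned in the abstract matters for the communication cost.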
Seyed Pooya Shariatpanahi, Giuseppe Caire, Babak Hossein Khalaj
We investigate the potential of applying the coded caching paradigm in wireless networks. To this end, we investigate physical layer schemes for downlink transmission from a multi-antenna transmitter to several cache-enabled users. As the baseline scheme we consider employing coded caching on top of max-min fair multicasting, which is shown to be far from optimal at high SNR values. Our first proposed scheme, which is near-optimal in terms of DoF, is the natural extension of multiserver coded caching to Gaussian channels. As we demonstrate, its finite-SNR performance is not satisfactory, and thus we propose a new scheme in which the linear combination of messages is implemented in the finite field domain and the one-shot precoding for the MISO downlink is implemented in the complex field. While this modification results in the same near-optimal DoF performance, we show that it leads to significant performance improvement at finite SNR. Finally, we extend our scheme to the previously considered cache-enabled interference channels, and moreover, we provide an ergodic rate analysis of our scheme. Our results convey the important message that although directly translating schemes from network coding ideas to wireless networks may work well at high SNR values, careful modifications need to be considered for acceptable finite-SNR performance.
A single unicast index coding problem (SUICP) with symmetric neighboring interference (SNI) has $K$ messages and $K$ receivers, with the $k$th receiver $R_{k}$ wanting the $k$th message $x_{k}$ and having the side information $\mathcal{K}_{k}=(\mathcal{I}_{k} \cup x_{k})^c,$ where $\mathcal{I}_{k}= \{x_{k-U},\dots,x_{k-2},x_{k-1}\}\cup\{x_{k+1}, x_{k+2},\dots,x_{k+D}\}$ is the interference, consisting of $D$ messages after and $U$ messages before its desired message. Maleki, Cadambe and Jafar obtained the capacity of this symmetric neighboring interference single unicast index coding problem (SNI-SUICP) with $K$ tending to infinity, and Blasiak, Kleinberg and Lubetzky obtained it for the special case $D=U=1$ with $K$ finite. In this work, for any finite $K$ and arbitrary $D$, we obtain the capacity for the case $U=\gcd(K,D+1)-1$. Our proof is constructive, i.e., we give an explicit construction of a linear index code achieving the capacity.
Background: Many authors have proposed Quantitative Theories of Consciousness (QTC) based on theoretical principles like information theory, Granger causality and complexity. Recently, Virmani and Nagaraj (arXiv:1608.08450v2 [cs.IT]) noted the similarity between Integrated Information and Compression-Complexity and, on this basis, proposed a novel measure of network complexity called Phi-Compression Complexity (Phi-C or $Φ^C$). Their computer simulations using Boolean networks showed that $Φ^C$ compares favorably to Giulio Tononi et al.'s Integrated Information measure $Φ$ 3.0 and exhibits desirable mathematical and computational characteristics. Methods: In the present work, $Φ^C$ was measured for two types of simulated networks: (A) networks representing simple neuronal connectivity motifs (presented in Fig. 9 of Tononi and Sporns, BMC Neuroscience 4(1), 2003); (B) random networks derived from Erdős-Rényi $G(N, p)$ graphs. Code for all simulations was written in Python 3.6, and the library NetworkX was used to simulate the graphs. Results and discussion summary: In simulations A, for the same set of networks, $Φ^C$ values differ from the values of IIT 1.0 $Φ$ in a counter-intuitive manner. It appears that $Φ^C$ captures some invariant aspects of the interplay between information integration, network topology, graph composition and node entropy. While Virmani and Nagaraj (arXiv:1608.08450v2 [cs.IT]) sought to highlight the correlations between $Φ^C$ and IIT $Φ$, the results of simulations A highlight the differences between the two measures in the way they capture integrated information. In simulations B, the results of simulations A are extended to the more general case of random networks. In the concluding section we outline the novel aspects of this paper, and our ongoing and future research.
Review of the book: Guerrero, Mauricio (ed.) (2014). Objetos públicos, espacios privados. Usuarios y relaciones sociales en tres centros comerciales de Santiago de Cali. Cali: Universidad Icesi, 158 pp.
The existing upper and lower bounds between entropy and error are mostly derived through inequalities, without linking to joint distributions. In fact, from both theoretical and application viewpoints, there is a need for a complete set of interpretations of the bounds in relation to joint distributions. For this reason, in this work we propose a new approach for deriving the bounds between entropy and error from a joint distribution. The specific case study is binary classification, which justifies the need for the proposed approach. Two basic types of classification errors are investigated, namely, the Bayesian and non-Bayesian errors. For both errors, we derive closed-form expressions of the upper bound and lower bound in relation to joint distributions. The solutions show that Fano's lower bound is an exact bound for either type of error in a relation diagram of "Error Probability vs. Conditional Entropy". A new upper bound for the Bayesian error is derived with respect to the minimum prior probability, which is generally tighter than Kovalevskij's upper bound.
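The quantities relating error to conditional entropy in the binary case can be checked numerically: for a joint distribution P(X, Y), the Bayes error and H(X|Y) must satisfy Fano's inequality h2(P_e) >= H(X|Y). The joint distribution below is a hypothetical example, not one taken from the paper.

```python
import numpy as np

def h2(p):
    """Binary entropy in bits; h2(0) = h2(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Hypothetical joint distribution P(X, Y) for a binary label X and a
# binary observation Y (rows: x = 0, 1; columns: y = 0, 1).
P = np.array([[0.35, 0.15],
              [0.10, 0.40]])

# Bayes error: for each y, an optimal decoder picks the most likely x,
# so the error mass contributed by y is the smaller entry in column y.
P_e = sum(P[:, y].min() for y in range(2))

# Conditional entropy H(X|Y) = sum_y P(y) * H(X | Y=y).
H_XY = 0.0
for y in range(2):
    py = P[:, y].sum()
    H_XY += py * h2(P[0, y] / py)

# Fano's inequality for binary X: h2(P_e) >= H(X|Y).
print(round(P_e, 2), round(H_XY, 3), round(h2(P_e), 3))
```

For this example P_e = 0.25 and h2(P_e) exceeds H(X|Y) only slightly, illustrating how tight Fano's lower bound can be when read off directly from a joint distribution.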
An intuitive outer bound for the multiterminal source coding problem is given. The proposed bound explicitly couples the rate distortion functions for each source and correlation measures which derive from a "strong" data processing inequality. Unlike many standard outer bounds, the proposed bound is not parameterized by a continuous family of auxiliary random variables, but instead only requires maximizing two ratios of divergences which do not depend on the distortion functions under consideration.
Regenerating codes and codes with locality are two schemes that have recently been proposed to ensure data collection and reliability in a distributed storage network. In a situation where one is attempting to repair a failed node, regenerating codes seek to minimize the amount of data downloaded for node repair, while codes with locality attempt to minimize the number of helper nodes accessed. In this paper, we provide several constructions for a class of vector codes with locality in which the local codes are regenerating codes, that enjoy both advantages. We derive an upper bound on the minimum distance of this class of codes and show that the proposed constructions achieve this bound. The constructions include both the cases where the local regenerating codes correspond to the MSR as well as the MBR point on the storage-repair-bandwidth tradeoff curve of regenerating codes. Also included is a performance comparison of various code constructions for fixed block length and minimum distance.
In this work a four-terminal complex Gaussian network composed of a source, a destination, an eavesdropper and a jammer relay is studied under two different sets of assumptions: (i) the jammer relay does not hear the source transmission, and (ii) the jammer relay is causally given the source message. In both cases the jammer relay assists the eavesdropper and aims to decrease the achievable secrecy rates, while the source aims to increase them. To help the eavesdropper, the jammer relay can use pure relaying and/or send interference. Each of the problems is formulated as a two-player, non-cooperative, zero-sum continuous game. Assuming Gaussian strategies at the source and the jammer relay in the first problem, the Nash equilibrium is found and shown to be achieved with mixed strategies in general. The optimal cumulative distribution functions (cdfs) for the source and the jammer relay that achieve the value of the game, which is the Nash equilibrium secrecy rate, are found. For the second problem, the Nash equilibrium solution is found and the results are compared to the case when the jammer relay is not informed about the source message.
For every p in (0,1/2), we give an explicit construction of binary codes of rate approaching "capacity" 1-H(p) that enable reliable communication in the presence of worst-case additive errors, caused by a channel oblivious to the codeword (but not necessarily the message). Formally, we give an efficient "stochastic" encoding E(\cdot,\cdot) of messages combined with a small number of auxiliary random bits, such that for every message m and every error vector e (that could depend on m) that contains at most a fraction p of ones, with high probability over the random bits r chosen by the encoder, m can be efficiently recovered from the corrupted codeword E(m,r) + e by a decoder without knowledge of the encoder's randomness r. Our construction for additive errors also yields explicit deterministic codes of rate approaching 1-H(p) for the "average error" criterion: for every error vector e with at most a fraction p of ones, most messages m can be efficiently (uniquely) decoded from the corrupted codeword C(m)+e. Note that such codes cannot be linear, as the bad error patterns for all messages are the same in a linear code. We also give a new proof of the existence of such codes based on list decoding and certain algebraic manipulation detection codes. Our proof is simpler than the previous proofs from the literature on arbitrarily varying channels.