T. Oliveira, Maria F. O. Martins
Results for "Information theory"
Showing 20 of ~21,739,958 results · from DOAJ, arXiv, Semantic Scholar, CrossRef
H. Oinas-Kukkonen, M. Harjumaa
Tie-Yan Liu
Stefan Edelkamp
W. Alston, Fred I. Dretske
Contents: Acknowledgments; Preface; 1. Communication theory; 2. Communication and information; 3. A semantic theory of information; 4. Knowledge; 5. The communication channel; 6. Sensation and perception; 7. Coding and content; 8. The structure of belief; 9. Concepts and meaning; Notes; Index.
D. Chalmers
S. Seibert, Maria L. Kraimer, R. Liden
J. Gasser, H. Leutwyler
E. Biglieri, J. Proakis, S. Shamai
L. Zadeh
A. Liberman, I. Mattingly
K. Kuutti
C. Delbrück, B. Raffelhüschen
R. Mayer
Abstract A fundamental hypothesis underlying research on multimedia learning is that multimedia instructional messages that are designed in light of how the human mind works are more likely to lead to meaningful learning than those that are not. The cognitive theory of multimedia learning (CTML) is based on three cognitive science principles of learning: the human information processing system includes dual channels for visual/pictorial and auditory/verbal processing (i.e., dual-channels assumption); each channel has limited capacity for processing (i.e., limited capacity assumption); and active learning entails carrying out a coordinated set of cognitive processes during learning (i.e., active processing assumption). The cognitive theory of multimedia learning specifies five cognitive processes in multimedia learning: selecting relevant words from the presented text or narration, selecting relevant images from the presented illustrations, organizing the selected words into a coherent verbal representation, organizing selected images into a coherent pictorial representation, and integrating the pictorial and verbal representations and prior knowledge. Multimedia instructional messages should be designed to prime these processes. The Case for Multimedia Learning What is the rationale for a theory of multimedia learning? People learn more deeply from words and pictures than from words alone. This assertion – which can be called the multimedia principle – underlies much of the interest in multimedia learning. For thousands of years, words have been the major format for instruction – including spoken words, and within the last few hundred years, printed words.
Ram T. S. Ramakrishnan, A. Thakor
Matthew R. Jones, Helena Karsten
Matthew J. Dykas, J. Cassidy
HanQin Cai, Longxiu Huang
Tensor dimensionality reduction is one of the fundamental tools of modern data science. To address the high computational overhead, fiber-wise sampled subtensors that preserve the original tensor rank are often used in designing efficient and scalable tensor dimensionality reduction. However, the theory of property inheritance for subtensors, that is, how the essential properties of the original tensor are passed down to its subtensors, is still underdeveloped. This paper theoretically studies the inheritance of two key tensor properties, namely incoherence and condition number, under the tensor train setting. We also show how tensor train rank is preserved through fiber-wise sampling. The key parameters introduced in the theorems are numerically evaluated under various settings. The results show that the properties of interest are well preserved in subtensors formed via fiber-wise sampling. Overall, the paper provides several handy analytic tools for developing efficient tensor analysis methods.
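A minimal sketch of the fiber-wise sampling the abstract describes, under illustrative assumptions (a small random tensor-train tensor built from three cores; the index sets and shapes are hypothetical, not from the paper). Keeping every mode-2 fiber over sampled index sets in the other modes yields a subtensor whose first-mode unfolding keeps the original rank:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rank-(1,2,2,1) tensor-train tensor of shape (8, 9, 10),
# contracted from three random TT cores.
G1 = rng.standard_normal((1, 8, 2))
G2 = rng.standard_normal((2, 9, 2))
G3 = rng.standard_normal((2, 10, 1))
T = np.einsum('aib,bjc,ckd->ijk', G1, G2, G3)

def fiberwise_subtensor(T, rows, cols):
    """Keep every mode-2 fiber T[i, :, k] for the sampled (i, k) index sets."""
    return T[np.ix_(rows, range(T.shape[1]), cols)]

# Sample 4 mode-0 indices and 5 mode-2 indices; mode-2 fibers stay whole.
S = fiberwise_subtensor(T, rng.choice(8, 4, replace=False),
                        rng.choice(10, 5, replace=False))
print(S.shape)  # (4, 9, 5)

# Generically, the first-mode unfolding rank (a TT-rank bound) is inherited.
print(np.linalg.matrix_rank(T.reshape(8, -1)),
      np.linalg.matrix_rank(S.reshape(4, -1)))
```

Because the cores are generic, any sufficiently large random subset of fibers already spans the same low-dimensional range, which is the rank-preservation phenomenon the paper formalizes.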
Lingyi Chen, Shitong Wu, Sicheng Xu et al.
The information bottleneck (IB) method is a technique designed to extract meaningful information related to one random variable from another random variable, and has found extensive applications in machine learning problems. In this paper, neural network based estimation of the IB problem solution is studied, through the lens of a novel formulation of the IB problem. Via exploiting the inherent structure of the IB functional and leveraging the mapping approach, the proposed formulation of the IB problem involves only a single variable to be optimized, and subsequently is readily amenable to data-driven estimators based on neural networks. A theoretical analysis is conducted to guarantee that the neural estimator asymptotically solves the IB problem, and the numerical experiments on both synthetic and MNIST datasets demonstrate the effectiveness of the neural estimator.
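For orientation, the IB objective the paper starts from can be illustrated with the classic discrete self-consistent updates (the Tishby-style iterative scheme, not the paper's neural estimator): minimize I(X;T) − β·I(T;Y) over an encoder p(t|x) via p(t|x) ∝ p(t)·exp(−β·KL(p(y|x) ‖ p(y|t))). The joint distribution and sizes below are made-up toy values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy discrete joint p(x, y): 4 source symbols, 3 relevance labels (hypothetical).
pxy = rng.random((4, 3)); pxy /= pxy.sum()
px = pxy.sum(1)                       # p(x)
py_x = pxy / px[:, None]              # p(y|x)

def ib_iterate(beta, n_t=2, iters=200):
    """Classic self-consistent IB updates for a discrete encoder p(t|x)."""
    qt_x = rng.random((4, n_t)); qt_x /= qt_x.sum(1, keepdims=True)
    for _ in range(iters):
        qt = px @ qt_x                                        # p(t)
        qy_t = (pxy.T @ qt_x).T / np.maximum(qt[:, None], 1e-12)  # p(y|t)
        qy_t = np.maximum(qy_t, 1e-12)
        # kl[x, t] = KL(p(y|x) || p(y|t))
        kl = np.einsum('xy,xty->xt', py_x,
                       np.log(py_x[:, None, :] / qy_t[None, :, :]))
        qt_x = qt * np.exp(-beta * kl)
        qt_x /= qt_x.sum(1, keepdims=True)
    return qt_x

qt_x = ib_iterate(beta=5.0)
print(np.round(qt_x, 3))   # each row of p(t|x) sums to 1
```

The paper's contribution is to replace this alternating scheme with a single-variable formulation that a neural network can optimize directly from samples; the sketch above only shows the objective being solved.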
Akhil Premkumar
Diffusion models transform noise into data by injecting information that was captured in their neural network during the training phase. In this paper, we ask: what is this information? We find that, in pixel-space diffusion models, (1) a large fraction of the total information in the neural network is committed to reconstructing small-scale perceptual details of the image, and (2) the correlations between images and their class labels are informed by the semantic content of the images, and are largely agnostic to the low-level details. We argue that these properties are intrinsically tied to the manifold structure of the data itself. Finally, we show that these facts explain the efficacy of classifier-free guidance: the guidance vector amplifies the mutual information between images and conditioning signals early in the generative process, influencing semantic structure, but tapers out as perceptual details are filled in.
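The classifier-free guidance combination the abstract refers to is a simple extrapolation of the model's noise predictions; a minimal numerical sketch, with made-up prediction vectors standing in for one denoising step:

```python
import numpy as np

def cfg_combine(eps_uncond, eps_cond, w):
    """Classifier-free guidance: extrapolate from the unconditional prediction
    toward the conditional one by guidance weight w (w=1 gives the plain
    conditional prediction; w>1 amplifies the conditioning direction)."""
    return eps_uncond + w * (eps_cond - eps_uncond)

# Hypothetical noise predictions at one step of the reverse process.
eps_u = np.array([0.1, -0.3, 0.2])
eps_c = np.array([0.4,  0.0, 0.1])
print(cfg_combine(eps_u, eps_c, w=3.0))  # [ 1.   0.6 -0.1]
```

The difference eps_cond − eps_uncond is the "guidance vector" whose information-theoretic role the paper analyzes: scaling it up early in sampling steers semantic structure, while its effect fades once low-level details are being filled in.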
Page 25 of 1,086,998