For several styles of fidelity constraints -- guaranteed distortion, conditional excess distortion, and excess distortion -- we establish mutual-information upper bounds on the minimum expected description length needed to represent a random variable. Coupled with the corresponding converses, these results show that, as long as the information content of the data is not too low, minimizing the mutual information under an appropriate fidelity constraint serves as a reasonable proxy for the minimum description length of the data. We provide alternative characterizations of all three convex proxies, shedding light on the structure of their solutions.
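As a hedged illustration of the form such convex proxies take (the distortion measure $d$, level $D$, and probability $\varepsilon$ are generic placeholders; the paper's exact constraint sets may differ), two representative formulations are:
$$R_{\mathrm{gd}}(D) \;=\; \min_{P_{\hat{X}\mid X}\,:\; d(X,\hat{X}) \le D \ \text{a.s.}} I(X;\hat{X}), \qquad R_{\mathrm{ed}}(D,\varepsilon) \;=\; \min_{P_{\hat{X}\mid X}\,:\; \Pr\{d(X,\hat{X}) > D\} \le \varepsilon} I(X;\hat{X}).$$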
We investigate the information-theoretic limits and design of communication under receiver quantization. Unlike most existing studies, this work focuses on the impact of reducing the resolution from high to low. We consider a standard transceiver architecture with an i.i.d. complex Gaussian codebook at the transmitter and a symmetric quantizer cascaded with a nearest-neighbor decoder at the receiver. Employing the generalized mutual information (GMI), an achievable rate under general quantization rules is obtained in analytical form, showing that the rate loss due to quantization is $\log\left(1+\gamma\,\mathsf{SNR}\right)$, where $\gamma$ is determined by the thresholds and levels of the quantizer. Based on this result, the performance under uniform receiver quantization is analyzed comprehensively. We show that the front-end gain control, which determines the loading factor of the quantizer, has an increasing impact on performance as the resolution decreases. In particular, we prove that the unique loading factor that minimizes the MSE also maximizes the GMI, and the corresponding irreducible rate loss is $\log\left(1+\mathsf{mmse}\cdot\mathsf{SNR}\right)$, where $\mathsf{mmse}$ is the minimum MSE normalized by the variance of the quantizer input and equals the minimum value of $\gamma$. A geometric interpretation of the optimal uniform quantization at the receiver is further established. Moreover, through asymptotic analysis, we characterize the impact of biased gain control, showing how small rate losses decay to zero and providing rate approximations under large bias. From asymptotic expressions of the optimal loading factor and the mmse, approximations and several per-bit rules for performance are also provided. Finally, we discuss further types of receiver quantization and show that the consistency between achievable rate maximization and MSE minimization does not hold in general.
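A minimal numerical sketch of the loading-factor trade-off described above, assuming a $b$-bit symmetric uniform quantizer with conditional-mean reconstruction levels and a real unit-variance Gaussian input (the paper's complex-valued setting and exact definitions of $\gamma$ and mmse may differ):

```python
import numpy as np
from scipy.stats import norm

def normalized_mmse(bits, loading_factor):
    """Normalized MSE of a symmetric uniform quantizer for a unit-variance Gaussian
    input, with conditional-mean (MMSE) reconstruction levels in each cell."""
    levels = 2 ** bits
    delta = 2.0 * loading_factor / levels                         # step size over [-L, L]
    inner = -loading_factor + delta * np.arange(1, levels)        # inner thresholds
    edges = np.concatenate(([-np.inf], inner, [np.inf]))
    lo, hi = edges[:-1], edges[1:]
    p = norm.cdf(hi) - norm.cdf(lo)                               # cell probabilities
    m1 = norm.pdf(lo) - norm.pdf(hi)                              # E[X; cell] (pdf is 0 at +/- inf)
    valid = p > 0
    return 1.0 - np.sum(m1[valid] ** 2 / p[valid])                # Var(X) - Var(E[X | cell])

bits, snr_db = 4, 20.0
snr = 10 ** (snr_db / 10)
loadings = np.linspace(0.5, 6.0, 400)
mses = np.array([normalized_mmse(bits, L) for L in loadings])
L_opt, mmse = loadings[np.argmin(mses)], mses.min()
rate_loss = np.log2(1.0 + mmse * snr)                             # in bits, per real dimension
print(f"optimal loading factor ~ {L_opt:.2f}, mmse ~ {mmse:.4f}, rate loss ~ {rate_loss:.3f} bits")
```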
With the rapid development of generative artificial intelligence, traditional cloud-based centralized model training and inference face significant limitations due to high transmission latency and cost, which restrict user-side, in-situ Artificial Intelligence Generated Content (AIGC) service requests. To this end, we propose the Edge Artificial Intelligence Generated Content (EdgeAIGC) framework, which addresses these challenges by processing services in situ, close to the data source, through edge computing. However, AIGC models usually have large parameter scales and complex computing requirements, which places a heavy burden on the storage and computing resources of edge devices. This paper focuses on the edge intelligence model caching and resource allocation problems in the EdgeAIGC framework, aiming to improve the model cache hit rate and resource utilization of edge devices by optimizing the model caching strategy and resource allocation scheme, and to realize in-situ AIGC service processing. With the objectives of minimizing service request response time and execution cost in resource-constrained environments, we employ the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm. Experimental results show that, compared with other methods, our model caching and resource allocation strategies improve the cache hit rate by at least 41.06% and reduce the response cost as well.
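As a minimal sketch of the kind of scalar reward such a TD3 agent could maximize (the weights, inputs, and cache-hit bonus below are illustrative assumptions, not the paper's formulation):

```python
def aigc_reward(response_time_s, execution_cost, cache_hit,
                w_time=1.0, w_cost=0.5, hit_bonus=0.2):
    """Illustrative reward: penalize response time and execution cost, reward cache hits.
    A TD3 agent would maximize the expected discounted sum of this signal."""
    return -(w_time * response_time_s + w_cost * execution_cost) + (hit_bonus if cache_hit else 0.0)
```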
Background: In the medical field, value co-creation involves patients' active involvement. By collaborating with service providers, patients can contribute to the creation of more targeted and effective value. Patients' self-efficacy and behavior are crucial in this process, as their active participation and support can enhance their service experience. This study investigated the impact of chronic disease patients' self-efficacy and value co-creation behaviors on the outcomes of value co-creation. Methods: Relevant data were acquired through a questionnaire survey and analyzed using statistical methods such as the t-test, analysis of variance, and stratified linear regression to examine the current conditions and factors influencing value co-creation outcomes among community-dwelling patients with chronic diseases. Additionally, a structural equation model was employed to systematically investigate and validate the pathways and mechanisms through which self-efficacy and value co-creation behaviors influence value co-creation outcomes. We also explored the moderating effect of digital health technology application capability on the relationship between self-efficacy and value co-creation behaviors. Results: Self-efficacy, information search, interactive collaboration, feedback provision, and shared decision-making exert significant positive influences on the value co-creation outcomes of individuals with chronic diseases. The path analysis of the structural equation model indicates that self-efficacy and value co-creation behaviors may directly impact value co-creation outcomes. Concurrently, value co-creation behaviors partially mediate the association between self-efficacy and value co-creation outcomes. Furthermore, digital health technology application capability exhibits a negative moderating effect in the pathway from self-efficacy to value co-creation behaviors. Conclusions: The implementation of health education and social support measures by healthcare institutions and communities may augment patient self-efficacy, facilitate doctor-patient interactions, and promote shared decision-making. These initiatives could enhance the value of chronic disease services and optimize patient experiences. Additionally, healthcare institution managers are encouraged to focus on optimizing internet hospital platforms, organizing digital health training for patients, and bolstering patients' proficiency in digital health technology applications. This strategy aims to instill a sense of health responsibility among patients with chronic diseases by fostering positive behaviors in interactive collaboration, information search, feedback provision, and other dimensions.
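For readers unfamiliar with moderation analysis, a minimal sketch of how such an interaction effect could be tested in Python (the variable names and data file are hypothetical, not the study's instrument):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per respondent, with standardized scores for
# self-efficacy, digital health technology application capability, and behaviors.
df = pd.read_csv("survey_scores.csv")
model = smf.ols("cocreation_behaviors ~ self_efficacy * digital_capability", data=df).fit()
print(model.summary())  # a negative coefficient on self_efficacy:digital_capability
                        # corresponds to a negative moderating effect
```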
Artificial Intelligence (AI) is rapidly transforming engineering fields, from robotics to aerospace, with applications in control systems for UAVs and satellites. This work builds on a previously developed AI attitude controller for the InnoCube 3U nanosatellite. Deploying complex Neural Networks (NNs) on resource-limited microcontrollers presents a significant challenge. To overcome this, we propose distilling a Multi-Layer Perceptron (MLP) trained with Deep Reinforcement Learning (DRL) for attitude control into a Kolmogorov–Arnold Network (KAN). We convert this numeric KAN into a symbolic KAN, where each edge represents a learnable mathematical function, and finally extract a concise symbolic formula. This symbolic representation dramatically reduces memory usage and computational complexity, making it ideal for pico- and nanosatellites. We evaluate and demonstrate the feasibility of this approach for inertial pointing with reaction wheels in simulation using a realistic model of the InnoCube satellite. Our results show that the highly compressed KANs successfully solve the attitude control problem, while reducing the required memory footprint and inference time on the InnoCube ADCS hardware by over an order of magnitude. Beyond attitude control, we believe symbolic KANs hold great potential in aerospace for neural network compression and interpretable, data-driven modeling and system identification in future space missions.
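A minimal PyTorch sketch of the output-matching distillation step, with randomly initialized stand-ins for both the DRL-trained teacher MLP and the KAN student (dimensions, sampling distribution, and loss are illustrative assumptions; the actual work distills into a KAN and then extracts a symbolic formula):

```python
import torch
import torch.nn as nn

# Placeholders: in practice the teacher is the frozen DRL-trained policy loaded from a
# checkpoint and the student is a KAN; both are stand-ins here so the sketch runs.
teacher = nn.Sequential(nn.Linear(7, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 3)).eval()
student = nn.Sequential(nn.Linear(7, 16), nn.Tanh(), nn.Linear(16, 3))

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for step in range(5_000):
    states = torch.randn(256, 7)          # e.g. attitude-error quaternion + angular rates
    with torch.no_grad():
        targets = teacher(states)         # teacher's wheel-torque commands as labels
    loss = nn.functional.mse_loss(student(states), targets)
    opt.zero_grad(); loss.backward(); opt.step()
```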
Weronika Łajewska, Damiano Spina, Johanne Trippas
et al.
The increasing reliance on digital information necessitates advancements in conversational search systems, particularly in terms of information transparency. While prior research in conversational information seeking has concentrated on improving retrieval techniques, the challenge of generating responses that are useful from a user perspective remains. This study explores different methods of explaining responses, hypothesizing that transparency about the source of the information, the system's confidence, and its limitations can enhance users' ability to objectively assess a response. By exploring transparency across explanation type, quality, and presentation mode, this research aims to bridge the gap between system-generated responses and responses verifiable by the user. We design a user study to answer questions concerning the impact of (1) the quality of explanations enhancing the response on its usefulness and (2) ways of presenting explanations to users. The analysis of the collected data reveals lower user ratings for noisy explanations, although these scores seem insensitive to the quality of the response. Inconclusive results on the presentation format of explanations suggest that it may not be a critical factor in this setting.
We study the problem of weakly private information retrieval (PIR) when there is heterogeneity in the servers' trustworthiness, under the maximal leakage (Max-L) metric and the mutual information (MI) metric. A user wishes to efficiently retrieve a desired message from N non-colluding servers such that the identity of the desired message is not leaked in a significant manner; however, some servers can be more trustworthy than others. We propose a code construction for this setting and optimize the probability distribution of this construction. For the Max-L metric, it is shown that the optimal probability allocation for the proposed scheme essentially separates the delivery patterns into two parts: a completely private part that has the same download overhead as the capacity-achieving PIR code, and a non-private part that allows complete privacy leakage but incurs no download overhead by downloading only from the most trusted server. The optimal solution is established through a sophisticated analysis of the underlying convex optimization problem and a reduction between the homogeneous and heterogeneous settings. For the MI metric, the homogeneous case is studied first, for which the code can be optimized with an explicit probability assignment, while a closed-form solution becomes intractable for the heterogeneous case. Numerical results are provided for both cases to corroborate the theoretical analysis.
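For reference, the maximal leakage constraint on the query $Q_n$ sent to server $n$ about the desired message index $\Theta$ can be written as below (the per-server budgets $\rho_n$ are an assumed notation for the heterogeneous trust levels; the paper's exact formulation may differ):
$$\mathcal{L}(\Theta \to Q_n) \;=\; \log \sum_{q} \max_{\theta:\,P_\Theta(\theta)>0} P_{Q_n \mid \Theta}(q \mid \theta) \;\le\; \rho_n, \qquad n = 1, \dots, N.$$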
Mahbub Ul Islam Khan, Md. Ilius Hasan Pathan, Mohammad Mominur Rahman
et al.
Electric vehicles (EVs) are commonly recognized as environmentally friendly modes of transportation. They function by converting electrical energy into mechanical energy using different types of motors, which aligns with the sustainable principles embraced by smart cities. The motors of EVs store and consume electrical power from renewable energy (RE) sources through interfacing connections realized with power electronics technology to provide mechanical power through rotation. The reliable operation of an EV mainly relies on the condition of the interfacing connections in the EV, particularly the connection between the 3-$\phi$ inverter output and the brushless DC (BLDC) motor. In this paper, machine learning (ML) tools are deployed to detect and classify faults in the connecting lines from the 3-$\phi$ inverter output to the BLDC motor during operational mode on the EV platform, considering double-line and three-phase faults. Several machine learning-based fault identification and classification tools, namely the Decision Tree, Logistic Regression, Stochastic Gradient Descent, AdaBoost, XGBoost, K-Nearest Neighbour, and Voting Classifier, were tuned to identify and categorize faults to ensure robustness and reliability. The ML classifiers were developed on datasets of healthy and faulty conditions covering the combination of six critical parameters that are significant for reliable EV operation, namely the current supplied to the BLDC motor from the inverter, the modulated DC voltage, the output speed, the measured speed, and the output of the Hall-effect sensor. In addition, the superiority of the proposed fault detection and classification approaches was assessed by comparing the detection and classification efficiency across the classifiers using statistical performance metrics.
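A minimal scikit-learn sketch of the hard-voting ensemble over the listed base classifiers (the dataset file and column names are hypothetical; the per-classifier hyperparameter tuning is omitted):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report
from xgboost import XGBClassifier

# Hypothetical dataset: one row per operating snapshot, labeled healthy,
# double-line fault, or three-phase fault.
data = pd.read_csv("bldc_fault_dataset.csv")
X = data.drop(columns=["fault_class"])
y = LabelEncoder().fit_transform(data["fault_class"])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

ensemble = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier()),
        ("lr", LogisticRegression(max_iter=1000)),
        ("sgd", SGDClassifier()),
        ("ada", AdaBoostClassifier()),
        ("xgb", XGBClassifier()),
        ("knn", KNeighborsClassifier()),
    ],
    voting="hard",   # majority vote across the individually tuned classifiers
)
ensemble.fit(X_tr, y_tr)
print(classification_report(y_te, ensemble.predict(X_te)))
```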
Mohammed Khalid Yousif, Zena Ez Dallalbashi, Shahab Wahhab Kareem
Cloud computing processes vast quantities of data and offers a variety of flexible, secure, on-demand, and cost-effective collaboration options for consumers. Due to the increasing prevalence of hosted services, data security has become an increasingly critical concern, and Hadoop, the engine at the heart of cloud computing, introduces serious security problems for the cloud. The proposed security solution can be used without difficulty in any public, private, or hybrid cloud environment (IaaS) and is compatible with the vast majority of cloud computing capabilities. It increases cloud security using NTRU encryption: this study made use of the NTRUEncrypt algorithm within Hadoop to speed up the file encryption and decryption processes. If HDFS is engaged in the Map task, then HDFS takes care of both the encryption and decryption processes. Data in the cloud can be kept private and secure thanks to the proposed protection technique, which makes use of cryptography and can be combined with preexisting infrastructure and web-based services.
This paper considers the distributed information bottleneck (D-IB) problem for a primitive Gaussian diamond channel with two relays and Rayleigh fading. Due to the bottleneck constraint, it is impossible for the relays to inform the destination node of the perfect channel state information (CSI) in each realization. To evaluate the bottleneck rate, we provide an upper bound, obtained by assuming that the destination node knows the CSI and that the relays can cooperate with each other, as well as three achievable schemes with simple symbol-by-symbol relay processing and compression. Numerical results show that the lower bounds obtained by the proposed achievable schemes come close to the upper bound over a wide range of relevant system parameters.
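For context, the generic distributed information bottleneck trade-off the relays face can be written as below, with $X$ the source signal, $Y_n$ and $Z_n$ the observation and compressed description at relay $n$, and $C_n$ the bottleneck rate (the paper's fading and CSI-aware variants refine this basic form):
$$\max_{p(z_1\mid y_1)\,p(z_2\mid y_2)} \; I(X; Z_1, Z_2) \quad \text{subject to} \quad I(Y_n; Z_n) \le C_n, \quad n = 1, 2.$$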
Muhammad K. Shehzad, Luca Rose, M. Majid Butt
et al.
With the deployment of 5G networks, standards organizations have started working on the design phase of sixth-generation (6G) networks. 6G networks will be immensely complex, requiring greater deployment time, cost, and management effort. On the other hand, mobile network operators demand that these networks be intelligent, self-organizing, and cost-effective to reduce operating expenses (OPEX). Machine learning (ML), a branch of artificial intelligence (AI), is the answer to many of these challenges, providing pragmatic solutions that can entirely change the future of wireless network technologies. Using case study examples, we briefly examine the most compelling problems, particularly at the physical (PHY) and link layers in cellular networks, where ML can bring significant gains. We also review standardization activities related to the use of ML in wireless networks and the future timeline for standardization bodies to become ready to adopt these changes. Finally, we highlight major issues in the use of ML in wireless technology and provide potential directions to mitigate some of them in 6G wireless networks.
This article uses a case study of a well-known tourism enterprise in China to explore what motivates tourism enterprises to implement service innovations. On the basis of a literature review, and drawing on semi-structured interviews, on-site observation, and secondary data, the driving forces of service innovation in Chinese tourism enterprises are identified through data analysis. The major internal driving forces include the development vision, enterprise leaders, and organizational culture. The major external driving forces include the changing demand of Chinese residents, the local government's demand to enhance municipal influence, and competitive pressure from surrounding attractions. These internal and external forces motivate Chinese tourism enterprises to carry out service innovations continuously to satisfy tourists' demands.
Yousuf Khan, Muhammad A. Butt, Svetlana N. Khonina
et al.
In this work, a dielectric photonic crystal-based thermal sensor is numerically investigated for the near-infrared spectral range. An easy-to-fabricate design is chosen, with a waveguide layer deposited on a silicon dioxide substrate and air holes drilled across it. To sense the ambient temperature, a functional layer of polydimethylsiloxane biguanide polymer, whose optical properties vary with temperature, is deposited on top. An open-source finite-difference time-domain software package, MEEP, is used for the design and numerical simulation. The design of the sensor, its spectral properties, and the proposed fabrication method are discussed. The performance of the sensor is investigated over an ambient temperature range of 10 to 90 °C, for which the device offers a sensitivity of about 0.109 nm/°C and a figure of merit of 0.045 °C$^{-1}$. Given its high-temperature tolerance, inert chemical properties, low material cost, and easy integration with optical fiber, the device can be proposed for a wide range of thermal sensing applications.
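A quick back-of-the-envelope check of the reported numbers, assuming the common definition of the figure of merit as sensitivity divided by the resonance linewidth (this definition is an assumption; the paper may define it differently):

```python
sensitivity_nm_per_C = 0.109   # reported sensitivity (nm/°C)
fom_per_C = 0.045              # reported figure of merit (1/°C)
# If FOM = sensitivity / FWHM, the implied resonance linewidth is:
fwhm_nm = sensitivity_nm_per_C / fom_per_C
print(f"implied linewidth ~ {fwhm_nm:.2f} nm")   # ~2.42 nm
```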
Cyber security has become a priority issue for all countries in the world, since information and communication technology is used in many aspects of life, including the social, economic, legal, organizational, health, education, cultural, governmental, security, and defense spheres. As the level of utilization of information and communication technology rises, the level of risk and the threat of its misuse also become higher and more complex. In response, Indonesia formed the National Cyber and Crypto Agency (BSSN) as a model for national cyber security institutions. This study uses a qualitative method with a descriptive approach. The purpose of this research is to examine Indonesia's strategy for establishing cyber security and dealing with the threat of cyber crime through the National Cyber and Crypto Agency.
In this study, the characteristics of charge injection under extra-high electric fields (above 100 kV/mm) in cross-linked polyethylene (XLPE) were investigated through conduction current and space charge experiments. The results show that the current density from low electric fields up to sample breakdown corresponds to space-charge-limited current (SCLC) theory. More specifically, the Schottky injection current is close to the measured current below 100 kV/mm, while the J–E curve conforms to a modified SCLC theory above 100 kV/mm. In addition, the nonlinear coefficient of the J–E curve from 100 kV/mm up to extra-high electric fields is smaller than the theoretical value, and the injection depth of space charge is restricted as the field rises above 100 kV/mm, which may be caused by the negative differential mobility of the charge carriers. Driven by the extra-high electric field, charge carriers collide with the lattice of the dielectric and scatter; as a result, the mean free time of the carriers decreases and the charge mobility is reduced as the field increases. Consequently, accounting for the decrease in charge mobility, a mobility-limited charge injection equation is proposed, and its validity under extra-high electric fields is demonstrated by space charge simulation.
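For reference, the classical injection- and bulk-limited conduction laws that such measurements are compared against take the standard textbook forms below (the modified SCLC expression and the proposed mobility-limited injection equation are specific to the study and not reproduced here):
$$J_{\mathrm{Schottky}} = A T^{2} \exp\!\left(-\frac{\phi_B - \sqrt{e^{3}E/(4\pi\varepsilon)}}{k_B T}\right), \qquad J_{\mathrm{SCLC}} = \frac{9\,\varepsilon \mu V^{2}}{8\,d^{3}},$$
where $A$ is the Richardson constant, $\phi_B$ the injection barrier, $\varepsilon$ the permittivity, $\mu$ the charge mobility, and $d$ the sample thickness.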
Considering a broad family of technologies where a measure of performance (MoP) is difficult or impossible to formulate, we seek an alternative measure that exhibits a regular pattern of evolution over time, similar to how a MoP may follow a Moore's law. In an empirical case study, we explore an approach to identifying such a composite measure called a Figure of Regularity (FoR). We use the proposed approach to identify a novel FoR for diverse classes of small arms - bows, crossbows, harquebuses, muskets, rifles, repeaters, and assault rifles - and show that this FoR agrees well with the empirical data. We identify a previously unreported regular trend in the FoR of an exceptionally long duration - from approximately 1200 CE to the present - and discuss how research managers can analyze long-term trends in conjunction with a portfolio of research directions.
In this paper, we develop an unsupervised generative clustering framework that combines the Variational Information Bottleneck and the Gaussian Mixture Model. Specifically, in our approach, we use the Variational Information Bottleneck method and model the latent space as a mixture of Gaussians. We derive a bound on the cost function of our model that generalizes the Evidence Lower Bound (ELBO) and provide a variational inference type algorithm that allows computing it. In the algorithm, the encoder and decoder mappings are parameterized using neural networks, and the bound is approximated by Monte Carlo sampling and optimized with stochastic gradient descent. Numerical results on real datasets are provided to support the efficiency of our method.
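A minimal PyTorch sketch of the kind of Monte Carlo approximated objective described above, with a Gaussian encoder, a Bernoulli decoder, and a learnable GMM prior on the latent space (the architecture, single-sample estimate, and $\beta$ weighting are illustrative assumptions, not the paper's exact bound):

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBGMM(nn.Module):
    """Sketch: Gaussian encoder, Bernoulli decoder, learnable GMM prior on the latent space."""
    def __init__(self, x_dim=784, z_dim=10, k=10, h=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU(), nn.Linear(h, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, h), nn.ReLU(), nn.Linear(h, x_dim))
        self.pi_logits = nn.Parameter(torch.zeros(k))            # mixture weights (logits)
        self.mu_c = nn.Parameter(0.5 * torch.randn(k, z_dim))    # component means
        self.logvar_c = nn.Parameter(torch.zeros(k, z_dim))      # component log-variances

    def loss(self, x, beta=1.0):
        # x is assumed to lie in [0, 1], e.g. flattened image pixels
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()     # reparameterized sample
        log2pi = math.log(2 * math.pi)
        # single-sample Monte Carlo estimate of KL(q(z|x) || GMM prior)
        log_q = -0.5 * (logvar + (z - mu) ** 2 / logvar.exp() + log2pi).sum(-1)
        log_pi = F.log_softmax(self.pi_logits, dim=0)
        diff = z.unsqueeze(1) - self.mu_c                        # (batch, k, z_dim)
        log_comp = -0.5 * (self.logvar_c + diff ** 2 / self.logvar_c.exp() + log2pi).sum(-1)
        log_p = torch.logsumexp(log_pi + log_comp, dim=1)
        recon = F.binary_cross_entropy_with_logits(self.dec(z), x, reduction="none").sum(-1)
        return (recon + beta * (log_q - log_p)).mean()           # minimized with SGD/Adam
```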