We consider a general class of L-functions L(s) defined by an Euler product and satisfying several analytic assumptions. We prove several asymptotic formulas for L(1) and log L(1). In particular, these asymptotic formulas are valid for Dirichlet L-functions attached to almost all Dirichlet characters. Our theorems should be compared with earlier results due to Elliott, Montgomery and Weinberger, among others.
Singularities in Newton's gravitation, in general relativity (GR), in Coulomb's law, and elsewhere in classical physics stem from two ill-conceived assumptions: a) that there are point-like entities with finite masses, charges, etc., packed into zero volumes, and b) the non-quantum assumption that these point-like entities can be assigned precise coordinates and momenta. In the case of GR, we argue that the classical energy-momentum tensor in Einstein's field equation is that of a collection of point particles and is prone to singularity. In compliance with Heisenberg's uncertainty principle, we suggest replacing each constituent of the gravitating matter with a suitable quantum mechanical equivalent, here a Klein-Gordon (KG) field or a Yukawa-ameliorated version of it, a YKG field. KG and YKG fields are spatially distributed entities; they neither collapse to singular spacetime points nor predict singular black holes. On the other hand, YKG waves reach infinity as $\frac{1}{r}e^{-(\kappa\pm ik)r}$. They create the Newtonian $r^{-2}$ term as well as a non-Newtonian $r^{-1}$ force. The latter is capable of explaining the observed flat rotation curves of spiral galaxies, and is interpretable as an alternative gravity, a dark matter scenario, etc. There are ample observational data on the flat rotation curves of spiral galaxies, encoded in the Tully-Fisher relation, to support our propositions.
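The claim that a Yukawa-type field produces both force terms can be made explicit by differentiating its static envelope. A minimal sketch, assuming a potential of the form $\Phi(r) = -\frac{GM}{r}e^{-\kappa r}$ (the normalization $GM$ is an illustrative assumption, not taken from the abstract):

```latex
F(r) = -\frac{d\Phi}{dr}
     = -GM\, e^{-\kappa r}\left(\frac{1}{r^{2}} + \frac{\kappa}{r}\right).
```

For $\kappa r \ll 1$ the Newtonian $r^{-2}$ term dominates; where the $\kappa/r$ term dominates, a test particle on a circular orbit has $v^{2} = r\,|F(r)| \approx GM\kappa\, e^{-\kappa r}$, approximately constant, which is the flat-rotation-curve behaviour the abstract invokes.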
We propose a linguistic interpretation of three-way decisions, where the regions of acceptance, rejection, and non-commitment are constructed by using the so-called evaluative linguistic expressions, which are expressions of natural language such as small, medium, very short, quite roughly strong, extremely good, etc. Our results highlight new connections between two different research areas: three-way decisions and the theory of evaluative linguistic expressions.
Giannis Delimpaltadakis, Luca Laurenti, Manuel Mazo
Analyzing Event-Triggered Control's (ETC) sampling behaviour is of paramount importance, as it enables formal assessment of its sampling performance and prediction of its sampling patterns. In this work, we formally analyze the sampling behaviour of stochastic linear periodic ETC (PETC) systems by computing bounds on associated metrics. Specifically, we consider functions over sequences of state measurements and intersampling times that can be expressed as average, multiplicative or cumulative rewards, and introduce their expectations as metrics on PETC's sampling behaviour. We compute bounds on these expectations by constructing appropriate Interval Markov Chains, equipped with suitable reward structures, that abstract stochastic PETC's sampling behaviour. Our results are illustrated on a numerical example, for which we compute bounds on the expected average intersampling time and on the probability of triggering with the maximum possible intersampling time in a finite horizon.
The last two decades have seen tremendous growth in data collection, driven by the adoption of recent technologies including the Internet of Things (IoT), E-Health, Industrial IoT 4.0, autonomous vehicles, etc. The challenge of data transmission and storage can be handled by utilizing state-of-the-art data compression methods. Recent data compression methods based on deep learning perform better than conventional methods, but they require a lot of data and resources for training. Furthermore, it is difficult to deploy such deep learning-based solutions on IoT devices due to their resource-constrained nature. In this paper, we propose lightweight data compression methods based on data statistics and deviation. The proposed method performs better than the deep learning method in terms of compression ratio (CR). We simulate and compare the proposed data compression methods on various time series signals, e.g., accelerometer, gas sensor, gyroscope, and electrical power consumption. In particular, it is observed that the proposed method achieves 250.8\%, 94.3\%, and 205\% higher CR than the deep learning method for the GYS, Gactive, and ACM datasets, respectively. The code and data are available at https://github.com/vidhi0206/data-compression .
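The abstract does not spell out the method, but deviation-based compression of a time series can be sketched as a simple delta encoder: store a sample in full only when its deviation from the previous sample is too large to fit in a single byte. The function names, the one-byte threshold, and the 4-byte raw sample size below are illustrative assumptions, not the paper's actual scheme.

```python
def compress(samples, threshold=127):
    """Delta-encode a time series: the first sample (and any large jump) is
    stored in full; small deviations are stored as one signed byte each."""
    out = [("full", samples[0])]
    prev = samples[0]
    for s in samples[1:]:
        d = s - prev
        if -threshold <= d <= threshold:
            out.append(("delta", d))    # fits in one byte
        else:
            out.append(("full", s))     # deviation too large, store in full
        prev = s
    return out

def decompress(encoded):
    """Invert compress(): rebuild each sample from the previous one."""
    out = []
    for kind, v in encoded:
        out.append(v if kind == "full" else out[-1] + v)
    return out

def compression_ratio(samples, encoded, sample_bytes=4):
    """CR = raw size / compressed size, assuming sample_bytes per raw sample."""
    raw = sample_bytes * len(samples)
    packed = sum(sample_bytes if kind == "full" else 1 for kind, _ in encoded)
    return raw / packed
```

On smooth sensor signals most entries become one-byte deltas, which is where the CR gain over storing every sample in full comes from.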
At present, adversarial attacks are designed in a task-specific fashion. However, for downstream computer vision tasks such as image captioning, image segmentation, etc., current deep learning systems use an image classifier such as VGG16, ResNet50, or Inception-v3 as a feature extractor. Keeping this in mind, we propose Mimic and Fool, a task-agnostic adversarial attack. Given a feature extractor, the proposed attack finds an adversarial image that mimics the image features of the original image. This ensures that the two images give the same (or similar) output regardless of the task. We randomly select 1000 MSCOCO validation images for experimentation. We perform experiments on two image captioning models, Show and Tell and Show Attend and Tell, and one VQA model, namely the end-to-end neural module network (N2NMN). The proposed attack achieves success rates of 74.0%, 81.0% and 87.1% for Show and Tell, Show Attend and Tell and N2NMN, respectively. We also propose a slight modification to our attack to generate natural-looking adversarial images. In addition, we show the applicability of the proposed attack to invertible architectures. Since Mimic and Fool only requires information about the feature extractor of the model, it can be considered a gray-box attack.
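The core of a feature-mimicking attack is descent on the feature-space distance $\|f(x') - f(x)\|^2$. A minimal sketch with a toy linear extractor follows; the real attack operates on deep extractors such as VGG16, and the names `features` and `mimic`, the dimensions, and the step size are illustrative assumptions.

```python
def features(W, x):
    """Toy linear 'feature extractor': f(x) = W x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def mimic(W, x_target, x0, lr=0.05, steps=2000):
    """Gradient descent on ||f(x') - f(x_target)||^2, the feature-mimicking
    objective: the returned point has (nearly) the target's features even
    though it may be far from the target in input space."""
    x = list(x0)
    f_t = features(W, x_target)
    for _ in range(steps):
        diff = [a - b for a, b in zip(features(W, x), f_t)]
        # gradient of the squared feature distance: 2 W^T diff
        grad = [2 * sum(W[i][j] * diff[i] for i in range(len(W)))
                for j in range(len(x))]
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x
```

Because only the extractor's output is matched, any downstream task built on those features receives (nearly) identical input, which is what makes the attack task-agnostic.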
Anthropologists and social historians have considered the caste system to be the most distinctive feature of Indian social organization. In traditional Bengali Hindu society, the Namasudras, an untouchable caste, were numerically large but economically deprived and socially discriminated against by the higher castes. Under the leadership of Harichand Thakur (1812–1878) and his son Guruchand Thakur (1847–1937), the ‘Matua’ religious sect developed in the late nineteenth century in the eastern part of Bengal to meet certain social needs of the upwardly mobile peasant community of the Namasudras, who gained solidarity and self-confidence through the Matua socio-religious identity. The real significance of the Matua sect lies in the fact that a downtrodden community sought to set up an alternative religious conception, in opposition and resistance to the dominant ideology: one that assigns an independent identity to the downtrodden for their uplift in the high-caste, elite-dominated society, and that reworks the relations of power within local society, which they believed would lead to all-round human development. In this article, I present evidence that the Matua socio-cultural reform movement continues against orthodox scriptural and Brahmanical rituals, customs and culture, and results in an alternative hybrid cultural identity that reflects their own indigenous oral literature and folk culture, which are humanitarian, liberal, progressive and rational in outlook.
El Kindi Rezig, Mourad Ouzzani, Ahmed K. Elmagarmid, et al.
Data cleaning refers to the process of detecting and fixing errors in data. Human involvement is instrumental at several stages of this process, e.g., to identify and repair errors, to validate computed repairs, etc. There is currently a plethora of data cleaning algorithms addressing a wide range of data errors (e.g., detecting duplicates, violations of integrity constraints, and missing values). Many of these algorithms involve a human in the loop; however, that involvement is usually tightly coupled to the underlying cleaning algorithm. There is currently no end-to-end data cleaning framework that systematically involves humans in the cleaning pipeline regardless of the underlying cleaning algorithms. In this paper, we highlight key challenges that need to be addressed to realize such a framework. We present a design vision and discuss scenarios that motivate the need for such a framework to judiciously assist humans in the cleaning process. Finally, we present directions to implement such a framework.
Sales forecasting is an essential task in E-commerce and has a crucial impact on making informed business decisions. It can help manage the workforce, cash flow, and resources, for instance by optimizing the supply chains of manufacturers. Sales forecasting is challenging in that sales are affected by many factors, including promotion activities, price changes, and user preferences. Traditional sales forecasting techniques mainly rely on historical sales data to predict future sales, and their accuracy is limited. Some more recent learning-based methods capture more information in the model to improve forecast accuracy. However, these methods require case-by-case manual feature engineering for specific commercial scenarios, which is usually a difficult, time-consuming task that requires expert knowledge. To overcome the limitations of existing methods, we propose a novel approach that learns effective features automatically from structured data using a Convolutional Neural Network (CNN). When fed with raw log data, our approach automatically extracts effective features and then forecasts sales using them. We test our method on a large real-world dataset from CaiNiao.com, and the experimental results validate the effectiveness of our method.
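The abstract does not describe the network, but the building block such a CNN relies on, a 1D convolution over a sales time series followed by pooling, can be sketched in a few lines. The kernel `[1, -1]` (a day-over-day change detector) and the toy series are illustrative assumptions.

```python
def conv1d(series, kernel):
    """Valid 1D convolution (cross-correlation, as in most DL libraries):
    slide the kernel over the series and take dot products."""
    k = len(kernel)
    return [sum(series[i + j] * kernel[j] for j in range(k))
            for i in range(len(series) - k + 1)]

def max_pool(xs, size=2):
    """Non-overlapping max pooling: keep the strongest response per window."""
    return [max(xs[i:i + size]) for i in range(0, len(xs), size)]

# A kernel like [1, -1] responds to day-over-day sales changes; stacking
# learned conv + pool stages is what replaces manual feature engineering.
daily_sales = [10, 12, 15, 14, 30, 32, 31, 29]
change_map = conv1d(daily_sales, [1, -1])   # negative = sales rose the next day
pooled = max_pool(change_map)
```

In a trained CNN the kernels are learned from the raw log data rather than hand-picked, which is the feature-learning claim of the paper.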
We present a simple primal-dual algorithm for computing approximate Nash equilibria in two-person zero-sum sequential games with incomplete information and perfect recall (such as Texas Hold'em Poker). Our algorithm is numerically stable, performs only basic iterations (i.e., matrix-vector multiplications, clipping, etc., with no calls to external first-order oracles and no matrix inversions), and is applicable to a broad class of two-person zero-sum games, including simultaneous games and sequential games with incomplete information and perfect recall. The applicability to the latter kind of games is due to the sequence-form representation, which allows us to encode any such game as a matrix game with convex polytopal strategy profiles. We prove that the number of iterations needed to produce a Nash equilibrium with a given precision is inversely proportional to the precision. As a proof of concept, we present experimental results on matrix games on simplexes and on Kuhn Poker.
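For the simplex-on-simplex case, an iteration of the kind described (matrix-vector products plus simple projections, with O(1/precision) iterations via averaging) can be sketched as follows. This is a generic Chambolle-Pock-style primal-dual update, not necessarily the paper's exact algorithm, and the step sizes are illustrative assumptions.

```python
def project_simplex(v):
    """Euclidean projection onto the probability simplex (sort-based)."""
    u = sorted(v, reverse=True)
    cum, theta = 0.0, 0.0
    for i, ui in enumerate(u):
        cum += ui
        t = (cum - 1.0) / (i + 1)
        if ui - t > 0:          # the condition holds for a prefix of u
            theta = t
    return [max(x - theta, 0.0) for x in v]

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def matTvec(A, y):
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

def pdhg_matrix_game(A, steps=3000, tau=0.5, sigma=0.5):
    """Primal-dual iterations for the matrix game
    min_{x in simplex} max_{y in simplex} y^T A x.
    Only matvecs and projections are used; the averaged iterates
    converge to an equilibrium at rate O(1/steps)."""
    m, n = len(A), len(A[0])
    x, y = [1.0 / n] * n, [1.0 / m] * m
    x_avg, y_avg = [0.0] * n, [0.0] * m
    for _ in range(steps):
        x_new = project_simplex([xi - tau * g for xi, g in zip(x, matTvec(A, y))])
        x_bar = [2.0 * a - b for a, b in zip(x_new, x)]   # extrapolation step
        y = project_simplex([yi + sigma * g for yi, g in zip(y, matvec(A, x_bar))])
        x = x_new
        x_avg = [a + xi / steps for a, xi in zip(x_avg, x)]
        y_avg = [a + yi / steps for a, yi in zip(y_avg, y)]
    return x_avg, y_avg
```

The precision of the returned pair is certified by the duality gap `max(matvec(A, x_avg)) - min(matTvec(A, y_avg))`, which is exactly the quantity the abstract's O(1/precision) iteration bound controls.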
Sentiment analysis aims to extract the underlying viewpoint of a text, which could be anything that holds a subjective opinion, such as an online review, a movie rating, or comments on blog posts. This paper presents a novel approach that classifies text in a two-dimensional emotional space based on the sentiments of the author. The approach uses existing lexical resources to extract a feature set, which is trained using supervised learning techniques.
Recent years have witnessed an unprecedented proliferation of social media. People around the globe author, every day, millions of blog posts, social network status updates, etc. This rich stream of information can be used to identify, on an ongoing basis, emerging stories and events that capture popular attention. Stories can be identified via groups of tightly-coupled real-world entities, namely the people, locations, products, etc., that are involved in the story. The sheer scale and rapid evolution of the data involved necessitate highly efficient techniques for identifying important stories at every point in time. The main challenge in real-time story identification is the maintenance of dense subgraphs (corresponding to groups of tightly-coupled entities) under streaming edge weight updates (resulting from a stream of user-generated content). This is the first work to study the efficient maintenance of dense subgraphs under such streaming edge weight updates. For a wide range of definitions of density, we derive theoretical results regarding the magnitude of change that a single edge weight update can cause. Based on these, we propose a novel algorithm, DYNDENS, which outperforms adaptations of existing techniques to this setting, and yields meaningful results. Our approach is validated by a thorough experimental evaluation on large-scale real and synthetic datasets.
Distributed Video Coding (DVC) is a new coding paradigm for video compression, based on the Slepian-Wolf (lossless coding) and Wyner-Ziv (lossy coding) information-theoretic results. DVC is useful for emerging applications such as wireless video cameras, wireless low-power surveillance networks, and disposable video cameras for medical applications. The primary objective of DVC is low-complexity video encoding, where the bulk of the computation is shifted to the decoder, as opposed to the low-complexity decoders of conventional video compression standards such as H.264 and MPEG. There are a couple of early architectures and implementations of DVC: from Stanford University [2][3] in 2002, Berkeley University's PRISM (Power-efficient, Robust, hIgh-compression, Syndrome-based Multimedia coding) [4][5] in 2002, and the European project DISCOVER (DIStributed COding for Video SERvices) [6] in 2007. There are primarily two types of DVC techniques, namely pixel-domain and transform-domain based. A transform-domain design has better rate-distortion (RD) performance, as it exploits the spatial correlation between neighbouring samples and compacts the block energy into as few transform coefficients as possible (aka energy compaction). In this paper, the architecture, implementation details and "C" model results of our transform-domain DVC are presented.
By considering the nonrelativistic limit of de Sitter geometry one obtains a nonrelativistic space-time with a cosmological constant and Newton-Hooke (NH) symmetries. We show that the NH symmetry algebra can be enlarged by the addition of constant-acceleration generators and endowed with central extensions (one in any dimension D and three in D=2+1). We present a classical Lagrangian and Hamiltonian framework for constructing models quasi-invariant under the enlarged NH symmetries which depend on three parameters described by three nonvanishing central charges. The Hamiltonian dynamics then splits into external and internal sectors with new non-commutative structures of the external and internal phase spaces. We show that in the limit of vanishing cosmological constant the system reduces to the one presented in [1], which possesses acceleration-enlarged Galilean symmetries.
We introduce an original approach to geometric calculus in which we define derivatives and integrals of functions which depend on extended bodies in space--that is, paths, surfaces, and volumes. Though this theory remains to be fully completed, we present it at its current stage of development and discuss its connection to physical research, in particular its application to spinning particles in curved space.
This is the third in a series of papers constructing hyperbolic structures on all Haken three-manifolds. This portion deals with the mixed case of the deformation space for manifolds with incompressible boundary that are not acylindrical, but are more complicated than interval bundles over surfaces. This is a slight revision of a 1986 preprint, with a few figures added, and slight clarifications of some of the text, but with no attempt to connect this to later developments such as groups acting on R-trees, etc.