IEEE TRANSACTIONS ON INFORMATION THEORY

Kernel Stein Discrepancy on Lie Groups: Theory and Applications
Qu X, Fan X and Vemuri BC
Distributional approximation is a fundamental problem in machine learning with numerous applications across all fields of science and engineering. The key challenge in most approximation methods is the need to tackle the intractable normalization constant present in the candidate distributions used to model the data. This intractability is especially common for distributions of manifold-valued random variables such as rotation matrices and orthogonal matrices. In this paper, we focus on the distributional approximation problem in Lie groups, which are frequently encountered in many applications, including computer vision, robotics, and medical imaging. We present a novel Stein's operator on Lie groups leading to a kernel Stein discrepancy (KSD), a normalization-free loss function. We present several theoretical results characterizing the properties of this new KSD on Lie groups and of its minimizer, namely the minimum KSD estimator (MKSDE). Properties of the MKSDE are presented and proved, including strong consistency, a central limit theorem, and closed forms of the MKSDE for the von Mises-Fisher, the exponential, and the Riemannian normal distributions on SO(N). Finally, we present several experimental results depicting the advantages of the MKSDE over maximum likelihood estimation.
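As a minimal illustration of the normalization-free idea, the sketch below computes a kernel Stein discrepancy in the familiar Euclidean (1-D) case with an RBF base kernel and a Gaussian model. This is not the paper's Lie-group Stein operator, and the bandwidth and sample sizes are arbitrary; the point is only that the statistic uses the score d/dx log p, so the intractable normalizing constant cancels.

```python
import numpy as np

def ksd_sq(x, score, h=1.0):
    """V-statistic estimate of the squared KSD with an RBF base kernel (1-D).

    score(t) = d/dt log p(t): the normalizing constant of p never appears,
    which is the normalization-free property KSD-based estimation exploits.
    """
    d = x[:, None] - x[None, :]
    k = np.exp(-d**2 / (2 * h**2))
    kx = -d / h**2 * k                     # dk/dx
    ky = d / h**2 * k                      # dk/dy
    kxy = (1.0 / h**2 - d**2 / h**4) * k   # d2k/dxdy
    s = score(x)
    u = kxy + kx * s[None, :] + ky * s[:, None] + k * np.outer(s, s)
    return u.mean()

rng = np.random.default_rng(0)
x = rng.normal(size=500)
print(ksd_sq(x, lambda t: -t))         # data matches N(0,1): near zero
print(ksd_sq(x + 2.0, lambda t: -t))   # shifted data: clearly positive
```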
Uniform Convergence of Deep Neural Networks With Lipschitz Continuous Activation Functions and Variable Widths
Xu Y and Zhang H
We consider deep neural networks (DNNs) with a Lipschitz continuous activation function and with weight matrices of variable widths. We establish a uniform convergence analysis framework in which sufficient conditions on the weight matrices and bias vectors, together with the Lipschitz constant, are provided to ensure uniform convergence of DNNs to a meaningful function as the number of their layers tends to infinity. Within this framework, special results on uniform convergence of DNNs with a fixed width, bounded widths, and unbounded widths are presented. In particular, as convolutional neural networks are special DNNs with weight matrices of increasing widths, we put forward conditions on the mask sequence which lead to uniform convergence of the resulting convolutional neural networks. The Lipschitz continuity assumption on the activation functions allows us to include in our theory most of the commonly used activation functions in applications.
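A numerical sketch of the phenomenon in one simple, hypothetical regime: near-identity weight matrices W_l = I + 2^{-l} A_l with geometrically decaying perturbations and biases, and the 1-Lipschitz ReLU activation. The sup-gap between consecutive depths then decays geometrically, so the networks form a uniformly Cauchy sequence; the paper's sufficient conditions are considerably more general.

```python
import numpy as np

rng = np.random.default_rng(1)
width, depth = 16, 40
relu = lambda z: np.maximum(z, 0.0)        # 1-Lipschitz activation

# Demo regime: W_l = I + 2^{-l} A_l with ||A_l|| = 1, biases b_l of size 2^{-l}.
Ws, bs = [], []
for l in range(depth):
    A = rng.normal(size=(width, width))
    A /= np.linalg.norm(A, 2)
    Ws.append(np.eye(width) + 0.5**l * A)
    bs.append(0.5**l * rng.normal(size=width))

def net(x, L):
    """Output of the depth-L network on input x."""
    h = x
    for W, b in zip(Ws[:L], bs[:L]):
        h = relu(W @ h + b)
    return h

X = rng.normal(size=(200, width))          # a bounded sample of inputs
for L in (5, 10, 20, 40):
    gap = max(np.linalg.norm(net(x, L) - net(x, L - 1)) for x in X)
    print(L, gap)                          # sup-gap decays roughly like 2^{-L}
```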
Matrix Reordering for Noisy Disordered Matrices: Optimality and Computationally Efficient Algorithms
Cai TT and Ma R
Motivated by applications in single-cell biology and metagenomics, we investigate the problem of matrix reordering based on a noisy disordered monotone Toeplitz matrix model. We establish the fundamental statistical limit for this problem in a decision-theoretic framework and demonstrate that a constrained least squares estimator achieves the optimal rate. However, this estimator is computationally infeasible in general, so we analyze a popular polynomial-time algorithm, spectral seriation, and show that it is suboptimal. To address this, we propose a novel polynomial-time adaptive sorting algorithm with guaranteed performance improvement. Simulations and analyses of two real single-cell RNA sequencing datasets demonstrate the superiority of our algorithm over existing methods.
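For reference, the baseline spectral seriation step that the paper shows is suboptimal (its improved adaptive sorting is not reproduced here), on a toy noisy monotone similarity matrix with hypothetical sizes and noise level:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 80
t = np.sort(rng.uniform(0, 1, n))                # latent 1-D ordering
perm = rng.permutation(n)                        # unknown shuffling
A = np.exp(-np.abs(t[:, None] - t[None, :]))     # monotone similarity signal
A = A[np.ix_(perm, perm)] + 0.1 * rng.normal(size=(n, n))
A = (A + A.T) / 2                                # symmetrize the noise

# Spectral seriation: order rows by the Fiedler vector of the Laplacian.
L = np.diag(A.sum(1)) - A
vals, vecs = np.linalg.eigh(L)
order = np.argsort(vecs[:, 1])                   # second-smallest eigenvector

recovered = perm[order]                          # ideally 0..n-1, up to reversal
print(recovered[:10], recovered[-10:])
```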
Non-Asymptotic Guarantees for Reliable Identification of Granger Causality via the LASSO
Das P and Babadi B
Granger causality is among the widely used data-driven approaches for causal analysis of time series data, with applications in various areas including economics, molecular biology, and neuroscience. Two of the main challenges of this methodology are: 1) over-fitting as a result of limited data duration, and 2) correlated process noise as a confounding factor, both leading to errors in identifying the causal influences. Sparse estimation via the LASSO has successfully addressed these challenges for parameter estimation. However, the classical statistical tests for Granger causality resort to asymptotic analysis of ordinary least squares, which requires long data durations to be useful and is not immune to confounding effects. In this work, we address this disconnect by introducing a LASSO-based statistic and studying its non-asymptotic properties under the assumption that the true models admit sparse autoregressive representations. We establish fundamental limits for reliable identification of Granger causal influences using the proposed LASSO-based statistic. We further characterize the false positive error probability and test power of a simple thresholding rule for identifying Granger causal effects, and provide two methods to set the threshold in a data-driven fashion. We present simulation studies and an application to real data to compare the performance of our proposed method to ordinary least squares and existing LASSO-based methods in detecting Granger causal influences; the results corroborate our theoretical findings.
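A toy sketch of the approach, with arbitrary regularization and threshold levels (the paper derives data-driven choices and non-asymptotic guarantees): fit one LASSO regression per target series in a VAR(1) model, then threshold coefficient magnitudes to read off a Granger adjacency matrix. The causal links below are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
p, T = 6, 400
A = np.zeros((p, p))
A[0, 1], A[2, 0] = 0.6, -0.5                 # hypothetical causal links
np.fill_diagonal(A, 0.4)                     # self-dependence
x = np.zeros((T, p))
for t in range(1, T):
    x[t] = A @ x[t - 1] + 0.5 * rng.normal(size=p)

X, Y = x[:-1], x[1:]
B = np.zeros((p, p))
for j in range(p):                           # one LASSO per target series
    B[j] = Lasso(alpha=0.05).fit(X, Y[:, j]).coef_

tau = 0.1                                    # ad hoc threshold for the demo
print((np.abs(B) > tau).astype(int))         # estimated Granger adjacency
print((A != 0).astype(int))                  # ground truth
```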
Orbit Structure of Grassmannian G2,m and a Decoder for Grassmann Code C(2, m)
Piñero FL and Singh P
In this article, we consider decoding Grassmann codes, linear codes associated to the Grassmannian and its embedding in a projective space. We look at the orbit structure of the Grassmannian arising from the natural action of the multiplicative group of the field F_{q^m}. We project the corresponding Grassmann code onto these orbits to obtain a subcode of a q-ary Reed-Solomon code. We prove that some of these projections contain an information set of the parent Grassmann code. By improving the decoding capacity of Peterson's decoding algorithm for the projected subcodes, we prove that one can correct errors up to a bound determined by the minimum distance d of the Grassmann code.
On Support Recovery with Sparse CCA: Information Theoretic and Computational Limits
Laha N and Mukherjee R
In this paper, we consider asymptotically exact support recovery in the context of high dimensional and sparse Canonical Correlation Analysis (CCA). Our main results describe four regimes of interest based on information theoretic and computational considerations. In regimes of "low" sparsity we describe a simple, general, and computationally easy method for support recovery, whereas in a regime of "high" sparsity, it turns out that support recovery is information theoretically impossible. In establishing the information theoretic lower bounds, our results also demonstrate a non-trivial requirement on the "minimal" size of the nonzero elements of the canonical vectors for asymptotically consistent support recovery. Subsequently, the regime of "moderate" sparsity is further divided into two subregimes. In the lower of the two sparsity regimes, we show that polynomial time support recovery is possible through a sharp analysis of a coordinate thresholding [1] type method. In contrast, in the higher end of the moderate sparsity regime, appealing to the "Low Degree Polynomial" Conjecture [2], we provide evidence that polynomial time support recovery methods are inconsistent. Finally, we carry out numerical experiments to compare the efficacy of the various methods discussed.
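A toy sketch of the easy ("low" sparsity) regime: with a shared latent factor and sparse canonical directions, simple coordinate thresholding of the sample cross-covariance already recovers the supports. The threshold below is a hedged, union-bound-style level, not the paper's calibrated choice, and the sizes and supports are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, q = 2000, 50, 50
u = rng.normal(size=n)                        # shared latent factor
a = np.zeros(p); a[[2, 7, 11]] = 1.0          # sparse canonical directions
b = np.zeros(q); b[[5, 30]] = 1.0
X = np.outer(u, a) + rng.normal(size=(n, p))
Y = np.outer(u, b) + rng.normal(size=(n, q))

Sxy = X.T @ Y / n                             # sample cross-covariance
tau = 3 * np.sqrt(np.log(p * q) / n)          # union-bound-style level
rows = np.where(np.abs(Sxy).max(axis=1) > tau)[0]
cols = np.where(np.abs(Sxy).max(axis=0) > tau)[0]
print(rows, cols)                             # compare with {2,7,11} and {5,30}
```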
Classification logit two-sample testing by neural networks for differentiating near manifold densities
Cheng X and Cloninger A
The recent success of generative adversarial networks and variational learning suggests that training a classification network may work well in addressing the classical two-sample problem, which asks one to differentiate two densities given finite samples from each. Network-based methods have the computational advantage that the algorithm scales to large datasets. This paper considers using the classification logit function, provided by a trained classification neural network and evaluated on the held-out test split of the two datasets, to compute a two-sample statistic. To analyze the approximation and estimation error of the logit function in differentiating near-manifold densities, we introduce a new result on near-manifold integral approximation by neural networks. We then show that the logit function provably differentiates two sub-exponential densities provided the network is sufficiently parametrized, and for on- or near-manifold densities, the needed network complexity scales only with the intrinsic dimensionality. In experiments, the network logit test demonstrates better performance than previous network-based tests using classification accuracy, and also compares favorably to certain kernel maximum mean discrepancy tests on synthetic datasets and hand-written digit datasets.
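A minimal sketch of a statistic of this flavor (train on one half, evaluate the mean logit gap on the held-out half); the architecture, sample sizes, and shift are hypothetical, and the calibration of a rejection threshold (e.g., by permutation) is omitted.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)

def logit_stat(X, Y):
    """Train a small classifier on half of each sample; return the mean
    logit gap on the held-out halves."""
    nx, ny = len(X) // 2, len(Y) // 2
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(np.vstack([X[:nx], Y[:ny]]), np.r_[np.zeros(nx), np.ones(ny)])
    def logit(Z):
        prob = np.clip(clf.predict_proba(Z)[:, 1], 1e-6, 1 - 1e-6)
        return np.log(prob / (1 - prob))
    return logit(Y[ny:]).mean() - logit(X[nx:]).mean()

X = rng.normal(size=(600, 5))
print(logit_stat(X, rng.normal(size=(600, 5))))        # H0: near zero
print(logit_stat(X, rng.normal(size=(600, 5)) + 0.5))  # H1: clearly positive
```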
Sparse Group Lasso: Optimal Sample Complexity, Convergence Rate, and Statistical Inference
Cai TT, Zhang AR and Zhou Y
We study sparse group Lasso for high-dimensional double sparse linear regression, where the parameter of interest is simultaneously element-wise and group-wise sparse. This problem is an important instance of the simultaneously structured model, an actively studied topic in statistics and machine learning. In the noiseless case, matching upper and lower bounds on sample complexity are established for the exact recovery of sparse vectors and for stable estimation of approximately sparse vectors, respectively. In the noisy case, upper and matching minimax lower bounds for estimation error are obtained. We also consider the debiased sparse group Lasso and investigate its asymptotic property for the purpose of statistical inference. Finally, numerical studies are provided to support the theoretical results.
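A sketch of the double-sparsity penalty in action, via its proximal operator (element-wise soft-threshold followed by group-wise shrinkage) inside plain proximal gradient descent; the design, penalty levels, and supports below are hypothetical, and this is not the paper's tuned estimator.

```python
import numpy as np

def sgl_prox(v, groups, lam1, lam2):
    """Prox of lam1*||.||_1 + lam2*sum_g ||.||_2: element-wise soft-threshold,
    then group-wise shrinkage (the double-sparsity penalty)."""
    w = np.sign(v) * np.maximum(np.abs(v) - lam1, 0.0)
    for g in groups:
        nrm = np.linalg.norm(w[g])
        w[g] = 0.0 if nrm <= lam2 else w[g] * (1 - lam2 / nrm)
    return w

rng = np.random.default_rng(6)
n, p = 300, 40
groups = [list(range(i, i + 8)) for i in range(0, p, 8)]   # 5 groups of 8
beta = np.zeros(p); beta[[0, 2, 9]] = [2.0, -1.5, 1.0]     # element+group sparse
X = rng.normal(size=(n, p))
y = X @ beta + 0.5 * rng.normal(size=n)

# Proximal gradient descent on 0.5/n * ||y - Xb||^2 + penalty.
step = n / np.linalg.norm(X, 2) ** 2
b = np.zeros(p)
for _ in range(500):
    grad = -X.T @ (y - X @ b) / n
    b = sgl_prox(b - step * grad, groups, step * 0.1, step * 0.1)

print(np.nonzero(b)[0])        # support should concentrate on {0, 2, 9}
```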
Optimal High-order Tensor SVD via Tensor-Train Orthogonal Iteration
Zhou Y, Zhang AR, Zheng L and Wang Y
This paper studies a general framework for high-order tensor SVD. We propose a new computationally efficient algorithm, tensor-train orthogonal iteration (TTOI), that aims to estimate the low tensor-train rank structure from the noisy high-order tensor observation. The proposed TTOI consists of initialization via TT-SVD [1] and new iterative backward/forward updates. We develop the general upper bound on estimation error for TTOI with the support of several new representation lemmas on tensor matricizations. By developing a matching information-theoretic lower bound, we also prove that TTOI achieves the minimax optimality under the spiked tensor model. The merits of the proposed TTOI are illustrated through applications to estimation and dimension reduction of high-order Markov processes, numerical studies, and a real data example on New York City taxi travel records. The software of the proposed algorithm is available online (https://github.com/Lili-Zheng-stat/TTOI).
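A minimal numpy sketch of the TT-SVD initialization (sequential truncated SVDs of matricizations) on a synthetic spiked tensor; the sizes, ranks, and noise level are hypothetical, and the paper's backward/forward TTOI refinements are omitted.

```python
import numpy as np

def tt_svd(T, ranks):
    """TT-SVD: sequential truncated SVDs of unfoldings (the initialization
    step; TTOI adds iterative backward/forward updates on top)."""
    dims, cores, r_prev = T.shape, [], 1
    M = T.reshape(dims[0], -1)
    for k, r in enumerate(ranks):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        U, s, Vt = U[:, :r], s[:r], Vt[:r]
        cores.append(U.reshape(r_prev, dims[k], r))
        r_prev = r
        M = (np.diag(s) @ Vt).reshape(r * dims[k + 1], -1)
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

rng = np.random.default_rng(7)
# Low TT-rank signal plus noise (a toy spiked model).
G1 = rng.normal(size=(1, 20, 2))
G2 = rng.normal(size=(2, 20, 2))
G3 = rng.normal(size=(2, 20, 1))
signal = np.einsum('aib,bjc,ckd->ijk', G1, G2, G3)
noisy = signal + 0.1 * rng.normal(size=signal.shape)

cores = tt_svd(noisy, ranks=[2, 2])
approx = np.einsum('aib,bjc,ckd->ijk', *cores)
print(np.linalg.norm(approx - signal) / np.linalg.norm(signal))
```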
Mechanisms for Hiding Sensitive Genotypes with Information-Theoretic Privacy
Ye F, Cho H and Rouayheb SE
Motivated by the growing availability of personal genomics services, we study an information-theoretic privacy problem that arises when sharing genomic data: a user wants to share his or her genome sequence while keeping the genotypes at certain positions hidden, which could otherwise reveal critical health-related information. A straightforward solution of erasing (masking) the chosen genotypes does not ensure privacy, because the correlation between nearby positions can leak the masked genotypes. We introduce an erasure-based privacy mechanism with perfect information-theoretic privacy, whereby the released sequence is statistically independent of the sensitive genotypes. Our mechanism can be interpreted as a locally-optimal greedy algorithm for a given processing order of sequence positions, where utility is measured by the number of positions released without erasure. We show that finding an optimal order is NP-hard in general and provide an upper bound on the optimal utility. For sequences from hidden Markov models, a standard modeling approach in genetics, we propose an efficient algorithmic implementation of our mechanism with complexity polynomial in sequence length. Moreover, we illustrate the robustness of the mechanism by bounding the privacy leakage from erroneous prior distributions. Our work is a step towards more rigorous control of privacy in genomic data sharing.
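The sketch below quantifies the leakage the abstract warns about, on a toy binary Markov chain (hypothetical length and transition probability), by exhaustively computing the mutual information between the released sequence and a sensitive position. It shows why deterministic window masking is not enough, which is what motivates a mechanism built on the full correlation structure; it is not the paper's mechanism itself.

```python
import itertools, math

# Toy genome: binary Markov chain of length 5; position s = 2 is sensitive.
n, s, p_stay = 5, 2, 0.9

def prob(seq):
    pr = 0.5                                   # uniform stationary start
    for a, b in zip(seq, seq[1:]):
        pr *= p_stay if a == b else 1 - p_stay
    return pr

def leakage(masked):
    """I(released sequence; sensitive genotype), by exhaustive enumeration."""
    joint = {}
    for seq in itertools.product((0, 1), repeat=n):
        rel = tuple('*' if i in masked else seq[i] for i in range(n))
        joint[rel, seq[s]] = joint.get((rel, seq[s]), 0.0) + prob(seq)
    pr_rel, pr_sen = {}, {}
    for (r, g), pr in joint.items():
        pr_rel[r] = pr_rel.get(r, 0.0) + pr
        pr_sen[g] = pr_sen.get(g, 0.0) + pr
    return sum(pr * math.log2(pr / (pr_rel[r] * pr_sen[g]))
               for (r, g), pr in joint.items() if pr > 0)

print(leakage({s}))            # mask only the sensitive site: large leakage
print(leakage({1, 2, 3}))      # wider mask: smaller, but still nonzero
print(leakage(set(range(n))))  # mask everything: zero leakage, zero utility
```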
How to reduce dimension with PCA and random projections?
Yang F, Liu S, Dobriban E and Woodruff DP
In our "big data" age, the size and complexity of data are steadily increasing. Methods for dimension reduction are ever more popular and useful. Two distinct types of dimension reduction are "data-oblivious" methods such as random projections and sketching, and "data-aware" methods such as principal component analysis (PCA). Both have their strengths, such as speed for random projections and data-adaptivity for PCA. In this work, we study how to combine them to get the best of both. We study "sketch and solve" methods that take a random projection (or sketch) first and compute PCA after. We compute the performance of several popular sketching methods (random i.i.d. projections, random sampling, the subsampled Hadamard transform, CountSketch, etc.) in a general "signal-plus-noise" (or spiked) data model. Compared to well-known prior work, our results (1) are asymptotically exact, and (2) apply when the signal components are only slightly above the noise, but the projection dimension is non-negligible. We also study stronger signals, allowing more general covariance structures. We find that (a) signal strength decreases under projection in a delicate way depending on the structure of the data and the sketching method, (b) orthogonal projections are slightly more accurate, (c) randomization does not hurt too much, due to concentration of measure, and (d) CountSketch can be somewhat improved by a normalization method. Our results have implications for statistical learning and data analysis. We also illustrate that the results are highly accurate in simulations and in analyzing empirical data.
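A minimal "sketch and solve" example in the spiked model: project the samples with a random i.i.d. map, then run PCA on the sketched data. The dimensions and signal strength below are hypothetical; the paper characterizes exactly how much accuracy the sketch costs.

```python
import numpy as np

rng = np.random.default_rng(8)
n, p, d = 5000, 200, 50            # samples, dimension, sketch size

# Spiked ("signal-plus-noise") data: one principal direction above the noise.
v = rng.normal(size=p); v /= np.linalg.norm(v)
X = np.outer(rng.normal(size=n) * 3.0, v) + rng.normal(size=(n, p))

def top_pc(M):
    _, _, Vt = np.linalg.svd(M - M.mean(0), full_matrices=False)
    return Vt[0]

# Sketch and solve: random iid projection of the n samples down to d rows,
# then PCA on the sketched data.
S = rng.normal(size=(d, n)) / np.sqrt(d)
pc_full, pc_sketch = top_pc(X), top_pc(S @ X)

print(abs(pc_full @ v))      # close to 1
print(abs(pc_sketch @ v))    # degraded but still informative
```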
Trace Reconstruction Problems in Computational Biology
Bhardwaj V, Pevzner PA, Rashtchian C and Safonova Y
The problem of reconstructing a string from its error-prone copies, referred to as traces, was introduced by Vladimir Levenshtein two decades ago. While there has been considerable theoretical work on trace reconstruction, practical solutions have only recently started to emerge in the context of two rapidly developing research areas: immunogenomics and DNA data storage. In immunogenomics, traces correspond to mutated copies of genes, with mutations generated naturally by the adaptive immune system. In DNA data storage, traces correspond to noisy copies of DNA molecules that encode digital data, with errors being artifacts of the data retrieval process. In this paper, we introduce several new trace generation models and open questions relevant to trace reconstruction for immunogenomics and DNA data storage, survey theoretical results on trace reconstruction, and highlight their connections to computational biology. Throughout, we discuss the applicability and shortcomings of known solutions and suggest future research directions.
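For concreteness, a toy trace generator under a simple deletion + substitution channel (the rates and alphabet below are illustrative, not one of the paper's calibrated models):

```python
import numpy as np

rng = np.random.default_rng(9)

def trace(x, p_del=0.1, p_sub=0.05, alphabet="ACGT"):
    """One error-prone copy (a 'trace') of x under a toy deletion +
    substitution channel."""
    out = []
    for c in x:
        if rng.random() < p_del:
            continue                                         # deletion
        if rng.random() < p_sub:
            c = rng.choice([a for a in alphabet if a != c])  # substitution
        out.append(str(c))
    return "".join(out)

x = "".join(rng.choice(list("ACGT"), size=40))
for t in (trace(x) for _ in range(5)):
    print(t)
# Trace reconstruction asks for x given only such copies; deletions
# de-synchronize the traces, which defeats naive position-wise voting.
```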
Levenshtein Distance, Sequence Comparison and Biological Database Search
Berger B, Waterman MS and Yu YW
Levenshtein edit distance has played a central role, both past and present, in sequence alignment in particular and biological database similarity search in general. We start our review with a history of dynamic programming algorithms for computing Levenshtein distance and sequence alignments. Next, we describe how those algorithms led to heuristics employed in the most widely used software in bioinformatics, BLAST, a program to search DNA and protein databases for evolutionarily relevant similarities. More recently, the advent of modern genomic sequencing and the volume of data it generates have resulted in a return to the problem of local alignment. We conclude with how the mathematical formulation of Levenshtein distance as a metric made possible additional optimizations to similarity search in biological contexts. These modern optimizations are built around the low metric entropy and fractional dimensionality of biological databases, enabling orders-of-magnitude acceleration of biological similarity search.
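For concreteness, the textbook dynamic program the review's history begins with, in a memory-light two-row form:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic O(|a|*|b|) dynamic program for Levenshtein edit distance,
    the recurrence behind sequence alignment and, via heuristics, BLAST."""
    prev = list(range(len(b) + 1))          # distances from the empty prefix
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution / match
        prev = cur
    return prev[-1]

print(levenshtein("ACGTT", "AGTTC"))   # 2: delete C, then insert C at the end
```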
Sparse Recovery Beyond Compressed Sensing: Separable Nonlinear Inverse Problems
Bernstein B, Liu S, Papadaniil C and Fernandez-Granda C
Extracting information from nonlinear measurements is a fundamental challenge in data analysis. In this work, we consider separable inverse problems, where the data are modeled as a linear combination of functions that depend nonlinearly on certain parameters of interest. These parameters may represent neuronal activity in a human brain, frequencies of electromagnetic waves, fluorescent probes in a cell, or magnetic relaxation times of biological tissues. Separable nonlinear inverse problems can be reformulated as underdetermined sparse-recovery problems, and solved using convex programming. This approach has had empirical success in a variety of domains, from geophysics to medical imaging, but lacks a theoretical justification. In particular, compressed-sensing theory does not apply, because the measurement operators are deterministic and violate incoherence conditions such as the restricted-isometry property. Our main contribution is a theory for sparse recovery adapted to deterministic settings. We show that convex programming succeeds in recovering the parameters of interest, as long as their values are sufficiently distinct with respect to the correlation structure of the measurement operator. The theoretical results are illustrated through numerical experiments for two applications: heat-source localization and estimation of brain activity from electroencephalography data.
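A toy illustration of the reformulation, assuming a hypothetical Gaussian point-spread function and a uniform parameter grid: discretizing the nonlinear parameter turns the separable problem into sparse recovery with a deterministic (and highly coherent) dictionary, solved here by an off-the-shelf convex program (LASSO). Note the recovered support clusters around the true parameter values, the "sufficiently distinct" condition in action.

```python
import numpy as np
from sklearn.linear_model import Lasso

# y(t) = sum_k c_k g(t; theta_k) with a deterministic Gaussian point-spread g.
t = np.linspace(0, 1, 120)
g = lambda theta: np.exp(-((t - theta) ** 2) / (2 * 0.03**2))

true_theta, true_c = [0.25, 0.6], [1.0, -0.7]      # well-separated sources
y = sum(c * g(th) for c, th in zip(true_c, true_theta))
y += 0.01 * np.random.default_rng(10).normal(size=t.size)

grid = np.linspace(0, 1, 200)                      # candidate parameters
D = np.stack([g(th) for th in grid], axis=1)       # deterministic dictionary

coef = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(D, y).coef_
print(grid[np.abs(coef) > 0.1])                    # clusters near 0.25 and 0.6
```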
Sparse and Low-rank Tensor Estimation via Cubic Sketchings
Hao B, Zhang A and Cheng G
In this paper, we propose a general framework for sparse and low-rank tensor estimation from cubic sketchings. A two-stage non-convex implementation is developed based on sparse tensor decomposition and thresholded gradient descent, which ensures exact recovery in the noiseless case and stable recovery in the noisy case with high probability. The non-asymptotic analysis sheds light on an interplay between optimization error and statistical error. The proposed procedure is shown to be rate-optimal under certain conditions. As a technical by-product, novel high-order concentration inequalities are derived for studying high-moment sub-Gaussian tensors. An interesting tensor formulation illustrates the potential application to high-order interaction pursuit in high-dimensional linear regression.
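A rank-1, symmetric toy instance of the two-stage recipe (a crude moment-based initialization followed by thresholded gradient descent); the dimensions, step size, and thresholds are hypothetical, and the paper's general sparse low-rank setting and guarantees go far beyond this sketch.

```python
import numpy as np

rng = np.random.default_rng(11)
d, n = 20, 2000
x = np.zeros(d); x[[1, 4, 9]] = [0.8, -0.5, 0.33]  # tensor is x outer x outer x
A = rng.normal(size=(n, d))
y = (A @ x) ** 3 + 0.01 * rng.normal(size=n)       # cubic sketchings (a_i'x)^3

# Stage 1: moment initialization. For Gaussian a, E[y a] = 3 ||x||^2 x, so a
# hard-thresholded average recovers the support and direction.
b = (A * y[:, None]).mean(axis=0) / 3
b[np.abs(b) < 0.1] = 0.0

# Stage 2: thresholded gradient descent on 0.5 * mean(((a'b)^3 - y)^2).
for _ in range(300):
    r = (A @ b) ** 3 - y
    b -= 0.02 * 3 * A.T @ (r * (A @ b) ** 2) / n
    b[np.abs(b) < 0.05] = 0.0                      # keep iterates sparse

print(np.round(b, 2))                              # compare with x
```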
Spectral State Compression of Markov Processes
Zhang A and Wang M
Model reduction of Markov processes is a basic problem in modeling state-transition systems. Motivated by the state aggregation approach rooted in control theory, we study the statistical state compression of a discrete-state Markov chain from empirical trajectories. Through the lens of spectral decomposition, we study the rank and features of Markov processes, as well as properties like representability, aggregability, and lumpability. We develop spectral methods for estimating the transition matrix of a low-rank Markov model, estimating the leading subspace spanned by Markov features, and recovering latent structures like state aggregation and lumpable partition of the state space. We prove statistical upper bounds for the estimation errors and nearly matching minimax lower bounds. Numerical studies are performed on synthetic data and a dataset of New York City taxi trips.
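A minimal sketch of the spectral idea on a synthetic rank-r chain (hypothetical sizes): form the empirical transition matrix from one trajectory, then denoise it by truncated SVD. The paper's estimators, latent-structure recovery, and minimax analysis are much richer than this.

```python
import numpy as np

rng = np.random.default_rng(12)
S, r, T = 30, 3, 50_000

# Rank-r Markov chain via r "meta-states" (a state-aggregation structure).
U = rng.dirichlet(np.ones(r), size=S)        # soft membership of each state
V = rng.dirichlet(np.ones(S) / 2, size=r)    # meta-state emission rows
P = U @ V                                    # valid rank-r transition matrix

counts, s = np.zeros((S, S)), 0
for _ in range(T):                           # one empirical trajectory
    s2 = rng.choice(S, p=P[s]); counts[s, s2] += 1; s = s2
Phat = counts / np.maximum(counts.sum(1, keepdims=True), 1)

# Spectral step: truncated SVD pulls Phat toward the low-rank truth.
Uh, sh, Vh = np.linalg.svd(Phat)
Plow = Uh[:, :r] @ np.diag(sh[:r]) @ Vh[:r]
print(np.linalg.norm(Phat - P), np.linalg.norm(Plow - P))
```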
SOFAR: Large-Scale Association Network Learning
Uematsu Y, Fan Y, Chen K, Lv J and Lin W
Many modern big data applications feature large scale in both numbers of responses and predictors. Better statistical efficiency and scientific insights can be enabled by understanding the large-scale response-predictor association network structures via layers of sparse latent factors ranked by importance. Yet sparsity and orthogonality have been two largely incompatible goals. To accommodate both features, in this paper we suggest the method of sparse orthogonal factor regression (SOFAR) via the sparse singular value decomposition with orthogonality constrained optimization to learn the underlying association networks, with broad applications to both unsupervised and supervised learning tasks such as biclustering with sparse singular value decomposition, sparse principal component analysis, sparse factor analysis, and sparse vector autoregression analysis. Exploiting the framework of convexity-assisted nonconvex optimization, we derive nonasymptotic error bounds for the suggested procedure, characterizing its theoretical advantages. The statistical guarantees are powered by an efficient SOFAR algorithm with provable convergence. Both computational and theoretical advantages of our procedure are demonstrated with several simulations and real data examples.
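A sketch of the sparse-SVD building block (alternating soft-thresholded power iteration on a single factor pair); SOFAR itself couples several such layers with orthogonality constraints, which this toy omits, and the penalty level and supports below are hypothetical.

```python
import numpy as np

def soft(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_rank1(C, lam=0.5, iters=200):
    """Alternating soft-thresholded power iteration: one sparse factor pair."""
    v = np.linalg.svd(C)[2][0]
    for _ in range(iters):
        u = soft(C @ v, lam);   u /= max(np.linalg.norm(u), 1e-12)
        v = soft(C.T @ u, lam); v /= max(np.linalg.norm(v), 1e-12)
    return u, v

rng = np.random.default_rng(13)
p, q = 40, 30
u0 = np.zeros(p); u0[:4] = 0.5                 # sparse left singular vector
v0 = np.zeros(q); v0[-3:] = 1 / np.sqrt(3)     # sparse right singular vector
C = 5 * np.outer(u0, v0) + rng.normal(size=(p, q)) / np.sqrt(p)

u, v = sparse_rank1(C)
print(np.nonzero(u)[0], np.nonzero(v)[0])      # supports {0..3} and {27,28,29}
```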
Learning High-dimensional Generalized Linear Autoregressive Models
Hall EC, Raskutti G and Willett RM
Vector autoregressive models characterize a variety of time series in which linear combinations of current and past observations can be used to accurately predict future observations. For instance, each element of an observation vector could correspond to a different node in a network, and the parameters of an autoregressive model would correspond to the impact of the network structure on the time series evolution. Often these models are used successfully in practice to learn the structure of social, epidemiological, financial, or biological neural networks. However, little is known about statistical guarantees on estimates of such models in non-Gaussian settings. This paper addresses the inference of the autoregressive parameters and associated network structure within a generalized linear model framework that includes Poisson and Bernoulli autoregressive processes. At the heart of this analysis is a sparsity-regularized maximum likelihood estimator. While sparsity-regularization is well-studied in the statistics and machine learning communities, those analysis methods cannot be applied to autoregressive generalized linear models because of the correlations and potential heteroscedasticity inherent in the observations. Sample complexity bounds are derived using a combination of martingale concentration inequalities and modern empirical process techniques for dependent random variables. These bounds, which are supported by several simulation studies, characterize the impact of various network parameters on estimator performance.
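A toy instance of the framework for the Poisson case: simulate a sparse Poisson autoregression, then recover the network with an l1-penalized likelihood fit by proximal gradient. The clipping of past counts, step size, and penalty below are hypothetical simplifications, not the paper's estimator or conditions.

```python
import numpy as np

rng = np.random.default_rng(14)
p, T = 5, 3000
A = np.zeros((p, p)); A[0, 1], A[3, 2] = 0.4, -0.6     # sparse network
x = np.zeros((T, p)); x[0] = 1
for t in range(1, T):                                  # Poisson autoregression
    x[t] = rng.poisson(np.exp(0.2 + A @ np.minimum(x[t - 1], 3)))

X, Y = np.minimum(x[:-1], 3), x[1:]

def soft(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

Ahat = np.zeros((p, p))
for j in range(p):                # l1-penalized Poisson MLE, node by node
    b, nu, step = np.zeros(p), 0.0, 1e-3
    for _ in range(2000):         # proximal gradient on the negative log-lik.
        lam = np.exp(nu + X @ b)
        nu -= step * (lam - Y[:, j]).mean()
        b = soft(b - step * X.T @ (lam - Y[:, j]) / T, step * 0.05)
    Ahat[j] = b

print(np.round(Ahat, 2))          # compare with A
```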
Lossless Compression of Binary Trees with Correlated Vertex Names
Magner A, Turowski K and Szpankowski W
Compression schemes for advanced data structures have become a central modern challenge. Information theory has traditionally dealt with conventional data such as text, images, or video. In contrast, most data available today is multitype and context-dependent. To meet this challenge, we have recently initiated a systematic study of advanced data structures such as unlabeled graphs [8]. In this paper, we continue this program by considering trees with statistically correlated vertex names. Trees come in many forms, but here we deal with binary plane trees (where the order of subtrees matters) and their non-plane version (where the order of subtrees does not matter). Furthermore, we assume that each name is generated by a known memoryless source (horizontal independence), but a symbol of a vertex name depends in a Markovian sense on the corresponding symbol of the parent vertex name (vertical Markovian dependency). Such a model is closely connected to models of phylogenetic trees. While in general the problem of multimodal compression and the associated analysis can be extremely complicated, we find that in this natural setting both the entropy analysis and optimal compression are analytically tractable. We evaluate the entropy for both types of trees. For the plane case, with or without vertex names, we find that a simple two-stage compression scheme is both efficient and optimal. We then present efficient and optimal compression algorithms for the more complicated non-plane case.
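The first stage of such a two-stage scheme, in miniature: a lossless preorder bit encoding of a plane binary tree's shape (1 = internal node, 0 = empty subtree); a second pass would then code vertex names conditioned on the parent, per the vertical Markov model. The tree generator below is a capped branching process, purely for illustration.

```python
import random

def encode(node, bits):            # preorder: 1 = internal, 0 = empty subtree
    if node is None:
        bits.append(0); return
    bits.append(1)
    encode(node[0], bits); encode(node[1], bits)

def decode(bits):
    if bits.pop(0) == 0:
        return None
    return (decode(bits), decode(bits))

def random_tree(p=0.8, depth=0):   # capped branching process, demo only
    if depth > 10 or random.random() > p:
        return None
    return (random_tree(p, depth + 1), random_tree(p, depth + 1))

random.seed(0)
t = (random_tree(), random_tree())          # force a nonempty root
bits = []; encode(t, bits)
assert decode(list(bits)) == t              # lossless round trip
print(len(bits), "bits for", bits.count(1), "internal nodes")
```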
Parallel Device-Independent Quantum Key Distribution
Jain R, Miller CA and Shi Y
A prominent application of quantum cryptography is the distribution of cryptographic keys that are provably secure. Recently, such security proofs were extended by Vazirani and Vidick (Phys. Rev. Lett., 113, 140501, 2014) to the device-independent (DI) scenario, where the users do not need to trust the integrity of the underlying quantum devices. The protocols analyzed by them and by subsequent authors all require a sequential execution of n multiplayer games, where n is the security parameter. In this work, we prove unconditional security of a protocol where all games are executed in parallel. Besides decreasing the number of time-steps necessary for key generation, this result reduces the security requirements for DI-QKD by allowing arbitrary information leakage of each user's inputs within his or her lab. To the best of our knowledge, this is the first parallel security proof for a fully device-independent QKD protocol. Our protocol tolerates a constant level of device imprecision and achieves a linear key rate.
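As background for the game-based structure, the snippet below enumerates all deterministic classical strategies for the CHSH game, the canonical two-player nonlocal game behind many device-independent tests, certifying the classical value 3/4 (quantum devices reach about 0.854, and the gap is what DI security tests exploit). This illustrates the games being executed, not the paper's parallel protocol or proof.

```python
import itertools

# CHSH: players get bits x, y and must output a, b with a XOR b == x AND y.
best = 0.0
for fa in itertools.product((0, 1), repeat=2):      # Alice: a = fa[x]
    for fb in itertools.product((0, 1), repeat=2):  # Bob:   b = fb[y]
        wins = sum((fa[x] ^ fb[y]) == (x & y)
                   for x in (0, 1) for y in (0, 1))
        best = max(best, wins / 4)
print(best)   # 0.75, the classical bound
```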
Randomized Linear Algebra Approaches to Estimate the Von Neumann Entropy of Density Matrices
Kontopoulou EM, Dexter GP, Szpankowski W, Grama A and Drineas P
The von Neumann entropy, named after John von Neumann, is an extension of the classical concept of entropy to the field of quantum mechanics. From a numerical perspective, the von Neumann entropy can be computed simply by computing all eigenvalues of a density matrix, an operation that could be prohibitively expensive for large-scale density matrices. We present and analyze three randomized algorithms to approximate the von Neumann entropy of real density matrices: our algorithms leverage recent developments in the Randomized Numerical Linear Algebra (RandNLA) literature, such as randomized trace estimators, provable bounds for the power method, and the use of random projections to approximate the eigenvalues of a matrix. All three algorithms come with provable accuracy guarantees, and our experimental evaluations support our theoretical findings, showing considerable speedup with small loss in accuracy.
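A simplified cousin of the Taylor-series approach: expand -tr(rho log rho) = sum_k tr(rho (I - rho)^k)/k and replace each trace by a Hutchinson estimate from Rademacher probe vectors. The matrix size, number of probes, and truncation order below are hypothetical, and the paper's algorithms and error bounds are more refined.

```python
import numpy as np

rng = np.random.default_rng(15)
n = 100
B = rng.normal(size=(n, n))
rho = B @ B.T; rho /= np.trace(rho)          # a random density matrix

# Exact entropy: full eigendecomposition (expensive at scale).
ev = np.linalg.eigvalsh(rho)
H_exact = -np.sum(ev * np.log(ev))

# Randomized estimate: Taylor series + Hutchinson trace estimation.
m, K = 20, 1000                              # probes, series truncation
H_est = 0.0
for _ in range(m):
    g = rng.choice((-1.0, 1.0), size=n)      # Rademacher probe vector
    u = rho @ g
    acc = 0.0
    for k in range(1, K + 1):
        u -= rho @ u                         # u = rho (I - rho)^k g
        acc += (g @ u) / k
    H_est += acc / m

print(H_exact, H_est)                        # close, without eigenvalues
```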