Abstract: This dissertation covers three related topics: tracking, search, and network information theory.
For the first topic, we propose an information-theoretic formulation of the problem of target tracking via sensor querying, with the goal of studying the fundamental limits of trackability. The target motion is modeled by a finite-state Markov chain. We show that for a target to be trackable, the minimum query quota required is no less than the entropy rate $H$ of the Markov chain, but no more than $\lceil H+1 \rceil$. Subsequently, we consider the adaptive case, where the target motion law, namely the transition probability function of the Markov chain, is unknown to the tracker a priori; the tracker must then learn the motion law and track the target simultaneously. It turns out that the extra burden of learning the motion law incurs no loss of tracking performance asymptotically: we show that for a target to be universally trackable, no more than $\lceil H+1\rceil$ queries per time step are required.
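As an illustration of the quantity driving these bounds, the following sketch (our own illustrative code, not the dissertation's; function names are invented) computes the entropy rate $H$ of an ergodic finite-state Markov chain from its transition matrix, together with the resulting query bound $\lceil H+1 \rceil$.

```python
import math

def stationary(P, iters=10_000):
    """Approximate the stationary distribution of transition matrix P
    (list of rows) by power iteration, assuming the chain is ergodic."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def entropy_rate(P):
    """Entropy rate H = -sum_i pi_i sum_j P_ij log2 P_ij, in bits per step."""
    pi = stationary(P)
    n = len(P)
    return -sum(pi[i] * P[i][j] * math.log2(P[i][j])
                for i in range(n) for j in range(n) if P[i][j] > 0)

# Two-state chain that stays in its current state with probability 0.9:
P = [[0.9, 0.1], [0.1, 0.9]]
H = entropy_rate(P)
print(H)                 # ~0.469 bits per step
print(math.ceil(H + 1))  # upper bound on queries per step: 2
```

For this chain the entropy rate is just the binary entropy of the switching probability, so roughly half a bit of fresh uncertainty is generated per step, and two queries per step suffice by the bound above.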
For the second topic, we study a class of search problems. The general setup models the ubiquitous situation in which a searcher aims to find an immobile target with minimum expected latency, where the location of the target is a discrete random variable distributed over a set of possible locations. Both bounded and unbounded search problems are considered. We first consider the Bounded Discrete Linear Search Problem (BDLSP), in which the distribution of the target location has finite support on the integer line, and show that the BDLSP is solvable in quadratic time by formulating it as a Markov Decision Process (MDP). In contrast, the graph search problem (GSP), in which the target is located on the vertices of a graph, is shown to be NP-complete. We then consider the erroneous BDLSP (EBDLSP), in which the searcher may miss the target with some probability when the target location is visited, and derive lower and upper bounds on its performance loss relative to its error-free counterpart, the BDLSP. In the second part, we consider the Unbounded Discrete Linear Search Problem (UDLSP), in which the distribution of the target location has infinite support on the integer line. Theoretically, we show that an optimal search policy exists if and only if the double-sided mean of the distribution is finite. We then focus on symmetric UDLSPs and establish the expanding property of their optimal policies. Algorithmically, we propose a procedure that effectively approximates the optimal value and the optimal policy, provided that the optimal policy is unique. To investigate the convergence rate of the procedure, we study the growth rate of the turning points of optimal policies for symmetric UDLSPs. We show that the increment sequences of the optimal policies for symmetric UDLSPs with heavy-tailed distributions are necessarily unbounded.
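To make the bounded linear search setting concrete, here is a minimal memoized dynamic program for it (our own illustrative sketch, not the dissertation's algorithm). The state is the searched interval $[l, r]$ together with the end the searcher occupies, which yields the quadratic number of states underlying the quadratic-time solvability claim; each unit step contributes to the expected latency in proportion to the probability that the target is still unfound.

```python
from functools import lru_cache

def bdlsp(p, start=0):
    """Minimum expected time to find a target located at integer i with
    probability p[i] (a dict summing to 1), for a unit-speed searcher
    starting at `start`. Illustrative O(n^2) interval DP."""
    lo, hi = min(p), max(p)

    def mass(l, r):
        # Probability that the target lies in the searched interval [l, r].
        return sum(v for k, v in p.items() if l <= k <= r)

    @lru_cache(maxsize=None)
    def f(l, r, at_right):
        # Expected remaining time, having searched [l, r], standing at an end.
        if l <= lo and r >= hi:
            return 0.0
        rem = 1.0 - mass(l, r)           # prob. the target is still unfound
        best = float('inf')
        if r < hi:                       # walk over to r+1 and inspect it
            steps = 1 if at_right else (r - l + 1)
            best = min(best, steps * rem + f(l, r + 1, True))
        if l > lo:                       # walk over to l-1 and inspect it
            steps = 1 if not at_right else (r - l + 1)
            best = min(best, steps * rem + f(l - 1, r, False))
        return best

    return f(start, start, True)

# Target equally likely at -1 or +1, searcher starts at 0:
print(bdlsp({-1: 0.5, 1: 0.5}))   # 2.0
```

In the example, any optimal policy inspects one neighbor at time 1 (success with probability 0.5) and the other at time 3, giving expected latency $0.5 \cdot 1 + 0.5 \cdot 3 = 2$.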
For the first part of the third topic, we focus on characterizing the range of the entropy function, which is essential to multi-terminal information theory. We disprove certain potentially valid non-Shannon-type inequalities through a computer-aided search for counterexamples, and show that none of the three extra conditional mutual information terms in the Zhang-Yeung inequality can be dropped for it to remain valid. We also obtain a general condition for the Ingleton inequality that subsumes all previously known conditions: we show that quasi-Hamiltonian groups, which include Hamiltonian groups as a subclass, satisfy the Ingleton inequality. To the best of our knowledge, this is the first time that classes of non-abelian groups have been found to satisfy the Ingleton inequality.
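For context, the Zhang-Yeung inequality is a linear inequality in the joint entropies of four random variables; one common statement is $I(A;B) + I(A;C,D) + 3I(C;D|A) + I(C;D|B) \geq 2I(C;D)$. The sketch below (our own illustrative code, unrelated to the dissertation's search software) evaluates the slack of this inequality on an explicit joint pmf; a computer-aided counterexample search automates exactly this kind of evaluation over candidate distributions.

```python
import math

def H(pmf, idx):
    """Entropy (bits) of the marginal on coordinates `idx` of a joint pmf
    given as {outcome_tuple: probability}."""
    marg = {}
    for x, p in pmf.items():
        key = tuple(x[i] for i in idx)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * math.log2(p) for p in marg.values() if p > 0)

def zhang_yeung_slack(pmf):
    """LHS - RHS of I(A;B) + I(A;CD) + 3I(C;D|A) + I(C;D|B) >= 2I(C;D),
    with (A,B,C,D) = coordinates (0,1,2,3) of each outcome tuple.
    Nonnegative slack means the inequality holds for this distribution."""
    h = lambda *idx: H(pmf, idx)
    i_ab   = h(0) + h(1) - h(0, 1)
    i_a_cd = h(0) + h(2, 3) - h(0, 2, 3)
    i_cd_a = h(0, 2) + h(0, 3) - h(0, 2, 3) - h(0)
    i_cd_b = h(1, 2) + h(1, 3) - h(1, 2, 3) - h(1)
    i_cd   = h(2) + h(3) - h(2, 3)
    return i_ab + i_a_cd + 3 * i_cd_a + i_cd_b - 2 * i_cd

# Four perfectly correlated uniform bits: the inequality is tight (slack 0).
pmf = {(b, b, b, b): 0.5 for b in (0, 1)}
print(zhang_yeung_slack(pmf))   # 0.0
```

Since the inequality is valid, the slack is nonnegative for every distribution; dropping one of the extra conditional mutual information terms and then finding a distribution with negative slack is precisely how such a weakened inequality is disproved.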
Recently, we uncovered an obscure note by Shannon in which the notions of information elements and information lattices were proposed as the basis of a general theory of multi-terminal network communication. In the second part of the third topic, we formalize these two notions and, exploiting this formalization, identify a comprehensive parallelism between information lattices and subgroup lattices. Qualitatively, we establish isomorphisms between information lattices and certain subgroup lattices. Quantitatively, we establish a decisive approximation relation between the entropy structures of information lattices and the log-index structures of the corresponding subgroup lattices. This result extends the approximation for joint entropies obtained previously by Chan and Yeung. As a consequence of our approximation result, we show that a continuous law holds in general for the entropies of information elements if and only if the same law holds in general for the log-indices of subgroups. As an application, by constructing subgroup counterexamples we find, surprisingly, that common information, unlike joint information, obeys neither the submodularity nor the supermodularity law. We emphasize that the notion of information elements is conceptually significant: formalizing it helps to reveal the deep connection between information theory and group theory. The parallelism established here admits an appealing group-action explanation and provides useful insights into the intrinsic structure among information elements from a group-theoretic perspective.
Adviser: Dr. Edwin K.P. Chong
Co-Adviser:
Non-ECE Member: Dr. Donald Estep, Dept. of Mathematics
Member 3: Dr. Louis L. Scharf
Additional Members: