Graduate Exam Abstract


Zhenliang Zhang

Ph.D. Final Examination

August 13, 2013, 10:00am -- 12:00pm

ECE Conference Room

Learning in Large Networks


Abstract: We consider two topics in this talk: 1) learning in feedforward and hierarchical networks; and 2) string submodularity in optimal control problems. In the first topic, we consider a binary hypothesis testing problem and an associated network that attempts to solve the problem jointly. Each agent in the network takes a private signal of the underlying truth, observes the past actions of his neighboring agents, and makes a decision to optimize an objective function (e.g., probability of error). We are interested in the following questions: 1) Will the agents asymptotically learn the underlying truth? More specifically, will the overall decision converge (in probability) to the underlying truth as the number of agents goes to infinity? 2) If so, how fast is the convergence with respect to the number of agents? To answer these questions, we investigate two types of networks, which arise naturally in social and technological settings: feedforward networks and hierarchical tree networks. Moreover, we investigate three parameters: 1) memory size; 2) private signal 'strength'; and 3) communication noisiness. We establish conditions on these parameters under which the agents asymptotically learn the underlying truth, and we study the relationship between the convergence rates and these parameters.

First, we consider the feedforward network, consisting of a large number of nodes that sequentially make decisions between two given hypotheses. Each node takes a private signal of the underlying truth, observes the decisions of some immediate predecessors, and makes a decision between the given hypotheses. We consider two classes of broadcast failures: 1) each node broadcasts its decision to the other nodes subject to random erasure, in the form of a binary erasure channel; 2) each node broadcasts a randomly flipped decision to the other nodes, in the form of a binary symmetric channel.
We are interested in whether there exists a decision strategy, consisting of a sequence of likelihood ratio tests, such that the node decisions converge in probability to the underlying truth. In both cases, we show that if each node learns from only a bounded number of immediate predecessors, then no decision strategy exists under which the decisions converge in probability to the underlying truth. However, in case 1, we show that if each node learns from an unboundedly growing number of predecessors, then the decisions converge in probability to the underlying truth, even when the erasure probabilities converge to 1. We also derive the convergence rate of the error probability. In case 2, we show that if each node learns from all of its predecessors, then the decisions converge in probability to the underlying truth when the flipping probabilities of the binary symmetric channels are bounded away from 1/2. In the case where the flipping probabilities converge to 1/2, we derive a necessary condition on the convergence rate of the flipping probabilities such that the decisions still converge to the underlying truth. We also explicitly characterize the relationship between the convergence rate of the error probability and the convergence rate of the flipping probabilities.

Second, we consider the hypothesis testing problem in the context of balanced binary relay trees, where the leaves (and only the leaves) of the tree correspond to N identical and independent sensors. The root of the tree represents a fusion center that makes the overall decision. Each of the other nodes in the tree is a relay node that combines two binary messages to form a single output binary message. In this way, the information from the sensors is aggregated into the fusion center via the relay nodes. We consider the case where the fusion rules at all nonleaf nodes are Bayesian likelihood ratio tests.
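As a rough numerical illustration of this setup (a simplified sketch of my own, not code from the thesis), one can propagate the per-level error probabilities up the tree. With equal priors and two conditionally independent binary inputs, the Bayesian likelihood ratio test at a relay reduces to either an AND rule or an OR rule, whichever yields the smaller total error:

```python
# Sketch (assumed model, not the thesis's code): propagate Type I
# (alpha, false alarm) and Type II (beta, miss) error probabilities
# from the leaves of a balanced binary relay tree to the root.
def fuse(alpha, beta):
    and_rule = (alpha ** 2, 1 - (1 - beta) ** 2)   # decide 1 only if both children do
    or_rule = (1 - (1 - alpha) ** 2, beta ** 2)    # decide 1 if either child does
    # Bayesian LRT with equal priors picks the rule with smaller total error
    return min(and_rule, or_rule, key=lambda e: e[0] + e[1])

alpha, beta = 0.4, 0.4                  # per-sensor error probabilities
for level in range(1, 11):              # root of a tree with N = 2**level sensors
    alpha, beta = fuse(alpha, beta)
    print(f"N = {2 ** level:5d}   total error = {alpha + beta:.3e}")
```

Even with noisy sensors (total error 0.8 at the leaves), the total error at the root shrinks as the tree grows, consistent with the decay results described below, though this sketch makes no claim about the exact rate.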
In this case, we describe the evolution of the Type I and Type II error probabilities of the binary data as it propagates from the leaves towards the root. We derive tight upper and lower bounds for the total error probability at the fusion center as functions of N. These bounds characterize the decay rate of the total error probability to 0 with respect to N, even if the individual sensors have error probabilities that converge to 1/2. We further investigate this problem in the case where nodes and links fail with certain probabilities. Naturally, the asymptotic decay rate of the total error probability is no larger than that in the non-failure case. However, we derive an explicit necessary and sufficient condition on the decay rate of the local failure probabilities (the combination of node and link failure probabilities at each level) such that the decay rate of the total error probability in the failure case is the same as in the non-failure case. We also consider a more general M-ary relay tree configuration, where each nonleaf node in the tree has M child nodes. We derive upper and lower bounds for the Type I and Type II error probabilities associated with the overall decision with respect to the number of sensors, which in turn characterize the convergence rates of the Type I, Type II, and total error probabilities. We also provide a message-passing scheme involving non-binary message alphabets and characterize the exponent of the error probability with respect to the message alphabet size.

In the second topic, we extend the notion of submodularity to optimal control problems. More precisely, we introduce the notion of string submodularity in the problem of maximizing an objective function defined on a set of strings, subject to a string length constraint. We show that the greedy strategy achieves a (1-e^{-1})-approximation of the optimal strategy.
Moreover, we can improve this approximation by introducing additional constraints on curvature, namely, total backward curvature, total forward curvature, and elemental forward curvature. We show that if the objective function has total backward curvature \sigma, then the greedy strategy achieves at least a (1/\sigma)(1-e^{-\sigma})-approximation of the optimal strategy. If the objective function has total forward curvature \epsilon, then the greedy strategy achieves at least a (1-\epsilon)-approximation of the optimal strategy. Moreover, we consider a generalization of the diminishing-return property by defining the elemental forward curvature. We also introduce the notion of string-matroid and consider the problem of maximizing the objective function subject to a string-matroid constraint. We investigate three applications of string submodular functions with curvature constraints: 1) designing a string of fusion rules in balanced binary relay trees such that the reduction in the error probability is maximized; 2) choosing a string of actions to maximize the expected fraction of accomplished tasks; and 3) designing a string of measurement matrices such that the information gain is maximized.
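To illustrate the greedy guarantee, here is a toy example of my own (not from the talk): a weighted-coverage objective, a standard submodular function, viewed here as a string objective whose value happens to be order-independent. The greedy strategy appends the action with the largest marginal gain, and the (1-e^{-1}) bound guarantees it attains at least about 63% of the brute-force optimum:

```python
import math
from itertools import combinations

# Hypothetical task weights and the tasks each action accomplishes
tasks = {'t1': 3.0, 't2': 2.0, 't3': 2.0, 't4': 1.0}
actions = {'a': {'t1'}, 'b': {'t2', 't3'},
           'c': {'t1', 't4'}, 'd': {'t3', 't4'}}

def value(string):
    # Total weight of tasks covered by the actions in the string
    covered = set()
    for s in string:
        covered |= actions[s]
    return sum(tasks[t] for t in covered)

def greedy(k):
    # Append the action with the largest marginal gain, k times
    chosen = []
    for _ in range(k):
        chosen.append(max(actions, key=lambda a: value(chosen + [a])))
    return chosen

K = 2
g = value(greedy(K))
opt = max(value(list(c)) for c in combinations(actions, K))
print(g, opt)   # greedy value vs. brute-force optimum
```

In this small instance greedy happens to match the optimum exactly; the (1-e^{-1}) factor is the worst-case guarantee over all string submodular objectives, not the typical gap.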

Adviser: Ali Pezeshki
Co-Adviser: Edwin K. P. Chong
Non-ECE Member: Haonan Wang, Statistics
Member 3: J. Rockey Luo, ECE
Additional Members: N/A

Publications:
See http://www.engr.colostate.edu/~zlzhang/papers.html


Program of Study:
ECE 514
ECE 516
ECE 614
ECE 652
STAT 720
MATH 501