Research Interests

My research interests lie in the interdisciplinary areas of artificial intelligence, computer science, and operations research. I view artificial intelligence as the discipline of designing intelligent computer systems for automatic reasoning, decision making, planning, control, diagnosis, and learning, capabilities that matter in application domains such as corporate decision making, factory production control, project management, and intelligent human-computer interfaces. I am especially interested in representation frameworks for problem formulation, mathematical foundations for proving formal properties, and algorithms for efficient computation. My research draws on fundamental concepts and computational machinery from probability theory, operations research, optimal control, and theoretical computer science to cope with the challenging computational tasks that arise in designing modern intelligent systems. The following is a brief outline of my research:



Part I: Decision Making with Markov Decision Processes

Markov decision processes have recently become an important mathematical and computational foundation for decision making under uncertainty. A Markov decision process is a probabilistic finite automaton augmented with state-transition costs and a performance criterion, such as minimizing the expected cumulative cost. Many decision-making problems can be conveniently viewed as control problems over underlying Markov decision processes. See {Lin Dissertation-97}, {Dean and Lin IJCAI-95}, and {Lin and Dean EWSP-95} for details, and Professor Andrew Moore's web site for commercial systems built along this direction.
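The control problem over a Markov decision process can be made concrete with a small sketch. The following is a minimal value-iteration routine for a two-state example; the model, the discount factor, and all numbers are invented for illustration and are not taken from the cited papers.

```python
# Minimal value iteration for a tiny Markov decision process (illustrative only).
# P[s][a] lists (next_state, probability) pairs; C[s][a] is the transition cost.
GAMMA = 0.9  # discount factor (an assumption; the text's criterion is expected cumulative cost)

P = {
    0: {"go": [(1, 0.8), (0, 0.2)], "stay": [(0, 1.0)]},
    1: {"go": [(1, 1.0)], "stay": [(1, 1.0)]},
}
C = {
    0: {"go": 1.0, "stay": 2.0},
    1: {"go": 0.0, "stay": 0.0},
}

def value_iteration(P, C, gamma=GAMMA, tol=1e-8):
    """Iterate the Bellman minimization until the value function stops changing."""
    V = {s: 0.0 for s in P}
    while True:
        V_new = {
            s: min(
                C[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                for a in P[s]
            )
            for s in P
        }
        if max(abs(V_new[s] - V[s]) for s in P) < tol:
            return V_new
        V = V_new

V = value_iteration(P, C)
```

State 1 is absorbing at zero cost, so its value is 0; the value of state 0 solves the fixed-point equation V(0) = 1 + 0.9 * 0.2 * V(0).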

Decision Making under Uncertainty
Decision making under uncertainty is critical in application domains such as transportation planning, factory production control, and robot navigation. Using a feature-based compact representation, the features of an application domain are represented as state variables. Decisions involve preconditions, costs, and uncertainty in their consequences, all of which can be compactly specified by probabilistic rules governing the state variables. A decision-making problem is often compactly represented as a sequential decision network, whose nodes are decision stages, each associated with many decision options; links between decision stages represent dependencies among them. Intelligent decision-making systems must determine or approximate optimal decision-making policies for the underlying Markov decision processes to attain optimal performance. See an application in transportation planning, and further applications in industry and corporate management, in my dissertation {Lin Dissertation-97}.

Coping with Very Large State Spaces: Decomposition Techniques

While many important decision-making problems can be conveniently viewed as control problems over underlying Markov decision processes, standard dynamic-programming algorithms for computing optimal policies cannot be naively applied, because they require explicit enumeration of the underlying state spaces. This poses a great computational challenge: the sizes of the state spaces explode exponentially in the number of state variables.

In our research, we develop decomposition techniques that exploit structure for computational leverage. Given a problem instance, we first symbolically decompose the underlying dynamical system into subsystems. After analyzing the local dynamics within subsystems and the dependencies across subsystems, we decompose the original planning problem into a sequence of subproblems governing the subsystems. We investigate three approaches for decomposing large Markov decision processes: hierarchical decomposition, iterative approximation, and strongly-connected component decomposition. These decomposition techniques shed light on how to exploit locality and dependency to cope with very large state spaces. See {Lin Dissertation-97}, {Dean and Lin IJCAI-95}, and {Lin and Dean EWSP-95} for details.
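The graph computation underlying the strongly-connected component decomposition can be sketched directly. The following uses Kosaraju's textbook algorithm (not the implementation from the cited papers) to decompose a state-transition graph into its strongly connected components; subproblems over the components can then be solved following the topological order of the component graph. The example graph is invented.

```python
def strongly_connected_components(graph):
    """Kosaraju's algorithm; graph maps each node to a list of successor nodes."""
    nodes = set(graph) | {v for succs in graph.values() for v in succs}
    visited, order = set(), []

    def dfs(u):  # first pass: record nodes in order of DFS finish time
        visited.add(u)
        for v in graph.get(u, []):
            if v not in visited:
                dfs(v)
        order.append(u)

    for u in nodes:
        if u not in visited:
            dfs(u)

    reverse = {u: [] for u in nodes}  # second pass runs on the reversed graph
    for u, succs in graph.items():
        for v in succs:
            reverse[v].append(u)

    components, assigned = [], set()
    for u in reversed(order):  # highest finish time first
        if u in assigned:
            continue
        component, stack = [], [u]
        assigned.add(u)
        while stack:
            x = stack.pop()
            component.append(x)
            for y in reverse[x]:
                if y not in assigned:
                    assigned.add(y)
                    stack.append(y)
        components.append(sorted(component))
    return components

# hypothetical state-transition graph: {a, b} and {c, d} form the two components
components = strongly_connected_components(
    {"a": ["b"], "b": ["a", "c"], "c": ["d"], "d": ["c"]}
)
```

Kosaraju's second pass emits components in topological order of the condensed graph, which is exactly the order in which the decomposed subproblems can be solved.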

Exploiting Structure for Computational Leverage: Abstraction and Dimensionality Reduction

Rather than explicitly constructing the entire system, we can often construct each subsystem compactly by abstracting away the state variables irrelevant to it while succinctly capturing the interactions among subsystems. In some application domains, abstraction yields a significant dimensionality reduction in computation. Solutions to the decision-making subproblems are then derived by applying standard algorithms for the corresponding control problems to these compactly constructed subsystems, and a solution to the original decision-making problem is obtained by systematically compiling and propagating the solutions to the subproblems. See {Lin Dissertation-97} and {Lin and Dean EWSP-95} for details.

Formal Computational Complexity and Empirical Efficacy
Our research develops decomposition techniques and structural analyses of problem instances that exploit structure, characterizing the locality and dependency embedded in a problem instance as structural parameters; instances with strong locality and dependency structure have small parameters. Although the decision-making problems we consider are provably hard (PSPACE-hard in some cases, via a reduction from classical STRIPS planning), the computation time of the strongly-connected component decomposition is exponential only in a structural parameter, rather than in the size of the problem instance. This allows intelligent systems to solve important problem instances that exhibit locality and dependency structure. See {Lin Dissertation-97} and {Lin and Dean EWSP-95} for details. Empirical experiments also confirm that our dimensionality-reduction techniques are effective and that the structural parameters are small enough for efficient computation in subdomains of production control and transportation planning. See {Lin Dissertation-97} for details.

Insight from Mathematical Programming
Linear programs can be set up to compute optimal policies for Markov decision processes. Although the standard dynamic-programming approach is more popular than the linear-programming approach because of its better convergence rate, the corresponding linear programs and their duals provide valuable insight into the iterative-approximation decomposition technique proposed in {Dean and Lin IJCAI-95}. A special version of this technique is shown to be equivalent to applying Dantzig-Wolfe decomposition to the corresponding linear programs. This allows us to establish the convergence of the iterative-approximation algorithm and demonstrate its applicability to weakly-coupled Markov decision processes. See {Lin and Dean CIJ-96} and {Lin Dissertation-97} for details.
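The linear-programming view can be illustrated on a tiny example. In the textbook formulation for a discounted-cost Markov decision process, one maximizes the sum of V(s) subject to V(s) <= c(s,a) + gamma * sum_s' P(s'|s,a) V(s') for every state-action pair, and the LP optimum is the optimal value function. The sketch below (the two-state model and gamma are invented, and this is the textbook LP, not necessarily the exact formulation in the cited papers) checks numerically that a hand-computed fixed point is LP-feasible and tight at the minimizing action.

```python
# LP for a discounted-cost MDP:  maximize sum_s V(s)
# subject to  V(s) <= C[s][a] + GAMMA * sum_{s'} P(s'|s,a) * V(s')  for all (s, a).
GAMMA = 0.9
P = {0: {"go": [(1, 0.8), (0, 0.2)], "stay": [(0, 1.0)]},
     1: {"go": [(1, 1.0)]}}
C = {0: {"go": 1.0, "stay": 2.0}, 1: {"go": 0.0}}

# Fixed point computed by hand for this example: V(0) = 1 + GAMMA * 0.2 * V(0).
V = {0: 1.0 / (1.0 - GAMMA * 0.2), 1: 0.0}

def rhs(s, a):  # right-hand side of the LP constraint for state-action pair (s, a)
    return C[s][a] + GAMMA * sum(p * V[t] for t, p in P[s][a])

# Feasibility: every constraint holds for the optimal value function.
feasible = all(V[s] <= rhs(s, a) + 1e-9 for s in P for a in P[s])

# Tightness: in each state, the constraint for some (minimizing) action is an equality,
# which is why maximizing sum_s V(s) recovers the Bellman optimality equations.
tight = all(min(rhs(s, a) - V[s] for a in P[s]) < 1e-9 for s in P)
```

The dual of this LP has one variable per state-action pair (interpretable as discounted state-action frequencies), which is the side on which Dantzig-Wolfe decomposition operates for weakly-coupled structure.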

Abstraction and Model Minimization for Markov Decision Processes: Continuing Work
As is well known, naive explicit enumeration of states is infeasible because the underlying state spaces grow exponentially in the number of state variables. Abstraction, or dimensionality reduction through symbolic aggregation of states, therefore continues to be of great interest to researchers coping with the curse of dimensionality. Instead of naive enumeration, the goal is to symbolically construct equivalent minimal models for very large Markov decision processes. See the web sites of Professor Tom Dean and Professor Craig Boutilier for related papers along this direction.

Optimal Control with Weakly-coupled Markov Decision Processes: Continuing Work

Important problem instances in transportation planning, factory production control, and robot navigation give rise to weakly-coupled Markov decision processes whose structure can be exploited for computational leverage. Decomposition techniques for weakly-coupled Markov decision processes therefore continue to be an important research topic. See the web sites of Ron Parr and Professor Tom Dean for related papers and research.


Part II: Related Probabilistic Models and Research

Learning Optimal Control for Markov Decision Processes
How does an intelligent system learn on the fly to make decisions that attain optimal performance in the long run, even when little is known in advance about the transition probabilities and transition costs of the underlying Markov decision process? Can trial-and-error exploration lead to optimal performance? Research on reinforcement learning provides mathematical foundations and algorithms for this challenge. See the web site of Professor Leslie Kaelbling for a nice survey paper and links to related work.
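Trial-and-error learning can be sketched with tabular Q-learning, a standard reinforcement-learning algorithm; the two-state cost model and all constants below are invented for illustration. The learner never reads the transition model directly; it only samples transitions and costs.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Invented deterministic model: from state 0, "go" reaches absorbing state 1 at
# cost 1, while "stay" loops at cost 2. The learner discovers this by sampling.
NEXT = {(0, "go"): 1, (0, "stay"): 0, (1, "go"): 1, (1, "stay"): 1}
COST = {(0, "go"): 1.0, (0, "stay"): 2.0, (1, "go"): 0.0, (1, "stay"): 0.0}
ACTIONS, GAMMA, ALPHA, EPSILON = ("go", "stay"), 0.9, 0.5, 0.3

Q = {(s, a): 0.0 for s in (0, 1) for a in ACTIONS}
for episode in range(500):
    s = 0  # each trial-and-error episode starts in state 0
    for _ in range(5):
        if random.random() < EPSILON:          # explore
            a = random.choice(ACTIONS)
        else:                                  # exploit the current estimate (min cost)
            a = min(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = NEXT[(s, a)]
        target = COST[(s, a)] + GAMMA * min(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])  # move toward the sampled target
        s = s2
```

The learned Q-values approach the true optimal costs (1.0 for "go" and 2.9 for "stay" in state 0), so the greedy policy becomes optimal without the model ever being given to the learner.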

Coping with Partially Observable Markov Decision Processes
In partially observable Markov decision processes, sensors provide only noisy, partial state information. How does an intelligent system make decisions to attain optimal performance? Can decision making according to the best estimate of the current state still lead to optimal performance? If not, what should the solution be? Research on partially observable Markov decision processes provides mathematical foundations and algorithms for this challenge. See the web sites of Professor Leslie Kaelbling and Professor Michael Littman for links to related papers.
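The state-estimation half of this challenge is the standard Bayesian belief update: predict the state distribution through the transition model, weight it by the likelihood of the observation, and normalize. The sketch below is that textbook filter; the two-state machine-diagnosis model, its names, and its numbers are all invented.

```python
def belief_update(belief, action, obs, P, O):
    """One step of POMDP belief tracking: predict through P, weight by O, normalize.
    belief: state -> probability; P[s][action]: next state -> probability;
    O[s]: observation -> probability."""
    states = list(belief)
    unnormalized = {
        s2: O[s2][obs] * sum(P[s][action].get(s2, 0.0) * belief[s] for s in states)
        for s2 in states
    }
    z = sum(unnormalized.values())
    return {s: w / z for s, w in unnormalized.items()}

# Hypothetical example: running the machine may cause a hidden fault, and a
# noisy indicator light ("green"/"red") is the only state information.
P = {"ok": {"run": {"ok": 0.9, "fault": 0.1}}, "fault": {"run": {"fault": 1.0}}}
O = {"ok": {"green": 0.8, "red": 0.2}, "fault": {"green": 0.3, "red": 0.7}}
b = belief_update({"ok": 1.0, "fault": 0.0}, "run", "red", P, O)
```

After seeing "red", the belief shifts mass toward "fault" even though "ok" remains more likely; acting on this full distribution, rather than on the single best state estimate, is what optimal POMDP control requires.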

Hidden Markov Models and Statistical Language Learning
Hidden Markov models are partially observable Markov chains in which noisy sensors provide noisy state information. Using Bayes' rule and prior knowledge of the underlying transition probabilities, an intelligent system can predict the most likely state trajectory from the noisy sensor history. Conversely, a system can also approximately learn the transition probabilities from the sensor history using the EM algorithm. Hidden Markov models are important tools in recent research on statistical language learning. See applications in part-of-speech tagging and in training probabilistic context-free grammars on Professor Eugene Charniak's web site.
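Predicting the most likely state trajectory is the classic Viterbi algorithm. The sketch below runs it on an invented two-tag part-of-speech model; the tags, words, and all probabilities are made up for illustration.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence given the observations (standard Viterbi)."""
    # V[t][s] = (probability of the best path ending in state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        prev = V[-1]
        V.append({
            s: max(
                ((prev[r][0] * trans_p[r][s] * emit_p[s][o], r) for r in states),
                key=lambda cand: cand[0],
            )
            for s in states
        })
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):  # follow stored predecessors backward
        last = V[t][last][1]
        path.append(last)
    return list(reversed(path))

# Invented two-tag model: N (noun-like) vs. V (verb-like).
tags = viterbi(
    obs=["time", "flies"],
    states=["N", "V"],
    start_p={"N": 0.6, "V": 0.4},
    trans_p={"N": {"N": 0.3, "V": 0.7}, "V": {"N": 0.8, "V": 0.2}},
    emit_p={"N": {"time": 0.7, "flies": 0.3}, "V": {"time": 0.1, "flies": 0.9}},
)
```

Dynamic programming keeps only the best path into each state at each step, so the cost is linear in the sentence length rather than exponential in it.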

Probabilistic Reasoning with Belief Networks
Variants of belief networks are also known as causal networks, influence diagrams, and Markov random fields. Belief networks are graphs whose nodes are variables modeling decision options, observations, and state information, and whose links represent dependencies between variables. Capable of compactly modeling partially observable Markov decision processes and more, belief networks provide a powerful representation framework for probabilistic inference, prediction, and state estimation. See the web site of Microsoft's decision-theoretic researchers for applications of belief networks to intelligent human-computer interfaces, and the web site of Professor Daphne Koller on exploiting structure to cope with the computational challenge of efficient probabilistic reasoning in belief networks.
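In a small belief network, exact inference reduces to enumeration: multiply the conditional probability tables along the graph, sum out the hidden variables, and normalize. The three-node chain below (its structure and numbers are invented for illustration) computes P(Cloudy | WetGrass) by summing out the hidden variable Rain.

```python
# Invented network: Cloudy -> Rain -> WetGrass, with made-up probability tables.
P_cloudy = {True: 0.5, False: 0.5}
P_rain = {True: {True: 0.8, False: 0.2},   # P(Rain | Cloudy)
          False: {True: 0.1, False: 0.9}}
P_wet = {True: {True: 0.9, False: 0.1},    # P(WetGrass | Rain)
         False: {True: 0.2, False: 0.8}}

def p_cloudy_given_wet(wet=True):
    """Inference by enumeration: sum out Rain, then normalize over Cloudy."""
    weight = {
        c: sum(P_cloudy[c] * P_rain[c][r] * P_wet[r][wet] for r in (True, False))
        for c in (True, False)
    }
    return weight[True] / (weight[True] + weight[False])
```

Enumeration is exponential in the number of hidden variables, which is exactly why the structure-exploiting methods mentioned above matter for larger networks.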

Part III: Task Scheduling and Temporal Reasoning

Task scheduling and temporal reasoning are important in project management. An engineering project is typically represented as a task network, specifying how the project is divided into component tasks and in what order those tasks may be executed. Tasks are also associated with causal rules, specifying the preconditions that must hold before a task can be executed and the consequences that hold after it finishes.

For project-management purposes, an intelligent system must verify that the tasks can be executed in any order consistent with the task-network specification without producing an undesirable side effect. If some execution order can result in an undesirable side effect, the system must report that task sequence as evidence of a fault in the task network. This is the essence of the temporal reasoning problems considered in our research. See {Lin and Dean IOS-94}, {Lin and Dean TIME-94}, {Lin and Dean CIJ-96}, and {Lin Dissertation-97} for details.
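To make the verification question concrete, the brute-force sketch below enumerates every execution order consistent with the precedence constraints, simulates the causal rules, and reports an order that produces the undesirable effect. It is exponential, unlike the algorithms in the cited papers, and the tasks, rules, and "undesirable effect" are all invented for illustration.

```python
from itertools import permutations

# Each task: (preconditions, effects) over boolean state variables.
tasks = {
    "pour_foundation": ({}, {"foundation": True}),
    "frame_walls":     ({"foundation": True}, {"walls": True}),
    "remove_crane":    ({}, {"crane": False}),
    "lift_beams":      ({"crane": True}, {"beams_up": True}),
}
precedence = [("pour_foundation", "frame_walls")]  # (before, after) pairs
initial = {"foundation": False, "walls": False, "crane": True, "beams_up": False}

def undesirable(state):
    return state["walls"] and not state["beams_up"]  # walls framed, beams never lifted

def find_faulty_order():
    for order in permutations(tasks):
        if any(order.index(a) > order.index(b) for a, b in precedence):
            continue                       # inconsistent with the task network
        state = dict(initial)
        for task in order:
            pre, eff = tasks[task]
            if all(state[v] == want for v, want in pre.items()):
                state.update(eff)          # effects fire only when preconditions hold
        if undesirable(state):
            return order                   # evidence of a fault in the task network
    return None
```

Here any consistent order that removes the crane before lifting the beams leaves the beams down, so the checker finds and reports such a sequence.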

Tradeoffs of Computational Complexity
Previous research has shown that it is NP-complete to determine the existence of a task sequence that results in a specified effect. Our research further identifies three key factors that contribute to the computational complexity: the ordering constraints on tasks, the number of variables involved, and the type of causal rules associated with tasks. We establish complexity results on how these three factors contribute to the computational complexity, develop polynomial-time algorithms when the key factors are restricted, and show the underlying complexity tradeoffs. See {Lin and Dean IOS-94} and {Lin and Dean CIJ-96}.

Exploiting Structure in Hierarchical Task Networks
For hierarchical task networks, we can systematically exploit locality in execution order and causal dependency for computational leverage. In a hierarchical task network, tasks are organized into hierarchical levels; the single task at the root level symbolically represents the entire project. A task at one level can be decomposed into subtasks at the next level, with precedence constraints specifying how those subtasks should be executed. Hierarchical task networks embed strong temporal locality in task execution, since the subtasks of a given task are executed as an atomic group. Locality in causal dependency is also available when tasks at the same level interact with one another only through a few common variables. This allows us to develop a localized temporal reasoning algorithm that transforms a global temporal reasoning query into a series of local graph-reachability queries. See {Lin and Dean CIJ-96} and {Lin Dissertation-97} for details.

Decomposition Techniques for Computational Leverage
We present a localized temporal reasoning algorithm that uses subgoals and abstraction to exploit locality in execution order and causal dependency. A temporal reasoning query is first conceptually viewed as a graph-reachability query over a state space determined by all state variables, and is then efficiently decomposed and resolved through a series of graph-reachability queries over state spaces of reduced dimensionality. The computation time is linear in the number of tasks and exponential only in a structural parameter quantifying the locality in causal dependency and execution order. When a hierarchical task network is loosely coupled, the structural parameter is small, and temporal reasoning queries can be answered very efficiently. This work provides solid evidence of the usefulness of decomposition techniques, abstraction, and dimensionality reduction in exploiting locality. See {Lin and Dean CIJ-96} and {Lin Dissertation-97} for details.
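Each local query is plain graph reachability over a reduced state space, which is what keeps the individual steps cheap. A minimal sketch (the abstract state graph below is invented, and this is generic breadth-first search, not the papers' full algorithm):

```python
from collections import deque

def reachable(graph, source, goal):
    """Breadth-first search: can `goal` be reached from `source` in the state graph?"""
    seen, frontier = {source}, deque([source])
    while frontier:
        u = frontier.popleft()
        if u == goal:
            return True
        for v in graph.get(u, ()):
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return False

# Hypothetical abstracted state space for one level of a task hierarchy.
abstract = {"start": ["built"], "built": ["inspected"], "inspected": []}
```

Because each query runs over the abstracted states of a single level, its cost depends only on the few local variables, not on the full cross-product state space.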

Part IV: Parallel Computing

My master's thesis is on the design and analysis of a generic parallel simplex algorithm on multicomputers with various interconnection topologies. We investigate the mapping of large linear programs onto multicomputers, and identify the communication patterns embedded in different interconnection topologies for implementing the simplex algorithm. The focus is to analyze bounds on the number of processing elements that each communication pattern allows while still attaining linear speedup. See {Chen, Ho, Lin, Sheu ICPP-88}, {Lin thesis-89}, and {Chen, Ho, Lin, Sheu-90} for details.