Dissertation
Deep Generative Models for Images, Texts, and Graphs
Doctor of Philosophy (PhD), Washington State University
01/2020
Handle:
https://hdl.handle.net/2376/107985
Abstract
Generative models have become a powerful unsupervised learning method that learns the distribution of existing data and generates new samples from that distribution. With the advances of deep learning, deep generative models have shown promising performance in learning the distribution of complex data and have been applied to a wide range of applications, including image, text, and graph generation. In most real-world applications, data are represented as images, texts, or graphs. This dissertation develops novel deep generative models to address efficiency and accuracy problems in image, text, and graph analysis tasks. More specifically, these models are designed to improve three tasks: multi-modality missing-data completion for Alzheimer's disease diagnosis, dialogue generation, and graph link prediction. I mainly focus on generating high-quality samples to facilitate data analysis in these tasks, where the meaning of a high-quality sample is specific to each task. In missing-data completion, clear and informative images are required for disease diagnosis. In dialogue generation, diverse and reasonable responses to the given dialogue context are critical for improved communication. In graph link prediction, generating potential or missing links in the network with high confidence leads to greater prediction accuracy.
In this dissertation, I analyze the limitations and drawbacks of existing models and methods for the three tasks. To overcome these limitations, the following four deep generative models are proposed. An encoder-decoder network optimized by a combination of three loss functions is developed to generate clear and informative images. A conditional dialogue generation model with novel discriminator networks is proposed to generate diverse and reasonable responses. A multi-scale link prediction framework that employs a new node aggregation method to transform the graph into different scales is designed to perform link prediction. A line graph neural network model, in which the original graph is transformed into its corresponding line graph to enable efficient feature learning for the target link, is also developed for link prediction. Experimental results demonstrate that the proposed models achieve improved performance on image, text, and graph tasks.
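As a small illustration of the line graph transformation mentioned above (an illustrative sketch, not code from the dissertation): each edge of the original graph becomes a node of the line graph, and two such nodes are connected when the corresponding edges share an endpoint, so learning a representation for a target link reduces to learning a node representation in the line graph.

```python
from itertools import combinations

def line_graph(edges):
    """Build the line graph of an undirected graph.

    Each edge of the original graph becomes a line-graph node;
    two line-graph nodes are adjacent when the original edges
    share an endpoint.
    """
    nodes = [tuple(sorted(e)) for e in edges]
    adj = {n: set() for n in nodes}
    for a, b in combinations(nodes, 2):
        if set(a) & set(b):  # edges share an endpoint
            adj[a].add(b)
            adj[b].add(a)
    return adj

# Toy graph: a path 0-1-2-3 plus a chord 0-2.
lg = line_graph([(0, 1), (1, 2), (2, 3), (0, 2)])
print(len(lg))  # 4 line-graph nodes, one per original edge
```

In this form, a candidate link of the original graph is simply one node of `lg`, and its neighborhood collects exactly the edges incident to its two endpoints.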
Details
- Title
- Deep Generative Models for Images, Texts, and Graphs
- Creators
- Lei Cai
- Contributors
- Shuiwang Ji (Advisor), Shira Lynn Broschat (Committee Member), Lawrence B. Holder (Committee Member), Yinghui Wu (Committee Member)
- Awarding Institution
- Washington State University
- Academic Unit
- Electrical Engineering and Computer Science, School of
- Theses and Dissertations
- Doctor of Philosophy (PhD), Washington State University
- Number of pages
- 169
- Identifiers
- 99900581811801842
- Language
- English
- Resource Type
- Dissertation