Author : Lei Cai
Publisher :
ISBN 13 :
Total Pages : 170 pages
Book Rating : 4.6/5 (781 downloads)
Book Synopsis Deep Generative Models for Images, Texts, and Graphs by : Lei Cai
Download or read book Deep Generative Models for Images, Texts, and Graphs written by Lei Cai and published by . This book was released on 2020 with total page 170 pages. Available in PDF, EPUB and Kindle. Book excerpt: Generative models have become a powerful unsupervised learning method that can learn the distribution of existing data and generate new samples following that distribution. With the advances of deep learning, deep generative models have shown promising performance in learning the distribution of complex data and have been applied to a wide range of applications, including image, text, and graph generation tasks. In most real-world applications, data are represented as images, texts, and graphs. This dissertation develops novel deep generative models to address efficiency and accuracy problems in image, text, and graph analysis tasks. More specifically, these deep generative models are designed to improve three tasks: multi-modality missing-data completion for Alzheimer's disease diagnosis, dialogue generation, and graph link prediction. I mainly focus on generating high-quality samples to facilitate data analysis in these tasks. What constitutes a high-quality sample is specific to each task. In missing-data completion, clear and informative images are required for disease diagnosis. In dialogue generation, diverse and reasonable responses to the given dialogue context are critical for improved communication. In graph link prediction, generating potential or missing links in the network with high confidence leads to greater prediction accuracy. In this dissertation, I analyze the limitations and drawbacks of existing models and methods for solving these three tasks. To overcome these limitations, the following four deep generative models are proposed.
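The excerpt's opening sentences describe the core generative-modeling loop: estimate a distribution from observed data, then draw new samples that follow it. A toy sketch of that loop with a one-dimensional Gaussian (the deep models in the dissertation replace this closed-form fit with learned neural networks; all variable names here are illustrative, not from the dissertation):

```python
import random
import statistics

random.seed(0)

# Observed data drawn from some unknown process
# (here, secretly a Gaussian with mean 5 and std 2).
observed = [random.gauss(5.0, 2.0) for _ in range(10_000)]

# "Learn the distribution from existing data":
# for this toy model, fitting just means estimating mean and std.
mu = statistics.fmean(observed)
sigma = statistics.stdev(observed)

# "Generate new samples following the same distribution".
new_samples = [random.gauss(mu, sigma) for _ in range(10_000)]
```

A deep generative model such as a VAE or GAN plays the same role as `random.gauss(mu, sigma)` here, but with a neural network mapping noise to complex data like images, text, or graphs.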
An encoder-decoder network optimized by a combination of three loss functions is developed to generate clear and informative images. A conditional dialogue generation model paired with novel discriminator networks is proposed to generate diverse and reasonable responses. A multi-scale link prediction framework that employs a new node aggregation method to transform the graph into different scales is designed to perform link prediction. A line graph neural network model, in which the original graph is transformed into its line graph to enable efficient feature learning for the target link, is developed for link prediction tasks. Experimental results demonstrate that the proposed models achieve improved performance on image, text, and graph tasks.
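The line-graph transformation used by the fourth model is a standard graph-theoretic construction: each edge of the original graph becomes a node of the line graph, and two such nodes are adjacent exactly when the corresponding edges share an endpoint, which turns edge-level (link) prediction into node-level learning. A minimal sketch of the construction (the function name and adjacency-dict representation are mine, not the dissertation's):

```python
def line_graph(edges):
    """Build the line graph of an undirected graph given as an edge list.

    Each original edge becomes a line-graph node; two line-graph nodes
    are adjacent iff the corresponding edges share an endpoint.
    """
    edges = [tuple(sorted(e)) for e in edges]   # canonical undirected form
    lg = {e: set() for e in edges}
    for i, e in enumerate(edges):
        for f in edges[i + 1:]:
            if set(e) & set(f):                 # shared endpoint
                lg[e].add(f)
                lg[f].add(e)
    return lg

# A triangle: every pair of its three edges shares an endpoint,
# so its line graph is the complete graph on those three edges.
tri = line_graph([(0, 1), (1, 2), (0, 2)])
```

In a line graph neural network for link prediction, message passing then runs over a structure like `lg`, so the representation of a candidate link is learned directly as a node embedding rather than assembled from two endpoint embeddings.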