# Latent regression Bayesian network for data representation

```bibtex
@article{Nie2016LatentRB,
  title   = {Latent regression Bayesian network for data representation},
  author  = {Siqi Nie and Yue Zhao and Qiang Ji},
  journal = {2016 23rd International Conference on Pattern Recognition (ICPR)},
  year    = {2016},
  pages   = {3494-3499}
}
```

Restricted Boltzmann machines (RBMs) are widely used for data representation and feature learning in various machine learning tasks. The undirected structure of an RBM allows inference to be performed efficiently, because the latent variables are independent of each other given the visible variables. However, we believe the correlations among latent variables are crucial for faithful data representation. Driven by this idea, we propose a counterpart of RBMs, namely latent regression Bayesian…
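The conditional independence the abstract appeals to can be checked numerically: in an RBM, p(h|v) computed by brute-force enumeration equals a product of independent Bernoullis. A minimal NumPy sketch with toy dimensions chosen here for illustration (the weights, biases, and sizes are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
nv, nh = 4, 3                            # toy visible/hidden sizes
W = rng.standard_normal((nv, nh))        # weights
hb = rng.standard_normal(nh)             # hidden biases (visible biases cancel below)
v = (rng.random(nv) < 0.5).astype(float)  # an arbitrary visible configuration

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Exact p(h | v): enumerate all 2^nh hidden states. Given v, the unnormalized
# log-probability of h is h . (hb + v W); terms depending on v alone cancel.
H = np.array([[(i >> j) & 1 for j in range(nh)] for i in range(2**nh)], float)
a = hb + v @ W
s = H @ a
p_exact = np.exp(s) / np.exp(s).sum()

# Factorized form: each h_j is an independent Bernoulli(sigmoid(a_j)) given v.
pj = sigmoid(a)
p_factorized = np.prod(np.where(H == 1.0, pj, 1.0 - pj), axis=1)

print(np.allclose(p_exact, p_factorized))  # True: the posterior factorizes
```

This factorization is exactly what makes RBM inference cheap, and it is exactly the modeling assumption the paper argues against for faithful data representation.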

#### 4 Citations

Facial Action Unit Recognition Augmented by Their Dependencies

- Computer Science
- 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018)
- 2018

Experimental results on three benchmark databases demonstrate that the proposed approaches can successfully capture complex AU relationships, and that the expression labels available only during training are beneficial for AU recognition during testing.

Facial Action Unit Recognition and Intensity Estimation Enhanced Through Label Dependencies

- Computer Science, Medicine
- IEEE Transactions on Image Processing
- 2019

The results demonstrate that the proposed approaches faithfully model the complex and global inherent AU dependencies, and the expression labels available only during training can boost the estimation of AU dependencies for both AU recognition and intensity estimation.

Probabilistic spiking neural networks: Supervised, unsupervised and adversarial trainings

- Computer Science
- 2019


Posed and Spontaneous Expression Distinction Using Latent Regression Bayesian Networks

- Computer Science
- ACM Trans. Multim. Comput. Commun. Appl.
- 2020

This work constructs several latent regression Bayesian networks to capture spatial patterns from spontaneous and posed facial expressions given expression-related factors and conducts experiments to showcase the superiority of the proposed approach in both modeling spatial patterns and classifying expressions as either posed or spontaneous.

#### References

Showing 1–10 of 27 references

On the quantitative analysis of deep belief networks

- Mathematics, Computer Science
- ICML '08
- 2008

It is shown that Annealed Importance Sampling (AIS) can be used to efficiently estimate the partition function of an RBM, and a novel AIS scheme for comparing RBMs with different architectures is presented.
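As a minimal sketch of the AIS idea described above: on a toy RBM small enough to enumerate exactly, anneal from the uniform distribution (β = 0) to the target (β = 1) with one Gibbs sweep per temperature, and compare the AIS estimate of log Z to the exact value. All sizes, weight scales, and schedules here are hypothetical choices for illustration, not from the cited paper:

```python
import numpy as np

rng = np.random.default_rng(0)
nv, nh = 3, 2
W = 0.1 * rng.standard_normal((nv, nh))
vb = 0.1 * rng.standard_normal(nv)   # visible biases
hb = 0.1 * rng.standard_normal(nh)   # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score(v, h):
    # negative energy s(v, h) = v.vb + h.hb + v W h, batched over rows
    return v @ vb + h @ hb + np.einsum("ij,jk,ik->i", v, W, h)

# Exact log Z by brute-force enumeration of all 2^(nv+nh) joint states.
bits = lambda n, k: np.array([[(i >> j) & 1 for j in range(k)] for i in range(n)], float)
V, H = bits(2**nv, nv), bits(2**nh, nh)
allv = np.repeat(V, 2**nh, axis=0)
allh = np.tile(H, (2**nv, 1))
logZ_exact = np.log(np.exp(score(allv, allh)).sum())

# AIS: f_beta(v, h) = exp(beta * s(v, h)), annealed from the uniform base
# (beta = 0, Z_0 = 2^(nv+nh)) to the target RBM (beta = 1).
n_chains = 2000
betas = np.linspace(0.0, 1.0, 101)
v = (rng.random((n_chains, nv)) < 0.5).astype(float)
h = (rng.random((n_chains, nh)) < 0.5).astype(float)
logw = np.zeros(n_chains)
for b0, b1 in zip(betas[:-1], betas[1:]):
    logw += (b1 - b0) * score(v, h)  # importance-weight increment at p_{b0} sample
    # one Gibbs sweep targeting the tempered distribution at inverse temp b1
    h = (rng.random((n_chains, nh)) < sigmoid(b1 * (hb + v @ W))).astype(float)
    v = (rng.random((n_chains, nv)) < sigmoid(b1 * (vb + h @ W.T))).astype(float)

logZ_ais = (nv + nh) * np.log(2.0) + np.log(np.mean(np.exp(logw)))
print(logZ_exact, logZ_ais)  # the two estimates should nearly agree
```

With weights this small the intermediate distributions are close to uniform, so the importance weights have low variance; real RBMs need many more temperatures and chains.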

Neural Variational Inference and Learning in Belief Networks

- Computer Science, Mathematics
- ICML
- 2014

This work proposes a fast non-iterative approximate inference method that uses a feedforward network to implement efficient exact sampling from the variational posterior and shows that it outperforms the wake-sleep algorithm on MNIST and achieves state-of-the-art results on the Reuters RCV1 document dataset.

Auto-Encoding Variational Bayes

- Mathematics, Computer Science
- ICLR
- 2014

A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
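Two ingredients of the algorithm summarized above can be sketched compactly: the closed-form KL divergence between a diagonal-Gaussian posterior and a standard-normal prior, and the reparameterization of a posterior sample as a deterministic function of noise. A minimal NumPy sketch (the function names and toy inputs are hypothetical, not from the paper):

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) )."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def reparameterize(mu, log_var, rng):
    """Draw z ~ q(z|x) as a deterministic (hence differentiable) map of noise."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
mu, log_var = np.array([0.5, -0.3]), np.array([0.1, -0.2])
kl = gaussian_kl(mu, log_var)        # penalty term of the variational bound
z = reparameterize(mu, log_var, rng)  # sample fed to the decoder
```

The reparameterization moves the randomness into `eps`, so gradients of a Monte Carlo estimate of the bound can flow through `mu` and `log_var`.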

Bounding the Test Log-Likelihood of Generative Models

- Computer Science, Mathematics
- ICLR
- 2014

A more efficient estimator is proposed and proved to provide a lower bound on the true test log-likelihood; it becomes unbiased as the number of generated samples goes to infinity, although it incorporates the effect of poor mixing.

Stochastic Spectral Descent for Discrete Graphical Models

- Mathematics, Computer Science
- IEEE Journal of Selected Topics in Signal Processing
- 2016

A new, largely tuning-free algorithm that derives novel majorization bounds based on the Schatten-∞ norm and demonstrates empirically that this algorithm leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.

Efficient Learning of Deep Boltzmann Machines

- Mathematics, Computer Science
- AISTATS
- 2010

We present a new approximate inference algorithm for Deep Boltzmann Machines (DBM’s), a generative model with many layers of hidden variables. The algorithm learns a separate “recognition” model that…

Training Products of Experts by Minimizing Contrastive Divergence

- Mathematics, Computer Science
- Neural Computation
- 2002

A product of experts (PoE) is an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary, although it is hard even to approximate the derivatives of the renormalization term in the combination rule.
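The contrastive divergence recipe this paper introduces sidesteps that renormalization term by comparing data statistics against one-step reconstruction statistics. A minimal CD-1 sketch for a Bernoulli RBM, assuming a toy repeated pattern as training data (dimensions, learning rate, and step counts are hypothetical illustration choices):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, vb, hb, v0, rng, lr=0.1):
    """One CD-1 parameter update for a Bernoulli RBM on a batch v0 (in place)."""
    ph0 = sigmoid(v0 @ W + hb)                    # p(h=1 | data)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + vb)                  # one-step reconstruction
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + hb)
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n       # data minus reconstruction stats
    vb += lr * (v0 - v1).mean(axis=0)
    hb += lr * (ph0 - ph1).mean(axis=0)

rng = np.random.default_rng(0)
nv, nh = 4, 3
W = 0.01 * rng.standard_normal((nv, nh))
vb, hb = np.zeros(nv), np.zeros(nh)
data = np.tile([1.0, 0.0, 1.0, 0.0], (20, 1))    # a single repeated pattern
for _ in range(500):
    cd1_step(W, vb, hb, data, rng)

# After training, the mean-field reconstruction should be close to the pattern.
recon = sigmoid(sigmoid(data @ W + hb) @ W.T + vb)
```

CD-1 follows an approximate gradient rather than the true likelihood gradient, but as the paper argues, the approximation is good enough to train each expert quickly.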

Deep Mixtures of Factor Analysers

- Computer Science, Mathematics
- ICML
- 2012

This paper presents a greedy layer-wise learning algorithm for Deep Mixtures of Factor Analysers (DMFAs) and demonstrates empirically that DMFAs learn better density models than both MFAs and two types of Restricted Boltzmann Machine on a wide variety of datasets.

Connectionist Learning of Belief Networks

- Computer Science
- Artif. Intell.
- 1992

The “Gibbs sampling” simulation procedure for “sigmoid” and “noisy-OR” varieties of probabilistic belief networks can support maximum-likelihood learning from empirical data through local gradient ascent.

A Fast Learning Algorithm for Deep Belief Nets

- Mathematics, Computer Science
- Neural Computation
- 2006

A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.