Sliced Score Matching
TL;DR: This paper introduces sliced score matching, a scalable variant of score matching that estimates the score function by projecting scores onto random directions.
The general goal of score matching is to estimate the probability distribution of the data. In other words, we hope to find the parameters of a model such that the model distribution is close to the data distribution. The model represents a parametrized probability distribution over the data, which we call the model distribution.
Fig. 1. Generative modeling approach.
Fig. 2. Generative modeling training.
Fig. 2. Graph of 2 layers.
Fig. 3. Deep graph.
Fig. 4. Normalized graph.
Suppose that we have a continuous probability distribution whose density we want to represent with a parametric model.
Fig. 5. Score function.
Fig. 6. Overview.
If we want to model the probability distribution using a normalized probability model, we need to ensure that the distribution we represent is fully normalized, meaning the area under the density function must equal 1. This forces us to deal with the normalizing constant.
In contrast, if we model the distribution through its score function, there is no such normalization restriction: when we compute the score function of the model, it becomes the difference of two terms, and the second term, the gradient of the log normalizing constant, is zero because the constant does not depend on the input. The score function therefore reduces to the gradient of the deep neural network's output. Such gradients are easy to compute with automatic differentiation and backpropagation.
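As a minimal sketch of this point (using a toy one-dimensional Gaussian energy as a hypothetical example, not the paper's model), the following checks numerically that the gradient of the log-density does not depend on the normalizing constant:

```python
import numpy as np

# Unnormalized log-density f(x) = -x^2/2 (a standard normal up to a constant).
def f(x):
    return -x**2 / 2.0

# Normalized log-density for an arbitrary constant Z > 0: log p(x) = f(x) - log Z.
def log_p(x, Z):
    return f(x) - np.log(Z)

# Central finite-difference gradient.
def num_grad(fn, x, eps=1e-5):
    return (fn(x + eps) - fn(x - eps)) / (2 * eps)

x = 1.3
score_unnorm = num_grad(f, x)                       # gradient of unnormalized model
score_norm = num_grad(lambda t: log_p(t, Z=42.0), x)  # gradient of normalized model
print(np.isclose(score_unnorm, score_norm))
```

The constant `Z` shifts the log-density but vanishes under differentiation, which is exactly why score-based models can skip normalization.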
Fig. 7. Score function vs probability density function.
Fig. 8. Vector fields.
Fig. 9. Vector field differences that form an objective to optimize.
Fig. 10. Jacobian computation.
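For reference, the classical score matching objective (Hyvärinen, 2005), obtained after integration by parts, is where the trace-of-Jacobian term in Fig. 10 comes from:

```latex
J(\theta) = \mathbb{E}_{p_{\text{data}}(x)}\!\left[
  \operatorname{tr}\!\big(\nabla_x s_\theta(x)\big)
  + \tfrac{1}{2}\,\lVert s_\theta(x)\rVert_2^2
\right]
```

The trace of the Jacobian $\nabla_x s_\theta(x)$ is what becomes expensive in high dimensions, motivating the sliced approach below.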
Intuition: one-dimensional problems should be easier to solve, so how can we convert a high-dimensional problem into one-dimensional problems?
Idea: leverage random projections. Projecting the high-dimensional vector fields onto random directions gives us one-dimensional scalar fields.
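Concretely, sliced score matching replaces the full Jacobian trace with projections onto random directions $v$; the resulting objective reads:

```latex
J(\theta) = \mathbb{E}_{p_v}\,\mathbb{E}_{p_{\text{data}}(x)}\!\left[
  v^\top \nabla_x s_\theta(x)\, v
  + \tfrac{1}{2}\,\big(v^\top s_\theta(x)\big)^2
\right]
```

Each projected term $v^\top \nabla_x \big(v^\top s_\theta(x)\big)$ only requires one extra backpropagation pass, regardless of the data dimension.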
Suppose that we have the following high-dimensional vectors in Fig. 11:
Fig. 11. High-dimensional vectors.
Fig. 12. One-dimensional scalar fields.
Fig. 13. Trace
Fig. 14. Trace
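The trace trick behind the two figures above can be sketched numerically: for random directions $v$ with $\mathbb{E}[vv^\top] = I$, averaging $v^\top J v$ recovers $\operatorname{tr}(J)$. Here a fixed 3x3 matrix stands in for the score Jacobian (a hypothetical example for illustration only):

```python
import numpy as np

# For random v with identity second moment (Rademacher entries here),
# E[v^T J v] = tr(J). This is the estimator that lets sliced score
# matching avoid computing the full Jacobian trace.
rng = np.random.default_rng(0)
J = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 3.0]])  # stand-in for the score Jacobian

n_samples = 20000
v = rng.choice([-1.0, 1.0], size=(n_samples, 3))  # random Rademacher directions
estimates = np.einsum('ni,ij,nj->n', v, J, v)     # v^T J v for each sample

print(float(estimates.mean()), float(np.trace(J)))  # estimate vs. exact trace
```

With Rademacher directions the diagonal of `J` is captured exactly in every sample, so only the off-diagonal cross terms contribute variance.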
Fig. 15. Score matching objectives.
© 2025 Carlos Hernández Oliván | Colorlib