We’ve seen quite a few examples of unsupervised learning (or self-supervised learning, to choose the more appropriate but less popular term) on this blog.
Typically, these involved Variational Autoencoders (VAEs), whose appeal lies in allowing one to model a latent space of underlying, independent (ideally) factors that determine the visible features. A possible downside can be the inferior quality of generated samples. Generative Adversarial Networks (GANs) are another popular approach. Conceptually, these are highly attractive due to their game-theoretic framing. However, they can be difficult to train. PixelCNN variants, on the other hand – we’ll subsume them all here under PixelCNN – are generally known for their good results. They seem to involve some more alchemy though. Under those circumstances, what could be more welcome than an easy way of experimenting with them? Through TensorFlow Probability (TFP) and its R wrapper, tfprobability, we now have such a way.
This post first gives an introduction to PixelCNN, concentrating on high-level concepts (leaving the details for the curious to look up in the respective papers). We’ll then show an example of using tfprobability to experiment with the TFP implementation.
PixelCNN principles
Autoregressivity, or: We need (some) order
The basic idea in PixelCNN is autoregressivity. Each pixel is modeled as depending on all prior pixels. Formally:
\[p(\mathbf{x}) = \prod_{i} p(x_i|x_0, x_1, ..., x_{i-1})\]
Now wait a second – what even are prior pixels? Last time I looked, images were two-dimensional. So this means we have to impose an order on the pixels. Commonly this will be raster scan order: row after row, from left to right. But when dealing with color images, there is something else: At each position, we actually have three intensity values, one each for red, green, and blue. The original PixelCNN paper (Oord, Kalchbrenner, and Kavukcuoglu 2016) carried autoregressivity through here as well, with a pixel’s intensity for red depending on just prior pixels, that for green depending on those same prior pixels but additionally, the current value for red, and that for blue depending on the prior pixels as well as the current values for red and green.
\[p(x_i|\mathbf{x}_{<i}) = p(x_{i,R}|\mathbf{x}_{<i}) \ p(x_{i,G}|\mathbf{x}_{<i}, x_{i,R}) \ p(x_{i,B}|\mathbf{x}_{<i}, x_{i,R}, x_{i,G})\]
Here, the variant implemented in TFP, PixelCNN++ (Salimans et al. 2017), introduces a simplification; it factorizes the joint distribution in a less compute-intensive way.
Technically, then, we know how autoregressivity is realized; intuitively, it may still seem surprising that imposing a raster scan order “just works” (to me, at least, it is). Maybe this is one of those points where compute power successfully compensates for the lack of an equivalent of a cognitive prior.
Masking, or: Where not to look
Now, PixelCNN ends in “CNN” for a reason – as usual in image processing, convolutional layers (or blocks thereof) are involved. But – is it not the very nature of a convolution that it computes an average of some sort, looking, for each output pixel, not just at the corresponding input but also at its spatial (or temporal) surroundings? How does that rhyme with the look-at-just-prior-pixels strategy?
Surprisingly, this problem is easier to solve than it sounds. When applying the convolutional kernel, just multiply with a mask that zeroes out any “forbidden pixels” – like in this example for a 5×5 kernel, where we’re about to compute the convolved value for row 3, column 3:
\[\left[\begin{array}
{rrrrr}
1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{array}\right]
\]
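Just to make this concrete, here is a minimal sketch in plain R – not TFP’s actual implementation – of zeroing out those forbidden positions in a kernel before convolving:

# Minimal sketch (plain R): mask out the "future" positions of a 5 x 5 kernel
mask <- rbind(
  c(1, 1, 1, 1, 1),
  c(1, 1, 1, 1, 1),
  c(1, 1, 1, 0, 0),
  c(0, 0, 0, 0, 0),
  c(0, 0, 0, 0, 0)
)
kernel <- matrix(rnorm(25), nrow = 5, ncol = 5)
# elementwise multiplication: forbidden pixels contribute nothing
masked_kernel <- kernel * mask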
This keeps the algorithm honest, but introduces a different problem: With each successive convolutional layer consuming its predecessor’s output, there is a continuously growing blind spot (so called in analogy to the blind spot on the retina, but located in the top right) of pixels that are never seen by the algorithm. Van den Oord et al. (2016) (Oord et al. 2016) fix this by using two different convolutional stacks, one proceeding from top to bottom, the other from left to right.

Conditioning, or: Show me a kitten
So far, we’ve always talked about “generating images” in a purely generic way. But the real attraction lies in creating samples of some specified type – one of the classes we’ve been training on, or orthogonal information fed into the network. This is where PixelCNN becomes Conditional PixelCNN (Oord et al. 2016), and it is also where that feeling of magic resurfaces. Again, as “basic math” it’s not hard to conceive. Here, \(\mathbf{h}\) is the additional input we’re conditioning on:
\[p(\mathbf{x}| \mathbf{h}) = \prod_{i} p(x_i|x_0, x_1, ..., x_{i-1}, \mathbf{h})\]
But how does this translate into neural network operations? It’s just another matrix multiplication (\(V^T \mathbf{h}\)) added to the convolutional outputs (\(W \mathbf{x}\)).
\[\mathbf{y} = \tanh(W_{k,f} \mathbf{x} + V^T_{k,f} \mathbf{h}) \odot \sigma(W_{k,g} \mathbf{x} + V^T_{k,g} \mathbf{h})\]
(If you’re wondering about the second part on the right, after the Hadamard product sign – we won’t go into details, but in a nutshell, it’s another modification introduced by (Oord et al. 2016), a transfer of the “gating” principle from recurrent neural networks, such as GRUs and LSTMs, to the convolutional setting.)
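To make the formula concrete, here is a tiny numeric sketch in plain R, with made-up shapes: a 16-dimensional stand-in for the convolutional output and a 2-dimensional conditioning vector.

set.seed(1)
sigmoid <- function(z) 1 / (1 + exp(-z))
x <- rnorm(16)                                 # stand-in for convolutional output
h <- c(0, 1)                                   # stand-in for conditioning input
W_f <- matrix(rnorm(8 * 16), 8, 16); V_f <- matrix(rnorm(8 * 2), 8, 2)
W_g <- matrix(rnorm(8 * 16), 8, 16); V_g <- matrix(rnorm(8 * 2), 8, 2)
# gated activation: tanh "features", modulated by a sigmoid "gate"
y <- tanh(W_f %*% x + V_f %*% h) * sigmoid(W_g %*% x + V_g %*% h)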
So we see what goes into the decision on a pixel value to sample. But how is that decision actually made?
Logistic mixture likelihood, or: No pixel is an island
Again, this is where the TFP implementation does not follow the original paper, but the later PixelCNN++ one. Originally, pixels were modeled as discrete values, chosen via a softmax over 256 (0-255) possible values. (That this actually worked seems like another instance of deep learning magic. Imagine: In this model, 254 is as far from 255 as it is from 0.)
In contrast, PixelCNN++ assumes an underlying continuous distribution of color intensity, and rounds to the nearest integer. That underlying distribution is a mixture of logistic distributions, thus allowing for multimodality:
\[\nu \sim \sum_{i} \pi_i \ \mathrm{logistic}(\mu_i, \sigma_i)\]
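To get a feeling for how such a mixture can be multimodal, here is a quick simulation in plain R, with made-up weights, locations, and scales:

set.seed(42)
n <- 10000
pi_mix <- c(0.3, 0.7)                 # mixture weights (made up)
mu <- c(60, 190)                      # component locations
s <- c(8, 12)                         # component scales
component <- sample(1:2, n, replace = TRUE, prob = pi_mix)
nu <- rlogis(n, location = mu[component], scale = s[component])
# round to the nearest integer and clip to the pixel range
pixel <- pmin(pmax(round(nu), 0), 255)
hist(pixel, breaks = 64)              # shows two modes, one per component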
Overall architecture and the PixelCNN distribution
Overall, PixelCNN++, as described in (Salimans et al. 2017), consists of six blocks. Together, the blocks make up a UNet-like structure, successively downsizing the input and then, upsampling again:

In TFP’s PixelCNN distribution, the number of blocks is configurable as num_hierarchies, with the default being 3.
Each block consists of a customizable number of layers, called ResNet layers due to the residual connection (visible on the right) complementing the convolutional operations in the horizontal stack:

In TFP, the number of these layers per block is configurable as num_resnet.
num_resnet and num_hierarchies are the parameters you’re most likely to experiment with, but there are a few more you can check out in the documentation. The number of logistic distributions in the mixture is also configurable (as num_logistic_mix), but from my experiments it’s best to keep that number rather low to avoid producing NaNs during training.
Let’s now see a complete example.
End-to-end example
Our playground will be QuickDraw, a dataset – still growing – obtained by asking people to draw some object in at most twenty seconds, using the mouse. (To see for yourself, just check out the website.) As of today, there are more than fifty million instances, from 345 different classes.
In part, these data were chosen to take a break from MNIST and its variants. But just like those (and many more!), QuickDraw can be obtained, in tfdatasets-ready form, via tfds, the R wrapper to TensorFlow Datasets. In contrast to the MNIST “family” though, the “real samples” are themselves highly irregular, and often even missing essential parts. So to anchor judgment, when displaying generated samples we always show eight actual drawings with them.
Preparing the data
The dataset being gigantic, we instruct tfds to load the first 500,000 drawings “only.”
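A sketch of how that could look, assuming the quickdraw_bitmap dataset name and tfds’s split syntax:

library(tfds)
# load just the first 500,000 training drawings (split syntax as assumed)
train_ds <- tfds_load("quickdraw_bitmap", split = "train[:500000]")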
To speed up training further, we then zoom in on twenty classes. This effectively leaves us with ~ 1,100 – 1,500 drawings per class.
# bee, bicycle, broccoli, butterfly, cactus,
# frog, guitar, lightning, penguin, pizza,
# rollerskates, sea turtle, sheep, snowflake, sun,
# swan, The Eiffel Tower, tractor, train, tree
classes <- c(
  26, 29, 43, 49, 50,
  125, 134, 172, 218, 225,
  246, 255, 258, 271, 295,
  296, 308, 320, 322, 323
)

classes_tensor <- tf$cast(classes, tf$int64)

train_ds <- train_ds %>%
  dataset_filter(
    function(record) tf$reduce_any(tf$equal(classes_tensor, record$label), -1L)
  )
The PixelCNN distribution expects values in the range from 0 to 255 – no normalization required. Preprocessing then consists of just casting pixels and labels each to float:
preprocess <- function(record) {
  record$image <- tf$cast(record$image, tf$float32)
  record$label <- tf$cast(record$label, tf$float32)
  list(tuple(record$image, record$label))
}

batch_size <- 32

train <- train_ds %>%
  dataset_map(preprocess) %>%
  dataset_shuffle(10000) %>%
  dataset_batch(batch_size)
Creating the model
We now use tfd_pixel_cnn to define what will be the loglikelihood used by the model.
dist <- tfd_pixel_cnn(
  image_shape = c(28, 28, 1),
  conditional_shape = list(),
  num_resnet = 5,
  num_hierarchies = 3,
  num_filters = 128,
  num_logistic_mix = 5,
  dropout_p = .5
)

image_input <- layer_input(shape = c(28, 28, 1))
label_input <- layer_input(shape = list())
log_prob <- dist %>% tfd_log_prob(image_input, conditional_input = label_input)
This custom loglikelihood is added as a loss to the model, and then the model is compiled with just an optimizer specification. During training, loss first decreased quickly, with improvements from later epochs being smaller.
model <- keras_model(inputs = list(image_input, label_input), outputs = log_prob)
model$add_loss(-tf$reduce_mean(log_prob))
model$compile(optimizer = optimizer_adam(lr = .001))

model %>% fit(train, epochs = 10)
To jointly display real and fake images:
for (i in classes) {

  real_images <- train_ds %>%
    dataset_filter(
      function(record) record$label == tf$cast(i, tf$int64)
    ) %>%
    dataset_take(8) %>%
    dataset_batch(8)
  it <- as_iterator(real_images)
  real_images <- iter_next(it)
  real_images <- real_images$image %>% as.array()
  real_images <- real_images[ , , , 1] / 255

  generated_images <- dist %>% tfd_sample(8, conditional_input = i)
  generated_images <- generated_images %>% as.array()
  generated_images <- generated_images[ , , , 1] / 255

  images <- abind::abind(real_images, generated_images, along = 1)
  png(paste0("draw_", i, ".png"), width = 8 * 28 * 10, height = 2 * 28 * 10)
  par(mfrow = c(2, 8), mar = c(0, 0, 0, 0))
  images %>%
    purrr::array_tree(1) %>%
    purrr::map(as.raster) %>%
    purrr::iwalk(~ plot(.x))
  dev.off()
}
From our twenty classes, here’s a selection of six, each showing real drawings in the top row, and fake ones below.






We probably wouldn’t confuse the first and second rows, but then, the actual human drawings exhibit enormous variation, too. And no one ever said PixelCNN was an architecture for concept learning. Feel free to play around with other datasets of your choice – TFP’s PixelCNN distribution makes it easy.
Wrapping up
In this post, we had tfprobability / TFP do all the heavy lifting for us, and so, could focus on the underlying concepts. Depending on your inclinations, this can be an ideal situation – you don’t lose sight of the forest for the trees. On the other hand: Should you find that changing the provided parameters doesn’t achieve what you want, you have a reference implementation to start from. So whatever the outcome, the addition of such higher-level functionality to TFP is a win for the users. (If you’re a TFP developer reading this: Yes, we’d like more :-))
To everyone, thanks for reading!
Oord, Aäron van den, Nal Kalchbrenner, and Koray Kavukcuoglu. 2016. “Pixel Recurrent Neural Networks.” In ICML.
Oord, Aäron van den, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. 2016. “Conditional Image Generation with PixelCNN Decoders.” In NIPS.
Salimans, Tim, Andrej Karpathy, Xi Chen, and Diederik P. Kingma. 2017. “PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications.” In ICLR.