The age of big data might be driven not only by those who master the collection and management of data, but also by those who can create it.  Introduction In this article I’d like to explore a way to solve the data problem in medical imaging AI: the lack of data. I’m going to focus on the UK, where I work, but the conclusions are relevant for any healthcare industry. What’s the data problem? In medical imaging projects, particularly those that involve AI, there is a requirement for imaging data. Obtaining this data is a significant challenge for businesses in medical imaging. It is in everyone’s collective interest to make this data available; it drives innovation and[…]

Aaaaand… finally, after Parts 1-4 we are ready to implement and test the pixelSNAIL model. In the previous posts we went through the theory and concepts behind the pixelCNN-style models in detail; these models are designed to generate unique new images after being trained on a dataset. In this post I will go through some runs of the pixelSNAIL model, using a Keras implementation I built to port the original code from the largely deprecated TensorFlow 1 to TensorFlow 2. You can find this repository here. Initial Exploration with pixelSNAIL The first task was to get pixelSNAIL working; this code is very difficult to read and is written in TensorFlow 1 style (I think TensorFlow 1 tends[…]

Hello and welcome to Part 4 in the series on autoregressive generative models. We have covered some of the theory of these models and have implemented causality and probability so that a CNN can generate new images based on a training set. In this post we are going to focus on the building blocks used to construct the leading model of this class, pixelSNAIL (as well as some of the other pixelCNN variants). This brings in quite a few ideas from the field of deep learning; in some cases I will touch on them only briefly and aim to cover them in more detail in other posts. Gated Residual Block We have discussed[…]
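As a rough sketch of the gating idea behind these blocks, here is a minimal NumPy illustration (my own simplification of the tanh/sigmoid gated activation used in gated-pixelCNN-style blocks, not code from the post or from pixelSNAIL; the 1x1 convolutions are reduced to plain matrix multiplies on the channel axis):

```python
import numpy as np

def gated_activation(x):
    """Split the features in half and gate one path with the other:
    out = tanh(a) * sigmoid(b)."""
    a, b = np.split(x, 2, axis=-1)
    return np.tanh(a) * (1.0 / (1.0 + np.exp(-b)))

def gated_residual_block(x, w1, w2):
    """Toy gated residual block: expand channels, apply the gated
    activation, project back, then add the skip connection."""
    h = x @ w1               # expand to 2*channels for the two gate paths
    h = gated_activation(h)  # halves the channel count again
    h = h @ w2               # project back to the input channel count
    return x + h             # residual connection
```

The residual connection means the block only has to learn a correction to its input, which is part of what makes very deep stacks of these blocks trainable.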

Hi and welcome to Part 3 of the series on Autoregressive Generative Models. Last time we explained how to incorporate a probabilistic interpretation into a network to allow it to generate new image samples. This time we cover another key principle underlying these models: causality. Let’s get on with it… Causality One of the fundamental properties of a generative model is causality: the requirement that the prediction for a single element of a sequence (for example a pixel of an image, a word of a sentence or a note of a piece of music) depends only on previously generated elements and not on future elements. This is easiest to understand in the context of 1D or 2D convolutions, where a regular convolution[…]
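To make this concrete, here is a small NumPy sketch (my own illustration, not code from the post) of the kernel mask a pixelCNN-style model applies to a 2D convolution so that each pixel only sees pixels above it and to its left, in raster-scan order:

```python
import numpy as np

def causal_mask(kernel_size, mask_type="A"):
    """Build a (k, k) binary mask for a 2D convolution kernel.

    Type 'A' (used in the first layer) also zeroes the centre weight,
    so a pixel cannot see its own value; type 'B' (later layers)
    allows the centre, since by then it holds features, not raw input.
    """
    k = kernel_size
    mask = np.ones((k, k))
    centre = k // 2
    # zero everything to the right of the centre (and the centre for type A)
    mask[centre, centre + (1 if mask_type == "B" else 0):] = 0
    # zero every row below the centre
    mask[centre + 1:, :] = 0
    return mask
```

Multiplying the convolution weights elementwise by this mask before applying them enforces the causal ordering; a 3x3 type-A mask, for example, keeps only the top row and the weight immediately left of centre.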

Hello, welcome to Part 2 of the autoregressive model series. Part 1 was a quick introduction to the models and what they do (generate images). In this part we will go in depth on one of the key elements of these models: the probabilistic treatment of the network output. Probability and Loss in Autoregressive Generative Models Here I am going to discuss the loss function for autoregressive generative models and how it relates to probability. We want a generative model to be able to produce a probability density function \begin{aligned} p(x_{ij}) \ , \ x_{ij}\in [0,255]^3, \end{aligned} which describes the likelihood that the given image pixel takes the value $x_{ij}$. In the full model, where the network uses previous pixel[…]
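A common way to realise $p(x_{ij})$ in practice is a softmax over the 256 possible 8-bit values per channel, trained with the negative log-likelihood of the true pixel value. Here is a minimal NumPy sketch of that loss (my own illustration under that assumption, not code from the post):

```python
import numpy as np

def pixel_nll(logits, targets):
    """Mean negative log-likelihood of 8-bit pixel values.

    logits:  (n_pixels, 256) unnormalised network outputs
    targets: (n_pixels,) integer pixel values in [0, 255]
    """
    # log-softmax with the usual max-subtraction for numerical stability
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # pick out log p(x_ij) at each target value and average
    return -log_probs[np.arange(len(targets)), targets].mean()
```

A useful sanity check: with uniform (all-zero) logits the model assigns probability 1/256 to every value, so the loss is exactly log(256), and any trained model should do better than that.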

Hello and welcome to this series of tutorials on autoregressive models! I decided to put this together to help me (and hopefully you) learn about these types of models, which have cutting-edge applications in image generation. I want to go into depth on this because there are some core ideas I think are worth exploring and understanding, particularly as I find the papers are very thin on detail. I have split this up into chunks: Part 1: a general introduction to generative autoregressive models, particularly the pixelCNN family of models. Part 2: Theory (Probability); an exploration/explanation of the key ideas that make these models work. Part 3: Theory (Causality); an exploration/explanation of the key ideas that[…]

This whole tutorial starts with a simple example problem: Suppose we have 99 empty papers and we write the numbers from 1 to 99 (using each number once) on one side of the papers at random. Then we mix all the papers and write the numbers from 1 to 99 on the other (empty) sides, again at random. What is the probability that at least one paper has the same number on both sides? https://math.stackexchange.com/questions/394007/a-tricky-probability-question Give it a go and you will quickly see how tricky it is. The answer requires an understanding of derangements, and we will go through this idea in detail in this post. What is a derangement? Consider a[…]
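Before working through the theory, the answer can be checked numerically. A minimal sketch (my own, using the standard recurrence for derangements, D(n) = (n-1)(D(n-1) + D(n-2))):

```python
from math import factorial

def derangements(n):
    """Count permutations of n items with no fixed point, via the
    recurrence D(n) = (n - 1) * (D(n - 1) + D(n - 2))."""
    if n == 0:
        return 1
    if n == 1:
        return 0
    d_prev2, d_prev1 = 1, 0  # D(0), D(1)
    for k in range(2, n + 1):
        d_prev2, d_prev1 = d_prev1, (k - 1) * (d_prev1 + d_prev2)
    return d_prev1

# Probability that at least one paper matches on both sides:
# 1 minus the fraction of second-side orderings that avoid every match.
p_match = 1 - derangements(99) / factorial(99)
```

Because D(n)/n! converges to 1/e extremely fast, the probability is already indistinguishable from 1 - 1/e ≈ 0.632 at n = 99, which is the punchline the derangement analysis below arrives at.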