Monday 30 June 2014

Inpainting: a user's guide

Over the past few posts I have introduced the idea of inpainting and the mathematics behind the ability to "intelligently" fill in a region based on the information provided by the rest of the image. The theory was developed by Dr Thomas Maerz, who constructed a set of MATLAB codes to demonstrate his ideas. Dr Martin Robinson has since developed Thomas' ideas into a FREE GIMP plugin.

This week I present a guide to using Martin’s plugin.
________________________________________________________________
In its most basic form Martin's code is extremely easy to use: you select a region, set a few parameters and tell GIMP to run the code. As before, we will use the plugin to remove the bee from the following photo, and once again we use GIMP's extensive suite of selection tools to highlight the region we are interested in.
When you have selected the region you are interested in, fire up the inpainting plugin. If it has been installed correctly (see Martin's documentation for more information) then by default it should be in the "Filters" drop-down menu under "Misc". Once it is opened you should see a box that looks like Figure 1.

First approximation
Figure 1. The general user interface of the inpainting plugin.
In this basic case the dialogue box should automatically be filled in with the correct details. Namely, the image shown at the top should be the area that you selected, the source should be the image you want to inpaint, and the mask type should be a selection.

You will then notice that there are four parameters: epsilon, kappa, sigma and rho. These are very important and control various aspects of the inpainting algorithm. Specifically, not only do they influence what colour a chosen pixel will be, but they also influence how details propagate into the domain. This means that if there are straight lines leading into the boundary, the parameters will alter how those straight lines are continued. The following box gives more detail on each of the parameters.

epsilon defines the region over which the colour is averaged. If epsilon is large then colours over a larger region are used. If epsilon is small the code only uses local colour detail to paint the new pixel.

kappa controls how the level set lines (discussed in a previous post) affect the direction of features that we want to propagate into the region. If we want the picture's features to follow the level set line direction we make kappa large; otherwise we make it small.

sigma controls how much the input data is smoothed. If sigma is small then small-scale features will propagate into the domain; if sigma is large then only clear, distinct line features will be carried into the inpainting region.

rho controls the averaging of the directional information. This is needed because some of the features may be heavily directional and we may want to propagate these directions into the image. A large rho causes information from a large stretch of the boundary to be averaged, leading to more accurate directional information. If there is little directional information in the boundary we make rho small. The sketch below gives a rough feel for how these parameters fit into the algorithm.
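To make this more concrete, here is a small illustrative sketch in Python (using NumPy) of how a known pixel's weight might depend on epsilon and kappa. I should stress that this is my own guess at the general shape of such a weighting, not Thomas' published coherence transport formula [1,2]; sigma and rho do not appear because they control the pre-smoothing of the image and of the direction estimates, respectively.

```python
import numpy as np

def pixel_weight(offset, coherence_dir, epsilon, kappa):
    """Illustrative guess at a weighting, not the published formula.
    offset: vector from the pixel being filled to a known pixel.
    coherence_dir: unit vector of the local feature direction."""
    d = np.linalg.norm(offset)
    if d == 0 or d > epsilon:
        return 0.0  # epsilon: only average over a disc of this radius
    unit = offset / d
    # penalise known pixels that lie off the coherence direction;
    # a large kappa concentrates the weight along that direction
    off_axis = np.linalg.norm(unit - np.dot(unit, coherence_dir) * coherence_dir)
    return np.exp(-kappa * off_axis ** 2) / d
```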

In the basic case shown below I set the parameters to epsilon=20, kappa=100, sigma=0.01 and rho=20. I found these numbers partly by playing around with the sliders and partly by thinking about their definitions. Since the picture is high definition we want the maths to take in detail from a large region, so we make epsilon big. Because there are a number of thin creases in the petals whose lines we want to propagate and continue, we set kappa and rho high (to keep the direction) and sigma low (to keep the detail).

As you will see below, even this simple approach does a good job. However, by putting in a little extra effort we can make the result so much better.
Advanced inpainting
As you may notice from the image, most of the lines meet up once the bee is removed. However, because the bee covers regions connecting the petals to the central anthers of the plant, some of the central plant detail gets propagated into the inpainted region. To stop details being propagated into regions where we do not want them, we use a "stop path".

We select the path tool from the GIMP toolbox and draw a curve. This curve represents the limit to which the details will be allowed to propagate. Since the anthers are propagating too far, we draw a red line that follows the anther region closely, as seen below. As before, the black and white line represents the region selected to be inpainted.
Once we have defined the path we go back to the plugin and select "Selection With Stop Path" in the "Mask Type" drop-down menu. Once this is selected, the "Stop Path" drop-down menu should become selectable. In this menu we select the curve that we have drawn and, finally, click apply.
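Conceptually, the stop path acts as a barrier to the transport of colour information. One simple way to picture this (a toy sketch of the idea, not the plugin's actual implementation) is to give zero weight to any known pixel whose straight line to the pixel being filled crosses the path:

```python
import numpy as np

def crosses_stop_path(p, q, path_mask):
    """Sample the straight line between pixels p and q and report whether
    it passes through any pixel marked True in path_mask (the stop path)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    n = int(np.ceil(np.linalg.norm(q - p))) + 1
    for t in np.linspace(0.0, 1.0, n):
        y, x = np.round(p + t * (q - p)).astype(int)
        if path_mask[y, x]:
            return True
    return False

# inside an averaging loop, a blocked pixel would simply get weight zero:
# if crosses_stop_path(target, source, stop_path_mask): weight = 0.0
```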

Below we show the basic case (left) along with the final image of this advanced version (right), and both of these can be compared with the original (above). Hopefully you can see that, by putting in just a little more work, we get a much better result.

________________________________________________________________
Over the last few posts I have introduced the idea of inpainting, met the minds behind it and, hopefully, you've had a go yourself. Once again, here are all the links that have been mentioned over the last few weeks:

Monday 16 June 2014

Inpainting: how does it work?

Last time I introduced the idea of inpainting, which is an automatic way of "intelligently" filling in a region of an image. Dr Thomas Maerz has worked on a technique which is not only computationally quick but can also produce astounding results. Incredibly, Thomas' ideas have been made freely available by Dr Martin Robinson, who has implemented them in a simple-to-use GIMP plugin, which can be downloaded from here.

This week we take a look at the theory behind the code.

If you are interested in discovering more about the mathematics of inpainting then Thomas' papers can be found here and here, whilst an original version of Thomas' MATLAB code can be found on GitHub.
________________________________________________________________
1) Select the image.
You begin with an image in which there is a region you want to remove or repair. As an example, let's take the image below and try to remove the bee from the flower.
2) Select the region.
Select the region using one of the many tools available in GIMP, such as the rectangle select, select by colour or, as I used here, the simple freehand tool. You do not have to delete the area, but since the interior is ignored I will delete it to show clearly the region we are working with.
3) Create level sets.
For each pixel in the selected area the plugin calculates the distance to the nearest boundary point. In the image below the lines represent contours that are the same distance away from the boundary. For example, the black line represents the white pixels that are 10 pixels away from the nearest boundary, the red line represents the white pixels that are 20 pixels away, and so on. The colours of the lines are for visualisation purposes only and serve no function in the mathematics.

These contours then form what are known as level sets. Essentially this means that a certain function has a constant value on them. In our case the function is the distance to the boundary.
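If you would like to experiment with this step yourself, the distance calculation is a standard "distance transform", available in many libraries. Here is a minimal sketch in Python (assuming NumPy and SciPy are installed) that builds the level sets for a toy rectangular hole:

```python
import numpy as np
from scipy import ndimage

# mask is True on the pixels we want to inpaint, False elsewhere
mask = np.zeros((200, 200), dtype=bool)
mask[60:140, 50:150] = True   # a toy rectangular 'hole'

# distance from each hole pixel to its nearest boundary pixel
dist = ndimage.distance_transform_edt(mask)

# bucket the pixels by (rounded) distance: each bucket is one level set
levels = np.round(dist).astype(int)
for d in range(1, levels.max() + 1):
    contour = np.argwhere(levels == d)   # (row, col) coordinates on this contour
    print(f"level set {d}: {len(contour)} pixels")
```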
4) Process the curves.
The curves are processed sequentially, working our way in from the boundary. The boundary is the last ring of colour before we enter the cleared region; it gives us all the initial data we need, and this data is propagated into the region.

Since the boundary is already coloured, we step to the next contour in, for example the black contour in the picture above. We choose a pixel on this contour and consider all the previously coloured pixels. This new pixel is filled in with a colour value that is a weighted average of the previously coloured pixels; namely, coloured pixels that are close to the white pixel we are considering are given more weight when it comes to choosing its colour. Once the colour is chosen we move to the next white pixel on our current contour, and once all the pixels on the contour are coloured we move on to the next contour.
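For the curious, here is a heavily simplified sketch of this process in Python. It works on a grayscale image and uses purely distance-based weights, so it captures only the "closer pixels count for more" part of the story; Thomas' actual method also uses the directional (coherence) information controlled by kappa, sigma and rho.

```python
import numpy as np
from scipy import ndimage

def inpaint_simple(image, mask, epsilon=5.0):
    """Toy inpainting: fill masked pixels from the boundary inwards, each as
    a distance-weighted average of already-known pixels within epsilon."""
    img = image.astype(float).copy()
    known = ~mask
    dist = ndimage.distance_transform_edt(mask)
    ys, xs = np.nonzero(mask)
    order = np.argsort(dist[mask])          # nearest-to-boundary pixels first
    r = int(np.ceil(epsilon))
    for i in order:
        y, x = ys[i], xs[i]
        y0, y1 = max(y - r, 0), min(y + r + 1, img.shape[0])
        x0, x1 = max(x - r, 0), min(x + r + 1, img.shape[1])
        patch_known = known[y0:y1, x0:x1]
        if not patch_known.any():
            continue                        # no coloured pixels nearby yet
        yy, xx = np.nonzero(patch_known)
        d = np.hypot(yy + y0 - y, xx + x0 - x)
        w = np.exp(-(d / epsilon) ** 2)     # closer pixels weigh more
        img[y, x] = np.sum(w * img[y0:y1, x0:x1][patch_known]) / np.sum(w)
        known[y, x] = True
    return img
```

Calling inpaint_simple(image, mask) with the mask from step 2 fills the hole from the outside in, contour by contour, just as described above.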

This provides GIMP with an algorithm allowing it to automatically fill in the white region, as shown above.

5) Enjoy the result.
The bee (in the original image on the left) has been completely removed in the processed image on the right.
________________________________________________________________

Although the final image on the right was produced in approximately 30 seconds, it can still be improved upon, as there are clearly regions where the creases in the petals do not line up. Next time I will present a user guide to Martin's implementation of Thomas' work and show how this result can be improved.

Monday 2 June 2014

Inpainting: automatic mathematical photoshopping

Most of us take bad photos. I know I do. There is always a random guy in the background of my landscape shots, or part of my image is obscured by bars as I try to photograph animals in zoos. Moreover, I've plenty of old non-digital photos that have various scuffs and scratches on them.

Of course, Photoshop gives us the ability to cure all these ills by adding, removing and changing any detail we should wish. However, a well-photoshopped image takes time, patience and a lot of skill, things that few of us have nowadays.

Thankfully, due to the work of Dr Thomas Maerz and its implementation by Dr Martin Robinson (postdoctoral researchers in the Oxford Mathematical Institute), any novice image manipulator can now quickly and simply remove details from their images through a process known as "inpainting". The idea behind inpainting is to create a rigorous mathematical approach that will fill in a section of an image based on the information from the rest of the image. Examples of this process are shown below. No doubt you will think that they are too good to be true; however, I assure you that all the inpainted images were generated automatically through Thomas' incredible work. Not only is this work impressive theoretically, but Martin has also created a FREE GIMP plugin that you can use to produce your own incredible inpainted images.
Figure 1. The left image is a vandalised image that has data removed. The right image is the result of applying Thomas’ automatic inpainting theory to the vandalised image [1].
On 2nd February 2014 I sat down with Thomas and Martin to discuss the ideas behind inpainting. This week I introduce them and the problem of inpainting, and demonstrate the code's abilities.
________________________________________________________________
Thomas and Martin, could you give me a brief idea of your mathematical backgrounds?
Figure 2. Coffee capsules created by mathematicians, for mathematicians.
Martin: I did computer systems and engineering in Brisbane, where I specialised in generating 3D structures from 2D images. The techniques I worked on were actually going to be applied to the tourist industry, the idea being that you would be able to recreate a 3D representation of a beach or the inside of a church.

I then moved to Monash University in Melbourne and did my PhD in fluid mixing. This led me to work with Nestle on granular and fluid mixing, with the aim of understanding what processes happen inside their coffee capsules. I now work on implementing multiscale theories that couple probabilistic events with deterministic events.

Thomas: I studied mathematics and completed my Master's and PhD in Munich under the supervision of Prof. Dr Folkmar Bornemann. Both of my degrees focused on the application of numerical analysis to image analysis, and particularly inpainting. My Master's project involved implementing the original framework, and my PhD was more on the analysis of the model.

So you created the method… and then checked it worked?
Thomas: Exactly [grinning]. Now I’m working more on the “closest point method” [a way of solving differential equations on very general surfaces, see here for details].

Why would we want to use inpainting?
Martin: I think the two main uses would be removal and repair. Most people would immediately think they could use this technique to get rid of someone in their photos, but I would suggest it is better suited to removing specific features, such as a grating over an image. The other reason you may want to use it is to repair old photographs that have degraded.
Figure 3. Uncaging a parrot. The left image is the original. The middle image shows the area which is to be inpainted and the right image is the result [1].

Are there many other people working on inpainting?
Thomas: Oh yes, very many. There are people who use methods based on partial differential equations, though different groups use different differential equations. Other mathematicians use wavelet transforms, Fourier transforms and harmonic analysis. Engineers tend to use very different techniques; they primarily focus on discrete, pixel-based methods. Optimisation people, on the other hand, tend to think of an image as a graph.

If there are so many people working on inpainting why should we use your technique rather than any other?
Thomas: We compared our method with some of the other, more complex partial differential equation techniques and we found that our results were comparable in quality, whilst being much faster in terms of processing time.

Martin: The plugin was made to be free and as user-friendly as possible in order to get people to start using it.

Why make the plugin free?
Martin: I try to primarily use open source software in my research, which is useful when you are working in a university that often does not have the money to buy big commercial software packages. So it is nice to give something back to the community.

Thomas: It also encourages future development. The plugin is a great gateway to get people interested in my research.
________________________________________________________________

For those of you who are interested, the website with the plugin can be found here. Next time I will present a brief guide to the mathematics [1,2] that makes the code work. Below are a couple more examples demonstrating the power of inpainting.

Figure 4. Left: 70% of the image of a tram has been removed, so only the information on a square lattice is left. Right: inpainting recovers the picture beautifully.
You may not be impressed by Figure 4 as you can still make out the tram in the left image. However, take a look at Figure 5. Here, 56% of the image has been randomly destroyed, yet once again inpainting works like magic.

Figure 5. Left: an image with 56% of its pixels randomly destroyed. Right: the reconstructed image.
[1] Thomas' first paper on this work can be found here: Fast Image Inpainting Based on Coherence Transport
[2] Thomas' second paper on this work can be found here: Image Inpainting Based on Coherence Transport with Adapted Distance Functions