I found an interesting project on GitHub that uses deep learning to reconstruct faces from very-low-resolution images: david-gpu/srez (Image super-resolution through deep learning). Here is what the project's main GitHub page says about it:
Image super-resolution through deep learning.
This project uses deep learning to upscale 16×16 images by a 4x factor. The resulting 64×64 images display sharp features that are plausible based on the dataset that was used to train the neural net.
Here’s a random, non-cherry-picked example of what this network can do.
From left to right, the first column is the 16×16 input image, the second one is what you would get from a standard bicubic interpolation, the third is the output generated by the neural net, and on the right is the ground truth.
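To get a feel for the bicubic baseline in that second column, here is a minimal sketch of the upsampling step. It assumes SciPy and uses cubic spline interpolation (`order=3` in `scipy.ndimage.zoom`) as a close stand-in for bicubic; the random array stands in for a real 16×16 input image.

```python
import numpy as np
from scipy.ndimage import zoom

# Stand-in for a 16x16 low-resolution input (the project uses 16x16 face crops).
rng = np.random.default_rng(0)
low_res = rng.random((16, 16))

# Upscale by 4x with cubic spline interpolation -- the kind of smooth,
# detail-free baseline shown in the second column of the comparison.
baseline = zoom(low_res, 4, order=3)

print(baseline.shape)  # (64, 64)
```

The neural net's advantage is precisely that it can hallucinate plausible high-frequency detail that no interpolation scheme, bicubic included, can recover from the 16×16 input alone.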
As you can see, the network is able to produce a very plausible reconstruction of the original face. As the dataset is mainly composed of well-illuminated faces looking straight ahead, the reconstruction is poorer when the face is at an angle, poorly illuminated, or partially occluded by eyeglasses or hands. This particular example was produced after training the network for 3 hours on a GTX 1080 GPU, equivalent to 130,000 batches or about 10 epochs.
The results are pretty impressive, except for the low-angle profile and the glasses. The glasses problem could undoubtedly be fixed with a slightly larger range of training images (ones that included a large number of people wearing different styles of glasses). Generalizing to cover the low-angle photo would require a much larger training set, with a lot of different camera angles for a lot of different faces. There also seems to be some difficulty with skin tone—all the reconstructions are coming out more tanned than the originals. I suspect that for dark-skinned people the reconstruction will lighten their skin—moving everyone closer to average for the training set.
I suspect that images that aren’t so carefully framed and cropped would also cause problems, as would ones that have some form of blurring different from the down-sampling used to create these images. The results are impressive enough, though, that they almost make me want to go into research in deep learning.
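The degradation-mismatch worry above can be illustrated concretely. This is a sketch under assumptions (I haven't checked how srez actually builds its training pairs): training inputs are presumably made by plain downsampling of the ground truth, while a real low-quality photo might instead be blurred and then subsampled, producing a different 16×16 input than the one the network learned to invert.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

# Stand-in for a 64x64 ground-truth image.
rng = np.random.default_rng(1)
ground_truth = rng.random((64, 64))

# Assumed training-time degradation: direct 4x downsampling.
train_input = zoom(ground_truth, 0.25, order=3)

# A different real-world degradation: Gaussian blur, then subsample.
blurred = gaussian_filter(ground_truth, sigma=2.0)
other_input = blurred[::4, ::4]

# Both are 16x16, but they are different images -- so a network trained
# on the first degradation may generalize poorly to the second.
print(train_input.shape, other_input.shape)
print(float(np.abs(train_input - other_input).mean()))
```

The point is simply that "super-resolution" networks learn to invert one specific degradation; feed them an input produced by a different one and the learned prior no longer matches.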