While looking at OpenCV, I managed to find a working implementation of one of the papers I was interested in understanding. The page is in Japanese, but you can find the code, with some comments in English, here: http://opencv.jp/opencv2-x-samples/usage_of_sparsemat_2_superresolution
A small warning if you try to run it: it is very memory intensive. With a 1600×1200 image it used 10 GB of RAM on my system. It also crashes if your image's dimensions are not a multiple of the resolution enhancement factor.
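To avoid that crash, the inputs can be cropped so that both dimensions divide evenly by the enhancement factor before feeding them to the sample. A minimal sketch of such a pre-check (the function name and the pre-cropping step are my own, not part of the sample):

```python
def crop_to_multiple(width, height, factor):
    """Round (width, height) down to the nearest multiple of factor."""
    return (width - width % factor, height - height % factor)

# Example: a 1601x1199 input with a 4x enhancement factor
# would be cropped to 1600x1196 before running the sample.
print(crop_to_multiple(1601, 1199, 4))
```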
All tests are done with 16 low-resolution images and a 4× increase in resolution. The image below shows the result for the best case, where the images are positioned evenly. The left image is one of the 16 low-resolution (LR) images, the right is the original, and the middle is the Super Resolution result after 180 iterations:
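For reference, this is how I understand the evenly-positioned best case can be constructed from an original image: plain decimation at every sub-pixel offset, ignoring blur and noise. A toy sketch (not the sample's actual code) with a 4× factor, which yields exactly 16 LR images:

```python
import numpy as np

def make_lr_images(hr, factor):
    """Generate factor**2 LR images by decimating hr at every (dy, dx)
    sub-pixel offset. Evenly spaced offsets are the ideal case for
    super resolution, since together they cover every HR pixel once."""
    h, w = hr.shape
    return [hr[dy:h:factor, dx:w:factor]
            for dy in range(factor) for dx in range(factor)]

hr = np.arange(64, dtype=float).reshape(8, 8)  # stand-in "original" image
lrs = make_lr_images(hr, 4)                    # 16 LR images, each 2x2
print(len(lrs), lrs[0].shape)
```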
There are some ringing artifacts around the high-contrast edges, but notice how it manages to slightly bring out the lines in the eye, even though that area looks completely flat in the LR image.
Below is the same test, but with the input images degraded by noise and errors. While it does lose a little bit of detail, the results are still fairly good, with less noise than the input images.
The last test uses random sub-pixel displacements instead of optimal ones. The optimal case is shown on the left for comparison. It is clear that the method loses its effectiveness as the image becomes more blocky.
My plan is to use this implementation as an aid to understanding the parts of the article I don't fully understand. I would like to try it out on DVD anime sources, but this method (or at least this implementation) just wouldn't work with 150+ images. You can wait for a slow algorithm to terminate, but memory is more of a hard limit. This method does allow separate blur/scaling matrices for each LR image, though, so you could probably reduce the memory usage by keeping them equal and storing only a single copy.
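A rough back-of-envelope estimate of what sharing the matrix could save, assuming the operator is stored as a sparse matrix with one row per LR pixel, a fixed number of non-zeros per row (the blur kernel support), and 16 bytes per stored entry. All of these numbers are illustrative guesses, not measurements of the actual sample:

```python
def operator_bytes(lr_w, lr_h, nnz_per_row, bytes_per_entry=16):
    """Hypothetical size of one sparse blur/scaling operator:
    one row per LR pixel, nnz_per_row stored entries per row."""
    return lr_w * lr_h * nnz_per_row * bytes_per_entry

n_images = 16
per_image = operator_bytes(1600, 1200, nnz_per_row=16)

separate = n_images * per_image  # one matrix per LR image
shared = per_image               # a single matrix reused for all images

print(f"separate: {separate / 2**30:.1f} GiB, shared: {shared / 2**30:.2f} GiB")
```

Under these assumptions the per-image matrices dominate the footprint, so sharing one matrix would cut the operator storage by a factor equal to the number of LR images.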