In response to, or perhaps in coincidence with, the previous post on achieving greater resolution via mosaics with the Rhinocam, the topic of resolution came up at lunch with photographer colleague and friend Lee Peterson. With much talk of the soon-to-be-released 5Ds and the Olympus technique of resolution enhancement via “pixel shifting,” the subject of multi-shot, pixel-shifting medium format backs arose and, truth to tell, the images from those digital backs were incredible. Taking four shots, shifting the sensor one pixel each time to obviate the Bayer filter and get uninterpolated results, they vastly out-shot similarly sized single-shot sensors. And then, to seal the deal for me, in came an email question about an old technique that used to be called “Processing for Super Resolution.”
This technique is far from new, though newer functionality in Photoshop has made it easier than before: smart objects eliminate most of the math once required to figure the opacity percentages for averaging layered frames. To be sure, it sounds pretty good: take any camera and double (some claim triple or even quadruple) its native resolution in post production… well, actually, partly in post production and partly in shooting technique.
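For the curious, the old-school opacity math was simple in principle: stack N frames as layers and set the opacity of layer i, counting up from the bottom, to 1/i, so that the visible result is a plain average of all the frames. A quick sketch of those percentages (plain Python, just to show the arithmetic behind one common recipe):

```python
# Opacity for each layer so a straight layer stack averages to the mean:
# layer i (1-based, counting up from the bottom) gets opacity 1/i.
n_frames = 20
opacities = [round(100 / i) for i in range(1, n_frames + 1)]
print(opacities[:5])  # [100, 50, 33, 25, 20]
```

With 20 frames the later layers need fractional percentages Photoshop cannot set exactly, which is part of why the smart object “Mean” stack mode is such an improvement.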
The concept is straightforward in theory. Take a large number of handheld frames of the same subject, allowing the micro-spasms in the human body to naturally move the camera very slightly from shot to shot. The recommended number, depending on the author/guru du jour, was from 10 to 30, or even more if your computer could handle that many layers (not a problem with 6 megapixel cameras back when I first read this, but a memory hog if your camera has upwards of 20 megapixels). Bear in mind that the pixel-shifting backs did it with 4 shots, but they were servo controlled to move precisely one pixel in each direction. I confess, I cannot control the camera movement by one sensor pixel… sorry. Besides, they were attempting to capture ALL of the color info (RGGB) for each individual pixel on the camera sensor.
Then, when you have the files captured, the theory goes, use Photoshop to stack them, re-align them, and average the frames, thereby creating your “super resolution” file.
But did it work?
You just have to know, given my own love of high resolution and large prints, that I would have tried it. And there is a message buried in the fact that I chose to use spherical panoramic heads, the digi-view rig, and now the Rhinocam instead of it. But I never wrote about the procedure, and my old field trial shots were with cameras that even students would laugh at today in terms of resolution and which, at their fantasy-laden best, could not match the real resolution of, for example, the 5D MkII. Perhaps it is time to revisit it and let you all see some modern results and make up your own minds.
My last post about the Rhinocam used the aloe in my backyard as a sample, not because it is a stunning composition but solely because all of the inherent detail made it a good candidate for resolution testing. So I will use the same target for this test, and then we can compare that to the multi-row pano I shot in the last post. The lighting is a lot flatter than last weekend when I shot using the Rhinocam, but I think the rendering of detail will be comparatively revealing. This time I used a Canon 85mm f/1.8, again on the Canon 5D MkII. I did this procedure with Photoshop CC 2014 on a six-year-old Toshiba laptop running Windows 7. If it works here it will work anywhere…
Here are the post processing steps to perform this procedure:
- Using Bridge load all 20 files into Photoshop as layers
- Select all layers
- Perform Edit -> Auto-Align Layers
- Crop away the overlap on the edges
- Make sure all layers are still selected
- Resize to 200% based on Pixel Dimensions using the “Nearest Neighbor” algorithm
- Perform Layer -> Smart Objects -> Convert to Smart Object
- Perform Layer -> Smart Objects -> Stack Mode -> Mean
- Flatten the image
- Do any desired editing
- Resize as needed for output
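For those who like to see the arithmetic behind those steps, here is a minimal sketch in Python using only NumPy. It assumes the frames are already aligned (Photoshop’s Auto-Align step is far more involved than anything shown here) and simply mirrors the nearest-neighbor 200% resize followed by the “Mean” stack mode; the function name and the toy noisy-frame data are my own inventions for illustration, not anything from Photoshop.

```python
import numpy as np

def super_resolution_stack(frames, scale=2):
    """Nearest-neighbor upscale each (already aligned) frame,
    then average the stack -- Photoshop's 'Mean' stack mode."""
    upscaled = [np.repeat(np.repeat(f, scale, axis=0), scale, axis=1)
                for f in frames]          # 200% resize, Nearest Neighbor
    return np.mean(upscaled, axis=0)      # Stack Mode -> Mean

# Toy stand-in for 20 handheld frames: one "scene" plus per-frame noise
rng = np.random.default_rng(7)
scene = rng.random((60, 60))
frames = [scene + rng.normal(0.0, 0.05, scene.shape) for _ in range(20)]

result = super_resolution_stack(frames)
print(result.shape)  # (120, 120)
```

Even in this toy version, the averaged stack is visibly cleaner than any single frame, which is the same mechanism that suppresses noise and wind movement in the real procedure.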
Make no mistake. This sounds simple and is… but it is an incredibly memory hungry procedure that takes LOTS of RAM and/or lots of time. The stacked file was huge, 10.2 gigabytes, so it really can take a while to do any of these complex processing steps. It took several hours to render this one final image.
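That 10 gigabyte figure is plausible from simple arithmetic. Assuming 5D MkII pixel dimensions, three 16-bit channels, the 200% resize quadrupling the pixel count, and 20 layers (assumed numbers, not measured from the actual file), a rough estimate lands right in the same neighborhood:

```python
# Back-of-envelope stacked-file size (assumed numbers, not measured):
width, height = 5616, 3744        # 5D MkII pixel dimensions
channels, bytes_per = 3, 2        # RGB at 16 bits per channel
per_frame = width * height * channels * bytes_per   # ~126 MB each
per_layer = per_frame * 4         # 200% resize -> 4x the pixels
stack = per_layer * 20            # 20 layers
print(round(stack / 1e9, 1))      # ~10.1 GB
```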
Here are two images: they have no post processing other than the super resolution process. Both are large files so they can be clicked on one or two times to see them full screen and compare the resulting detail and resolution.
The top one is a single capture file with no post processing, brought in directly from the RAW file and sized for the blog with no other editing.
And now here is the one processed for “super resolution” using the steps outlined above then sized for the blog with no other editing.
I would also recommend that you check out the sample from the last post, shot with the Rhinocam. They are all placed here at pretty close to the same file size, so enlargements should be fair comparisons. These new shots have greater depth of field, but that is to be expected: at f8, the 85mm lens on the Canon naturally renders more depth of field than the 150mm lens I used on the Rhinocam at the same aperture.
Well, I think it is a good news/bad news deal. The good news is that it certainly does have an effect WHEN THE IMAGE IS ENLARGED. This result is much better than my old tests from around 2002. Using the “Mean” stack mode for smart objects also helps eliminate some of the normal problems with multi-shot images, such as movement of objects in the wind. But for small prints or standard computer display the difference is really minimal. Click on the files to see them enlarged and then perhaps you can start to see the effect.
Especially note the little details and textures on the leaf surfaces. Edge detail is pretty good in both shots, but surface detail does get enhanced in the process. Also remember that for computer monitor display the resolution here is only 100 ppi; the difference shows up better at a printing resolution of 300 ppi.
Is it, however, 4x the resolution, as some claim? I do not think so. I’m not sure it is even 2x the resolution, at least in files for computer display. But it is better.
However, that small result does come at a large price. If you do not have a computer with a large amount of RAM, this can take forever and lock up the machine due to insufficient memory. Whether or not it is worth the effort is up to the photographer and the final output needs for a shot. I do not intend to sell my spherical panoramic heads, digi-view, or Rhinocam; they are still the best tools in the right circumstances.
And though there is a bit more complexity in the capture phase for all of the mosaic approaches, they are vastly faster to process on normal computers. For example, I assembled the Rhinocam frames in the previous post in about 10 minutes on my laptop. Using the same laptop (I really did try to keep all variables as identical as I could make them), the super resolution stack took over 3 HOURS of rendering time. And I think the Rhinocam shot actually has greater detail.
Bottom line, the results are good if – IF – you have an image with lots of fine detail in it AND you want to enlarge it considerably. But wow, is it ever a strain on computer resources. I did not test 10 frames instead of 20 because in my old tests the difference between the two was enormous, but it might be worth trying to help cut down some of the time and memory drag. I did try using just four files, like a digital back, but the results were, shall we say tactfully, not worth writing home about… It was fairly quick but utterly pointless.
As the rodeo riders say, “Ya pays yer money and takes yer ride…”. And I still want to see how the 5Ds performs…