With new innovations in camera technology and design, some genuinely useful features are beginning to appear. Notable examples are focus peaking, in-camera focus bracketing, and the one I’d like to discuss today: pixel shifting.
Right now, only a handful of cameras have a ‘Pixel Shift’ feature built in. I’ve experimented with Pixel Shift on Sony’s a7R III, but the Pentax K-1, Panasonic Lumix DC-G9, and Olympus OM-D E-M1 Mark II are other popular cameras that also offer it.
When a camera captures a normal exposure, every single pixel on the sensor records red, green, or blue exclusively. Each pixel then has to approximate values for the remaining two colors based on what adjacent pixels record. For instance, suppose a given pixel on the sensor is filtered to capture red. That pixel accurately records red, but it has to estimate green and blue from what the surrounding pixels recorded. Complicated, but that’s the gist.
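To make the interpolation idea concrete, here is a minimal sketch in Python. It assumes a standard RGGB Bayer layout and uses a crude nearest-neighbor fill-in; real cameras use far more sophisticated demosaicing, so treat this purely as an illustration of "two of the three channels at each pixel are estimates".

```python
import numpy as np

# A toy 4x4 Bayer mosaic (RGGB pattern): each sensor pixel records
# only one channel, and the other two must be interpolated.
bayer = np.array([
    [10, 20, 12, 22],
    [30, 40, 32, 42],
    [11, 21, 13, 23],
    [31, 41, 33, 43],
], dtype=float)

def channel_mask(shape, channel):
    """Boolean mask of where a channel is sampled in an RGGB layout."""
    rows, cols = np.indices(shape)
    if channel == "R":   # red sits at even row, even column
        return (rows % 2 == 0) & (cols % 2 == 0)
    if channel == "B":   # blue sits at odd row, odd column
        return (rows % 2 == 1) & (cols % 2 == 1)
    # green fills the remaining two positions of each 2x2 block
    return (rows + cols) % 2 == 1

def demosaic_nearest(bayer):
    """Fill each missing channel value from the nearest sampled
    neighbor -- a crude stand-in for real interpolation."""
    h, w = bayer.shape
    out = np.zeros((h, w, 3))
    for i, ch in enumerate("RGB"):
        ys, xs = np.nonzero(channel_mask((h, w), ch))
        for r in range(h):
            for c in range(w):
                # pick the sampled pixel of this channel closest to (r, c)
                k = np.argmin((ys - r) ** 2 + (xs - c) ** 2)
                out[r, c, i] = bayer[ys[k], xs[k]]
    return out

rgb = demosaic_nearest(bayer)
print(rgb.shape)  # (4, 4, 3): every pixel now has all three channels,
                  # but two of the three at each site are only estimates
```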
With a Pixel Shift exposure, the camera sensor shifts slightly after each shot is taken, so every single pixel has a chance to accurately record red, green, and blue. Generally, this is accomplished in four exposures, which are then stacked and merged using software; each camera system has its own respective program to do the merging. The result is an almost super-resolution image: although the pixel count doesn’t actually change, color resolution is much improved and images are noticeably sharper. It produces pretty stunning results.
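The shift-and-merge idea can be sketched as follows. This simulates four exposures of an RGGB sensor, each offset by one pixel, and merges them so every pixel ends up with directly sampled values for all three channels. The layout and merge logic here are illustrative assumptions, not any vendor’s actual algorithm.

```python
import numpy as np

def bayer_sample(scene_rgb, row_off, col_off):
    """Simulate one exposure of an RGGB sensor shifted by (row_off, col_off).
    Returns the single recorded value per pixel and which channel it was."""
    h, w, _ = scene_rgb.shape
    rows, cols = np.indices((h, w))
    r, c = (rows + row_off) % 2, (cols + col_off) % 2
    # RGGB 2x2 block: (0,0)=R, (0,1)=G, (1,0)=G, (1,1)=B
    channel = np.where((r == 0) & (c == 0), 0,
              np.where((r == 1) & (c == 1), 2, 1))
    sample = np.take_along_axis(scene_rgb, channel[..., None], axis=2)[..., 0]
    return sample, channel

def merge_pixel_shift(scene_rgb):
    """Merge four one-pixel-shifted exposures into a full-RGB image."""
    h, w, _ = scene_rgb.shape
    out = np.zeros((h, w, 3))
    counts = np.zeros((h, w, 3))
    # the four shifts cover every (pixel, channel) combination exactly
    for row_off, col_off in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        sample, channel = bayer_sample(scene_rgb, row_off, col_off)
        for ch in range(3):
            mask = channel == ch
            out[..., ch][mask] += sample[mask]
            counts[..., ch][mask] += 1
    # green is sampled twice at every site, so average the contributions
    return out / counts

scene = np.random.rand(4, 4, 3)   # a toy "true" scene, one RGB triple per pixel
merged = merge_pixel_shift(scene)
print(np.allclose(merged, scene))  # True: every channel was measured, not guessed
```

In this idealized, noise-free, motion-free setup the merge recovers the scene exactly; in practice the gain is that no channel has to be interpolated, which is why real Pixel Shift files show better color resolution and sharpness.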
Here are some diagrams illustrating how the technology works; these images are courtesy of Sony.
Results
Below are some side-by-side comparisons of a normal RAW capture and the equivalent Pixel Shift capture. As you can see, the results are very impressive: there is a definite improvement in color resolution, and the file is tack sharp. That’s why I’m really excited about this feature and hopeful to see it released in more camera bodies. You won’t be using this for wildlife photography, but it can improve your image quality drastically if you’re shooting static landscapes. The images below are from comparison tests done by DPReview and PetaPixel.
Without Pixel Shift
Limitations
The greatest limitation of Pixel Shift is motion. The camera needs to be absolutely still while it records the required exposures, and because the sensor needs time to shift and stabilize, the fastest delay between exposures is usually around one second. That means if you have blowing trees, moving water, moving clouds, or animals/people in motion, you’re going to have a hard time using this feature. Pixel Shift is best for static scenes where movement is limited, which makes it a great feature to try on landscapes. I recommend making sure your camera is mounted on a sturdy tripod to eliminate any camera movement. That will make your life easier while merging the files in post-processing as well.
Another shortcoming is that no widely used post-processing software can merge the Pixel Shift exposures. For instance, Sony’s Pixel Shift RAW files can only be merged using Sony’s own Imaging Edge software; Lightroom and Photoshop cannot merge the actual Pixel Shift RAW files (as of right now). That’s disappointing, but the feature is relatively new, so hopefully we will get an update to the Adobe Creative Suite that addresses Pixel Shift files. Currently, photographers are able to auto-align and merge HDR files, focus stacks, and panorama files, so I’m confident the developers at Adobe have something in the works.
Also, if you’re still confused about how Pixel Shift actually works, Sony has a great video on the subject below.
Conclusion
Just to review, Pixel Shift is a feature that allows every single pixel on the sensor to record accurate values for the R, G, and B color channels. This is achieved by the sensor shifting during a string of exposures. The end result is four or more exposures that you must blend together in post-processing using your camera maker’s respective software. After the merge, a single file remains with drastically better image quality than a single normal RAW capture of the same scene. Remember, though, this feature cannot be used if there is motion from exposure to exposure; it’s best suited for static scenes (landscapes, architecture, cityscapes). It’s also a very intuitive feature: you basically turn it on and your camera takes the necessary exposures. The camera does most of the work, although there is obviously some added work in post. If you haven’t already, try it out and report back. The results look very promising, and it’s something I’d like to experiment with more.
Download our Trip Catalog for detailed information on our many destinations for photography tours, workshops, and safaris.