Pixel Shift – What Is It and Why You Should Try It

With new innovations in camera technology and design, some really useful features are starting to appear. Notable examples are focus peaking, in-camera focus bracketing, and the one I’d like to discuss today, pixel shifting.

Right now, only a handful of cameras have the ‘Pixel Shift’ feature built-in. I’ve experimented with the Pixel Shift on Sony’s a7R III, but the Pentax K-1, Panasonic Lumix DC-G9, and Olympus OM-D E-M1 Mark II are other popular cameras that also have the feature.

When a camera captures a normal exposure, every single pixel on the sensor records only red, green, or blue, because each pixel sits behind a single color filter. The camera then has to approximate the values for the remaining two colors at each pixel based on what adjacent pixels record. For instance, a pixel filtered for red captures red accurately, but its green and blue values are estimated from what the surrounding pixels recorded. Complicated, but that’s the gist.
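
If you’re curious how that interpolation might look in code, here is a rough Python sketch of my own. It simulates a single capture in which each pixel keeps only one color (an assumed RGGB layout) and then fills in the missing channels by averaging nearby pixels of the right color. Real demosaicing algorithms are far more sophisticated; the function names and the simple 3x3 averaging are purely for illustration.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate a single exposure: keep only one color channel per pixel
    (an assumed RGGB layout)."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red photosites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green photosites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green photosites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue photosites
    return mosaic

def naive_demosaic(mosaic):
    """Estimate the two missing colors at every pixel by averaging the
    nearest pixels that actually measured that color (a crude stand-in for
    the interpolation described above; edges simply wrap around)."""
    h, w = mosaic.shape
    masks = [np.zeros((h, w)) for _ in range(3)]
    masks[0][0::2, 0::2] = 1   # where red was measured
    masks[1][0::2, 1::2] = 1   # where green was measured...
    masks[1][1::2, 0::2] = 1   # ...green occupies two spots per 2x2 block
    masks[2][1::2, 1::2] = 1   # where blue was measured
    out = np.zeros((h, w, 3))
    for c, mask in enumerate(masks):
        vals = np.zeros((h, w))
        hits = np.zeros((h, w))
        for dy in (-1, 0, 1):          # sum each pixel's 3x3 neighborhood,
            for dx in (-1, 0, 1):      # counting only measured photosites
                vals += np.roll(np.roll(mosaic * mask, dy, 0), dx, 1)
                hits += np.roll(np.roll(mask, dy, 0), dx, 1)
        estimate = vals / np.maximum(hits, 1)
        # keep the measured value where this color was recorded directly
        out[..., c] = np.where(mask == 1, mosaic, estimate)
    return out
```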

With a Pixel Shift exposure, the camera sensor shifts slightly after each shot is taken, so every single pixel has a chance to accurately record red, green, and blue. Generally, this is accomplished in four exposures, which are then stacked and merged together using software; each camera system has its own respective program for the merging. The result is an almost super-resolution image: although the pixel count doesn’t actually change, color resolution is much improved and images are noticeably sharper. It produces pretty stunning results.
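
Here is a companion sketch of the merging step, again a simplified illustration of my own (assuming an RGGB layout and four one-pixel shifts), not the actual algorithm in Sony’s Imaging Edge or any other vendor’s software. The point it demonstrates is that once the sensor has moved a red, green, and blue filter over every scene location, each pixel’s color can be read directly instead of interpolated.

```python
import numpy as np

def merge_pixel_shift(exposures, shifts):
    """Merge four Bayer mosaics (H x W arrays, assumed RGGB layout).
    shifts: the (dy, dx) sensor offset for each exposure, e.g.
            [(0, 0), (0, 1), (1, 1), (1, 0)]."""
    h, w = exposures[0].shape
    merged = np.zeros((h, w, 3))
    counts = np.zeros((h, w, 3))
    for mosaic, (dy, dx) in zip(exposures, shifts):
        # re-align this frame so all exposures sit on the same scene grid
        aligned = np.roll(np.roll(mosaic, -dy, 0), -dx, 1)
        # which filter color covered each scene pixel in this frame?
        rows = (np.arange(h)[:, None] + dy) % 2
        cols = (np.arange(w)[None, :] + dx) % 2
        channel = np.where((rows == 0) & (cols == 0), 0,        # red
                  np.where((rows == 1) & (cols == 1), 2, 1))    # blue, else green
        for c in range(3):
            measured = channel == c
            merged[..., c] += np.where(measured, aligned, 0)
            counts[..., c] += measured
    # green is measured twice per pixel across the four frames, so average
    return merged / np.maximum(counts, 1)
```

With shifts of (0, 0), (0, 1), (1, 1), and (1, 0), every pixel ends up with one red, one blue, and two green measurements, which mirrors the four-exposure sequence described above.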

Here are some diagrams illustrating how the technology works; these images are courtesy of Sony.

Results

Below are some side-by-side comparisons of a normal RAW capture and the equivalent Pixel Shift capture. As you can see, the results are very impressive. There is a definite improvement in color resolution, and the file is tack sharp. That’s why I’m really excited about this feature, and I’m hopeful we’ll see it released in more camera bodies. You won’t be using this for wildlife photography, but it can improve your image quality drastically if you’re shooting static landscapes. The images below are from comparison tests done by DPReview and PetaPixel.


With Pixel Shift


Without Pixel Shift

Limitations

The greatest limitation of Pixel Shift is motion. The camera needs to be absolutely still while it records the required exposures. Because the sensor needs time to shift and stabilize, the fastest delay between exposures is usually around one second. That means if you have blowing trees, moving water, moving clouds, or animals/people in motion, you’re going to have a hard time using this feature. Pixel Shift is best for static scenes where movement is limited, which makes it a great feature to try on landscapes. I recommend mounting your camera on a sturdy tripod to eliminate any camera movement; that will also make your life easier when merging the files in post-processing.

Another shortcoming is that no widely used post-processing software can merge the Pixel Shift exposures. For instance, Sony’s Pixel Shift RAW files can only be merged using Sony’s own Imaging Edge software; Lightroom and Photoshop cannot merge the actual Pixel Shift RAW files (as of right now). That’s disappointing, but the feature is relatively new, so hopefully we will get an update to the Adobe Creative Suite that addresses Pixel Shift files. Adobe’s software can already auto-align and merge HDR brackets, focus stacks, and panoramas, so I’m confident the developers at Adobe have something in the works.

Also, if you’re still confused about how Pixel Shift actually works, Sony has a great video on the subject.

Conclusion

Just to review, Pixel Shift is a feature that allows every single pixel on the sensor to record accurate values for the R, G, and B color channels. This is achieved by the sensor shifting during a string of exposures. The end result is four or more exposures that you must blend together in post-processing using the software for your camera system. After the merge, a single file remains with drastically better image quality than a single normal RAW capture of the same scene. Remember, though, this feature cannot be used if there is motion from exposure to exposure; it’s best suited for static scenes (landscapes, architecture, cityscapes). It’s also a very intuitive feature to use: you basically turn it on and your camera takes the necessary exposures. The camera does most of the work, although there is obviously some added work in post. If you haven’t already, try it out and report back. The results look very promising, and it’s something that I’d like to experiment with more.

Matt Meisenheimer is a photographer based in Wisconsin. His artistry revolves around finding unique compositions and exploring locations that few have seen. He strives to capture those brief moments of dramatic light and weather, which make our grand landscapes so special. Matt loves the process of photography – from planning trips and scouting locations, taking the shot in-field, to post-processing the final image.

Matt is an active adventurer and wildlife enthusiast as well. He graduated with a degree in wildlife ecology and worked in Denali National Park and Mount Rainier National Park as a biologist. He also spent 6 months working in the deserts of Namibia before finding his path in photography. Matt’s passion for the wilderness has taken him to many beautiful places around the world.

As a former university teaching assistant, Matt is passionate about instruction. It is his goal to give his students the technical and creative knowledge they need to achieve their own photographic vision. He truly enjoys working with photographers on a personal level and helping them reach their goals.

You can see Matt’s work and portfolio on his webpage at www.meisphotography.com

1 reply

  1. Bob Panick says:

    When you first started talking about this and mentioned the Olympus E-M1ii I thought, cool he’ll talk about something other than SoCaNikon. Maybe he’ll talk about how Olympus does 8 shots, and actually increases resolution to 80 MP. How Olympus merges the images in camera, so you can use the editing software of your choice. How Olympus has logic in the merging software to address areas of movement to some extent; flowing water, and moving leaves are handled, granted it can’t do trees swaying back and forth in a strong wind.

    Unfortunately the answer is No, No, No, and No.
