Sunil D Shashidhara

Add color to your black and white images using deep learning

Updated: Dec 15, 2020

In this blog post, we will explain the painstaking manual process of adding color to black and white images, and then show you how deep learning can drastically speed up the process! We also include a number of before-and-after pictures so you can see the results for yourself.


Roger Godfrin, the only survivor of a massacre during which Nazi troops locked 643 citizens (including 500 women and children) inside a church and set fire to it on June 10, 1944, in Oradour-sur-Glane, France.


When you see a historical black and white photo, you may wonder what the real colors were and what the photographer saw when taking the picture. It is not easy to recover the exact colors of a B&W photo, but it is possible to colorize it based on experience and imagination. This can be done by investigating the likely colors of the objects in the photo, for example, the clothes, buildings, trees, and cars, and colorizing them manually using tools such as Photoshop. In recent times, deep learning has attempted to automate this process by training models on humongous amounts of data. In this blog, we'll look at how colorization of photos has been performed historically, and at the results of DeOldify, a state-of-the-art GAN-based deep learning architecture, in its attempt at doing the same.


Manual Colorization


Manual Colorization is a laborious process, painting layer by layer of color on an original black and white image, and making it realistic enough to make you think that you are looking at a true color photograph. Some photographs may contain more than 1,000 different layers of color to ensure that every single detail is accounted for. In order to obtain as many exact color samples as possible, artists also painstakingly repair any damage to the photograph and perform as much analysis as possible. The entire process is broken down into multiple stages:


You can skip to the deep learning way of doing it if that is what you're interested in!


Appraisal & Evaluation:

Before any work can begin, a thorough understanding of the level of damage that the photograph has sustained over time is essential. In the above portrait of Lincoln, taken in Washington DC in February 1865, the circular marks show the damage the portrait has sustained.


Restoration & Reconstruction:

Once the extent of the damage is known, an intensive digital clean and restoration process begins. In some cases, whole areas of the photograph may need compositing and reconstructing, so that the foundational black-and-white information underneath the later color is as close as possible to its original state when it was taken. Contrast adjustments are also made at this stage to even out any anomalies from the original process.


Blocking In Color:

Blocking in the color is a simultaneously meditative and overwhelming process, in which layers of color are digitally ‘painted’ onto the restored photograph. How many layers? In some instances, thousands. Human skin alone can require 20 layers of pink, yellow, green, red, and blue hues to simulate what a person actually looked like. The garish color scheme is intentional, allowing every detail to be picked up, so we can differentiate Lincoln’s jacket buttons from his waistcoat buttons and everything in between. Each area will contain several (sometimes dozens of) individual layers to simulate color gradation in light and shadow.


Historical Research:

Attention to detail matters. Running concurrently with the restoration process, the historical research can take a great deal of time to get right, and involves everything from studying satellite imagery and reference books in archives to getting in touch with subject experts, in order to gather as many accurate color references as possible. For more complex images, hundreds of references need to be obtained in order to produce the most authentic version of a photograph.


Matching References:

The real magic of a colorized photograph appears when the garishly blocked-in areas of color meet the historical research and are adjusted to match each color reference we can find. In the above picture, we can see Brooks Brothers’ suit and Lincoln’s pocket watch, while the photograph of the table and chair is from the same period. The color is then sourced directly from the reference material and adjusted to the lighting conditions.


To sum up, a lot of manual work is needed to transform black and white images to color, and this is where DeOldify comes in. The idea is not to entirely automate the colorization of photographs, but to serve as a point of reference that minimizes the time needed for manual work. As suggested in the video below, the manual process can take anywhere from a day to a month, depending on the amount of detail involved.


DeOldify


DeOldify is an open-source project that uses a GAN (generative adversarial network) based architecture, which contains two neural networks: a generator and a discriminator. The generator’s job is to predict colors from the black and white photo and generate a colorized photo. The discriminator’s job is then to judge whether the generated photo looks real enough compared to a real color photo.


If the discriminator can easily tell that a photo is generated, it means the generator is not good enough and needs more training. As the generator improves and the discriminator can no longer tell the difference, the discriminator is trained further so that it can tell the difference again.
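To make this back-and-forth concrete, here is a heavily simplified sketch of one adversarial training step for colorization, written in PyTorch. This is not DeOldify's actual training code (the project is built on fastai and uses a more elaborate schedule); the tiny generator, the discriminator, and the extra L1 loss term are illustrative assumptions only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative stand-ins: DeOldify's real generator is a UNET and its
# discriminator is much deeper; these tiny nets only show the training logic.
generator = nn.Sequential(            # grayscale (1 channel) -> color (3 channels)
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
)
discriminator = nn.Sequential(        # color image -> real/fake score map
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1),
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(gray, color):
    """One adversarial step. gray: Bx1xHxW input, color: Bx3xHxW real photo in [-1, 1]."""
    # Discriminator: learn to score real color photos high and generated ones low.
    fake = generator(gray).detach()   # detach so only the discriminator is updated here
    d_real, d_fake = discriminator(color), discriminator(fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: produce colorizations the discriminator accepts as real,
    # plus an L1 term that keeps the predicted colors close to the ground truth.
    fake = generator(gray)
    d_fake = discriminator(fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + F.l1_loss(fake, color)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

Each call to training_step alternates the two updates described above: the discriminator improves at spotting fakes, which in turn forces the generator to produce more convincing colors.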


DeOldify uses a variation of GAN training that solves some key problems found in previous DeOldify models. The new model eliminates the glitches and artifacts seen in the older models, and it's incredibly effective.


For the generator, DeOldify uses a UNET to generate color photos from black and white ones. UNET is a deep learning architecture developed by Olaf Ronneberger et al. that is capable of performing semantic as well as instance segmentation. It was initially developed for biomedical image segmentation. You can check out another blog of ours where we use UNET to segment objects from an image.

UNET architecture
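The defining feature of a UNET is its encoder-decoder structure with skip connections: feature maps from the downsampling path are concatenated onto the upsampling path, so fine spatial detail from the black and white input survives into the colorized output. The toy PyTorch module below shows that structure in miniature; DeOldify's actual generator is far deeper and built on pretrained backbones, so treat this purely as an illustrative sketch.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """A miniature UNET: one downsampling step, one upsampling step,
    with a skip connection carrying encoder features to the decoder."""

    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        # The decoder sees upsampled features concatenated with the skip (16 + 16 channels).
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.out = nn.Conv2d(16, 3, 1)            # predict 3 color channels

    def forward(self, x):
        skip = self.enc(x)                        # full-resolution encoder features
        mid = self.mid(self.down(skip))           # coarser, more abstract features
        up = self.up(mid)                         # back to full resolution
        fused = torch.cat([up, skip], dim=1)      # skip connection keeps fine detail
        return self.out(self.dec(fused))

# Usage: TinyUNet()(torch.randn(1, 1, 64, 64)) -> tensor of shape (1, 3, 64, 64)
```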


Results from DeOldify


Here's an attempt at colorizing a scene from Charlie Chaplin's iconic The Circus, a silent film from the 1920s. The colorization of the video isn't perfect, and we can observe that the color gradients are not stable and consistent as time progresses.

Original Clip


Colorized Clip
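If you would like to try DeOldify on your own images, the GitHub repository and Colab notebook linked in the references expose a small Python interface. The sketch below is based on the project's example notebooks; names such as get_image_colorizer, plot_transformed_image, and render_factor are taken from those notebooks and may differ between versions, and the image path is just a placeholder, so treat this as a rough guide rather than a definitive recipe.

```python
# Minimal sketch of colorizing a single photo with DeOldify, following the
# project's example notebooks (exact names may vary between versions).
from deoldify import device
from deoldify.device_id import DeviceId

device.set(device=DeviceId.GPU0)  # pick a GPU; the notebooks fall back to CPU if needed

from deoldify.visualize import get_image_colorizer

# artistic=True selects the more vibrant of the two pretrained generators.
colorizer = get_image_colorizer(artistic=True)

# render_factor controls the resolution the model colorizes at: higher values
# usually give more consistent color at the cost of memory and time.
colorizer.plot_transformed_image('my_black_and_white_photo.jpg',  # placeholder path
                                 render_factor=35)
```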



Below are attempts at colorizing some iconic moments from the past. As we can observe, DeOldify works better on individual photos than on videos.

Pt. Jawaharlal Nehru salutes the Indian national flag after hoisting it on August 15, 1947



American sailors stand amid wrecked planes at the Ford Island seaplane base, watching as the USS Shaw explodes in the center background during the Japanese raid on Pearl Harbor, Hawaii on December 7, 1941.



In celebration of Japan's surrender, a U.S. Navy sailor kisses a woman during festivities in New York City on August 14, 1945.



The atomic bombing of Nagasaki, Japan by the U.S. on August 9, 1945.



A celebration of Germany's surrender takes place on Paris' Champs Elysees, as seen from the top of the Arc de Triomphe, on May 8, 1945.



View of Mysore from the Jagan Mohan Palace, Circa 1890



Staff at the Superintendent's Bungalow, Mysore Mines, circa 1894



Antique photo of the Maharaja of Mysore and Lokendra Singh, India


References


1. References for photos - old Mysore, WW2, and Indian independence

2. DeOldify GitHub repository - here

3. Colab notebook for DeOldify can be found here

4. Dynamichrome - a firm which works on manual colorization
