After the appearance of image colorization in the literature and the subsequent development of colorization techniques, it became necessary to search for new applications of this field beyond coloring gray images. This thesis is a research and implementation study of various applications that can exploit colorization techniques. Three colorization applications are proposed in this thesis: an Automatic Movie Colorization System, a Color Image Encoding System Using HSI Space Embedding, and a Color Image Encoding System Using Morphological Decolorization. The first application of image colorization that attracted researchers in this field is old movie colorizing. This thesis presents a new system that colors old movies automatically. The proposed system colors the film shot by shot instead of frame by frame, as is common in this area. This is done by splitting the film into shots and coloring the first frame of each shot (the key frame). After that, the motion vectors between the frames of each shot are generated and used to transfer the colors from the key frame to the following frames in the shot.
Color image encoding has become an important application of image colorization. The idea is to remove the colors from color images at the sender side, while retaining enough information about them to enable recolorization at the receiver side. The motivation behind this methodology is to exploit the smaller size of gray images. At the receiver side, the colors are restored and the images are recolored. This methodology of encoding is called image decolorization. The thesis presents two different approaches to color image decolorization: color embedding and automatic color seed selection. A new system to compress the color channels of the Hue, Saturation and Intensity (HSI) color model is proposed. The encoded chromatic channels are hidden inside the lighting channel using the Least Significant Bit (LSB) method. This is done by converting the Hue channel into objects, which are then encoded by an object compression method. For the Saturation channel, two compression methods are proposed: the Minimum Color Difference (MCD) and the Y (Luma) Intensity Difference (YID). The third proposed system is a new automatic color seed selection method based on morphology. The seeds are extracted from the inner boundaries of image objects and hidden in the luminance channel using the LSB method.
Table of Contents:
Dedication
Acknowledgments
Abstract
Table of Contents
List of Figures
List of Tables
List of Abbreviations
List of Symbols
List of Publications
List of Citations
Chapter One: Introduction
1.1 Digital Image Fundamentals
1.2 Coloring Problem
1.3 Colorization Techniques
1.4 Decolorization
1.5 Research Objectives
1.6 Thesis Motivations
1.7 Thesis Organization
Chapter Two: Literature Survey
2.1 Colorization Techniques
2.1.1 Transformational Coloring
2.1.2 Image Matching / Coloring by Reference
- Manual Selection
- Intelligent Selection
- Fusion Based
2.1.3 User Selection / Colorization by Seeds
2.2 Colorization Application
2.2.1 Movies Colorization
2.2.2 Medical Images Colorization
2.2.3 Color Image Compression
2.3 Decolorization Techniques
2.3.1 Automatic Seeds Selection
2.3.2 Color Embedding Decolorization
Chapter Three: Movie Colorization System
3.1 Introduction
3.2 Proposed Movie Colorization System
3.2.1 Shot Cut Detection Subsystem
3.2.2 Frame Colorization Subsystem
3.2.3 Motion Detection Subsystem
- Three Step Method (TSM)
3.2.4 Shot Colorization Subsystem
3.3 Results and Discussion
3.3.1 Coloring Quality
3.4 Market Research
3.4.1 Processing Time
3.4.2 Market Need
3.4.3 Market Demand
3.4.4 Competing Products
Chapter Four: Color Embedding For HSI Model
4.1 Introduction
4.2 The proposed Color Encoding System
4.2.1 Hue Proposed Encoder System
- Proposed Color Correction Stage
- Segmentation Stage
- Object Compression Stage
4.2.2 Saturation Encoding Technique
- The Minimum Color Difference (MCD)
- Y (Luma) Intensity Difference (YID)
4.2.3 Intensity Encoding
4.3 Results and Discussion
4.3.1 Other Quality Measures
- Objective Measures
- Subjective Measures
4.3.2 More System Results
4.4 Color Embedding
4.4.1 Experiment (1) Using MCD
4.4.2 Experiment (2) Using YID
Chapter Five: Morphological Decolorization System
5.1 Introduction
5.2 Decolorization Using Morphology
5.3 System Quality Assessment
5.4 Results and Discussion
5.4.1 Seeds Selection Evaluation
5.4.2 The System Compression Professionalism
5.4.3 Comparison With JPEG/JPEG2000
5.5 Seeds Hiding
Chapter Six: Conclusion and Future Work
6.1 Research Conclusion
6.2 Future Works
6.2.1 Movies Colorization System Future Work
6.2.2 Color Embedding For HSI Model Future Work
6.2.3 Morphological Decolorization System Future Work
Appendix A: Color Models
A.1 Color Fundamentals
A.2 Color Models
A.2.1 RGB
A.2.2 CMY/CMYK
A.2.3 HSI/HLS
A.2.4 HSV/HSB
A.2.5 YIQ
A.2.6 YUV
A.2.7 CIE Lab/lαβ
Appendix B: Fundamentals
B.1 Image Clustering
B.1.1 K-Means
B.1.2 Accelerated K-Means
- Applying The Triangle Inequality
B.1.3 K-Means++
- K-Means++ Algorithm
B.1.4 Mean Shift
- Mean Shift Filtering
- Mean Shift Segmentation
B.2 Image Compression
B.2.1 Huffman Coding
B.2.2 JPEG Coding
B.2.3 JPEG2000 Coding
B.3 Quality Assessments
B.3.1 Objective Quality Assessments
- MSSIM
- Colorfulness Metric
B.3.2 Subjective Quality Assessment
B.4 Fundamentals of Morphology
- Dilation
- Erosion
- Opening
- Closing
- Skeletonization
Appendix C: Awards and Certificates
C.1 Patent Issue
C.2 Awards
C.3 Publications
References
About the Researcher (in Arabic)
Dedication (in Arabic)
Acknowledgments (in Arabic)
About the Research (in Arabic)
Research Summary (in Arabic)
List of Figures
Chapter One: Introduction
Figure 1.1: Normalized RGB Cube Model
Figure 1.2: Original RGB Image (499,554 Bytes)
Figure 1.3: A Comparison Between Image Types With Their Actual Size On Disk
Figure 1.4: Comparison Between Color Image and Gray Image
Chapter Two : Literature Survey
Figure 2.1: Coloring Techniques
Figure 2.2: Levin's System (Left) Scribbles and (Right) Result
Chapter Three : Movie Colorization System
Figure 3.1: Three Shots Example
Figure 3.2: System Block Diagram
Figure 3.3: Shot Cut Results of "Ismaiel Yassen Fe El-Ostool" Movie
Figure 3.4: Key Frames of 15,614 Frames
Figure 3.5: False Shot Cuts From Frame 2410 To Frame
Figure 3.6: False Shot Cut
Figure 3.7: Main System Screenshot
Figure 3.8: Frame Colorization Screenshot
Figure 3.9: Three Step Method Example
Figure 3.10: Proposed System Results
Figure 3.11: More Coloring Results
Figure 3.12: PSNR Plot
(A) Frame 1: The Key Frame
(B) Frame 2: Max PSNR
(C) Frame 51: (Zoomed In) Minimum PSNR
(D) Frame 58: (Zoomed In) Last Frame
Figure 3.13: Example of Coloring a Movie Shot Using the Original Colors of the Key Frame (Left) Original Frame, (Right) Recolored Frame
Figure 3.14: Voting Results
Chapter Four: Color Embedding For HSI Model
Figure 4.1: A Double Cone of the HSI Color Model
Figure 4.2: A Color Image: 'Colored Shape', 640×480 Pixels
Figure 4.3: Hue, Saturation, and Intensity
Figure 4.4: The Proposed Encoder System Diagram
Figure 4.5: Hue Before and After Correction
Figure 4.6: Segmented Image
Figure 4.7: Segmented Hue Image (4 Objects)
Figure 4.8: (Left) Original, (Right) Using Segmented Hue
Figure 4.9: OCP Algorithm
Figure 4.10: Object Compression Results
Figure 4.11: R, G and B Ranges In H Sector
Figure 4.12: S vs. MCD
Figure 4.13: DCT Transform Before and After Thresholding
Figure 4.14: (Left) Original S, (Center) Decoded S, (Right) Difference
Figure 4.15: (Left) Original, (Right) Decoded (After OCP and MCD)
Figure 4.16: S (Blue) vs. YID (Black)
Figure 4.17: DCT Transform Before and After Thresholding
Figure 4.18: (Left) Original S, (Center) Decoded S, (Right) Difference
Figure 4.19: (Left) Original, (Right) Decoded (After OCP and YID)
Figure 4.20: Comparison Using MCD: (A) YCbCr_JPEG2000, (B) HSI_JPEG2000, (C) Proposed System, (D) Zoomed Part, (E) Structure Map
Figure 4.21: Comparison Using YID: (A) YCbCr_JPEG2000, (B) HSI_JPEG2000, (C) Proposed System, (D) Zoomed Part, (E) Structure Map
Figure 4.22: MCD Results With Qs = 0.1, Qs = 0.5, and Qs = 0
Figure 4.23: Images Used In MOS Rating (Left) Before, (Right) After
Figure 4.24: (Top) Original, (Middle) MCD Decoded, (Bottom) YID Decoded
Figure 4.25: YID Example
Figure 4.26: MCD Example 'Young Lady'
Figure 4.27: MCD Example 'Mosque'
Figure 4.28: Original HSI Channels vs. Decoded: (Up) Original Hue, Saturation and Luminance, (Down) Decoded Hue, Saturation and Luminance Channels
Figure 4.29: (Left) Original Image, (Right) Decoded Image
Figure 4.30: (Left) OCP Data, (Right) MCD Coefficients
Figure 4.31: (Up) Host Image, (Down) Stego Image
Figure 4.32: (A) Before Color Hiding, (B) After Color Retrieving
Figure 4.33: YID Saturation Estimation Result
Figure 4.34: (Left) OCP Data, (Right) YID Coefficients
Figure 4.35: (Up) Host Image, (Down) Stego Image
Figure 4.36: (A) Original Image, (B) After Decoding, (C) After Color Extraction
Chapter Five: Morphological Decolorization System
Figure 5.1: Morphology Operations
Figure 5.2: (Up) Proposed System Flowchart, (Down) Visual Steps
Figure 5.3: (Left) Original, (Right) Levin's Scribbles
Figure 5.4: (Left) Levin Colorization Result, (Right) Liron Colorization Result
Figure 5.5: MDS Result (Only Inner Boundary Seeds)
Figure 5.6: MDS Result (Inner Boundary Seeds and Skeleton)
Figure 5.7: (Left) Liron's Seeds, (Middle) Levin's Results, (Right) Liron's Results
Figure 5.8: MDS Result
Figure 5.9: Cheng's Result
Figure 5.10: Miyata's Result
Figure 5.11: Clustered (Left), Seeds (Middle), Marked Image (Right)
Figure 5.12: (Left) Original, (Mid) Decoded (3, 1%, 3), (Right) Decoded (4, 50%,
Figure 5.13: Seeds Hiding Diagram
Figure 5.14: (Left) The Gray 'Boy' Image, (Right) The Stego Image
Appendix A: Color Models
Figure A.1: Absorption of Light By The Red, Green, and Blue Cones In The Human Eye As A Function of Wavelength
Figure A.2: RGB Color Cube
Figure A.3: Additive Colors and Subtractive Colors
Figure A.4: (A) A Double Cone of the HSI/HLS Color Model, (B) HSV/HSB Model
Figure A.5: Different Models Representation Image
Figure A.6: Hue, Saturation and Lightness/Brightness Channels
Figure A.7: (A) XYZ Space, (B) XYZ Chromaticity Diagram, (C) Lab Color Space
Appendix B: Fundamentals
Figure B.1: K-Means Procedure Block Diagram
Figure B.2: Lake Image, Size (512×
Figure B.3: Huffman Tree Example
Figure B.4: JPEG vs JPEG2000
Figure B.5: SSIM System Diagram
Figure B.6: Skeleton of A Rectangle Defined In Terms of Bi-Tangent Circles
Figure B.7: Morphology Operations
List of Tables
Chapter Four: Color Embedding For HSI Model
Table 4.1 Hue and Intensity degradation
Table 4.2 Saturation degradation
Table 4.3 The Results of Saturation Encoding Using MLD Only
Table 4.4 Poll results for 40 voters
Table 4.5 'Tiger' Encoding Results
Table 4.6 Bit Stream
Table 4.7 MCD Encoding Results
Table 4.8 Hiding/Extraction Results
Table 4.9 Hiding/Extraction Results
Chapter Five: Morphological Decolorization System
Table 5.1 Relation between system variables and quality and compression ratio
Table 5.2 Quality Comparison
Table 5.3 Different system parameters
Table 5.4 Quality results
Table 5.5 JPEG vs. Proposed System
Table 5.6 JPEG2000 vs. Proposed System
Appendix A: Color Models
Table A.1 Hue and luminance distribution
Table A.2 Change in saturation
Table A.3 RGB values from HSV
Appendix B
Table B.1 Clustering Techniques Comparison
Table B.2 Huffman codes example
Table B.3 Objective Metrics
Table B.4 Colorfulness Metric for Different Color Spaces
Table B. 5 MOS Rate
List of Abbreviations
List of Symbols
List of Publications
- Publications Already Published:
Semary, N. A., Hadhoud, M. M., and Abbas, A. M. A Fully Automated Black And White Movies Colorization System. In The 7th International Conference on Informatics And Systems (INFOS 2010) (Cairo University, Egypt, March 28-30, 2010), pp. 1-6.
Semary, N. A., Hadhoud, M. M., and Abbas, A. M. An Effective Compression Technique for HSL Color Model. In The 2011 World Congress on Computer Science and Information Technology (WCSIT'11) (Egypt, January 24-27, 2011).
Semary, N. A., Hadhoud, M. M., and Abbas, A. M. An Effective Compression Technique for HSL Color Model. The Online Journal on Computer Science and Information Technology (OJCSIT) 1, 1 (2011), 29-33.
Semary, N. A., Hadhoud, M. M., and Abbas, A. M. Hybrid Encoding Scheme for HSI Model Using The Minimum Color Difference. In 28th National Radio Science Conference (NRSC 2011) (National Telecommunication Institute, Egypt, April 26-28, 2011), pp. 1-8.
Semary, N. A., Hadhoud, M. M., and Abbas, A. M. Color Image Encoding Using Morphological Decolorization. In The 5th International Conference on Intelligent Computing and Information Systems (ICICIS 2011) (Cairo, Egypt, Jun 30 - Jul 3, 2011), pp. 319-324.
Semary, N. A., Hadhoud, M. M., and Abbas, A. M. Space Transformation For HSL Model Encoding. In The 5th International Conference on Intelligent Computing and Information Systems (ICICIS 2011) (Cairo, Egypt, Jun 30 - Jul 3, 2011), pp. 420-426.
- Publications In Press:
Semary, N. A., Hadhoud, M. M., Abbas, A. M. and Abdul-Kader, H. Novel Compression System for Hue Saturation and Intensity Color Space. International Arab Journal of Information Technology (IAJIT) —, — (----), —.
- Publications Accepted But Not Published:
Semary, N. A., Hadhoud, M. M., and Abbas, A. M. Complete Black and White Movies Colorization System. The IADIS Computer Graphics, Visualization, Computer Vision and Image Processing 2010 (CGVCVIP 2010) (Freiburg, Germany, July 27 – 29, 2010).
List of Citations
Willey, S. Colourisation of monochrome images. Dissertation for bachelor, University of Bath, 2009.
Semary, N. A., Hadhoud, M. M., Ismail, N. A., and Al-Kelani, W. S. A texture recognition coloring technique for natural gray images. In The 2007 International Conference on Computer Engineering & Systems (ICCES'07) (Ain Shams University, Cairo, November 27-29, 2007).
Hyun, D.-Y., Heu, J.-H., Kim, C.-S., and Lee, S.-U. Reliable colorization algorithm for images and videos. In 17th European Signal Processing Conference (EUSIPCO) (Glasgow, Scotland, August 24-28, 2009).
Semary, N. A., Hadhoud, M. M., Ismail, N. A., and Al-Kelani, W. S. Texture recognition based natural gray images coloring technique. In 24th National Radio Science Conference (NRSC2007) (Faculty of Engineering, Ain Shams University, Egypt, March 13-15, 2007).
Chapter One: Introduction
Gray image colorization is a relatively new image processing topic; although various attempts at manual colorization of gray movies date back to the 1980s, research on automatic colorization has appeared only in the last few years. Different techniques have appeared in the literature, and the number of studies and technologies is growing every day.
Over the last few years, colorization technology has led researchers to find further applications: not only giving colors to uncolored images (colorization) but also removing colors from color images and videos (decolorization) and later restoring them, in order to benefit from the features of black and white images and videos.
This chapter presents an introduction to the gray image coloring problem, starting with a short description of digital image fundamentals. It also introduces the different techniques used for colorization and decolorization, followed by a brief overview of the research objectives and motivations in these fields.
1.1 Digital Image Fundamentals
A digital image is a discrete function I(x, y), where x and y are spatial (plane) coordinates, and the amplitude of I at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. Each point in the image is called a pixel and is denoted by its position and intensity. For a color image, each pixel color is defined by a triple of values, varying from 0 to 255 (or from 0 to 1 for normalized values), describing the amounts of the red, green and blue color components of that pixel. This type of image is sometimes called a full color image, an RGB image, or alternatively a contone image; the term contone comes from the fact that these images give the appearance of a continuous range of color tones. The RGB cube color model (Figure 1.1) is the simplest color model describing the color space; other color models are presented in Appendix A. Typically, four types of images can be considered [18]:
True color image: In this type, each color component (also called a channel) is represented by a single byte (8 bits), giving 256 discrete levels per channel. Each pixel is therefore represented by 3 bytes, or 24 bits, giving a total of about 16 million different combinations of red, green and blue. The quality of a 24-bit RGB image is perfectly adequate for almost all purposes. Figure 1.2 presents a true color image of size 500 × 333 pixels, so the size of its data is 500 × 333 × 3 = 499,500 bytes = 487.79 KB.
Binary image: In a binary image, each pixel is represented by a single bit, marking it as either black or white. A binary image is presented in Figure 1.3(a); this black and white image is obtained by thresholding the RGB image at the value 0.7. The number of bytes needed to represent this image is (500 × 333) / 8 = 20,812.5 bytes = 20.32 KB.
Intensity image or grayscale image: A grayscale image contains only grays, much like a black and white photograph or TV movie; it is sometimes called a monochrome image. Simply put, when the red, green and blue values are equal (the marked diagonal in Figure 1.1), the color is a gray, so each pixel in a grayscale image is represented by a single value: the gray level, from 0 for black through to 255 for white. Typically each pixel is represented by a single byte, giving 256 levels of gray (Figure 1.3(b)). The size of this gray image is 500 × 333 = 166,500 bytes = 162.6 KB.
Indexed image: In this type, each pixel holds an index referring to a triple RGB color value in an attached color map. The color depth of the image is measured by the number of triples in the color map. Figures 1.3(c) and 1.3(d) present an indexed image with 256 levels, without and with its color map respectively. The size of this indexed image = size of gray + size of color map = (500 × 333) + (256 × 3) = 167,268 bytes = 163.35 KB.
Note that the sizes calculated for these images do not include the image file header. The reader can notice that the gray image is about one third the size of the colored one.
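The storage arithmetic above can be checked with a short sketch (plain Python; header bytes are ignored, as in the text):

```python
# Storage required by each image type for the 500x333-pixel example image.
W, H = 500, 333

true_color = W * H * 3          # 3 bytes (R, G, B) per pixel
binary     = W * H / 8          # 1 bit per pixel
grayscale  = W * H              # 1 byte per pixel
indexed    = W * H + 256 * 3    # 1 index byte per pixel + 256-entry RGB color map

print(true_color, binary, grayscale, indexed)
# 499500 20812.5 166500 167268
```

Dividing by 1024 gives the kilobyte figures quoted in the text (e.g. 499,500 / 1024 ≈ 487.79 KB).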
1.2 Coloring Problem:
Gray image coloring, or "colorization", means giving colors to gray images. It has become a new research area since it can be used to increase the visual appeal of images such as old black and white photos, movies or scientific illustrations. In addition, the information content of some scientific images can be perceptually enhanced with color by exploiting variations in chromaticity as well as luminance. To illustrate the coloring problem, there are two common definitions of the gray value as an equation of the three basic components of the RGB color model (red, green and blue) [18]:
1. Intensity (most commonly used):
Gray = (Red + Green + Blue) / 3    (1.1)
2. Luminance (NTSC standard):
Gray = 0.299 Red + 0.587 Green + 0.114 Blue    (1.2)
These two equations are not reversible; that is, a gray value cannot be converted back to its red, green and blue components. Since the number of possible colors is 256 × 256 × 256, there exist on average 256 × 256 combinations of completely different colors for each gray value from 0 to 255. For instance, Figure 1.4 shows an example of different colors that have the same gray value.
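As a quick illustration of this irreversibility, equations (1.1) and (1.2) can be coded directly; the two example colors below are hand-picked so that they collide under the intensity formula:

```python
# Two visibly different colors that collapse to the same gray value
# under the intensity formula (1.1); the example triples are illustrative.
def intensity(r, g, b):
    return (r + g + b) // 3

def luma(r, g, b):  # NTSC luminance, equation (1.2)
    return round(0.299 * r + 0.587 * g + 0.114 * b)

red_ish  = (200, 50, 50)   # a reddish color
blue_ish = (50, 50, 200)   # a bluish color

print(intensity(*red_ish), intensity(*blue_ish))  # 100 100 -> same gray
print(luma(*red_ish), luma(*blue_ish))            # 95 67  -> differ under luma
```

Knowing only the gray value 100, nothing distinguishes the reddish pixel from the bluish one, which is exactly the ambiguity a colorization method must resolve.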
Figure 1.1: Normalized RGB cube model
Figure 1.2: Original RGB image (499,554 bytes)
Figure 1.3: A comparison between image types with their actual size on disk
Figure 1.4: Comparison between color image and gray image
For more illustration: to convert a color image to a grayscale one using other color models such as HSB, HSI, YCbCr, YIQ and lαβ (Appendix A), only the intensity channel (B, I, Y and l respectively) is transferred to the gray image, while the chromatic channels (HS, CbCr, IQ and αβ respectively) have no values. That means that for an m × n gray image, there are 2 × m × n missing chromatic values needed to convert it back to a color image.
1.3 Colorization Techniques
This coloring problem has excited researchers, especially in the image processing area, and coloring trends have appeared since the 1980s. These trends were previously classified in [54] into three categories: hand coloring, semi-automatic coloring, and automatic coloring. Chapter 2 illustrates these trends in detail.
1.4 Decolorization
The continuous development of coloring techniques has improved coloring results, which led researchers to find more coloring applications. Researchers in this area began looking for new methods to remove the colors from color images while retaining some information about their real values, in order to restore the color images with very high quality.
In this research, the most important methods used in decolorization are discussed in detail in Chapter 2. In Chapters 4 and 5, two innovative decolorization techniques are presented.
1.5 Research Objectives
The main goals of this thesis are:
1. Studying the different colorization techniques found in the literature.
2. Creating a fully automated black and white movie colorization system suitable for market needs.
3. Finding new applications for image colorization beyond enhancing the appearance of black and white photos and movies.
4. Contributing new motivations to the image decolorization field, expanding the usage of and research in the colorization era.
By the end of this thesis, these research goals have been reached successfully.
1.6 Thesis Motivations
This thesis proposes new applications for image colorization:
- An Automatic Movie Colorization System
- New Compression Technique Using HSI Color Model
- Innovative Automatic Seed Selection System Based On Morphological Operations.
1.7 Thesis Organization:
The thesis is organized as follows: Chapter 2 surveys previous work on colorization and decolorization. Chapter 3 presents the contributed movie colorization application. Chapter 4 presents a proposed color embedding system dedicated to the HSI model. Chapter 5 presents another proposed decolorization method based on morphological automatic seed selection. Finally, Chapter 6 concludes the thesis and discusses future work. At the end of this thesis, Appendix A presents the different color models to help readers understand the specification of each mentioned color model, and Appendix B provides brief information about the image processing topics used in the thesis. Appendix C contains the full text of the thesis publications and copies of the certificates awarded for the thesis achievements; sample recolored movies are attached to this dissertation on a CD.
Chapter Two: Literature Survey
This chapter presents the literature review on different colorization and decolorization techniques in brief with detailed description of the techniques mentioned and used in this research.
2.1 Colorization Techniques
Colorization is a term introduced in 1970 and later patented by Wilson Markle [40]. After that, different trends in grayscale coloring appeared from the 1980s onward.
In our previous publications [54, 55, 56], these trends were classified into three categories: hand coloring, semi-automatic coloring, and automatic coloring.
Hand coloring has long been used by artists as a way of showing their talent. Usually, image editing software such as Adobe Photoshop or Paint Shop Pro is used to convert gray images to color images. One commercial software package, BlackMagic [44], is designed for still image colorization and provides the user with a range of useful brushes and color palettes.
The main drawback of this method is that the segmentation task is done completely manually.
Semi-automatic coloring refers to mapping luminance values to color values, which is done by converting the gray image to an indexed image. Pseudocoloring is a common example of a semi-automatic technique for adding color to grayscale images. The choice of the colormap is usually made by a human; this colormap is also called a look-up table (LUT).
However, when using a colormap that does not increase monotonically in luminance, pseudocolored images may introduce perceptual distortions. Studies have found a strong correlation between the perceived "naturalness" of face images and the degree to which the luminance values increase monotonically in the colormap [61]. Due to this fact, pseudocoloring is suitable for coloring illustrative images such as medical or industrial images. Pratt [1991] describes this method for medical images such as X-ray, Magnetic Resonance Imaging (MRI), Scanning Electron Microscopy (SEM) and other imaging modalities as an "image enhancement" technique, because it can be used to "enhance the detectability of detail within the image" [61].
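A minimal pseudocoloring sketch: a gray level simply indexes a 256-entry LUT. The ramp below is a hand-made, illustrative "hot"-style map (not a standard colormap), chosen to be monotone in luminance for the reason discussed above:

```python
# Build a hypothetical 256-entry LUT whose summed channel values rise
# monotonically with the gray level (red ramps up first, then green, then blue).
def hot_lut():
    lut = []
    for g in range(256):
        r = min(255, g * 3)
        gr = min(255, max(0, (g - 85) * 3))
        b = min(255, max(0, (g - 170) * 3))
        lut.append((r, gr, b))
    return lut

LUT = hot_lut()

def pseudocolor(gray_pixels):
    # Each gray value is mapped directly to an RGB triple through the LUT.
    return [LUT[g] for g in gray_pixels]

print(pseudocolor([0, 128, 255]))
# [(0, 0, 0), (255, 129, 0), (255, 255, 255)]
```

The same structure holds for any LUT; only the ramp design changes between, say, a medical "hot metal" map and a terrain map.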
Automatic coloring has been proposed in the literature by many authors, but no published work has surveyed all the work in this area in one publication. Moreover, there is no standard classification of these trends. From our point of view, the related works can be classified into three categories [54] (shown in Figure 2.1) according to the source of the colors to be transferred to the gray image pixels: transformational coloring, image matching, and user selection. The last two categories are also known in the literature as coloring by reference and seed coloring. Each of these types is presented in detail below.
Figure 2.1: Coloring Techniques
2.1.1 Transformational Coloring
In transformational coloring, the coloring is done by applying a transformation function Tk to the intensity value of each pixel Ig(x, y), resulting in the chromatic value Ick(x, y) for channel k:
Ick(x, y) = Tk(Ig(x, y))    (2.1)
where Ig(x, y) is the intensity value of the pixel at (x, y), and Ick(x, y) is the chromatic value of channel k for the same pixel. Since any color model requires three channels for the new colored pixel (Appendix A), there must be three transformation functions that transform the single available value, the intensity, into three channel values. Sometimes more parameters are added to the equation, such as the pixel position or other features. Pseudocoloring can be considered a case of this type, where the transformation function simply maps the color map to the gray levels directly. J. Yoo et al. [68] proposed a system that generalizes pseudocoloring. They used a SOFM neural network to construct a codebook from actual images; intensity variants are classified into code vectors for color encoding using a back-propagation neural network. The drawback of their system is that it uses the RGB color space (Appendix A), which decreases the quality of the colored results.
Transformational coloring lacks realism in the obtained colors. Besides, it may cause inconsistency or cut-off channel artifacts between the colors. This type can be useful for illustrative images such as medical images.
2.1.2 Image Matching /Coloring by Reference
In this technique, a colored image is selected as a reference for the color palette used in the coloring process. The pixels of the gray image Ig are matched with the pixels of a source colored image Is, and the color of the most similar pixel is transferred to the corresponding gray one using the color transfer technique proposed by E. Reinhard et al. [50]. The process can be described as follows:
For each gray pixel Ig(x, y), there exists a colored pixel Is(x2, y2) such that the distance E (some similarity measure) between them is minimal. The chromatic values of Is are transferred to Ig, while the achromatic value of Ig is retained.
From this description, the color model used for this type of coloring should have a separate luminance channel. Authors differ in how they match the two images' pixels; most use only the luminance value of the pixel, which means that each pixel in the grayscale image receives the chromatic components of the pixel nearest in luminance in the reference color image.
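The nearest-luminance transfer just described can be sketched as follows; pixels are represented as (luminance, chroma1, chroma2) tuples, and the distance E is taken here to be plain absolute luminance difference (an assumption — authors use various similarity measures, often including neighborhood statistics):

```python
# Sketch of coloring by reference: each gray pixel borrows the chromatic
# components of the reference pixel closest to it in luminance, while
# keeping its own luminance value.
def transfer_colors(gray_lumas, reference):
    colored = []
    for L in gray_lumas:
        # Nearest reference pixel in luminance (distance E = |L_ref - L|).
        src = min(reference, key=lambda p: abs(p[0] - L))
        # Keep the target's luminance, take the source's chroma.
        colored.append((L, src[1], src[2]))
    return colored

# Toy reference "image" of three pixels: (luminance, chroma1, chroma2).
ref = [(40, 10, -5), (120, -20, 30), (200, 5, 60)]
print(transfer_colors([45, 130, 190], ref))
# [(45, 10, -5), (130, -20, 30), (190, 5, 60)]
```

A real implementation would match over pixel neighborhoods rather than single values, but the keep-luminance / borrow-chroma structure is the same.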
To obtain a good reference image, it is important to select a source image that is as similar as possible to the destination gray image. Similarity refers to the closeness of the source and destination images in color, texture, lightness, etc. There are three ways to select the source image: manual selection, intelligent selection, and fusion based.
- Manual Selection
In the "global matching procedure" of T. Welsh et al. [61], the source image is selected manually. The entire color "mood" of the source is transferred to the target image by matching luminance and texture information between the images. The technique then transfers only the chromatic information and retains the original luminance values of the target image.
Welsh's algorithm works well when the luminance distributions of the target and source images are locally similar. Its performance degrades when the pixel histograms of the target and source luminance images are substantially different. They therefore also proposed a technique to improve the coloring results when the matching results are not satisfying: the user is asked to identify and associate small rectangles, called "swatches", in both the source and destination images to indicate how certain key colors should be transferred. This is performed by luminance remapping, as in the global procedure, but only between corresponding swatches.
The advantage of this approach is that, in the first stage, colors are transferred to the swatches selectively, which prevents pixels with similar neighborhood statistics from the wrong part of the image from corrupting the target swatch colors. It also allows the user to transfer colors from any part of the image to a selected region, even if the two corresponding regions vary largely in texture and luminance levels. Secondly, since more texture coherence is expected within an image than between two different images, pixels that are similar in texture to the colorized target swatches are expected to be colorized similarly.
- Intelligent Selection
Since this trend depends completely on the user's selection of the source image, some authors have proposed techniques to relax this constraint. L. Vieira et al. [59] proposed what could be called a fully automated coloring system. They used a database of colored images as a source of implicit prior knowledge about color statistics in natural images. Their system works like a content-based image retrieval system that searches for suitable source images in an image database. To assess the merit of their methodology, they performed a survey in which volunteers were asked to rate the plausibility of the colorings generated automatically for grayscale images.
In 2007 we proposed an intelligent, fully automatic coloring system for textural images such as natural images [54]. It works by segmenting the image into differently textured regions and then finding the suitable class for each region from a set of different textures stored in a special database; each class is associated with a specific color value. Well-known techniques for gray image texture segmentation and classification were used. The colorization is done using the HSV/HSB (Hue, Saturation, and Value/Brightness) color model. Since in this model only the brightness channel carries the gray image intensity, while the hue and saturation channels have zero values, the image can be colorized simply by adding hue and saturation values for each pixel. The hue of any region is taken from the color value of its texture class, and the saturation channel is computed by inverting the original gray image intensity. The proposed system was compared with two common coloring techniques from the literature; its results are very satisfying and outperform the other techniques.
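The HSV rule just described (hue from the texture class, saturation from the inverted intensity, brightness kept as the gray value) can be sketched per pixel; the class-to-hue table below is hypothetical, and all channels are assumed to use the 0-255 range:

```python
# Hypothetical mapping from texture class to hue; a real system would
# read these from the texture/color database described in the text.
CLASS_HUE = {"sky": 150, "grass": 85, "sand": 30}

def colorize_pixel(gray, texture_class):
    h = CLASS_HUE[texture_class]  # hue comes from the region's texture class
    s = 255 - gray                # saturation = inverted gray intensity
    v = gray                      # brightness keeps the original gray value
    return (h, s, v)

print(colorize_pixel(200, "sky"))  # (150, 55, 200)
```

Note how bright pixels automatically receive low saturation, so highlights stay near-white instead of becoming garishly colored.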
Y. Rathore et al. [49] proposed a method in which the user only needs to provide a target grayscale image; a color image with content similar to the grayscale image is automatically retrieved from an image database as the source. Then, for each pixel of the target image, the best-matching source pixel is determined in a perceptually de-correlated color space using a luminance and texture matching procedure. Once a best-matching source pixel is found, its chromaticity values are assigned to the target pixel while the original luminance value of the target pixel is retained.
- Fusion Based
Other trials, such as [27, 70, 21], have used the image fusion concept to estimate the colors of gray images from the same scene captured by different sensors.
J. H. Jang and J. B. Ra [27] proposed a pseudo-color image fusion scheme for multi-sensor gray images, which was based on the intensity-hue-saturation (IHS) color space. In the algorithm, registered gray input images were first fused by using a scheme on the basis of gradient-based wavelet structure. Then, the fused gray intensity value was considered as the intensity (I) component of the IHS transform. Hue (H) values were assigned so as to generate human-friendly colors and saturation (S) values were assigned according to H values.
Yufeng Zheng and Edward A. Essock [70] proposed a new ‘‘local-coloring’’ method that functions to render the night-vision (NV) image segment-by-segment by taking advantage of image segmentation, pattern recognition, histogram matching and image fusion. Specifically, a false-color image (source image) was formed by assigning multi-band NV images to three RGB (red, green and blue) channels. A nonlinear diffusion filter was then applied to the false-colored image to reduce the number of colors. The final grayscale segments were obtained by using clustering and merging techniques.
Maarten A. Hogervorst and Alexander Toet [21] proposed a new method to render multi-band night-time imagery (images from sensors whose sensitive range does not necessarily coincide with the visual part of the electromagnetic spectrum, e.g. image intensifiers and thermal cameras) in natural daytime colors. The color mapping was derived from the combination of a multi-band image and a corresponding natural-color daytime reference image. The mapping optimized the match between the multi-band image and the reference image, and yielded a night-vision image with a natural daytime color appearance. The lookup-table-based mapping procedure was extremely simple and fast and provided object color constancy. Once derived, the color mapping can be deployed in real time to different multi-band image sequences of similar scenes.
Displaying night-time imagery in natural colors may help human observers to process this type of imagery faster and better, thereby improving situational awareness and reducing detection and recognition times.
2.1.3 User Selection/Colorization by Seeds
In user selection coloring, system users mark the gray image with some colored scribbles or seeds, usually using a brush-like tool. The computer is then responsible for spreading the color of those scribbles over the bounded region containing the colored samples. Different techniques have been proposed for determining the similar neighbors or the bounded region. The system of Anat Levin et al. [36] is an example of this type. It assumes that neighboring pixels in space-time with similar intensities should have similar colors.
The method by Levin et al. colorizes an image by minimizing a quadratic energy function derived from the color differences between a pixel and the weighted average of its neighborhood colors. Their colorization results are highly sensitive to the size and position of each scribble, and exhibit over-smoothed color artifacts. The color blending method by Liron Yatziv and Guillermo Sapiro [67] enables fast colorization by applying the shortest distance between a pixel and a scribble to the color blending weight. Many authors, such as Tao et al. [30], Yao Li et al. [37] and Qing Luan et al. [38], enhanced the coloring quality of Levin's system and reduced its processing time.
In this thesis, Levin's system is used for colorization. The technique is based on a unified framework applicable to both still images and image sequences. The user indicates how each region should be colored by scribbling the desired color in the interior of the region, instead of tracing out its precise boundary (Figure 2.2). Using these user-supplied constraints, the technique automatically propagates colors to the remaining pixels in the image sequence. The YUV color space is used, a common space for video processing, where Y is the monochromatic luminance channel, referred to simply as intensity, while U and V are the chrominance channels encoding the color. The algorithm takes as input an intensity volume Y(x, y, t) and outputs two color volumes U(x, y, t) and V(x, y, t). A target pixel pt and a source pixel ps should have similar colors if their intensities are similar. So the goal is to minimize the difference between the color U(pt) at pixel pt and the weighted average of the colors at neighboring pixels [36]:
fc(U) = Σpt ( U(pt) − Σps∈N(pt) wts U(ps) )²    (2.2)
where fc is the cost function and wts is a weighting function that sums to one, large when Y(pt) is similar to Y(ps) and small when the two intensities are different. The simplest weighting function, commonly used by image segmentation algorithms, is based on the squared difference between the two intensities:
wts ∝ exp( −(Y(pt) − Y(ps))² / (2σt²) )    (2.3)
A second weighting function is based on the normalized correlation between the two intensities:
wts ∝ 1 + (1/σt²)(Y(pt) − μt)(Y(ps) − μt)    (2.4)
where μt and σt² are the mean and variance of the intensities in a window around pt. The correlation affinity assumes that the color at a pixel U(pt) is a linear function of the intensity Y(pt): U(pt) = aiY(pt) + bi, with the linear coefficients ai, bi the same for all pixels in a small neighborhood around pt. This means that when the intensity is constant the color should be constant, and when the intensity has an edge the color should also have an edge (although the values on the two sides of the edge can be any two numbers). While this model adds a pair of variables per image window, a simple elimination of the ai, bi variables yields an equation equivalent to equation (2.2) with a correlation-based affinity function. The notation ps ∈ N(pt) denotes the fact that pt and ps are neighboring pixels.
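Both affinity functions can be written compactly. The sketch below is an illustrative rendering of equations (2.3) and (2.4), with the window statistics passed in explicitly and the weights normalized to sum to one over the neighborhood:

```python
import numpy as np

def squared_diff_weights(y_t, y_neighbors, sigma2):
    """Eq. (2.3): wts proportional to exp(-(Y(pt)-Y(ps))^2 / (2*sigma_t^2)),
    normalized to sum to one over the neighborhood."""
    w = np.exp(-((y_t - y_neighbors) ** 2) / (2.0 * sigma2))
    return w / w.sum()

def correlation_weights(y_t, y_neighbors, mu, sigma2):
    """Eq. (2.4): wts proportional to 1 + (Y(pt)-mu)(Y(ps)-mu)/sigma_t^2.
    Note these weights may be negative for dissimilar intensities."""
    w = 1.0 + (y_t - mu) * (y_neighbors - mu) / sigma2
    return w / w.sum()

neigh = np.array([0.50, 0.52, 0.90])   # intensities in a small window
w = squared_diff_weights(0.50, neigh, sigma2=0.01)
wc = correlation_weights(0.50, neigh, mu=neigh.mean(), sigma2=neigh.var())
# neighbors with intensity close to Y(pt) = 0.50 receive the largest weights
```

With these parameters, the outlier neighbor at intensity 0.90 receives a near-zero weight under the squared-difference affinity and a negative weight under the correlation affinity, so its color contributes little (or opposes) the weighted average.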
In a single frame, two pixels are defined as neighbors if their image locations are nearby. Between two successive frames, two pixels are defined as neighbors if their image locations, after accounting for motion, are nearby. More formally, let vx(x, y), vy(x, y) denote the optical flow calculated at time t. Then the pixel (x0, y0, t) is a neighbor of pixel (x1, y1, t+1) if:
‖ (x0, y0) + (vx(x0, y0), vy(x0, y0)) − (x1, y1) ‖ < T    (2.5)
where T is some threshold. The flow field vx(x0, y0), vy(x0, y0) is calculated using a standard motion estimation algorithm. Note that the optical flow is only used to define the neighborhood of each pixel, not to propagate colors through time. The algorithm is closely related to algorithms proposed for other tasks in image processing. In image segmentation algorithms based on normalized cuts, one attempts to find the second smallest eigenvector of the matrix D − W, where W is an (n pixels × n pixels) matrix whose elements are the pairwise affinities between pixels (i.e., the (pt, ps) entry of the matrix is wts) and D is a diagonal matrix whose diagonal elements are the sums of the affinities. The second smallest eigenvector of any symmetric matrix A is a unit-norm vector u that minimizes uᵀAu and is orthogonal to the first eigenvector. By direct inspection, the quadratic form minimized by normalized cuts is exactly the cost function fc, that is
fc(u) = uᵀ(D − W)u    (2.6)
Thus, the algorithm minimizes the same cost function but under different constraints. In image denoising algorithms based on the anisotropic diffusion of Perona and Malik (1989) and Tang et al. (2001), one often minimizes a function similar to equation (2.2), but the function is applied to the image intensity as well.
[Figure not included in this excerpt]
Figure 2.2: Levin's system: (left) user scribbles and (right) colorization results.
User selection coloring gives high-quality colorization, but it is rather time-consuming and, more importantly, requires the colorization to be fully recomputed after even the slightest change to the initially marked pixels.
- Sung Ha Kang and Riccardo March [29] explored variational colorization models via weighted harmonic maps. The calculus of variations provides flexibility in modeling, and mathematical analysis yields a sound basis for the models. By using the chromaticity and brightness color model, chrominance is accurately modeled. They considered a penalized version of the variational problems in order to handle the nonconvex constraint. The weighted harmonic map model could deal with larger colorization areas: with only a small area of color given, color could be diffused to a large region. With this model, color blending was naturally achieved along geodesic directions in chromaticity space, and the model could handle texture colorization when combined with image decomposition methods.
2.2 Colorization Applications
Colorization was not limited to enhancing still gray images; colorizing old gray movies has also been the aim of many researchers during the last decades. This section covers some of these trials, besides other fields that could benefit from image colorization.
2.2.1 Movies Colorization
The initial process was invented by Canadians Wilson Markle and Brian Hunt [40] and was first used in the 70s to add color to monochrome footage of the moon from the Apollo mission [31].
In 1973 Carl Hanseman [20] registered an American patent for converting monochrome signals to color. He proposed a circuit that could be used in a monochrome video camera to record color videos.
Wilson Markle described the process he invented for adding color to black and white movies or TV programs. In 1987 Markle et al. proposed a colorization algorithm in which the film is converted to video tape and a color mask is manually painted for at least one reference frame in a shot. Motion detection and tracking is then applied, allowing colors to be automatically assigned to other frames in regions where no motion occurs. Colors in the vicinity of moving edges are assigned using optical flow which often requires manual fixing by the operator.
In 1988 Markle et al. [40] also registered an American patent for this coloring system, which was based on splitting the video tape into shots; a color mask is manually painted for at least one reference frame in each shot, and motion detection and tracking are then applied, allowing colors to be automatically assigned to other frames in regions where no motion occurs.
Even now, there is not much public information about the techniques used in the commercial colorization systems of the industry. However, there are some indications that these systems still rely on defining regions and tracking them between the frames of a shot [25].
In 1991, the master's thesis of James Donald [11] was conducted to see whether colorization affects the public's enjoyment of a film, and whether there is indeed a preference for colorization over black and white. Two groups were shown the same sequence from the movie "It's A Wonderful Life"; one version colorized, the other in black and white. Both groups then completed a questionnaire concerning the movie and their reactions to it. The results showed that although the majority of the participants stated that they do not like colorization, it had no significant effect on their enjoyment of the film.
In most film colorization techniques, the key frames in the image sequence are first colorized by a conventional still-image colorization technique: a small number of color seeds are sown on the monochrome key frame, and the colors are propagated spatially to the remaining monochrome pixels. Then, each color in the colorized frame is propagated to the next monochrome frame by extracting a displacement vector for each pixel between the two frames. The displacement vector is calculated by a simple block-matching algorithm, and color propagation in the temporal direction is performed. Thus, video colorization can be realized by setting only some color seeds on key frames.
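The block-matching step described above can be sketched as an exhaustive search. This is a minimal illustration: the block size, search range, and sum-of-absolute-differences (SAD) criterion are assumed here, and production systems use faster search strategies.

```python
import numpy as np

def block_match(prev, curr, x, y, bsize=2, search=2):
    """Exhaustive block matching: find the displacement (dx, dy) such
    that the block at (x, y) in the current frame best matches (minimum
    SAD) a block at (x + dx, y + dy) in the previous (key) frame."""
    block = curr[y:y + bsize, x:x + bsize]
    best_sad, best_d = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0:
                continue
            if yy + bsize > prev.shape[0] or xx + bsize > prev.shape[1]:
                continue
            sad = np.abs(prev[yy:yy + bsize, xx:xx + bsize] - block).sum()
            if sad < best_sad:
                best_sad, best_d = sad, (dx, dy)
    return best_d

# A bright 2x2 patch moves one pixel to the right between the frames,
# so the block at (2, 2) in `curr` matches (1, 2) in `prev`: dx = -1.
prev = np.zeros((6, 6)); prev[2:4, 1:3] = 1.0
curr = np.zeros((6, 6)); curr[2:4, 2:4] = 1.0
d = block_match(prev, curr, x=2, y=2)
```

The chrominance stored at the matched key-frame location (x + dx, y + dy) is then copied to (x, y) in the new frame, which is how the colors follow the motion.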
In 2004, Levin et al. [36] proposed their seed-based colorization technique, which uses the optical flow concept and can also be used for coloring a sequence of images. In 2006 Takahiko Horiuchi and Hiroaki Kotera [22] proposed two colorization algorithms for monochrome image sequences without scene changes in video. Their experimental results suggested successful video colorization by setting key frames every 10 frames.
In 2005 Ritwik Kumar and Suman K. Mitra [33, 34] showed that integrating color transfer with video compression schemes can produce a significant improvement in compression without much loss in quality. They proposed a novel mechanism of color transfer for video frames that could be integrated with the standard video compression MPEG-1. Compression is achieved by first discarding chrominance information for all but selected reference frames and then using motion prediction and DCT-based quantization techniques. While decoding, luminance-only frames are colored using chrominance information from the reference frames via a color transfer technique.
In 2009 Vivek George Jacob and Sumana Gupta [26] proposed a semi-automatic process for colorization where the user indicates how each region should be colored by placing the desired color marker in the interior of the region. The algorithm, based on the position and color of the markers, segments the image and colors it. In order to colorize videos, a few reference frames are chosen manually from a set of automatically generated key frames and colorized using the above marker approach; their chrominance information is then transferred to the other frames in the video using a color transfer technique that makes use of motion estimation.
In 2009 Mina Koleini et al. [31] proposed a novel method for machine-based black-and-white film colorization. The kernel of the proposed scheme was a trained artificial neural network (ANN) that maps the frame pixels from a grayscale space into a color space. They employed a texture coding method to capture the line/texture characteristics of each pixel as its most significant grayscale feature and, using that feature, expected a highly accurate B/W-to-color mapping from the ANN. The ANN was trained on the B/W-color pairs of an original reference frame.
Another trial, for coloring black-and-white cartoon movies, was presented by D. Sýkora et al. in 2005 [58]. They proposed a color-by-example technique that combined image segmentation, patch-based sampling and probabilistic reasoning. This method was able to automate colorization when new color information is applied to an already designed black-and-white cartoon. Their technique is especially suitable for cartoons digitized from classical celluloid films, which were originally produced by a paper- or cel-based method. In this case, the background is usually a static image and only the dynamic foreground needs to be colored frame by frame.
[...]