Spectrum-Based Color Reproduction Algorithm for Makeup Simulation of 3D Facial Avatar
In-Su Jang, Jae Woo Kim, Ju-Yeon You, and Jin Seo Kim
vol. 35, no. 6, Dec. 2013, pp. 969-979.
http://dx.doi.org/10.4218/etrij.13.2013.0079
Keywords: 3D avatar, spectrum, color, characterization, makeup, cosmetics.
Manuscript received Apr. 15, 2013; accepted Sept. 23, 2013.
Abstract

      Various simulation applications for hair, clothing, and makeup of a 3D avatar can provide more useful information to users before they select a hairstyle, clothes, or cosmetics. To enhance their reality, the shapes, textures, and colors of the avatars should be similar to those found in the real world. For a more realistic 3D avatar color reproduction, this paper proposes a spectrum-based color reproduction algorithm and color management process with respect to the implementation of the algorithm. First, a makeup color reproduction model is estimated by analyzing the measured spectral reflectance of the skin samples before and after applying the makeup. To implement the model for a makeup simulation system, the color management process controls all color information of the 3D facial avatar during the 3D scanning, modeling, and rendering stages. During 3D scanning with a multi-camera system, spectrum-based camera calibration and characterization are performed to estimate the spectrum data. During the virtual makeup process, the spectrum data of the 3D facial avatar is modified based on the makeup color reproduction model. Finally, during 3D rendering, the estimated spectrum is converted into RGB data through gamut mapping and display characterization.
Authors

      In-Su Jang
      ETRI
      jef1015@etri.re.kr
      Jae Woo Kim
      ETRI
      jae_kim@etri.re.kr
      Ju-Yeon You
      ETRI
      jyyou@etri.re.kr
      Jin Seo Kim
      ETRI
      kjseo@etri.re.kr
I. Introduction

3D scanning, modeling, and rendering processes are needed to create 3D facial avatar content. A 3D facial avatar model is generally obtained by estimating the geometric information of the object using a multi-camera system with two or more digital cameras and mapping it into the virtual world. To represent the 3D facial avatar model, a virtual scene is rendered and projected onto a display considering the viewpoints of the cameras, the lighting environment, the geometric position of the object, and so on. In these sequential processes, RGB data is used to represent the color information of the 3D facial avatar. However, for a realistic makeup simulation on a 3D facial avatar, RGB data is insufficient. RGB data representing the color of an object includes the characteristics of a particular illuminant: the light from the illuminant is reflected by the object, and the reflected light is perceived in the human visual system as the color of the object. Therefore, instead of RGB data, the intrinsic characteristics of the object are needed to reproduce the makeup color generated by mixing the multiple color characteristics of skin, cosmetics, and illuminant, because RGB data from the reflected light includes the illuminant component [1]-[3].

To consider the intrinsic characteristics of the object and the illuminant, a spectrum-based color reproduction process is proposed. Both the light from the illuminant and the reflectance of the object can be defined in the spectral domain. Thus, if the spectral reflectance of the object is obtained, real color reproduction is possible under any viewing condition. Herein, we focus on the real color reproduction of a cosmetic foundation depending on the viewing condition. The skin color before and after makeup with a cosmetic foundation is measured and analyzed, along with an artificial skin sample, using a gonio-spectrophotometer. The authors in [4]-[6] proposed a transition model of the skin spectrum data and a simulated model, but they used only one skin spectrum dataset and one cosmetic foundation. Although the simulated color is similar to the skin color with foundation makeup, it is difficult to apply to a real makeup simulation process based on 2D imaging or a 3D rendering process because the skin cannot be represented by a single color dataset. Moreover, the characteristics of various skin types should be considered because makeup colors using the same cosmetic can appear different depending on the skin color or skin type.

To apply the spectrum-based color reproduction method to a makeup color simulation using a 3D facial avatar, the color characteristics of the imaging devices, such as digital cameras and displays, should be considered. Conventional makeup applications generally operate on 2D images, although some 3D models have tried to reproduce colors similar to real cosmetics without considering the color characteristics of the imaging devices [7]-[9]. They simply capture the face and generate a 3D avatar using the captured images. The makeup simulation process then changes the color of a particular area of the 3D avatar without considering the color characteristics of the camera, monitor, and cosmetic. Their simulation results look plausible but are not the same as real makeup. In other words, the reproduced skin color of a 3D avatar on the display before and after makeup is applied differs from that of real makeup, and the color of the cosmetic applied to the skin also does not match the real color, since the color characteristics of the imaging devices are not considered.

The color characteristics of imaging devices differ based on the type, model, manufacturer, and so on. Display devices, in particular, can show different results depending on the time and the content displayed. In such environments, images captured by a digital camera should be mapped onto a 3D facial avatar with the same color appearance as that of the real object [10]. To solve this problem, color management should be defined throughout all processes. Various studies on color management have been carried out to reproduce real or matching colors for homogeneous or heterogeneous imaging devices, such as cameras, monitors, printers, and projectors. The International Color Consortium (ICC) was established to promote the use and adoption of open, vendor-neutral, cross-platform color management systems [11], [12], and it encourages vendors to support the ICC profile format and the workflows required to use ICC profiles. The ICC device profile has color information in a CIEXYZ or CIELAB color space, which, unlike RGB and CMYK color spaces, is device independent [13]. This profile also includes the device color gamut. Thus, it is possible to find the RGB color data of the target device that is similar to that of the reference device. The RGB color data of an input device is converted into other RGB data for an output device by the ICC profile in the form of a lookup table, and therefore color matching between one device and another can be carried out very quickly. However, the spectrum data should be used instead of CIEXYZ or CIELAB in the makeup color reproduction process. Thus, the color management process should be performed based on the spectrum data.

      Consequently, in this paper, the spectrum-based makeup color reproduction algorithm and the color management process with respect to the implementation of the algorithm are described. To reproduce the makeup color, the spectral characteristic models of cosmetics are estimated through a statistical analysis of the measured spectral reflectance for the skin samples before and after makeup. The spectral characteristic model reflects the effects from the tools used to apply cosmetics, such as a brush and sponge. The spectrum-based color management processes for a realistic makeup simulation are also proposed for 3D facial avatar scanning, modeling, and rendering. The spectral camera characterization is used to estimate the spectrum information of the avatar from the images captured by the 3D scanning system. By considering the monitor’s color characteristics based on CIEXYZ, the spectrum data is converted into RGB data and displayed on the monitor during the 3D rendering process.

      II. Makeup Color Reproduction

      The makeup color reproduction methods of a 3D facial avatar can be classified into two approaches: texture color conversion based on the face detection of the 3D facial avatar and a real-time makeup simulation using virtual makeup tools. Most conventional Web-based methods belong to the former approach because the makeup color reproduction is simple if the facial mask is extracted successfully by the face detection algorithm. However, it is not easy to apply a personal makeup skill to the makeup color simulation. The latter approach should support virtual makeup tools and the corresponding visual effects. A brush stroke, for example, might be represented on the 3D facial avatar by the operator or user, and the resulting makeup might be quite different based on the user’s makeup skill and experience. Because the real-time processing of makeup color rendering must be guaranteed for the simulation to be more realistic, we focus on the development of a real-time makeup simulation.

      1. Spectral Characteristic of Cosmetics

      The cosmetics used in this paper can be classified into eleven types: foundation, concealer, powder, eyebrow, shadow, eyeliner, lipstick, blusher, highlight, shading, and lining color. Each cosmetic has a different physical form and chemical characteristic, such as emulsion, powder, stick, oil, paste, soap, solution, and aerosol. The corresponding makeup tools are also varied, and the makeup effects are different by applying these tools to the skin. Therefore, the makeup effects or makeup colors cannot be represented through a simple mathematical model in an RGB color space. A statistical analysis and modeling of the characteristics of the cosmetics are needed to represent the various makeup colors correctly.

Fig. 1. Measuring process for four skin regions.

Commercial cosmetics are classified into the eleven types and sampled to analyze their spectral characteristics; 164 cosmetics are thereby obtained. The sample cosmetics are measured using spectrophotometers (CM-512m3A and CM-3600A). Two types of measurement are performed: first, the cosmetics are measured directly to reproduce their colors on a virtual palette of the 3D makeup simulation system, and, second, the skin samples are measured before and after the makeup is applied to analyze the spectral characteristics of the cosmetics.

      Figure 1 shows a visual example of the measuring process with a contact measurement device on a tester’s face. Four skin sample regions (head, eye, cheek, and mouth) are selected, and 120 skin samples are measured for 30 testers.

Figure 2 shows the measured data before and after the makeup is applied. The black line in the figure denotes the spectral data of a cosmetic that is measured directly. After makeup, the spectral reflectance decreases in the low-wavelength region, increases in the high-wavelength region, and is equalized to some degree over the whole wavelength range, although the rate of change differs among the skin samples. In addition, the spectral distributions after the makeup is applied are similar to the spectral distribution of the cosmetic.

Fig. 2. Measured spectrum data of sample skins (a) before and (b) after makeup is applied.

Fig. 3. Measured CIELAB color data of sample skins before and after makeup in (a) L*-a*, (b) L*-b*, and (c) a*-b* spaces.

In contrast, Fig. 3 represents the change of color information before and after makeup is applied in the CIELAB color space. The red dot denotes the color information of the cosmetics, and the arrows denote the transition of color before and after makeup is applied in the L*-a*, L*-b*, and a*-b* color spaces, where L* is the lightness and a* and b* are the two color coordinates. Compared with the spectral distributions shown in Fig. 2, the data transitions are so irregular that it is hard to estimate the color transition correctly. We can therefore conclude that spectrum data, rather than CIELAB color data, should be used to represent the color information for an accurate color reproduction in makeup simulation applications.

      2. Makeup Color Reproduction Model

The transition of the spectral distribution for 20 skin samples per cosmetic is observed to estimate the conversion model. Figure 4 shows the transition of spectral distributions for cosmetic foundations 1 and 2. The red and blue colors denote the spectral distributions of two skin samples. The dotted lines denote the original skin spectral distributions, whereas the solid lines denote the skin spectral distributions after the makeup is applied. The trends of the spectral transition differ for each cosmetic. For foundation 1, the skin spectrum data from 450 nm to 550 nm is decreased, whereas the other parts are retained. This means that foundation 1 reduces the bluish and greenish components of the skin, thereby emphasizing the reddish component; the skin after makeup is applied is changed into a reddish or brownish color. For foundation 2, the spectrum data of the bluish and greenish parts around 430 nm and 550 nm, respectively, are increased. This means that foundation 2 reduces the reddish component of the skin. Therefore, the makeup color reproduction model should reflect the transition of the skin spectrum caused by each characteristic of the cosmetics, because various approaches can reproduce a certain color, such as increasing the part of the spectrum corresponding to the color or decreasing the other parts of the spectrum.

Fig. 4. Spectrum transitions for two skin samples before (dotted) and after (solid) makeup with foundations (a) 1 and (b) 2.

Fig. 5. (a) Transition ratio of spectral distributions for ten testers and (b) changes of spectral distributions by stroke steps.

Figure 5(a) shows the transition ratio of the spectral distribution using a cosmetic foundation and the skin samples of ten testers. The transition ratio indicates the rate of change of the skin spectral reflectance before and after the makeup is applied; a ratio of 1.0 means there is no change. The overall shapes of the graphs are similar, although their heights do not match because the skin brightness values of the testers differ. Thus, the transition ratios for each cosmetic are used as parameters of the makeup color reproduction model. Figure 5(b) shows the spectral distributions changed by applying a single cosmetic repeatedly. The black solid line denotes the spectral distribution of the original skin sample, and the black dotted lines denote the spectral distributions after the makeup is applied once, twice, and three times. The red dotted line denotes the saturated spectral distribution after several strokes using makeup tools or fingers. As the number of strokes increases, the spectral distribution rises toward the red line. To capture this nonlinear behavior, we propose the spectral characteristic model defined as follows:

$$ r_e(\lambda) = r_o(\lambda)\,\alpha_c(\lambda)\,\beta_c(s), \qquad (1) $$

where $r_o$ is the base spectral reflectance before the makeup, $r_e$ is the changed spectral reflectance after the makeup, $\alpha_c$ is the spectral transition ratio for cosmetic $c$, and $\beta_c(s)$ is the stroke function with the number of strokes, $s$, for cosmetic $c$. Parameters $\alpha_c$ and $\beta_c$ of the spectral characteristic model are estimated through a statistical approach based on the measured data for both several skin samples and cosmetics. Parameter $\alpha_c$ is related to the characteristics of the cosmetic and varies with the wavelength. Parameter $\beta_c$ is related to the makeup tools or thickness of the makeup layer and varies with the number of strokes. In this paper, visual effects caused by the makeup tools are not considered because the focus is on realistic makeup color reproduction.
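To make the model concrete, the following minimal sketch evaluates (1) per wavelength under one plausible reading of the parameters: the transition ratio $\alpha_c(\lambda)$ scales the bare-skin reflectance, and a saturating stroke function $\beta_c(s)$ interpolates between the bare and fully transitioned spectra. The particular $\alpha$ curve and exponential $\beta$ used here are illustrative placeholders, not the parameters fitted from the measured data.

```python
import numpy as np

# Wavelength grid: 400-700 nm at a 20-nm interval (16 samples; see Sec. III.4).
WL = np.arange(400, 701, 20)

def stroke_function(s, rate=0.6):
    """Illustrative beta_c(s): zero for no strokes, saturating toward 1."""
    return 1.0 - np.exp(-rate * s)

def apply_makeup(r_o, alpha, s):
    """Blend the bare-skin reflectance r_o toward the fully transitioned
    spectrum r_o * alpha according to the stroke function beta_c(s)."""
    return r_o * (1.0 + (alpha - 1.0) * stroke_function(s))

# Toy data: flat skin reflectance and a foundation-like alpha that
# suppresses 450-550 nm (cf. foundation 1 in Fig. 4).
r_skin = np.full(WL.shape, 0.45)
alpha = np.where((WL >= 450) & (WL <= 550), 0.85, 1.0)
r_after = apply_makeup(r_skin, alpha, s=3)
```

Repeated strokes push the result toward the saturated spectrum $r_o\alpha$, mirroring the behavior in Fig. 5(b).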

      III. Camera Characterization

A digital camera's color characteristics are very sensitive and differ based on the viewing environment. Moreover, the 3D scanning system used in this paper has more than two digital cameras, whose color characteristics might differ even if they are the same model. Thus, it is necessary to calibrate the color characteristics of each camera. Next, spectral camera characterization is performed to obtain the spectral camera sensitivities, which form the camera characterization model, along with the conversion matrix between the spectrum and the camera data. The output texture images from the 3D modeling process are converted into a spectral reflectance map by the conversion matrix.

Fig. 6. Captured images in (a) JPEG and (b) raw file formats.

      1. Image File Format Captured by Digital Camera

Commercial digital cameras have unique image processing chips to enhance the quality of the captured image. The captured raw image data is treated and modified in a manner specific to the camera used, and, thus, every manufacturer has its own sense of color. Consequently, for the same scene, if the image processing chip is different, the colors of the output images differ from each other even though the mounted lens and camera settings are the same. In addition, the image processing cannot be estimated, and most image processing methods are nonlinear and complicated. General camera characterization models are linear, based on a conversion matrix, so it is hard to estimate the original camera characteristics once the image processing has been applied. To remove this uncertainty, we use the raw image file, as shown in Fig. 6. The color of the raw image data does not match the real color. However, the accuracy of the camera characterization can be improved because the data is not modified by the image processing chip. Moreover, the color of a raw image can easily be corrected after the camera characterization process.

      2. Color Calibration in Multi-camera System

The digital cameras used in the 3D scanning system should be assembled and configured identically to capture the target object correctly. However, even if the cameras are the same model from the same manufacturer, small differences in their color characteristics arise from the manufacturing processes of the CCD or other optical components. Moreover, the geometric locations of the illuminants and the objects in the 3D scanning system can cause color differences for the same object because it is hard to illuminate the object uniformly. Thus, instead of modifying the geometric locations, color calibration using a color chart is proposed to match the output colors of the cameras. As shown in Fig. 7, the color chart is captured in the 3D scanning system, and the RGB data of each color patch in the captured images is extracted. One of the cameras becomes the reference, and the others are corrected by polynomial regression, as in the following equation:

$$ \begin{bmatrix} R_r \\ G_r \\ B_r \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \end{bmatrix} \begin{bmatrix} R_t \\ G_t \\ B_t \\ 1 \end{bmatrix}, \qquad (2) $$

where $R_r$, $G_r$, and $B_r$ represent the RGB data of the reference, $R_t$, $G_t$, and $B_t$ represent the RGB data of the target, and the $a_{ij}$ are the parameters of the polynomial equation. Increasing the order of the polynomial equation might also increase the accuracy of the estimation. In general, however, the difference in camera color characteristics is relatively small if the same model and setting conditions are used, so a first-order polynomial equation can be used without losing estimation accuracy. The parameters of the polynomial equation can thus be easily obtained through a matrix operation, as sketched below. Using the parameters for each camera, the output image of a camera can then be corrected to match the color of the reference camera.
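A minimal sketch of this fit, assuming (2) in its first-order form and hypothetical patch data in place of the extracted chart values:

```python
import numpy as np

def fit_calibration(rgb_target, rgb_reference):
    """Least-squares fit of the 3x4 parameter matrix in (2);
    inputs are (N, 3) arrays of patch RGB means."""
    X = np.hstack([rgb_target, np.ones((rgb_target.shape[0], 1))])  # (N, 4)
    A, *_ = np.linalg.lstsq(X, rgb_reference, rcond=None)           # (4, 3)
    return A

def apply_calibration(rgb, A):
    return np.hstack([rgb, np.ones((rgb.shape[0], 1))]) @ A

# Hypothetical patch data: 24 chart patches seen by both cameras.
rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, (24, 3))              # reference-camera patch RGB
tgt = 0.97 * ref + rng.normal(0, 1.0, (24, 3))  # slightly mismatched camera
corrected = apply_calibration(tgt, fit_calibration(tgt, ref))
```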

Fig. 7. Acquisition of sample color data for color calibration of multi-camera system.

      3. Spectrum-Based Camera Characterization (SBCC)

Although the colors of the patches captured by the 3D scanning system are matched to each other through the color calibration process, the real color information of the target object cannot be guaranteed by the calibration alone. Spectrum-based camera characterization should be added after the camera calibration so that real color reproduction can be accomplished. The SBCC estimates the spectral reflectance of the target object from the RGB images captured by the digital cameras in the 3D scanning system. The RGB data of the reference camera is used, and the characterization is performed only for the reference camera, because the color characteristics of the other cameras are assumed to be already matched to those of the reference camera by the color calibration process. To obtain the spectral reflectance of the target object, the color characteristics between an input light and the output RGB data should be estimated first. In terms of color management, the digital camera is considered an input device, and, thus, the range of input light is not limited, while the corresponding output RGB data is limited from 0 to 255. In general, it is not efficient to find the input-versus-output relationships of the digital camera for all color data in the characterization model. Thus, the performance of spectrum-based characterization depends heavily on the selection of the sample colors used in the characterization. Since the main objective of the 3D scanning, modeling, and rendering processes proposed in this paper is to reproduce the real color of a human face, skin color samples must be added to enhance the accuracy of the SBCC. The digital SG color checker with 24 basic color patches and 14 additional skin color patches is used to perform the SBCC. The other color patches in the digital SG color checker are not used because we concentrate on skin color reproduction.

      To estimate the spectral reflectance of the target object from the output RGB image, the output data of the digital camera is calculated through

$$ d_k = \int I(\lambda)\, r(\lambda)\, o(\lambda)\, c(\lambda)\, t_k(\lambda)\, d\lambda + \eta, \qquad (3) $$

where $d_k$ is the digital output data of the $k$ channel, $I$ is the light intensity from the illuminant, $r$ is the spectral reflectance of the target object, $o$ is the optical sensitivity of the camera, $c$ is the sensitivity of the CCD or CMOS sensor, $t_k$ is the spectral sensitivity of the color filter, and $\eta$ is noise [3]. It is assumed that the illuminant has a uniform distribution over the target object regardless of the geometric conditions. The parameters for the camera and illuminant, such as the optical sensitivity, the characteristics of the CCD or CMOS, the spectral sensitivity of the color filter, and the spectral distribution of the illuminant, are approximated into the camera sensitivity function, $S$. If the noise factor can be ignored, the equation can be converted into a discrete model with spectrum sampling, which has the following form:

$$ \mathbf{d} = \mathbf{S}\,\mathbf{r}. \qquad (4) $$

Eventually, the spectral reflectance, $\mathbf{r}$, is obtained from both the digital RGB value, $\mathbf{d}$, and the camera sensitivity function, $\mathbf{S}$. Thus, the inverse, $\mathbf{Q}$, of the camera sensitivity function is required to obtain the spectral reflectance, as follows:

$$ \mathbf{r} = \mathbf{Q}\,\mathbf{d}. \qquad (5) $$

      The inverse of the camera sensitivity function can be estimated using the measured spectral reflectance and captured RGB data for 38 sample patches of the color checker by modifying (5) as follows:

$$ \mathbf{Q} = \mathbf{R}\,\mathbf{D}^{+}, \qquad (6) $$

where $\mathbf{R}$ is the matrix of measured spectral reflectances, $\mathbf{D}$ is the matrix of captured RGB data for the 38 sample patches, and $\mathbf{D}^{+}$ is the pseudo-inverse of $\mathbf{D}$.

The singular value decomposition is applied to obtain the inverse matrix in (6). The output RGB data of the digital camera is then converted into the spectral reflectance through (5). Apart from the direct approach proposed in this paper, the camera sensitivity function can generally be estimated using a linear combination of well-known basis functions such as Gaussian and wavelet functions. However, because we focus only on skin colors, this simple and direct approach is both sufficient and efficient.
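A numerical sketch of (4)-(6) with synthetic stand-ins for the measured data; note that recovering a 16-band spectrum from three camera responses is ill-posed, so $\mathbf{Q}$ yields the least-squares reconstruction over the training patches rather than an exact inverse:

```python
import numpy as np

n_wl, n_patches = 16, 38                       # 20-nm sampling, 38 patches
rng = np.random.default_rng(1)
R = rng.uniform(0.05, 0.9, (n_wl, n_patches))  # stand-in measured reflectances
S = rng.uniform(size=(3, n_wl))                # stand-in camera sensitivity
D = S @ R                                      # (4): simulated RGB responses

Q = R @ np.linalg.pinv(D)                      # (6): Q = R D+, via SVD
r_hat = Q @ D[:, 0]                            # (5): reflectance from one RGB
```

Restricting the training patches to skin-like colors, as described above, keeps this least-squares reconstruction accurate in the region that matters for the facial avatar.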

      4. Spectral Reflectance Data for Creating Texture Map

After the 3D modeling process is completed, the texture map, which is an RGB image, is converted into spectral reflectance data by the spectrum-based camera characterization process. The resolution of the spectral reflectance data is consistent with that of the texture map. In general, the spectrum data ranges from 400 nm to 700 nm, the visible spectral region. The resolution of measuring instruments varies from 1 nm to 10 nm, and sampling at 10 nm is most commonly used except in particular cases. A higher resolution yields more accurate data; however, the amount of data also grows quickly with the resolution, which can affect the processing time for 3D rendering.

Fig. 8. Color differences based on sampling intervals of spectrum data.

The proper sampling interval for minimizing errors is determined by computing the color differences between the measured and sampled data in the CIELAB color space. The spectral reflectance of real skin samples is measured and then sampled at intervals of 10 nm, 20 nm, 30 nm, 40 nm, and 50 nm. The sampled reflectance is converted into the CIELAB color space using the CIE1964 standard observer and D65 illuminant and compared with the CIELAB value computed from the measured reflectance without sampling. As shown in Fig. 8, the color difference is close to 1 at 10 nm and 20 nm. In the other cases, the color difference increases because of the loss of spectrum data caused by the larger sampling interval.
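A sketch of this test, assuming a helper `spectrum_to_lab` that chains (7) and (8) under D65 and the CIE1964 observer (the CIE tables themselves are not included here):

```python
import numpy as np

def sampling_delta_e(wl, r, step_nm, spectrum_to_lab):
    """wl: 1-nm wavelength grid; r: reflectance on wl; step_nm: interval."""
    idx = np.arange(0, wl.size, step_nm)       # keep every step_nm-th sample
    r_coarse = np.interp(wl, wl[idx], r[idx])  # reconstruct on the full grid
    return float(np.linalg.norm(spectrum_to_lab(wl, r)
                                - spectrum_to_lab(wl, r_coarse)))
```

Per Fig. 8, this difference stays near 1 for 10-nm and 20-nm steps and grows for coarser sampling.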

Consequently, the RGB texture map created by the 3D modeling process is converted into spectral reflectance data through the spectrum-based camera characterization and stored in a spectrum map. Namely, each pixel in the RGB texture map is replaced with the sampled spectral reflectance data in the spectrum map at a 20-nm sampling interval.

      IV. Display Characterization for Color Rendering

      When the makeup color reproduction process is completed, the texture map of the 3D facial avatar still has the form of spectral data. Therefore, the conversion from spectral data to RGB data is required, and the display characterization plays a role in this conversion. Before the display characterization, the color gamut of the cameras used in the 3D scanning system should be considered so that the color of the object can be accurately reproduced.

      1. Spectrum Data Conversion

      The International Commission on Illumination (CIE) defined the tri-stimulus values, CIEXYZ, in terms of a mathematical integration over the wavelength using the color matching functions. The spectral data in the texture map are converted into a CIEXYZ value as follows:

$$ X = k\!\int I(\lambda)\,\bar{x}(\lambda)\,r(\lambda)\,d\lambda, \quad Y = k\!\int I(\lambda)\,\bar{y}(\lambda)\,r(\lambda)\,d\lambda, \quad Z = k\!\int I(\lambda)\,\bar{z}(\lambda)\,r(\lambda)\,d\lambda, \qquad (7) $$

where $I(\lambda)$ is the relative spectral power distribution of an illuminant, $\bar{x}(\lambda)$, $\bar{y}(\lambda)$, and $\bar{z}(\lambda)$ represent the color matching functions for the CIE1931 or CIE1964 standard observers, $r(\lambda)$ is the spectral reflectance on the texture map, and $k$ is a normalizing factor [13]. If the relative spectral power distribution of the illuminant is changed, the 3D facial avatar can be rendered under the changed illuminant. Because this rendering process for the illuminant is independent of the imaging devices using an RGB color space, the original color characteristics of the 3D facial avatar under the illuminant can be reproduced realistically.
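In discrete form, with all tables sampled on one wavelength grid, (7) reduces to weighted sums; a minimal sketch, assuming the CIE color matching functions and illuminant are loaded from published tables:

```python
import numpy as np

def spectrum_to_xyz(r, illum, xbar, ybar, zbar):
    """r, illum, xbar, ybar, zbar: equal-length arrays on one grid."""
    k = 100.0 / np.sum(illum * ybar)  # normalize: perfect white gives Y = 100
    return k * np.array([np.sum(illum * xbar * r),
                         np.sum(illum * ybar * r),
                         np.sum(illum * zbar * r)])
```

Swapping the `illum` table (for example, D65 for illuminant A) re-renders the avatar under a different light source without touching the stored reflectance map.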

Using only the CIEXYZ value, real color reproduction is possible through the display characterization process. However, if the color gamut of the cameras used in the 3D scanning system does not match the color gamut of the display, color data outside the display gamut is converted into arbitrary values during the display characterization. A gamut mapping process is required to compensate for the difference between the color gamuts. The CIEXYZ color space is neither perceptually uniform nor bounded, and the CIELAB color space is thus used in gamut mapping. For conversion into the CIELAB color space, (8) is used:

$$ L^* = 116\left(\frac{Y}{Y_n}\right)^{1/3} - 16, \quad a^* = 500\left[\left(\frac{X}{X_n}\right)^{1/3} - \left(\frac{Y}{Y_n}\right)^{1/3}\right], \quad b^* = 200\left[\left(\frac{Y}{Y_n}\right)^{1/3} - \left(\frac{Z}{Z_n}\right)^{1/3}\right], \qquad (8) $$

where $X_n$, $Y_n$, and $Z_n$ are the tri-stimulus values for the reference white patch [14]. The spectrum data of the texture map is then converted into the CIELAB color space.
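The conversion of (8) in code form, including the linear segment of the full CIE definition for very dark colors:

```python
import numpy as np

def xyz_to_lab(xyz, white):
    """xyz: CIEXYZ triple; white: reference white (Xn, Yn, Zn)."""
    t = np.asarray(xyz, dtype=float) / np.asarray(white, dtype=float)
    f = np.where(t > (6/29)**3, np.cbrt(t), t / (3 * (6/29)**2) + 4/29)
    return np.array([116 * f[1] - 16,      # L*
                     500 * (f[0] - f[1]),  # a*
                     200 * (f[1] - f[2])]) # b*
```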

      2. Gamut Mapping

To compensate for the gamut difference between two imaging devices, their gamut boundary information is needed. The gamut boundary is the surface determined by the extremes of a color gamut. There are generally two approaches, based on clipping and compression. The former maps all colors outside the gamut onto the surface of the target device gamut. In contrast, the latter compresses all the colors in the reference device gamut into the target device gamut [15]. Thus, if the gamut difference is small, the clipping-based method is appropriate, and vice versa. In the 3D makeup simulation system, because the color obtained by the 3D scanner should be accurately reproduced on the display and the gamut difference between the cameras and the display is relatively small, the clipping-based method is used. The gamut mapping process is performed in the L*-C* plane, and the CIELAB values are converted by

$$ C^* = \sqrt{a^{*2} + b^{*2}}, \qquad h = \tan^{-1}\!\left(\frac{b^*}{a^*}\right), \qquad (9) $$

where $L^*$ indicates the lightness, $C^*$ is the chroma, and $h$ is the hue angle between $a^*$ and $b^*$ in the CIELAB color space [14].

Figure 9 shows the color mapping process in the L*-C* plane. The dotted line denotes the color gamut boundary of the reference device, that is, the camera, and the solid line denotes the color gamut boundary of the target device, that is, the display. Gamut mapping is performed in practice in a 2D plane at a certain hue angle. The mapping direction is generally determined using the lightness value. In this paper, the clipping-based hue-preserving minimum delta E method is used to minimize the color difference [14].
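A minimal sketch of the clipping step, assuming the display gamut boundary at the current hue angle is available as a polyline of (L*, C*) vertices (extracting that boundary from the display model is omitted):

```python
import numpy as np

def clip_min_de(point, boundary):
    """point: out-of-gamut (L*, C*); boundary: (K, 2) vertices on the edge."""
    p = np.asarray(point, dtype=float)
    best, best_d = p, np.inf
    for q0, q1 in zip(boundary[:-1], boundary[1:]):
        seg = q1 - q0
        t = np.clip(np.dot(p - q0, seg) / np.dot(seg, seg), 0.0, 1.0)
        cand = q0 + t * seg            # nearest point on this boundary segment
        d = np.linalg.norm(p - cand)
        if d < best_d:
            best, best_d = cand, d
    return best
```

Because the hue angle is held fixed, the Euclidean distance in the L*-C* plane equals $\Delta E_{ab}$, so the nearest boundary point is also the minimum-delta-E point.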

      3. Display Characterization

General display characterization is applied to convert a CIEXYZ value into RGB data. For the camera characterization, the spectrum data is used rather than CIEXYZ data because the obtained spectrum is needed to reproduce the makeup color under various illuminants. For the display characterization, however, the display is only the output stage of the 3D makeup simulation system, and the conversion from CIEXYZ to RGB is direct, without any further color changes. We therefore use CIEXYZ data instead of spectrum data.

Fig. 9. Clipping-based gamut mapping process between reference (dotted line) and target (solid line) gamuts.

Fig. 10. Forward display characterization process using tone curve model.

General display characterization methods can be classified into two approaches: those using a tone curve model and those using linear interpolation [10]. The gain-offset-gamma (GOG) model and the S-curve model are representative model-based methods; tetrahedral interpolation and piecewise linear interpolation are common methods of the latter type. In the 3D makeup simulation system, the gamut boundary information of the display is obtained through a forward display characterization converting RGB into CIEXYZ, and the mapped CIELAB values are converted into RGB values through a backward characterization converting CIEXYZ into RGB. However, in a characterization using the S-curve model, the backward characterization is not possible because the inverse transform cannot be obtained. In addition, characterization methods using linear interpolation take considerable time, and their performance depends on the sample data. Therefore, GOG-model-based characterization is applied to the 3D makeup simulation system. Figure 10 shows the flow of the display characterization using a tone curve model. The digital input, d, for the RGB channels is linearized using a tone curve model and converted into CIEXYZ values by a 3×3 conversion matrix composed of the CIEXYZ values of the RGB primaries. The backward characterization parameters are easily computed using the inverses of the 3×3 matrix and the tone curve.
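A minimal sketch of the forward and backward GOG characterization; the gains, offsets, gammas, and primary matrix below are hypothetical values, whereas in practice they are fitted to measured channel ramps and the CIEXYZ of the display primaries:

```python
import numpy as np

gain = np.array([0.95, 0.97, 0.96])
offset = np.array([0.05, 0.03, 0.04])
gamma = np.array([2.2, 2.2, 2.2])
M = np.array([[41.2, 35.8, 18.0],   # columns: CIEXYZ of R, G, B primaries
              [21.3, 71.5,  7.2],
              [ 1.9, 11.9, 95.0]])

def forward(d):
    """RGB digital input in [0, 1] -> CIEXYZ."""
    lin = np.clip(gain * d + offset, 0.0, None) ** gamma  # GOG tone curves
    return M @ lin

def backward(xyz):
    """CIEXYZ -> RGB digital input, inverting the matrix and tone curves."""
    lin = np.clip(np.linalg.solve(M, xyz), 0.0, None)
    return np.clip((lin ** (1.0 / gamma) - offset) / gain, 0.0, 1.0)
```

The closed-form inverse is what makes the GOG model preferable here to the S-curve model, whose backward transform is unavailable.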

      V. Experiments

The 3D facial scanning system consists of three DSLR cameras. Basic Macbeth color checkers with 24 color patches are used to perform the color calibration between cameras. LEDs close to the D65 standard are used as the illuminant of the scanning system. Figure 11(a) shows a raw image captured by the 3D scanning system. Its lightness is lower than in the images in Figs. 11(b) and 11(c) because the pixel color data has not been modified by any preprocessing. The RGB values of the captured image are converted into spectrum data and stored in the spectrum map. The spectral characterization using SVD for the DSLRs is performed with 24 color patches. In addition, a Macbeth digital SG color checker with 96 patches is used to evaluate the characterization, and the color differences between the estimated and measured data are computed as follows [13]:

$$ \Delta E_{ab} = \sqrt{(\Delta L^*)^2 + (\Delta a^*)^2 + (\Delta b^*)^2}. \qquad (10) $$
Fig. 11. Scanned images from the color reproduction process: (a) raw image obtained by the scanning system, (b) corrected image after the characterization process, and (c) image after makeup color reproduction.

Fig. 12. RMS errors and $\Delta E_{ab}$ color differences between measured and estimated data for training samples.

Fig. 13. RMS errors and $\Delta E_{ab}$ color differences between measured and estimated data for testing samples.

A color difference calculation is done for 23 of the color patches, and we obtain an average color difference of 1.42. It is generally accepted that if $\Delta E_{ab}$ is under 3, the difference is almost indistinguishable. Figure 11(b) shows the converted image obtained by applying both the spectral characterization and the color conversion from the spectrum into RGB.

      Next, the proposed makeup color reproduction model using spectrum data is tested for the sample skin and cosmetics. To estimate the parameters of the makeup color reproduction model, the spectrum of the skin of 20 subjects and the spectrum of 12 cosmetics, such as foundation, blusher, and lipstick, are used. The subjects include 10 males and 10 females ranging in age from 20 years old to 40 years old. The spectrum data is measured using a spectroradiometer, and the spectrum error between the measured and estimated data is computed as follows [13]:

$$ \mathrm{RMS} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[r(\lambda_i) - \hat{r}(\lambda_i)\right]^2}, \qquad (11) $$

where $n$ is the number of sampled wavelengths, and $r(\lambda_i)$ and $\hat{r}(\lambda_i)$ are the measured and estimated reflectances, respectively. In these experiments, 16 samples per color at a 20-nm interval are used. Figure 12 shows the RMS errors for half of the training data. The average RMS error is 0.0177, the standard deviation is 0.0097, and the maximum error is 0.0396. Two cosmetic foundations and ten subjects not used in the training process are used to evaluate the performance of the makeup color reproduction model after training. The results in Fig. 13 show that the average RMS error is 0.0307, the standard deviation is 0.0069, and the maximum error is 0.0412. The RMS error increases, but the standard deviation and maximum error decrease or are similar to the results for the training data. Figure 11(c) shows the resulting image after a makeup color reproduction using foundation and lipstick.
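The error metric itself is straightforward; a sketch of (11):

```python
import numpy as np

def rms_error(r_measured, r_estimated):
    """RMS spectral error of (11) over the n sampled wavelengths."""
    r_m, r_e = np.asarray(r_measured), np.asarray(r_estimated)
    return float(np.sqrt(np.mean((r_m - r_e) ** 2)))
```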

Table 1. Display characterization errors for several displays in CIELAB color space.

After the spectrum-based makeup color reproduction is completed, the gamut mapping and display characterization are performed sequentially so that the resulting 3D facial avatar appears as close as possible to an actual face viewed by human eyes. Five displays are used to show the 3D facial avatar after the makeup is applied: two LCD tablets equipped with a stylus pen are used as the main displays, and two notebooks and a smart pad are used as mobile displays. The display characterization with the GOG model and an alternate GOG model is performed for these displays. Table 1 shows the $\Delta E_{ab}$ color differences between the measured and estimated data for 216 sample patches. All of the displays have $\Delta E_{ab}$ color differences under 3.0 except for notebook 1, whose color signals are over-controlled by the manufacturer to enhance the image quality. From these results, we conclude that most of the displays used in this paper are appropriate for the makeup simulation system.

Fig. 14. Averaged color difference in CIELAB color space for 20 cosmetics.

Fig. 15. Color management process in makeup simulation system.

Fig. 16. Makeup color reproduction results on 3D makeup simulation system.

The performance of the whole color reproduction process of the 3D makeup simulation system, including all of the above-mentioned processes, is evaluated for ten subjects and twenty cosmetics. The estimated color data after applying the makeup color reproduction process is compared with the data measured after applying real makeup; the results are shown in Fig. 14.

      The average color difference is 4.91, the standard deviation is 1.29, and the maximum difference is 7.64. The color difference includes errors for camera characterization, makeup color reproduction, and display characterization. Generally, a color difference value of 3.0 is a perceptibility threshold of complex images [16].

Finally, the color management method is applied to the 3D makeup simulation system, as shown in Fig. 15. The 3D scanning and modeling module is preprocessed with the color calibration and the spectrum-based camera characterization process. A texture image is obtained from the generated 3D facial model using the images captured by the scanning system. A spectrum map for the texture image is then extracted and modified by the makeup color reproduction process. Through color space conversion, gamut mapping, and display characterization, the final modified texture image is generated and applied to the 3D facial avatar model, guaranteeing a realistic color appearance. Figure 16 shows an example makeup simulation result applied to a 3D facial avatar of a traditional musical theater character.

      VI. Conclusion

A makeup color reproduction method and a color management process for its implementation were proposed to reproduce realistic makeup colors for a 3D facial avatar. The skin spectrum data of a human face was estimated through the spectrum-based characterization of the digital cameras used in the 3D scanning system. Based on the estimated spectrum data for the skin and the measured spectrum data for the cosmetics, the real makeup color can be reproduced and displayed through the gamut mapping and display characterization processes. All processes were applied to the 3D makeup simulation system, the performance was evaluated using sample data, and acceptable results were obtained. Because most cosmetics contain a pearl component, future research will address pearl-effect modeling for makeup color reproduction.

References

[1] A. Mansouri et al., "Representation and Estimation of Spectral Reflectances Using Projection on PCA and Wavelet Bases," Color Research Appl., vol. 33, no. 6, 2008, pp. 485-493.

[2] A. Mansouri, F. Marzani, and P. Gouton, "Neural Networks in Cascade Schemes for Spectral Reflectance Reconstruction," IEEE Int. Conf. Image Process., vol. II, Genova, Italy, 2005, pp. 718-721.

[3] J.Y. Hardeberg, Acquisition and Reproduction of Color Images: Colorimetric and Multispectral Approaches, doctoral dissertation, École Nationale Supérieure des Télécommunications, France, 2001.

[4] S. Tominaga and Y. Moriuchi, "Principal Component Analysis-Based Reflectance Analysis/Synthesis of Cosmetic Foundation," J. Imaging Sci. Technol., vol. 53, no. 6, 2009, pp. 060403_1-060403_8.

[5] S. Tominaga and Y. Moriuchi, "PCA-Based Reflectance Analysis/Synthesis of Cosmetic Foundation," 16th Color Imaging Conf., 2008, pp. 195-198.

[6] N. Ikeda et al., "Reflection Measurement and Visual Evaluation of the Luminosity of Skin Coated with Powder Foundation," Color Research Appl., early view (online version), 2012.

[7] A.M. Rahman et al., "Augmented Rendering of Makeup Features in a Smart Interactive Mirror System for Decision Support in Cosmetic Products Selection," 14th IEEE/ACM Symp. Distrib. Simulation Real-Time Appl., 2010, pp. 203-206.

[8] K. Scherbaum et al., "Computer-Suggested Facial Makeup," Eurograph., vol. 30, no. 2, 2011, pp. 485-492.

[9] N. Tsumura et al., "Image-Based Skin Color and Texture Analysis/Synthesis by Extracting Hemoglobin and Melanin Information in the Skin," ACM Trans. Graph., vol. 22, 2003, pp. 770-779.

[10] I. Jang et al., "User-Configured Monitor-to-Printer Color Reproduction," J. Imaging Sci. Technol., vol. 55, no. 2, 2011, pp. 020506_1-020506_10.

[11] H. Zeng and M. Nielsen, "Color Transformation Accuracy and Efficiency in ICC Color Management," Proc. IS&T/SID 9th Color Imaging Conf., Springfield, VA, USA, 2001, pp. 224-232.

[12] International Color Consortium, ICC specification. http://www.color.org/specification/ICC1v43_2010-12.pdf

[13] S. Westland, C. Ripamonti, and V. Cheung, Computational Colour Science Using MATLAB, 2nd ed., Chichester: John Wiley & Sons, Ltd., 2012.

[14] J. Morovic, Color Gamut Mapping, Chichester: John Wiley & Sons, Ltd., 2008.

[15] B. Kang et al., "Gamut Compression and Extension Algorithms Based on Observer Experimental Data," ETRI J., vol. 25, no. 3, June 2003, pp. 156-170.

[16] C. Sano et al., "Colour Difference for Complex Images," 11th Color Imaging Conf., 2003, pp. 121-125.
