Background Modeling and Foreground Detection

The ability to segment moving foreground objects from the background scene is fundamental to many computer vision applications. A common technique, "background subtraction", has been used for years in many vision systems as a preprocessing step for object detection and tracking. However, most algorithms are susceptible to lighting changes, specular highlights, shadows, quantization noise in compressed video, and similar effects, which cause subsequent processes, such as tracking and recognition, to fail.

This problem is the underlying motivation of our work. We developed a robust, efficiently computed background subtraction algorithm that copes with local illumination changes, such as shadows and highlights, as well as global illumination changes. The algorithm is based on a proposed computational color model that separates brightness from the chromaticity component.
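As a rough illustration of that color model, the following Python/NumPy sketch (not the original implementation) computes, for each pixel, a brightness distortion alpha and a chromaticity distortion CD with respect to a background model given by a per-pixel mean E and standard deviation s; the function name and interface are assumptions made for the example.

import numpy as np

def distortions(I, E, s):
    """Per-pixel brightness distortion alpha and chromaticity distortion CD.

    I : observed RGB frame, float array of shape (H, W, 3)
    E : per-pixel background mean (same shape); s : per-pixel std (same shape).
    """
    s = np.maximum(s, 1e-6)                    # guard against zero variance
    In, En = I / s, E / s                      # normalize each channel by its std
    # alpha is the scale that best fits the observed color to the expected
    # color along the brightness axis (per-pixel least squares).
    alpha = (In * En).sum(axis=-1) / np.maximum((En * En).sum(axis=-1), 1e-6)
    # CD is the residual distance orthogonal to that axis: the chromaticity change.
    CD = np.linalg.norm(In - alpha[..., None] * En, axis=-1)
    return alpha, CD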

The images below show the result of our method. As the person moves into the room, she both obscures the background and casts shadows on the floor and wall. Red pixels depict shaded background pixels, and we can easily see how the shape of the shadow changes as the person moves. Although they are harder to see, green pixels, which depict highlighted background pixels, appear along the edges of the person's body.
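The red/green/foreground labeling described above can be sketched as simple thresholding on the two distortion measures from the previous example; the threshold values tau_cd, a_lo, and a_hi below are placeholders for illustration, not the statistically derived thresholds of the actual algorithm.

import numpy as np

def classify(alpha, CD, tau_cd=4.0, a_lo=0.6, a_hi=1.2):
    """Label pixels: 0 background, 1 shadow, 2 highlight, 3 foreground."""
    labels = np.full(alpha.shape, 3, dtype=np.uint8)   # default: moving foreground
    same_chroma = CD < tau_cd                          # chromaticity close to background
    labels[same_chroma & (alpha < a_lo)] = 1           # darker version -> shadow (red)
    labels[same_chroma & (alpha > a_hi)] = 2           # brighter version -> highlight (green)
    labels[same_chroma & (alpha >= a_lo) & (alpha <= a_hi)] = 0  # unchanged background
    return labels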



The method has proven robust and efficient, and it has been used in many real-time vision systems. Below are some sample applications.

Recently, we have developed another background subtraction algorithm, which employs a vector quantization technique. The motivation behind this approach is to develop a foreground segmentation method for compressed video, where quantization noise makes the distribution of pixel values non-smooth and non-deterministic, so parametric modeling such as in the first approach would fail. Our codebook-based background modeling encodes the background scene over a very long period of time with finite memory: it compresses the long history of each background pixel's values into a codebook. Because it embeds the temporal coherence of pixel values, the approach works well on compressed video, especially MPEG-encoded video. The codebook approach also adapts to both slow and fast background changes, such as cloud cover or a new object being deposited into the background scene. The images below show the result of our new codebook-based background subtraction applied to 60 kbit/s MPEG video. (Joint work with David Harwood and Kyungnam Kim.)
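A simplified sketch of the codebook idea follows (again Python/NumPy, not the published implementation). Each pixel keeps a list of codewords, each storing a color mean, a brightness range, and a frequency count; an incoming pixel that is close to a codeword in chromaticity and falls inside its brightness band updates that codeword, otherwise a new codeword is created during training. The simplified brightness test and the parameter values eps and beta are assumptions.

import numpy as np

def color_dist(pixel, cw_rgb):
    """Brightness-invariant distance: how far `pixel` lies from the line
    through the origin and the codeword color `cw_rgb` in RGB space."""
    p2 = float(np.dot(pixel, pixel))
    v2 = float(np.dot(cw_rgb, cw_rgb)) + 1e-6
    proj2 = float(np.dot(pixel, cw_rgb)) ** 2 / v2
    return np.sqrt(max(p2 - proj2, 0.0))

def update_codebook(codebook, pixel, eps=10.0, beta=(0.7, 1.3)):
    """Match an RGB `pixel` against this pixel's codebook.

    Each codeword is a list [rgb_mean, I_min, I_max, freq].  A match requires
    a small chromaticity distance and a brightness inside a tolerance band
    (simplified from the published test; eps and beta are assumed values).
    Returns True if the pixel matched an existing codeword (background-like).
    """
    I = float(np.linalg.norm(pixel))
    for cw in codebook:
        rgb, i_lo, i_hi, freq = cw
        if color_dist(pixel, rgb) <= eps and beta[0] * i_lo <= I <= beta[1] * i_hi:
            cw[0] = (freq * rgb + pixel) / (freq + 1)       # running mean of the color
            cw[1], cw[2], cw[3] = min(i_lo, I), max(i_hi, I), freq + 1
            return True
    codebook.append([np.asarray(pixel, dtype=float), I, I, 1])  # start a new codeword
    return False

During detection, a pixel that fails to match any codeword in the trained (and temporally filtered) codebook is labeled foreground.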







Related Publications
  1. A Robust Background Subtraction and Shadow Detection
    T. Horprasert, D. Harwood, and L.S. Davis
    Proc. ACCV'2000, Taipei, Taiwan, January 2000

  2. A Statistical Approach for Real-time Robust Background Subtraction and Shadow Detection
    T. Horprasert, D. Harwood, and L.S. Davis
    Proc. IEEE ICCV'99 FRAME-RATE Workshop, Kerkyra, Greece, September 1999