Image segmentation

Segmentation is a method of separating an image into individual parts (segments) corresponding to the objects present in the image. The number of such objects is often so high that individual objects inevitably overlap, and an image of this type cannot be segmented completely. We then try to obtain at least a partial segmentation. The procedure depends mainly on the type of image and the objects present in it.

Segmentation is split according to technique used:

  • Global - we use global features of an image
  • Local - we use local features of an image

Threshold segmentation

First, let us create the histogram of the image f(x,y). An image histogram is a function assigning to each level of brightness (or color) the frequency with which that level occurs in the image; it therefore gives global information about the image. We then choose a threshold level T that divides the two maxima of the histogram. Every image point for which f(x,y) > T is a point of an object, and the remaining points are points of the background. Because a histogram is usually a rather complex function, finding a threshold level is not easy; the search is based on testing the significance of local minima and maxima. When searching for several threshold levels we set brightness intervals I1, ..., In. The outcome is then an image with n brightness levels.
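The histogram and the thresholding step can be sketched as follows; the tiny 2x3 test image and the grey range 0..255 are illustrative assumptions, not data from the text:

```python
def histogram(image, levels=256):
    """Count how often each brightness level occurs in the image."""
    h = [0] * levels
    for row in image:
        for value in row:
            h[value] += 1
    return h

def threshold(image, t):
    """Points with f(x,y) > T become object points (1), the rest background (0)."""
    return [[1 if value > t else 0 for value in row] for row in image]

# Hypothetical 2x3 image: dark background points and bright object points.
image = [[10, 12, 200],
         [11, 210, 205]]
h = histogram(image)
seg = threshold(image, 100)
```

With T = 100 lying between the two brightness clusters, the three bright points are marked as object and the three dark ones as background.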

Global thresholding

In global thresholding we look for a frontier, or limit, between objects and background: we split the histogram into two separate intervals determined by the threshold T, where the interval B1 contains the background points and B2 the object points.
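The text only says that the search for T tests local minima and maxima; one common concrete choice, shown here as an assumption rather than the author's procedure, is the iterative mean (isodata) method, which places T halfway between the mean brightness of the two intervals:

```python
def find_threshold(hist, eps=0.5):
    """Iterative (isodata) threshold selection - one possible method,
    not the one from the text. hist[l] is the count of level l."""
    levels = range(len(hist))
    total = sum(l * hist[l] for l in levels)
    count = sum(hist)
    t = total / count                      # start from the mean brightness
    while True:
        low = sum(hist[l] for l in levels if l <= t)
        low_sum = sum(l * hist[l] for l in levels if l <= t)
        mean_low = low_sum / max(low, 1)               # mean of background B1
        mean_high = (total - low_sum) / max(count - low, 1)  # mean of objects B2
        new_t = (mean_low + mean_high) / 2
        if abs(new_t - t) < eps:
            return new_t
        t = new_t

h = [0] * 256
h[10], h[200] = 50, 50          # two well separated brightness clusters
t = find_threshold(h)
```

For this bimodal histogram the method settles between the two peaks, roughly at level 105.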

Applet: Histogram of global thresholding

Local thresholding

One option is to split the image into parts and threshold each part separately. This method is called local thresholding.
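A minimal sketch of this idea: split the image into square tiles and threshold each tile with its own T. Using the tile mean as the per-tile threshold is an illustrative choice, not something the text prescribes:

```python
def local_threshold(image, block):
    """Threshold each block x block tile separately, using the tile mean as T."""
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    for r0 in range(0, rows, block):
        for c0 in range(0, cols, block):
            tile = [image[r][c]
                    for r in range(r0, min(r0 + block, rows))
                    for c in range(c0, min(c0 + block, cols))]
            t = sum(tile) / len(tile)      # per-tile threshold: the tile mean
            for r in range(r0, min(r0 + block, rows)):
                for c in range(c0, min(c0 + block, cols)):
                    out[r][c] = 1 if image[r][c] > t else 0
    return out
```

This adapts to uneven lighting: a tile that is dark overall still separates its brighter points from its darker ones.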

Image frontier and contour

By the frontier of an area we understand the set of points whose every neighbourhood contains points of the area as well as of its complement. In a discrete image we call it a frontier or contour. Just as we defined 4-contiguous and 8-contiguous areas when rasterizing polygons, here we analogously define a 4-contour and an 8-contour: the point P is a 4-contour (8-contour) point of an area if at least one of its 4-adjacent (8-adjacent) points does not belong to the given area.

Simple algorithm of a frontier

For every point of an area we test whether it is a frontier point; for this it is enough to verify whether any of its 4-adjacent points belongs to the complement of the area. By 'area' we understand a two-dimensional field, or matrix, A[x,y], where x and y are the dimensions of the image. A[s,t] is a point of this matrix for s = 1,...,x and t = 1,...,y, with the value 1 if it is internal and 0 if it is external.
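The test above can be sketched as follows; 0-based indexing is used instead of the text's 1-based indices, and points on the image border are treated as having external neighbours (an assumption the text does not spell out):

```python
def frontier(A):
    """Return a matrix marking the 4-contour: area points (value 1 in A)
    with at least one 4-adjacent point outside the area."""
    x, y = len(A), len(A[0])
    out = [[0] * y for _ in range(x)]
    for s in range(x):
        for t in range(y):
            if A[s][t] != 1:
                continue
            for ds, dt in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ns, nt = s + ds, t + dt
                # outside the image or in the complement -> frontier point
                if not (0 <= ns < x and 0 <= nt < y) or A[ns][nt] == 0:
                    out[s][t] = 1
                    break
    return out
```

For a solid 3x3 square only the central point has all four neighbours inside the area, so every other point is marked as frontier.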

Applet: Simple algorithm of a frontier of 4- and 8-contours; interaction through the mouse enables the addition or subtraction of points of an area

Simple algorithm of a contour

Here we proceed similarly as in the case of polygons. Assume that the area is given by a matrix (field) with dimensions m x n. First, we find a frontier point: starting from the top line we scan the matrix elements from left to right until we encounter the first non-zero point. This point is also the first point of the contour.
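The search for the starting point can be sketched as:

```python
def first_contour_point(A):
    """Scan rows from the top, each row left to right, and return the
    indices of the first non-zero point - the first contour point."""
    for s, row in enumerate(A):
        for t, value in enumerate(row):
            if value != 0:
                return (s, t)
    return None                 # empty area: no contour point exists
```

The tracing of the rest of the contour (walking along neighbours from this starting point) is not shown here.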

Scan algorithm of a contour

Up to now we have assumed that the whole area is stored in a two-dimensional field: to search for a contour or frontier with the preceding methods, we have to read the whole area into memory. But if we want to minimize memory usage, or if memory capacity is lacking (recall the statement attributed to Bill Gates that 640 KB ought to be enough for everybody), we can use the scan algorithm, where it is enough to keep only two or three lines in memory at a time. This algorithm is explained in [1].

Brightness correction

Through brightness correction we try to achieve an additional modification of the image. When an image is captured through optical equipment, non-uniform light diffusion occurs, resulting in an unevenly lit picture. With this method we try to find for every image point f(x,y) a correction coefficient k(x,y); the corrected point is then g(x,y) = f(x,y) * k(x,y). Our goal is to minimize the difference between the original scene and the resulting picture. The correction function k(x,y) is determined as follows: we take an image of a surface with known constant brightness c and denote the captured image c(x,y). Then we correct the image according to the relations:

    c = k(x,y) * c(x,y)       and           g(x,y) = k(x,y) * f(x,y) = c * f(x,y) / c(x,y)

It can, however, occur that the resulting brightness falls outside the permissible interval. In that case we either cut off the extreme values or shift the brightness scale to move the calculated values into the permissible interval.
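The correction, including clipping to a permissible interval, can be sketched as follows; the interval [0, 255] and rounding to integers are illustrative assumptions:

```python
def correct(f, c_img, c):
    """Brightness correction: k(x,y) = c / c(x,y), so
    g(x,y) = k(x,y) * f(x,y) = c * f(x,y) / c(x,y),
    clipped to the assumed permissible interval [0, 255]."""
    rows, cols = len(f), len(f[0])
    g = [[0] * cols for _ in range(rows)]
    for x in range(rows):
        for y in range(cols):
            v = c * f[x][y] / c_img[x][y]
            g[x][y] = min(255, max(0, round(v)))
    return g
```

A point that the optics recorded too brightly (c(x,y) > c) is darkened, and a point recorded too darkly is brightened.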

Image filtering

The aim of filtering is to remove noise arising during image processing and transmission, to remove sharp transitions, or to emphasize certain features of the image.

In a noisy image we can replace damaged pixels by averaging the adjacent points. Here we exploit the fact that adjacent points usually change only slightly and therefore have similar brightness values.

In processing the frequencies of the image function we distinguish two methods: image smoothing and image focusing. Image smoothing suppresses high frequencies, and with them random noise, but edges and sharp frontiers become blurred. Image focusing, on the other hand, emphasizes higher frequencies, and thereby edges and sharp frontiers, but also amplifies noise.

In modifying an image we use mainly two approaches: spatial (area) and frequency. Spatial approaches work directly with the image or its part f(x,y), which is transformed into the output image g(x,y) according to the formula:

g(x,y) = T[f(x,y)] .

Frequency approaches use convolution of images. 

The convolution of two functions f(x) and g(x) (denoted f(x)*g(x)) is defined by the integral

    (f*g)(x) = ∫ f(a) g(x-a) da ,

where a is the integration variable and the integral is taken over all a.
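A discrete analogue of this definition, as a sketch in pure Python with two finite sequences standing in for the functions:

```python
def convolve(f, g):
    """Discrete convolution: (f*g)[n] = sum over a of f[a] * g[n - a]."""
    n = len(f) + len(g) - 1        # length of the full convolution
    out = [0.0] * n
    for i, fv in enumerate(f):
        for j, gv in enumerate(g):
            out[i + j] += fv * gv  # each pair (i, j) contributes to index i + j
    return out
```

Convolving with the kernel [1, 1], for instance, sums each pair of neighbouring samples.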

The resulting image g(x,y) is created through the convolution of f(x,y) and k(x,y), and it holds that

    G(u,v) = K(u,v) * F(u,v)

where G, K and F are the Fourier transforms of the functions g, k, and f. The resulting image is then calculated as

    g(x,y) = F⁻¹(K(u,v) * F(u,v)) ,

where F⁻¹ denotes the inverse Fourier transform.

Smoothing an image

Smoothing serves for removing noise. Common smoothing methods are averaging, median filtration, and low-pass filtering.

Averaging

In averaging we filter the image by locally averaging over the surrounding O of the point (x,y):

    g(x,y) = (1/M) * Σ f(i,j) ,   summed over (i,j) in O,

where M is the number of points of the surrounding O.
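A minimal sketch of this, assuming a 3x3 surrounding (M = 9); copying border points unchanged is an illustrative simplification, not part of the formula:

```python
def smooth(f):
    """Replace each interior point by the average of its 3x3 surrounding O."""
    rows, cols = len(f), len(f[0])
    g = [row[:] for row in f]          # border points are copied unchanged
    for x in range(1, rows - 1):
        for y in range(1, cols - 1):
            s = sum(f[x + dx][y + dy] for dx in (-1, 0, 1) for dy in (-1, 0, 1))
            g[x][y] = s / 9            # M = 9 points in the surrounding
    return g
```

A single noisy pixel surrounded by similar values is pulled back toward its neighbours.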

Focusing of an image

By an edge in an image we understand a set of image points in whose vicinity the brightness changes significantly [1]. Focusing means that we try to emphasize such edges.
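The text does not fix a particular operator; one common choice, shown here as an assumption, is to subtract the discrete 4-neighbour Laplacian, which is large exactly where the brightness changes sharply:

```python
def focus(f, weight=1.0):
    """Sharpen by subtracting the 4-neighbour Laplacian:
    lap = f[x+1][y] + f[x-1][y] + f[x][y+1] + f[x][y-1] - 4*f[x][y].
    Border points are copied unchanged for simplicity."""
    rows, cols = len(f), len(f[0])
    g = [row[:] for row in f]
    for x in range(1, rows - 1):
        for y in range(1, cols - 1):
            lap = f[x + 1][y] + f[x - 1][y] + f[x][y + 1] + f[x][y - 1] - 4 * f[x][y]
            g[x][y] = f[x][y] - weight * lap
    return g
```

A point brighter than its surroundings has a negative Laplacian, so subtracting it makes the point brighter still; this is how edges get emphasized.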

Skeleton of a set

By this term we understand the line formations which outline a given set while maintaining its geometric essence. There are many algorithms for obtaining a skeleton, and their outputs differ, because there is no exact definition of the 'skeleton' of a set in raster graphics. The general approach is based on gradually diminishing the given set.
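Since the text leaves 'skeleton' without an exact definition, here is one rough sketch among the many possibilities: take the ridge (local maxima) of a 4-connected distance transform of the set. Both the two-pass distance computation and the ridge test are assumptions; the result keeps a few spurious border points that real thinning algorithms would prune:

```python
def distance_transform(A):
    """4-connected distance to the complement, via two raster scans;
    points outside the image count as complement."""
    big = 10 ** 6
    x, y = len(A), len(A[0])
    d = [[0 if A[s][t] == 0 else big for t in range(y)] for s in range(x)]
    for s in range(x):                     # forward pass: look up and left
        for t in range(y):
            if d[s][t]:
                up = d[s - 1][t] if s > 0 else 0
                left = d[s][t - 1] if t > 0 else 0
                d[s][t] = min(d[s][t], min(up, left) + 1)
    for s in range(x - 1, -1, -1):         # backward pass: look down and right
        for t in range(y - 1, -1, -1):
            if d[s][t]:
                down = d[s + 1][t] if s < x - 1 else 0
                right = d[s][t + 1] if t < y - 1 else 0
                d[s][t] = min(d[s][t], min(down, right) + 1)
    return d

def skeleton(A):
    """Keep the set points whose distance value is >= all 4-neighbours:
    a rough ridge of the distance transform."""
    d = distance_transform(A)
    x, y = len(A), len(A[0])
    ridge = set()
    for s in range(x):
        for t in range(y):
            if d[s][t] == 0:
                continue
            nb = [d[ns][nt]
                  for ns, nt in ((s - 1, t), (s + 1, t), (s, t - 1), (s, t + 1))
                  if 0 <= ns < x and 0 <= nt < y]
            if all(d[s][t] >= v for v in nb):
                ridge.add((s, t))
    return ridge
```

For a solid 3x5 rectangle the ridge runs along the middle row, which matches the intuition of a skeleton preserving the geometric essence of the set.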


Applet: Simple algorithm of a skeleton

Copyright (c) 1999-2017 Juraj Štugel. www.netgraphics.sk