The Jan-26-2007 posting, Pixel Classification Project, generated quite a response (and some confusion!). Having received a number of questions and comments, both in the Comments section and via e-mail, I answer them in the sections below. Many thanks for your (collective) interest!
Details About The Pixel Classifier's Operation
1. The pixel classifier assesses individual pixels, not entire images: it is applied separately to every pixel within a subject image. The fact that the training images were composed entirely of pixels from one class or the other was merely a logistical convenience, since the analyst did not have to label areas of the training images by class. Ultimately, the pixel classifier operates on a small window of pixels and predicts (the probability of) the class of the pixel at the center of the window. This is why the new images (visible near the end of the original posting) are shaded in: the classifier is scanned over the entire image, evaluating each pixel separately.
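The scanning process described above can be sketched in a few lines. This is not the original MATLAB code (which I can supply separately); it is a minimal Python illustration, with a toy stand-in classifier based on window brightness rather than the actual trained model:

```python
def scan_classifier(image, classify_window, half=2):
    """Apply a window-based pixel classifier to every interior pixel.

    image:            2-D list of pixel values (a toy grayscale image here).
    classify_window:  function mapping a (2*half+1)-square window of pixels
                      to a class probability for the window's center pixel.
    Returns a 2-D map of probabilities (edges are left at 0.0).
    """
    h, w = len(image), len(image[0])
    prob = [[0.0] * w for _ in range(h)]
    for r in range(half, h - half):
        for c in range(half, w - half):
            window = [row[c - half:c + half + 1]
                      for row in image[r - half:r + half + 1]]
            prob[r][c] = classify_window(window)
    return prob

def toy_classifier(window):
    # Stand-in for the real model: probability rises with mean brightness.
    vals = [v for row in window for v in row]
    return sum(vals) / (len(vals) * 255.0)
```

The shaded output images in the original posting are exactly such probability maps, rendered as brightness.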
2. While there is obviously a required component of craft in constructing the whole thing, the largest direct infusions of human knowledge into the actual classifier come from: (1) the manual labeling of images as "foliage" / "non-foliage", and (2) the construction of the hue2 feature. hue2 is not strictly necessary, and what the classifier knows, it has learned.
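As noted in my reply in the comments below, hue2 is essentially a fuzzy flag for "green-ness". The exact shape of the feature is not given in the posting, so the following is only a plausible guess at such a feature, with an assumed triangular membership function peaked at pure green:

```python
def hue2(hue):
    """Hypothetical reconstruction of the hue2 feature: a fuzzy
    "green-ness" flag on a 0..1 hue circle, peaking at pure green
    (hue = 1/3) and falling off linearly.  The peak location, the
    linear fall-off, and the band half-width below are all assumptions,
    not the original definition.
    """
    width = 1.0 / 6.0  # assumed half-width of the "green" band
    return max(0.0, 1.0 - abs(hue - 1.0 / 3.0) / width)
```

Any function that rises smoothly near green hues and is flat elsewhere would serve the same purpose: it hands the model a ready-made, approximately linear indicator of the single most suggestive color property for foliage.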
3. The hue-saturation-value (HSV) color components are a relatively simple transformation of the red-green-blue (RGB) color values already in the model. Although they do not bring "new" information, they may improve model performance by providing a different representation of the color data.
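For readers who want to see the transformation concretely, Python's standard colorsys module performs the same RGB-to-HSV conversion (on 0..1 values, so 0-255 pixel values must be scaled first):

```python
import colorsys

def rgb_to_hsv(r, g, b):
    """Convert an RGB triple with 0-255 components to (h, s, v),
    each on a 0..1 scale, via the standard-library colorsys module."""
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
```

For example, a "forest green" pixel such as (34, 139, 34) maps to a hue of exactly 1/3, the pure-green point on the hue circle, which is precisely the regularity a feature like hue2 can exploit.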
4. Which of the entire set of 11 input variables is "most important", I cannot say (although I suspect that the classifier is driven by green color and high-activity texture). As mentioned in the original posting, rigorous testing and variable selection were not performed. If I post another image processing article, it will likely be more thorough.
5. The edge detector variables measure contrast across some distance around the center pixel (I can supply the MATLAB code to any interested parties). The 5x5 edge detector summarizes the differences in brightness of pixels on opposite sides of a 5-pixel-by-5-pixel square surrounding the pixel of interest. The other edge detectors consider larger squares about the pixel of interest, so the variously sized edge detectors measure texture at different scales. There is nothing special about these particular edge detectors; I chose them only because they are fast to calculate and I had already built them. I would consider using other image processing operators (the Sobel edge detector, Laws texture features, window brightness standard deviation, etc.) in any future pixel classifier.
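The exact operator is in the MATLAB code rather than in this posting, so the following is only one plausible variant of a 5x5 detector of this kind, sketched in Python: it compares the mean brightness of the columns on the right side of the window against those on the left (a vertical-edge response; a horizontal counterpart would compare top rows against bottom rows):

```python
def edge_5x5(image, r, c):
    """Contrast across a 5x5 neighborhood centered at (r, c): mean
    brightness of the two right-hand columns minus the two left-hand
    columns.  A hypothetical stand-in for the detector described in
    the posting, not the original MATLAB operator."""
    left = [image[i][j] for i in range(r - 2, r + 3) for j in (c - 2, c - 1)]
    right = [image[i][j] for i in range(r - 2, r + 3) for j in (c + 1, c + 2)]
    return sum(right) / len(right) - sum(left) / len(left)
```

Enlarging the window (7x7, 9x9, ...) yields the larger-scale detectors: the same difference-of-sides idea, but responding to coarser texture.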
(Possible) Future Developments
1. This process could indeed be applied to other types of data, such as audio. I was actually thinking about doing this, and given the interest in this posting, will consider either an audio project or a more thorough image processing project for the future (any preferences?). Reader suggestions are very welcome.
2. Detection of more complex items (people, automobiles, etc.) might be possible by combining a number of pixel classifiers. Much research has been undertaken in an effort to solve that problem, and the attempted solutions are too numerous to list here.
3. I strongly encourage readers to experiment in this field. Anyone undertaking such a project should feel free to contact me for any assistance I may be able to provide.
I will take up other potential applications of this idea with individual readers via other channels, although I will say that pixel-level classification is being performed already, both by governments (including the military) and in the private sector.
Some examples of other writing using this general approach follow. The nice thing about this sort of work is that even if one doesn't fully understand the white paper or report, it is always possible to appreciate what the author has done by looking at the pictures.
Machine Learning Applied to Terrain Classification for Autonomous Mobile Robot Navigation, by Rogers and Lookingbill
A Support Vector Machine Approach For Object Based Image Analysis, by Tzotsos
Interactively Training Pixel Classifiers, by Piater, Riseman and Utgoff
Texture classification of logged forests in tropical Africa using machine-learning algorithms, by Chan, Laporte and Defries
A Survey on Pixel-Based Skin Color Detection Techniques, by Vezhnevets, Sazonov and Andreeva
Feature Extraction From Digital Imagery: A Hierarchical Method, by Mangrich, Opitz and Mason
3 comments:
It's interesting that you say in item 4 that you have no idea which variable is most important. I end up in the same situation in many applications (most often in the past it was radar and sonar types of problems) because it really didn't matter. All that mattered was a good prediction.
One further question that wasn't clear to me: is there any other image preprocessing done to normalize the images, such as contrast stretches? Or are all the images taken "as is"? In the latter case, I would assume that problems due to how the images are collected would wash out if sufficient numbers and varieties were included in the training set.
The coefficients for hue2 (basically a fuzzy flag for "green-ness") and at least one of the edge detectors were high, although one would need to normalize the coefficients by the distribution of their corresponding variables for this interpretation to make sense. Like I said, this was just a fun project. If I perform a more sophisticated experiment and still use an "understandable" model (something linear, like logistic regression), I may look into this more closely.
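The normalization mentioned above amounts to computing standardized coefficients: scaling each raw coefficient by the spread of its variable so that magnitudes become comparable. A minimal sketch using the standard library (the function name and data here are illustrative, not from the original model):

```python
import statistics

def standardized(coefs, columns):
    """Scale each raw model coefficient by the (population) standard
    deviation of its variable, so that coefficient magnitudes can be
    compared across variables measured on different scales."""
    return [b * statistics.pstdev(col) for b, col in zip(coefs, columns)]
```

A variable that never varies contributes nothing regardless of its raw coefficient, and this scaling makes that explicit by driving its standardized coefficient to zero.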
I performed no "fixing" of the images for brightness, contrast, etc. Raw red-green-blue values were read off of the images and digested "as-is". Now that you mention it, the "foliage" class images I collected tended to be relatively uniform in their lighting conditions.
Dear sir, I just want to know: is there a MATLAB command that could calculate pixels to get a single integer representation? I'm doing a final-year project based on a stereo vision system, and one of my results should show a graph of pixel disparity level vs. distance. Unfortunately, right now I can't seem to find the correct command to calculate the total number of pixels from an image as one integer. I have tried most of the pixel-related commands I could find in MATLAB but still haven't reached what I want. Could you help me out?