ANALYSIS Forum (IPACO) Forum Index

Dedicated to the analysis of alleged UFO photos and videos


Some More about Imagery

New member II


Joined: 21 Aug 2012
Posts: 12
Localisation: Tanzania

Posted: 02/11/2015, 03:39 pm    Post subject: Some More about Imagery

First, a word of clarification. The word “spatial” is often used, in some publications, to refer to imagery taken from space, typically from an Earth observation satellite. That is not what will be discussed here.
In an earlier essay I outlined the radiometric domain of an image, usually quantified, in digital form, as the gray levels (plus color levels in a color image) of the individual pixels (from “picture elements”).
We’ll now move on to look at some basic radiometric enhancement concepts, and also begin considering the information contained within the pixel matrix of rows and columns (typically the “x” and “y” axes), whilst examining some of the terminology and processes involved in their analysis and interpretation.
Firstly, let us recap the diagrammatic illustration of a digital display at the end of the “Radiometric Domain” essay.


Fig.1 Schematic digital array.

         The array can be seen to be arranged as a matrix of pixels (often referred to as a raster display), each with its own digital number (DN), more commonly known as its radiometric, radiance, “brightness” or gray-scale value; this is the figure shown in each square (i.e. pixel) of the diagram. The pixel positions within the matrix are identified by their coordinates in the “x” (column) and “y” (row) values, the imagery convention being (x, y).
        The coordinates number from the top left of the image, the first being (1,1), the next to the right (2,1), and so on. N.B. Confusingly, some graphics conventions start at (0,0).
        The third (z) axis in the digital imagery construct is the brightness value, usually quantified in bits, e.g. 8-bit (256 values); along this axis the values are normally expressed as 0-255.
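As a minimal sketch of this addressing scheme (using NumPy, with made-up gray values, not any figure from the essay), an 8-bit array can be indexed as follows; note that NumPy, like the graphics conventions mentioned above, counts from (0,0):

```python
import numpy as np

# Hypothetical 4x4 8-bit gray-level array (values 0-255), as in Fig. 1.
img = np.array([
    [ 10,  40,  80, 120],
    [ 20,  60, 100, 140],
    [ 30,  70, 110, 150],
    [ 35,  75, 115, 155],
], dtype=np.uint8)

# NumPy indexes as (row, column), i.e. (y, x); the imagery convention (x, y)
# with x = 2 (third column) and y = 1 (second row) therefore reads img[1, 2].
x, y = 2, 1
dn = img[y, x]            # digital number (DN), i.e. the "z" brightness value
print(dn)                 # 100

# An 8-bit quantification allows 2**8 = 256 distinct values.
print(2 ** 8)             # 256
```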
Physical Elements  of the Digital Image.
 With most hand-held cameras an array of small sensors is situated in the “image plane”, which is where the film would be exposed in an analog camera. However, some observation satellites and other families of imaging sensors use somewhat different systems, which are separate cases for interpretation and not covered here.
           This array of sensors, called a Charge Coupled Device or CCD, contains a great number of tiny semiconductors, each equating to a pixel in the output image. The size of the array is often expressed in megapixels (MP, or millions of pixels). An 8 MP digital camera can, theoretically at least, capture twice as much information as a 4 MP camera with an otherwise similar imaging system. When this figure is multiplied by the number of gray levels, digital images often result in very large files, the number of bytes usually being represented in increments of 2¹⁰ (1,024):
            1 Kilobyte (KB) = 1,024 bytes
1 Megabyte (MB) = 1,024 KB
1 Gigabyte (GB) = 1,024 MB
1 Terabyte (TB) = 1,024 GB
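As a rough worked example of how these units combine (a sketch with assumed figures, not any particular camera): an uncompressed 8 MP image with three 8-bit color channels works out to a few tens of megabytes:

```python
# Rough, illustrative storage estimate for an uncompressed 8 MP image
# with three 8-bit channels (R, G, B) -- one byte per channel per pixel.
pixels = 8 * 1024 * 1024            # treating "8 MP" as 8 x 2**20 pixels
bytes_per_pixel = 3
size_bytes = pixels * bytes_per_pixel

size_mb = size_bytes / (1024 ** 2)  # 1 MB = 1,024 KB = 1,048,576 bytes
print(size_mb)                      # 24.0
```

Real camera files are usually smaller than this because of compression (e.g. JPEG), but the arithmetic shows why raw digital imagery grows large so quickly.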
          With the increasing capability of computers to handle such ever growing file sizes, it is now possible to perform quite sophisticated imagery calculations on a PC, which even a large mainframe computer would have struggled with a few years ago.
           As an aside, it should be noted that a CCD which is physically bigger can have bigger semiconductors, which in turn can potentially gather up many more photons and give a better quality pixel result. An analogy of this in analog silver film imaging is the good quality of some  very old images , obtained when both lens and film technology was less sophisticated. The reason for the quality was simply the large size of the photographic plates, and later the film formats. Strategic aerial reconnaissance cameras used the same concept for obtaining high quality imagery of distant targets, often by using film formats  of 9 or even 18 inches – or more, with  appropriate lens systems.
Each semiconductor, in creating its pixel, gathers up photons from a minute square section of the scene viewed through the lens and converts them into a tiny electric charge proportional to the incoming energy hitting it during the brief period of exposure. All of these charges are then, extremely rapidly, stripped off the CCD row by row, in a manner often likened to a line of men operating a fire bucket brigade, and dumped into a file in the storage medium (typically a chip), usually along with additional metadata (i.e. data describing data) supplied by the camera.
          Typically both the fineness of the spatial data (N.B. MP) and the radiometric resolution of the pixels far exceed the limitations of the human visual system. In other words, there is far more information contained within the digital image than can be directly perceived by the observer. For example, very approximately, the human eye has angular spatial resolution characteristics which enable detection of objects subtending around one minute (i.e. 1/60 deg) of arc. In simple physiological terms, one degree of a scene is projected across 288 µm of the retina by the eye's lens, and the number of tiny sensors (rods and cones) contained within this distance defines a physical constraint, along with measured responses. Radiometrically, only around 50 brightness levels can be differentiated. There are variations due to different contrast, illumination and duration parameters, but the basic rules apply fairly well in empirical considerations of both the radiometric and spatial domains.
      It should be noted that the number of output display pixels from a CCD device is usually much greater than the number of pixels (screen pixels) on a computer screen, each screen pixel typically being an average value of the several display pixels covering its position in the screen matrix. Incidentally, this process is typically reversed by the computer when a part of the original output image is enlarged on the screen. Enlarging beyond the point where the original output pixels just become discernible on the screen, however, is generally a wasted exercise in direct visual interpretation.
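A toy sketch of that screen-averaging step (assuming simple 2x2 block averaging; real display pipelines use more sophisticated resampling):

```python
import numpy as np

# A 4x4 "display" image reduced to a 2x2 "screen", each screen pixel
# being the mean of the 2x2 block of display pixels covering it.
display = np.arange(16, dtype=float).reshape(4, 4)

# Split into 2x2 blocks and average each block.
screen = display.reshape(2, 2, 2, 2).mean(axis=(1, 3))
print(screen)
# [[ 2.5  4.5]
#  [10.5 12.5]]
```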
To make aspects of the information contained within a digital image visually perceptible to the observer, various processes are applied. We will start by considering how the data in the radiometric domain may be made more apparent, or “enhanced”.
Density Slicing and Contrast  Enhancement
     In Fig. 1 above we saw how an individual pixel is identified by its unique “x”, “y” and “z” coordinates in an image. If we now imagine these pixels hypothetically as just a collection of marbles in a bag, each with its gray level written on it, we could collect them all up and, in the case of a 0-255 gray-level spread, count them into discrete groups identified by their gray-level values. These would come out with a distribution of gray-level values from 0 to 255. We can represent these in a number of ways, one commonly used being a distribution histogram.
Fig 2. Example of a Distribution Histogram.

         In this diagram the pixel gray levels can be seen to be distributed in a pattern with most pixels in the middle of the gray levels (“mid gray”), dropping away to fewer pixels to the left (towards zero, or “black”) and similarly to the right (towards saturated, or “white”). The red line is a curve drawn through the histogram levels and is referred to as a distribution curve.
         So if the overall curve is displaced to the left you have a dark image, and if to the right, a light image. In addition, if a large number of pixels have a similar gray level, the distribution will be a bit like the Empire State Building in profile. With the pixels bunched so close together in brightness level, the human visual system, limited to discerning only around 50 brightness levels, will not be able to distinguish many of the levels of gray contained within the image.
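The “marbles in a bag” counting described above is exactly a histogram computation; a minimal sketch with made-up random pixel data:

```python
import numpy as np

# Made-up 8-bit image; counting its pixels into 256 gray-level groups
# is the "marbles in a bag" exercise described above.
rng = np.random.default_rng(seed=0)
img = rng.integers(0, 256, size=(100, 100), dtype=np.uint8)

counts, _ = np.histogram(img, bins=256, range=(0, 256))

print(counts.sum())     # 10000 -- every pixel falls into exactly one group
print(counts.argmax())  # the most populated gray level (the histogram peak)
```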
This practical example will perhaps illustrate the point:

Fig 3. Digital Image of a Submarine and Histogram of Conning Tower Area.

       It can be seen that there is a bunching of the pixel values around the 80-100 gray-level range (the dotted curve is just a cumulative representation of the same data). It can also be seen that it is not easy to pick out detail on the hull around the conning tower, as the eye cannot readily detect the differing gray levels in such an electronic “soft copy” screen display, or the “tonal” values in a “hard copy” image version such as a print.
         However, if we now take a slice of the distribution histogram in the area of the peak of gray values (density slicing) and pull it out across the “x” axis (the yellow-colored histogram values below), we keep the relative gray-level values intact but open them up across the distribution histogram, increasing the separation between them to a point at which the human eye can now differentiate them. The image has been contrast enhanced.
Fig 4. Enhanced Submarine Image and Matching Histogram.

         N.B. No additional information has been added to the image. The information already held within the image has merely been re-displayed to aid the observer in visualisation. This is the essence of imagery enhancement, including all of the routines developed and implemented in IPACO.
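A minimal sketch of such a density-slice / linear contrast stretch (the 80-100 bounds are chosen to mirror the submarine example; values outside the slice are simply clipped):

```python
import numpy as np

def stretch(img, low, high):
    """Linearly map gray levels in [low, high] onto the full 0-255 range,
    clipping anything outside the slice (a simple contrast stretch)."""
    out = (img.astype(float) - low) * 255.0 / (high - low)
    return np.clip(out, 0, 255).astype(np.uint8)

# Pixels bunched in the 80-100 range, as in the submarine histogram.
patch = np.array([[80, 85, 90],
                  [90, 95, 100]], dtype=np.uint8)

print(stretch(patch, 80, 100))
# [[  0  63 127]
#  [127 191 255]]
```

Note that the relative ordering of the gray levels is preserved, exactly as the essay states: nothing is added, the existing values are only spread apart.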
          As well as in the radiometric (z) domain, imagery enhancement routines are also employed to display less clear data in the spatial (x, y) domain (i.e. across the image in any and all directions). To understand the basis of these routines it is helpful first of all to understand the concept of spatial frequency.
Spatial Frequency
        When considering the term “frequency”, the first idea that comes to mind is of some variable repeating with time, such as, for example, the 50 Hz electricity mains frequency, or a “high-frequency train service”.
       With imagery we use the concept of spatial frequency to refer to how the brightness values of pixels change as we scan across the image. So in simple terms we are looking at changes in radiometry (or gray level) compared against units of measurement across the image.
       In this image of pebbles on a beach, note how the rapid changes in gray level across the scene give it a rough “texture”. A  scan across a row or column of the image pixels would give a result somewhat similar to the diagram on the right.

Fig.5 High Spatial Frequency Dominant Image.

      In comparison, the image of clouds below shows predominantly gradual changes in gray level across the scene, giving it a “smooth” texture. A scan along a row or column of pixels in this case gives a result somewhat similar to the diagram on the right.

Fig.6 Low Spatial Frequency Dominant Image.

     In practice, images comprise a large and complex mixture of high and low spatial frequency components, and the data within these components can be extracted and analysed by tools working in the spatial frequency domain.
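A crude sketch of the pebbles-versus-clouds contrast (made-up row profiles; the mean absolute neighbour-to-neighbour change serves as a stand-in for a real frequency analysis):

```python
import numpy as np

# Two made-up single-row gray-level profiles:
rough  = np.array([30, 200, 20, 180, 40, 210], dtype=float)     # "pebbles"
smooth = np.array([100, 105, 110, 115, 120, 125], dtype=float)  # "clouds"

def roughness(row):
    """Mean absolute gray-level change between neighbouring pixels --
    large for high-spatial-frequency content, small for low."""
    return np.abs(np.diff(row)).mean()

print(roughness(rough))                       # 164.0
print(roughness(smooth))                      # 5.0
print(roughness(rough) > roughness(smooth))   # True
```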
     The basic theory and use of these tools in imagery enhancement will be outlined in part (ii) of this article.
