Fundamentals of Computer Science I: Media Computing (CS151.02 2007F)
Primary: [Front Door] [Glance] - [Academic Honesty] [Instructions]
Current: [Outline] [EBoard] [Reading] [Lab] [Assignment]
Groupings: [Assignments] [EBoards] [Examples] [Exams] [Handouts] [Labs] [Outlines] [Projects] [Readings] [Reference]
Reference: [Scheme Report (R5RS)] [Scheme Reference] [DrScheme Manual]
Related Courses: [CSC151.01 2007F (Davis)] [CSC151 2007S (Rebelsky)] [CSCS151 2005S (Stone)]
Summary: We consider other techniques for storing images efficiently. Our primary technique is to use color palettes that permit us to represent each color as a single small integer, rather than the three integers we traditionally use.
In writing algorithms to solve problems, most computer scientists and computer programmers follow a progression of steps to arrive at their desired solution. The first question is usually “Can the problem be solved?” (It turns out some problems aren't even solvable.) The answer to this question usually involves the construction of an algorithm that does, in fact, solve the problem. However, the designer's/programmer's task is not yet done. She should next ask herself some other questions: Is the solution efficient? Is it elegant? Is it general?
In many cases, the first focus is to make the solution more efficient. Nonetheless, it is also useful to think about elegance and generality. Even the question of efficiency can present multiple perspectives. Do we emphasize the amount of time the algorithm takes or do we emphasize the amount of space the algorithm uses? In our first exercises in analysis, we emphasized the speed of the algorithm. However, now that we are creating files, we are looking at the second issue, and trying to create smaller files.
Often, to get more compact representations, we need to sacrifice something. Most frequently, we end up sacrificing some run-time efficiency to get smaller representations, since we can use computation to compress and expand data. In our exploration of pixel maps, we sacrificed human readability of the files produced. In many media storage applications, we even sacrifice precise accuracy in the representation. That is, we may accept an approximate representation of an image, sound, or video if that approximation is “close enough” and we can save a lot of space by using it. For the case of sound, you probably know that MP3 files (particularly those at lower bit rates, such as 128 kbps) are poorer approximations of the tracks than the versions on CDs. (In fact, the files on CDs are themselves only approximations of the original waveforms.) For images, many of the most efficient representations, such as the JPEG format, also approximate the original image.
We currently have a fairly efficient technique for representing colors: We use one unit of data storage for each component in the color, meaning that each pixel requires us to store three small values. (As we saw in the previous reading, this representation is between a factor of two and a factor of four better than previous representations.) Can we do better? Well, we can try to think about ways that we can use only one unit of data, rather than three, for each color. How? We might start by approximating colors.
As you may recall, our current representation of colors gives us a lot of possible colors, 16,777,216 to be precise. However, the human eye cannot easily distinguish many of those colors from each other. For example, 127/64/190 and 128/63/189 look much the same. Hence, we might choose a smaller set of colors, and “round” each color to a nearby color.
It turns out that the byte, the basic unit of information on most computers, can represent numbers between 0 and 255. Hence, in trying to find a smaller set of colors, we should limit ourselves to no more than 256 colors. If we divide the color space more or less equally, that allows us six red values, six green values, and six blue values, since 6x6x6 = 216. (We could use 6x6x7 = 252 values, but it doesn't make much sense to have more options for one component than for the others.)
We can express the approximation process as a pair of procedures: one that converts a color to the value that encodes its approximation, and another that converts the encoded approximation back to a color. How do we join the three components together? We convert each component to a number between 0 and 5. We then multiply the red component by 36, the green component by 6, and the blue component by 1, and add the three scaled components together. (Unfortunately, even though the result should be an integer, DrFu needs to be told explicitly to coerce the number from a real to an integer.) Since no component value exceeds 5, we can then recover the three components with a combination of division and modulo.
;;; Procedure:
;;;   rgb->approx
;;; Parameters:
;;;   color, an RGB color
;;; Purpose:
;;;   Convert color to an encoded approximation.
;;; Produces:
;;;   approx, an approximation
;;; Preconditions:
;;;   [No additional]
;;; Postconditions:
;;;   0 <= approx < 256
;;;   (approx->rgb approx) is relatively near color.
(define rgb->approx
  (let ((scale (/ 5 255)))
    (lambda (color)
      (->int (+ (* 36 (round (* scale (rgb.red color))))
                (* 6 (round (* scale (rgb.green color))))
                (round (* scale (rgb.blue color))))))))

;;; Procedure:
;;;   approx->rgb
;;; Parameters:
;;;   approx, an integer
;;; Purpose:
;;;   Convert an encoded approximation to a color.
;;; Produces:
;;;   color, an RGB color
;;; Preconditions:
;;;   0 <= approx < 256
;;;   approx was created by rgb->approx
;;; Postconditions:
;;;   color is close to the original value
;;;   (rgb->approx color) = approx.
(define approx->rgb
  (let* ((scale (/ 255 5))
         (component (lambda (c) (round (* scale c)))))
    (lambda (approx)
      (rgb.new (component (/ (- approx (remainder approx 36)) 36))
               (component (/ (- (remainder approx 36) (remainder approx 6)) 6))
               (component (remainder approx 6))))))
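The same encode/decode arithmetic can be sketched outside of Scheme. This Python version (purely an illustration; the names here are not part of DrFu) packs the three quantized components into one value and unpacks them again:

```python
def rgb_to_approx(r, g, b):
    # Quantize each component to a level 0..5, then pack the three
    # levels into one value: 36*red + 6*green + blue, giving 0..215.
    level = lambda c: round(c * 5 / 255)
    return 36 * level(r) + 6 * level(g) + level(b)

def approx_to_rgb(approx):
    # Unpack with division and modulo, then rescale each level 0..5
    # back to 0..255 (level 1 becomes 51, level 5 becomes 255).
    red_level, rest = divmod(approx, 36)
    green_level, blue_level = divmod(rest, 6)
    return (red_level * 51, green_level * 51, blue_level * 51)
```

For example, `rgb_to_approx(128, 0, 255)` yields 113, and `approx_to_rgb(113)` yields `(153, 0, 255)`, a nearby color.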
If you use these procedures to build an approximation of an image and then to restore an image from the approximation, in many cases, you'll find that the result is a reasonable, but not ideal, approximation of the image.
The set of colors representable exactly using this technique (that is, the colors for which you get the same color back after approximating and then converting the approximation back) is what is often called the set of “Web-safe colors”. The name comes from the idea that these colors were usually rendered accurately by web browsers on old, 256-color displays.
While the approximation technique given above works reasonably well for images with unknown colors, you can make a much better approximation if you know something about the colors in the image. For example, suppose the image has only the colors 64/64/64, 128/128/128, and 192/192/192. If we use the technique above, we end up approximating these as 51/51/51, 153/153/153, and 204/204/204. However, if we simply used the rules “0 represents 64/64/64, 1 represents 128/128/128, and 2 represents 192/192/192”, we could represent our image exactly, still using only one small value per color.
In fact, many programs generalize these two techniques. That is, instead of picking a particular formula to use to convert colors to their approximations, they instead work with an ordered collection of colors, which we call a color palette. Once you have a defined color palette, you can identify any color in the palette by the index of the color. For palettes of 256 or fewer colors, the index is a small enough number to permit us to efficiently represent colors. What if we want to represent a color not in the palette? We find the closest approximation in the palette and then find the index of that approximation. As in our first example, this means that we only approximate images, but that suffices for many purposes, particularly if we choose a good palette.
;;; Procedure:
;;;   palette.encode-color
;;; Parameters:
;;;   palette, a nonempty list of RGB colors
;;;   color, an RGB color
;;; Purpose:
;;;   Determine a number that appropriately encodes color using
;;;   the given palette.
;;; Produces:
;;;   encoding, an integer
;;; Preconditions:
;;;   [No additional]
;;; Postconditions:
;;;   0 <= encoding < (length palette)
;;;   (rgb.distance color (list-ref palette encoding)) <=
;;;     (rgb.distance color (list-ref palette i))
;;;     for all i, 0 <= i < (length palette)
(define palette.encode-color
  (lambda (palette color)
    (list.index-of palette (rgb.closest color palette))))

;;; Procedure:
;;;   palette.decode-color
;;; Parameters:
;;;   palette, a nonempty list of RGB colors
;;;   code, an integer
;;; Purpose:
;;;   Convert an encoding (created by palette.encode-color) back
;;;   to a color.
;;; Produces:
;;;   color, an RGB color
;;; Preconditions:
;;;   0 <= code < (length palette)
;;;   code was created with (palette.encode-color palette some-color)
;;; Postconditions:
;;;   color is close to some-color
(define palette.decode-color
  (lambda (palette code)
    (list-ref palette code)))
palette.encode-color relies on a variety of other procedures, many of which you've seen or written before. Those procedures are reproduced at the end of this reading for your convenience.
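As a rough illustration of the same encoding idea outside DrFu, here is a minimal Python sketch (the function names are made up for this example): a color is encoded as the index of the nearest palette entry, and decoded by simple list lookup.

```python
def distance(c1, c2):
    # Sum of squared component differences, as in rgb.distance.
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def encode_color(palette, color):
    # Index of the palette entry closest to color.
    return min(range(len(palette)), key=lambda i: distance(palette[i], color))

def decode_color(palette, code):
    # Decoding is just a lookup: the code is an index into the palette.
    return palette[code]
```

With the three-gray palette from the earlier example, `encode_color([(64, 64, 64), (128, 128, 128), (192, 192, 192)], (130, 130, 130))` yields 1, and decoding 1 yields `(128, 128, 128)`, the closest available color.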
The use of palettes to encode images provides an efficient representation of images, as long as we are willing to accept approximations. Clearly, the choice of palette affects the quality of the approximation. So, how does one choose an appropriate palette for an image? While a complete answer to that question is beyond the scope of this class, there are some techniques that you might think about.
One question to ask yourself when designing a palette is whether you want each color in the palette to correspond to some pixel in the image. While it seems tempting to answer this question in the affirmative, there are times when it may be useful to choose a palette color that is not in the image, but close to a large number of pixels in the image. For example, if we have both 127/127/127 and 129/129/129 in the image, we may be best off having 128/128/128 in the palette, since it provides a close approximation of both colors, even though it appears nowhere in the image.
If you're willing to spend a lot of processing time computing the palette, you can start by building a table of all the colors that appear in the image, along with the number of times each color appears. You can then use statistical techniques to choose a palette that minimizes the average error in approximation.
But that's a lot of effort, both by the programmer and by the program. A simpler technique is to randomly sample the image until you've gathered a large enough palette. This technique won't do well for images that have a few scattered colors that are very different from the rest, but it can work well for many images.
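The sampling idea might be sketched as follows in Python, assuming the image's pixels are available as a list of RGB triples (the function name and parameters are hypothetical, chosen for this illustration):

```python
import random

def sample_palette(pixels, palette_size, seed=0):
    # Visit pixels in random order, keeping each new distinct color,
    # until the palette is full or we run out of pixels to try.
    rng = random.Random(seed)
    shuffled = pixels[:]
    rng.shuffle(shuffled)
    palette = []
    for color in shuffled:
        if color not in palette:
            palette.append(color)
        if len(palette) == palette_size:
            break
    return palette
```

Because the order is random, a rare color that occupies only a few pixels may never be sampled, which is exactly the weakness described above.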
One could also build the color palette while writing the image. For each pixel, if there is a color in the palette that is close enough, you use the index of that color; if there is no close color, you add the color of the current pixel to the palette. While this technique is appealing in that we build the palette as we write, and it seems relatively straightforward, it also has some significant disadvantages. If you choose “close enough” too large, you'll approximate more than you have to. If you choose “close enough” too small, you'll run out of slots in the palette before you reach the end of the image.
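A minimal Python sketch of this build-as-you-write strategy (hypothetical names; `close_enough` is a squared-distance threshold, an assumption for this example):

```python
def build_palette_on_the_fly(pixels, max_size, close_enough):
    # Encode pixels while growing the palette: reuse the index of a
    # close-enough color; otherwise add the pixel's color as a new entry.
    def distance(c1, c2):
        return sum((a - b) ** 2 for a, b in zip(c1, c2))
    palette, codes = [], []
    for color in pixels:
        near = [i for i, p in enumerate(palette)
                if distance(p, color) <= close_enough]
        if near:
            codes.append(min(near, key=lambda i: distance(palette[i], color)))
        elif len(palette) < max_size:
            palette.append(color)
            codes.append(len(palette) - 1)
        else:
            # Palette is full: settle for the closest existing entry.
            codes.append(min(range(len(palette)),
                             key=lambda i: distance(palette[i], color)))
    return palette, codes
```

Note how the threshold embodies the tradeoff in the text: a large `close_enough` approximates aggressively, while a small one fills `max_size` slots quickly.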
The following procedures may be useful as you design and use palettes.
;;; Procedure:
;;;   rgb.distance
;;; Parameters:
;;;   color1, an RGB color
;;;   color2, an RGB color
;;; Purpose:
;;;   Compute the distance between two colors.
;;; Produces:
;;;   distance, a number
;;; Preconditions:
;;;   [No additional preconditions]
;;; Postconditions:
;;;   distance provides a reasonable metric for color distance. In
;;;   particular, two colors likely to be perceived as similar will
;;;   have a smaller distance than two colors likely to be perceived
;;;   as different.
(define rgb.distance
  (lambda (color1 color2)
    (+ (square (- (rgb.red color1) (rgb.red color2)))
       (square (- (rgb.green color1) (rgb.green color2)))
       (square (- (rgb.blue color1) (rgb.blue color2))))))

;;; Procedure:
;;;   rgb.closer
;;; Parameters:
;;;   color, an RGB color
;;;   estimate1, an RGB color
;;;   estimate2, an RGB color
;;; Purpose:
;;;   Determine which of estimate1 and estimate2 is closer to color.
;;; Produces:
;;;   closer, an RGB color
;;; Preconditions:
;;;   [No additional]
;;; Postconditions:
;;;   closer is either estimate1 or estimate2
;;;   (rgb.distance color closer) <= (rgb.distance color estimate1)
;;;   (rgb.distance color closer) <= (rgb.distance color estimate2)
(define rgb.closer
  (lambda (color estimate1 estimate2)
    (if (<= (rgb.distance color estimate1) (rgb.distance color estimate2))
        estimate1
        estimate2)))

;;; Procedure:
;;;   rgb.closest
;;; Parameters:
;;;   color, an RGB color
;;;   colors, a list of RGB colors
;;; Purpose:
;;;   Determine the element of colors that is closest to color.
;;; Produces:
;;;   closest, an RGB color
;;; Preconditions:
;;;   colors is nonempty.
;;; Postconditions:
;;;   closest is an element of colors.
;;;   (rgb.distance color closest) <= (rgb.distance color c)
;;;     for all c in colors
(define rgb.closest
  (lambda (color colors)
    (let kernel ((remaining-colors (cdr colors))
                 (closest-so-far (car colors)))
      (if (null? remaining-colors)
          closest-so-far
          (kernel (cdr remaining-colors)
                  (rgb.closer color closest-so-far (car remaining-colors)))))))

;;; Procedure:
;;;   list.index-of
;;; Parameters:
;;;   lst, a list of values
;;;   val, a value
;;; Purpose:
;;;   Find an index of val in lst.
;;; Produces:
;;;   index, a number (or #f)
;;; Preconditions:
;;;   [No additional]
;;; Postconditions:
;;;   If there exists a value in lst equal to val, then
;;;     (list-ref lst index) equals val.
;;;   If no such value exists, then index is #f.
(define list.index-of
  (lambda (lst val)
    (cond
      ((null? lst) #f)
      ((equal? val (car lst)) 0)
      (else
       (let ((index-in-rest (list.index-of (cdr lst) val)))
         (if index-in-rest
             (+ 1 index-in-rest)
             #f))))))
Copyright © 2007 Janet Davis, Matthew Kluber, and Samuel A. Rebelsky. (Selected materials copyright by John David Stone and Henry Walker and used by permission.)
This material is based upon work partially supported by the National Science Foundation under Grant No. CCLI-0633090. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
This work is licensed under a Creative Commons
Attribution-NonCommercial 2.5 License. To view a copy of this
license, visit http://creativecommons.org/licenses/by-nc/2.5/
or send a letter to Creative Commons, 543 Howard Street, 5th Floor,
San Francisco, California, 94105, USA.