From awpaeth@watcgl.waterloo.edu Mon Jan 25 14:37:22 1988
Path: leah!itsgw!nysernic!rutgers!clyde!watmath!watcgl!awpaeth
From: awpaeth@watcgl.waterloo.edu (Alan W. Paeth)
Newsgroups: comp.graphics
Subject: Re: Advanced Dither Needed
Message-ID: <3036@watcgl.waterloo.edu>
Date: 25 Jan 88 19:37:22 GMT
References: <3703@ames.arpa> <2741@masscomp.UUCP> <2863@watcgl.waterloo.edu> <2602@dciem.UUCP>
Reply-To: awpaeth@watcgl.waterloo.edu (Alan W. Paeth)
Distribution: na
Organization: U. of Waterloo, Ontario
Lines: 80

The discussion was a follow-on to my previous posting, in which I suggested
splitting 256 color map entries as a Cartesian product of RGB in which green
had the greatest precision, because of the eye's sensitivity to green.

>It is not strictly true that the eye is least sensitive to colour
>variation in the neighbourhood of blue. In fact, the MacAdam discrimination
>ellipses are smallest near blue and largest near green. What IS true
>is that the eye does not discriminate blue-yellow contrasts in small
>areas, nor does it have any blue-yellow edge detectors. Hence, blue
>is a good candidate for colour dither...

    (discussion of some possible dithering techniques)

>...You actually want better blue colour resolution
>and worse blue spatial resolution than for red and green, which is
>what this scheme would give you. But I have no idea how computationally
>expensive it would be to have these non-rectangular dither areas which
>had different boundaries for all three primary colours.

In reverse order:

[2] This really hits the nail on the head. The preliminary design of the
"Dandelion" by Butler Lampson at Xerox PARC in the late 1970s called for a
video chain of r3g3b2 precision, in which the b2 channel was clocked so as
to produce blue pixels of four bits' quantization but double the width, thus
trading away some spatial resolution. This makes sense, as the eye not only
has better contrast discernment on the red-green channel (as was pointed
out), but cannot even focus well on blue, which lies from -1D to -2D out of
focus as the wavelength decreases, owing to chromatic aberration. Such a
video design wouldn't be that hard a retrofit for many systems, as the total
number of blue bits clocked per scanline (and thus, per screen) remains
unchanged -- just a two-bit buffer is needed to build up the full blue pixel
from the two memory "subpixels", plus some logic gates.

[1] Yes, the MacAdam ellipses (and those by Stiles) demonstrate empirically
that hue changes are most discernible for B, R, G, in that order, even when
the perceptual non-linearities of the CIE x,y chart (on which the ellipses
are often drawn) are taken into account. When dithering a subject in which
blue varies widely -- a sunset against a blue sky, for instance -- we would
choose this order of bit allocation. In general, though, the high blue
precision is relevant mainly when a strictly blue color is to be rendered;
the extra bits might not produce a large perceptual color shift when the
other two primaries are active.

The real problem here is the creation of a suitable color space. The
previous posting suggests an opponent space. I've tried Floyd-Steinberg
techniques in this space, as well as in LUV, YIQ and HSV, with varying (and
noticeably distinct) results. These spaces are largely a change of basis, or
a straightforward non-linear transformation, of RGB. As such, they are
highly symmetric, so forcing high precision into some corner of the space by
bit allocation almost always gives rise to a "conjugate" region of increased
precision in an area which is uninteresting (the precision increase in this
conjugate area is not very perceptual and is thus wasted).

What is really needed is a Floyd-Steinberg algorithm which dithers against a
candidate list of target output colors. These colors would be defined to be
either perceptually "equidistant", or alternately they would be derived from
the input image by a more general histogramming technique (cluster
analysis), which would not treat the input primaries as separate channels.
Although this potentially means histogramming against 2^24 bins, note that
most images seldom exceed 2^20 pixels in size, so at most 2^20 distinct
colors can actually occur -- a reduction of 16:1.

The real difficulty with such a program is the "nearest colored pixel"
function, given, say, an input of r8g8b8 data and an output of 256 distinct
pixels. For small outputs (e.g., 16 colors for dithering onto a PC), one can
search the output palette in linear fashion. For output pixels of large
precision, say 16 bits, an r5g6b5 separable Floyd scheme with equalization
(Heckbert, SIGGRAPH '82) is more than adequate. The interesting case is when
the output device is of intermediate size -- in practice also the most
common case, as when the output device has an upper limit of 256 output
pixel colors. This is too large for linear brute force, but small enough
that sublinear algorithms with asymptotically better bounds (probably
involving 3D Voronoi diagrams) are large and not necessarily faster, given
the expense of building and probing the color-space data structure. They are
also a royal pain to code. A sketch of the brute-force case appears below.
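To make the brute-force case concrete, here is a minimal C sketch of
Floyd-Steinberg dithering against an arbitrary candidate list, using the
linear nearest-color search. It is an illustration only, not code from any
of the postings: the toy black/white palette, the image layout, and the
plain Euclidean RGB distance are simplifying assumptions, and (as argued
above) a perceptual metric or space would serve better.

    #include <stdio.h>

    typedef struct { float r, g, b; } Color;

    /* Brute-force linear search of the candidate list: fine for small
       palettes, the bottleneck for intermediate (e.g. 256-entry) ones.
       Euclidean RGB distance stands in for a perceptual metric. */
    static int nearest(Color c, const Color *pal, int npal)
    {
        int i, best = 0;
        float dbest = 1e30f;
        for (i = 0; i < npal; i++) {
            float dr = c.r - pal[i].r;
            float dg = c.g - pal[i].g;
            float db = c.b - pal[i].b;
            float d = dr*dr + dg*dg + db*db;
            if (d < dbest) { dbest = d; best = i; }
        }
        return best;
    }

    /* Push one share of the quantization error onto a neighbour. */
    static void spread(Color *img, int w, int h, int x, int y,
                       Color e, float wt)
    {
        if (x < 0 || x >= w || y >= h) return;
        img[y*w + x].r += e.r * wt;
        img[y*w + x].g += e.g * wt;
        img[y*w + x].b += e.b * wt;
    }

    /* Floyd-Steinberg over an arbitrary palette; img is consumed,
       out receives palette indices.  No clamping of diffused values --
       adequate for a sketch, not for production. */
    static void fsdither(Color *img, unsigned char *out,
                         int w, int h, const Color *pal, int npal)
    {
        int x, y;
        for (y = 0; y < h; y++)
            for (x = 0; x < w; x++) {
                Color want = img[y*w + x], e;
                int i = nearest(want, pal, npal);
                out[y*w + x] = (unsigned char)i;
                e.r = want.r - pal[i].r;
                e.g = want.g - pal[i].g;
                e.b = want.b - pal[i].b;
                spread(img, w, h, x+1, y,   e, 7.0f/16);  /* classic */
                spread(img, w, h, x-1, y+1, e, 3.0f/16);  /* 7-3-5-1 */
                spread(img, w, h, x,   y+1, e, 5.0f/16);  /* weights */
                spread(img, w, h, x+1, y+1, e, 1.0f/16);
            }
    }

    #define W 16
    #define H 4

    int main(void)
    {
        Color img[W*H], pal[2] = { {0,0,0}, {255,255,255} };
        unsigned char out[W*H];
        int x, y;

        /* a horizontal gray ramp dithered against black and white */
        for (y = 0; y < H; y++)
            for (x = 0; x < W; x++)
                img[y*W + x].r = img[y*W + x].g = img[y*W + x].b =
                    255.0f * x / (W - 1);
        fsdither(img, out, W, H, pal, 2);
        for (y = 0; y < H; y++) {
            for (x = 0; x < W; x++)
                putchar(out[y*W + x] ? '#' : '.');
            putchar('\n');
        }
        return 0;
    }

Swapping nearest() for a smarter (e.g. Voronoi-based) probe, or running the
distance computation in a perceptually uniform space, changes nothing else
in the loop -- which is what makes the intermediate-palette case tempting
and, per the above, painful.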
    /Alan Paeth
    University of Waterloo Computer Graphics Laboratory