6.1 The Image
The visualization is rendered to a 512 × 512 sRGB video buffer array. The values in this array indicate the light illuminating the cone receptors of the retina. We take the red, green, and blue values to correspond to the intensity of light at the L, M, and S wavelengths. When this array is displayed on a 133 dpi monitor viewed at 140 cm, the visual angle of one array element is approximately equal to that of one foveal cone. One degree of visual angle corresponds to 128 pixel elements, or 0.008 degree/pixel.
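The viewing geometry above can be checked with a little arithmetic (a minimal sketch; the 133 dpi pitch and 140 cm viewing distance are taken from the text):

```python
import math

DPI = 133.0       # monitor resolution, dots per inch
VIEW_CM = 140.0   # viewing distance

pixel_mm = 25.4 / DPI                                    # physical size of one pixel
deg_per_pixel = math.degrees(math.atan(pixel_mm / (VIEW_CM * 10.0)))
px_per_degree = 1.0 / deg_per_pixel

print(round(deg_per_pixel, 4))   # 0.0078 degree/pixel (the paper's ≈ 0.008)
print(round(px_per_degree))      # 128 pixels per degree
```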
6.2 The Retina
The first neural layer of the model, the retinal layer, models the perceptual processing of the visual scene done by the retina. The opponent-process mechanism produced by the bipolar cells of the retina is modeled with a conversion to L*a*b* perceptual coordinates, according to the CIELAB transformation. The black-white luminance dimension, the red-green chromatic dimension, and the yellow-blue chromatic dimension are referred to in the model by I_L, I_a, and I_b, respectively.
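The opponent-process conversion can be sketched as the standard sRGB → CIELAB transform (a minimal illustration; the D65 white point and 8-bit sRGB input are our assumptions, since the paper does not specify its color-space constants):

```python
import math

def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIELAB (L*, a*, b*), assuming a D65 white point."""
    # Undo the sRGB gamma to get linear light.
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # Linear RGB -> CIE XYZ (sRGB primaries, D65).
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # XYZ -> Lab, normalized by the D65 reference white.
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > 216 / 24389 else (24389 / 27 * t + 16) / 116
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    L = 116 * fy - 16       # black-white luminance dimension (I_L)
    a = 500 * (fx - fy)     # red-green opponent dimension (I_a)
    b_ = 200 * (fy - fz)    # yellow-blue opponent dimension (I_b)
    return L, a, b_

print(srgb_to_lab(255, 255, 255))  # close to (100, 0, 0): achromatic white
```

The three returned values play the roles of I_L, I_a, and I_b in the model.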
The retina also computes a center-surround field (Fig. 4) by combining the center signal of bipolar cells with a surround signal from horizontal and amacrine cells. This center-surround signal is output by the retinal ganglion cells to subsequent stages of the visual system. The center-surround receptive field of the retinal ganglion cells is defined in the model as a Difference-of-Gaussians ((1), Fig. 4). The black-white, red-green, and yellow-blue center-surround retinal responses are defined as
$$R^{wb}_{i,j} = \sum_{x,y} \mathrm{DoG}_{x,y,\sigma_1,\sigma_2}\, I^{L}_{i+x,\,j+y} \qquad (4)$$

$$R^{rg}_{i,j} = \sum_{x,y} \mathrm{DoG}_{x,y,\sigma_1,\sigma_2}\, I^{a}_{i+x,\,j+y} \qquad (5)$$

$$R^{yb}_{i,j} = \sum_{x,y} \mathrm{DoG}_{x,y,\sigma_1,\sigma_2}\, I^{b}_{i+x,\,j+y} \qquad (6)$$

The values of σ₁ and σ₂ specify the size of the center and the surround fields. We use values of 1 and 2 for these parameters, respectively, to produce a center-surround receptive field like the one shown in Fig. 4. For the gains of the center and surround Gaussians we use 1 and 0.5, yielding a receptive field with center-surround characteristics, but one that produces a positive response to a uniform field.

6.3 V1 Edge Detection
Hypercolumns within the V1 component of the model exhibit both edge detection and edge enhancement behavior. As in Li's model, we use 12 columns per hypercolumn, responding to orientations in 15-degree increments (Fig. 7). Edge detection results from the pattern of synaptic connections from the retina, defined by

$$V1_{i,j,\theta} = \left|\, \sum_{x,y} \mathrm{Gabor}_{x,y,\theta}\, R^{wb}_{i+x,\,j+y} \right| \qquad (7)$$

Fig. 7. The V1 Gabor kernels used by the model. Neurons in light areas produce an excitatory effect; neurons in dark areas produce an inhibitory effect.

The black-white luminance signal, which is responsible for form perception, is used here. We use a value of 7 for λ in (3), which gives a spatial frequency of 8.9 cycles/degree at our medium resolution scale, roughly corresponding to the parafoveal processing in the 2 to 5 degree eccentricity range found by Foster [8]. At the high and low resolution scales this corresponds to 17.9 and 4.4 cycles/degree, respectively. We use σ = 2 to define the Gaussian envelope, chosen to encapsulate a single cycle of the sinusoid. The absolute value causes the column to respond positively to both light-centered and dark-centered edges. x′ and y′ are found by rotating x and y by θ degrees:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \qquad (8)$$

6.4 V1 Edge Enhancement
Columns of V1 exhibit enhanced activity when they correspond to an edge that lies along a continuous contour. The pattern of synaptic connections (Fig. 8) used in the model to produce this behavior is defined by

$$E_{x,y,\theta} = G_{x,y,\sigma}\,\bigl(x'^2 - y'^2\bigr) \qquad (9)$$

yielding the enhanced V1 column activity

$$V1'_{i,j,\theta} = \sum_{x,y} E_{x,y,\theta}\, V1_{i+x,\,j+y,\theta} \qquad (10)$$
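The receptive fields used in the retinal and V1 stages can be sketched as discrete kernels (a minimal sketch: the kernel radius and the unit-sum normalization of each Gaussian are our assumptions, while σ₁ = 1, σ₂ = 2, gains 1 and 0.5, λ = 7, σ = 2, and the 15-degree orientation steps are taken from the text):

```python
import math

def gaussian2d(radius, sigma):
    """Unit-sum 2-D Gaussian sampled on a (2r+1) x (2r+1) grid."""
    k = [[math.exp(-(x * x + y * y) / (2 * sigma ** 2))
          for x in range(-radius, radius + 1)]
         for y in range(-radius, radius + 1)]
    s = sum(map(sum, k))
    return [[v / s for v in row] for row in k]

def dog_kernel(radius, s1=1.0, s2=2.0, g1=1.0, g2=0.5):
    """Difference-of-Gaussians center-surround field used in (4)-(6)."""
    c, s = gaussian2d(radius, s1), gaussian2d(radius, s2)
    return [[g1 * c[y][x] - g2 * s[y][x]
             for x in range(2 * radius + 1)]
            for y in range(2 * radius + 1)]

def gabor_kernel(radius, theta_deg, lam=7.0, sigma=2.0):
    """Oriented Gabor field used in (7), with rotated coordinates as in (8)."""
    t = math.radians(theta_deg)
    k = []
    for y in range(-radius, radius + 1):
        row = []
        for x in range(-radius, radius + 1):
            xp = x * math.cos(t) + y * math.sin(t)    # x', eq. (8)
            yp = -x * math.sin(t) + y * math.cos(t)   # y', eq. (8)
            row.append(math.exp(-(xp * xp + yp * yp) / (2 * sigma ** 2))
                       * math.cos(2 * math.pi * xp / lam))
        k.append(row)
    return k

# With gains 1 and 0.5, the DoG kernel sums to 0.5, so a uniform field
# produces a positive response, as stated in Section 6.2.
dog = dog_kernel(radius=6)
print(sum(map(sum, dog)))   # 0.5, up to floating-point rounding
```

A bank of 12 orientation columns per hypercolumn would then be built as `[gabor_kernel(6, 15 * i) for i in range(12)]`.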
6.5 GPU Model Implementation
The model was implemented in nVidia's CUDA GPU programming environment and run on an nVidia GTX 470 graphics processor. The highly parallelizable nature of the neural network model allowed efficient use of the GPU's parallel processing capabilities, yielding performance far beyond what would be possible on conventional CPU processors. The model required approximately 50 ms to run, and consumed nearly 300 MB of graphics memory. The neuron layers were stored in GPU memory as arrays of single-precision floating point numbers, and operated upon by graphics kernels.

Fig. 8. The V1 edge enhancement kernels used by the model. Light areas produce an excitatory effect; dark areas produce an inhibitory effect.

PINEO AND WARE: DATA VISUALIZATION OPTIMIZATION VIA COMPUTATIONAL MODELING OF PERCEPTION 315