The Minicomputer and
Neurophysiological Techniques

W. S. Rhode
University of Wisconsin

rhode@physiology.wisc.edu

(from: Use of Minicomputers in Research on Sensory and Information Processing, M.S. Mayzner and T.R. Dolan, eds., Erlbaum Press, Hinsdale, NJ, 1978, pp.229-260)

INTRODUCTION

The relatively recent development of recording electrodes, whether surface, gross, or microelectrodes, along with the necessary electronics to amplify and filter the signals, resulted in a vast increase in the amount of data that could be collected in a neurophysiological study. One early analysis technique, which readily points up the need for automation, was the analysis of neural spikes or discharges recorded from neurons. This involved illuminating a point on an oscilloscope with the time of the neural discharge encoded as the distance from a left-hand marker and taking photographs of the screen. The time of occurrence of the discharge could then be measured using a ruler, and any of a number of simple statistical computations could be performed. This was extremely tedious and severely limited the productivity of the neural scientist. The inability to deal with the vast amounts of data and a need to quantify observations led to a quick acceptance and use of computers in neurophysiology.

An important milestone was achieved in the early 1960s with the development of a small computer that was designed for use by biologists as a laboratory system. It was intended to be connected to equipment that was either already present in the lab or special purpose devices that would be designed and "interfaced" to the computer. The computer, the "LINC" or Laboratory Instrument Computer (Clark & Molnar, 1965), included many architectural innovations that were to become standard in the minicomputer industry in the years to follow. The prototype was built and demonstrated in 1962 at MIT's Lincoln Laboratory. It was promoted by NIH through its Biotechnology Resource Branch in an evaluation program that gave 11 LINCs to various groups, including the University of Wisconsin's Laboratory of Neurophysiology in 1963.

The goals of the LINC development (listed because they are still very much applicable to the neural sciences) are to build a machine that: (1) is small enough in scale so that the individual research worker or small laboratory group can assume complete responsibility for all aspects of administration, operation, programming, and maintenance; (2) provides direct, simple, effective means whereby the experimenter can control the machine from its console, with immediate displays of data and results for viewing or photographing; (3) is fast enough for simple data processing "on-line" while the experiment is in progress and logically powerful enough to permit more complex calculations later if required; (4) is flexible enough in physical arrangement and electrical characteristics to permit convenient interconnection with a variety of other laboratory apparatus, both analog and digital, such as amplifiers, timers, transducers, plotters, special digital equipment, while minimizing the need for complex intermediating devices; and (5) includes features of design that facilitate the training of persons unfamiliar with the use of digital computers.

Fifteen years have passed since the LINC development, and a great deal of progress has been made in computer technology both in hardware and software. CPUs more powerful than the LINC are available in a single integrated circuit for less than $100, and the continual decline in the price of memory, a factor of 100 in the last 15 years, argues that larger computers will be the norm of the future. As large-scale integration of logic circuits permits more functionality in the hardware, perhaps we will soon see computers that again embrace the original goals of the LINC program.

Speed and memory size are both important features for any computer since they can limit the applications that can be handled. For example, a microprocessor may be sufficient to control and record responses in a behavioral laboratory, but it very likely will be inadequate for digitizing (converting an analog signal to a digital number) and analyzing 16 channels of EEG. Another example is picture-processing operations requiring the graphical manipulation of images (e.g., neural structures such as an impregnated neuron). These structures can be rotated on a graphical display to create an illusion of depth or the image of a 3-dimensional object. This requires a reasonably powerful system to perform the necessary transformation computations and manipulations.

There are a wide variety of systems available for use in laboratories today.

While the choice of systems is usually dictated by the number of dollars available, many considerations do come into play: (1) the data rates of the experiment; (2) the knowledge of the users about digital technology (e.g., can they interface their laboratory apparatus to the computer? What programming languages do they know?); (3) the size of programs used to analyze data; (4) past experience and availability of computers; (5) local support for maintenance, programming, spare parts, peripherals, etc.; and (6) what kind of computer do their friends use for related research?

But before considering the role of computers, let us ask, what are the goals of the neural sciences? In studying the central nervous system, we would like to elucidate its operation at an electrochemical, anatomical, or physiological level. The explanation of the action of the brain or nervous system must be guided by some general principles (Szentágothai & Arbib, 1975): (1) the theory must be action oriented, that is, one seeks to explain the interactions with the environment that the animal engages in; (2) perception is not only of "what" but is related to the environment in which interaction occurs, that is, "where"; (3) an adaptive system, such as the brain, must be able to correlate sensory data in a manner that facilitates the evolution of an internal model of the experience; (4) the organization must be hierarchical with adequate feedback loops to coordinate the many subsystems; and (5) the brain can be thought of as a layered (cortical layers) somatotopically organized computer.

These principles cover simple animal reactions, perception, memory, and the adaptive nature of the organism. The investigative strategy for these principles varies with a multidisciplinary approach often required. There is no one best way to investigate and explain these basic principles.

One area that is being intensively pursued is the investigation of sensory processes. For example, electrophysiology, biochemistry, neuroanatomy, psychoacoustics, etc., have all been employed in pursuit of the explanation of how sound is encoded within the inner ear. In the course of the last few years, many techniques have been used to determine the motion of the basilar membrane that in large part determines the excitation of the hair cells in the cochlea or inner ear. Direct observation techniques have included the use of the light microscope, the Mössbauer effect, the capacitance probe and the laser interferometer. Indirect observations of the motion of the basilar membrane have used auditory nerve recording, spiral ganglion recording, cochlear potentials, psychoacoustics, and mathematical models to infer the form of vibration. This is typical of any contemporary investigative area. Multidisciplinary approaches often using a computer to assist in handling the large amounts of data are commonplace.

Since much of my own experience is in hearing research, most of the following discussion of the use of computers in neurophysiology will involve hearing, although many of the hardware and software items I discuss are common to visual and somatosensory areas. In addition, the use of some of these techniques and devices is not limited to sensory research but has been applied in other areas of neuroscience.

DEVICES FOR NEUROPHYSIOLOGICAL RESEARCH

Those physical devices that are used for stimulus generation and control, data collection, and data analysis are called hardware. Experience has unequivocally shown that any device that can be bought should not be built. Considerations of time, personnel, documentation and maintenance all point to purchasing well-worn devices. Nevertheless, there are many instances when a particular experiment requires something not available off the shelf, and devices must be designed and built. Timing subsystems, event digitizers, analog-to-digital (A/D) converters, and displays are the devices most commonly constructed.

The form of these devices varies depending on the use. The sensory system to be studied (e.g., vision, audition, and the somatosensory system) will affect the selection of stimulus generating and/or delivering equipment. If it is a behavioral experiment, then the devices for detecting and recording the responses may involve levers, lights, buzzers, buttons, etc. If a mapping of part of the cortex or other part of the nervous system is planned, then one has to decide whether the evoked response is to be recorded with gross electrodes, microelectrodes that permit multiple-unit recording, or microelectrodes that permit only single-unit recording. A multiple-electrode configuration could also be used with any of these techniques and would require additional equipment and possibly increased computer capability. EEG recording with 16 or more channels at sampling rates of up to several hundred samples per second could require special handling of the A/D system, since programmed input/output (I/O) rates greater than 1000/sec are usually infeasible.

In order to provide a better idea of the range of equipment that has been used in neurophysiology, I use an auditory lab as an example. Other examples can be found in Brown (1976).

Stimulus Generation and Control

Timers. One would think that a device as simple and basic as a digital timer would be commercially available, and some colors and flavors are. But in any particular implementation of an experimental setup, individual requirements (such as the number of timers, logic interconnections, logic-level outputs, displays of current or preset time, the ability to run recurrently without resetting, the ability to set manually or via computer, timer resolution, and range) may rule out a simple purchase; any one of these may be incompatible with readily available units. Off-the-shelf integrated circuits have made simple timers a piece of cake. Nevertheless, individual requirements or preferences usually complicate things and increase the cost, often to the point of being exorbitant.


One system implementation we have used consists of three of the basic timers shown in Fig. 9.1. Each has a four-decade range and a time base that is adjustable from 1 μsec to 1 sec. A set of three of these timers can be interconnected (Fig. 9.2) to provide timing sequences, such as those illustrated in Fig. 9.3, where a second timing chain may control a second event.

The basic timing system has proven to be sufficient for our use. It is by no means the only approach to the problem. For example, it is possible to implement timing schemes with microprocessors. The microprocessor can count clock pulses or utilize timing integrated circuits (ICs) to implement arbitrarily complex timing sequences and multiple timers. The programmability and standard output register of a microprocessor offer a great deal of power in implementation, but caution should be exercised so that the microprocessor does not trap one in a project with uncontrolled cost and time overruns.

Stimulus Generation Equipment. Depending on the sensory system being investigated, the form and content of the stimulus can vary considerably. While the investigation of temperature, taste, or smell may require only simple stimulus control, touch, vision, and audition often require special devices to fit the experimental paradigm (although this is only a generalization).

 

In auditory neurophysiology a basic exercise is the determination of the response to a set of tones over a range of varying intensity and frequency, though it need not be limited to these parameters. One method of controlling the signal, which is usually a sinusoid, is to use an oscillator that has a digital stepper motor (one that rotates a fixed number of degrees when given a pulse) coupled to the frequency selector knob. The limiting features of this approach are the accuracy, speed, and repeatability of frequency setting. It can take several seconds to set a frequency with 0.1% accuracy, which may be inadequate in many instances. An all-digital waveform synthesizer was designed and built that overcomes the drawbacks of the stepper motor system.

A simplified example of the technique for digitally synthesizing a sinewave using a table look-up procedure is shown in Fig. 9.4. The contents, F, of a 4-bit frequency register are added into θ, the contents of the sine address register, which at time T equals F·T modulo 2^6, where T = 0, 1, ..., ∞, and 2^6 = 64 is the clock frequency. A table of 64 values of the sine function is stored in a memory, one for each of the 64 possible values of θ. The value of F (1 to 15) determines the size of the step through the sine table. If F = 1, then each of the 64 values of sine(θ) is read out once each second, whereas if F = 2, every second value of the sine function is read out and the table is traversed twice each second. It should be obvious that the value of F is the frequency of the synthesized sinewave in Hz. The results of synthesizing 4 and 8 Hz sinewaves are shown in Fig. 9.5. The effect of quantizing a signal both in time and amplitude is obvious and can be reduced by shortening the sample time (higher clock rate) and using more bits to represent the signal.
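By way of illustration, the table look-up procedure of Fig. 9.4 can be sketched in a few lines of a modern high-level language (Python is used for this and the later sketches). The table size, clock rate, and function names below are illustrative assumptions taken from the simplified 6-bit example, not the parameters of the actual DSS hardware.

```python
# A minimal sketch, assuming a 6-bit phase register and 64-entry sine table
# as in the simplified example of Fig. 9.4; this is not the DSS hardware.
import numpy as np

TABLE_BITS = 6
TABLE_SIZE = 1 << TABLE_BITS        # 2**6 = 64 table entries
CLOCK_HZ = TABLE_SIZE               # 64-Hz clock in the simplified example
sine_table = np.sin(2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE)

def synthesize(F, n_samples):
    """Return n_samples of an F-Hz sinewave; F is the step through the table."""
    T = np.arange(n_samples)
    theta = (F * T) % TABLE_SIZE    # sine address register: F*T modulo 2**6
    return sine_table[theta]

# F = 1 steps through all 64 entries once per second (a 1-Hz sinewave);
# F = 2 reads every second entry, traversing the table twice per second (2 Hz).
one_second_of_4_hz = synthesize(F=4, n_samples=CLOCK_HZ)
```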


The actual waveform generator is part of a Digital Stimulus System (DSS) and has a 16-bit frequency register and a 19-bit sine address register; therefore, it has a frequency range of 64,000 Hz and a clock rate of 2^19, or approximately 524,000 Hz. If sine(θ) were stored for each of the 2^19 values of the sine address register at an accuracy of 16 bits, an 8,000,000-bit memory would be necessary. But only one-quarter of the sine function needs to be stored, and trigonometric identities can be used to reduce further the size of the Read Only Memory (ROM) to 16,000 bits (Rhode & Olson, 1975). A small sacrifice in accuracy is made to accomplish this saving in memory size; the table is accurate to 1 part in 2^15 for 2^17 arguments of θ. The distortion of the waveform has been determined to be less than 0.02% for frequencies below 10 kHz; this distortion is due to the digital-to-analog (D/A) converter. The DSS can also generate triangular waves, squarewaves, sawtooths, and reverse sawtooths. The frequency can vary from 2^-16 to 2^+16 Hz. It is specified in two parts, a 16-bit integer and a 16-bit fraction. The initial phase angle of the signal can also be specified with 16-bit accuracy. Two or more systems can be interconnected to produce frequency modulated signals, amplitude modulated signals, or to repetitively sweep a range of frequencies.

Another component of the DSS arises from the necessity to reduce the amount of energy at frequencies other than the one being generated during the period when the signal is turned on and off. This is accomplished by multiplying the signal by a trapezoidal waveform that has a programmable rise/fall time. A 16 x 16 bit digital multiplier is used to perform the multiplication. One advantage of this approach over the use of an analog multiplier is that it does not introduce any distortion when the signal is fully on, since the signal waveform is merely multiplied by a constant value. The rise/fall time of the DSS can be varied from 0 to 125 msec. There are 14 settings that change by a factor of 2, except for the zero rise/fall time. The trapezoidal gate signal is generated by a pair of 16-bit counters. The first counter generates the rise-time delay, and the second counter generates the gate signal, the trapezoidal waveform, after the expiration of the rise-time delay as shown in Fig. 9.6. Therefore, the generated signal is always displaced in time from the stimulus duration timer by an amount equal to the rise time.
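A sketch of the gating operation follows. The hardware uses counters and a 16 x 16 bit multiplier; here the trapezoidal envelope is simply formed and multiplied in floating point, and the sample rate, tone frequency, and rise time are chosen only for illustration.

```python
# A minimal sketch of trapezoidal gating, assuming floating-point arithmetic
# rather than the 16 x 16 bit hardware multiplier of the DSS.
import numpy as np

def trapezoidal_gate(n_samples, rise_samples):
    """Envelope with a linear rise, flat top, and linear fall."""
    gate = np.ones(n_samples)
    if rise_samples > 0:
        ramp = np.linspace(0.0, 1.0, rise_samples, endpoint=False)
        gate[:rise_samples] = ramp          # rise
        gate[-rise_samples:] = ramp[::-1]   # fall
    return gate

fs = 64_000                                  # illustrative sample (clock) rate, Hz
t = np.arange(int(0.050 * fs)) / fs          # a 50-msec tone burst
tone = np.sin(2 * np.pi * 1000 * t)          # 1-kHz signal from the synthesizer
gated = tone * trapezoidal_gate(tone.size, rise_samples=int(0.005 * fs))
```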

The final step in the synthesis is the D/A conversion. This is the step that limits the accuracy of waveform production. Theoretically, a 16-bit system should have harmonic distortion in the neighborhood of 2^-16, or -96 dB. The present system achieves about -74 dB (0.02%) distortion, which is quite acceptable for most auditory experiments and is comparable to the best transducers for sound production. The limiting system component is the digital-to-analog converter or DAC. Any DAC can produce "glitches", or large undesirable pulses in output, when undergoing transitions in input addressing that involve a change in the state of a large number of bits (e.g., 001111111 to 010000000). A deglitcher amplifier is used to suppress these transients; this is a fast sample-and-hold circuit that maintains the output of the DAC at its previous value until the input address has had time to change and the output of the DAC has stabilized.

A philosophy incorporated into the design is to make most of the functions of the system amenable to both manual and computer control. This allows initial exploration of neural unit responsiveness without the need for computer interaction. It is also useful for maintenance of the system.

Waveform Generator. In certain auditory experiments, it is desirable to combine 3, 4, or more harmonics of a given frequency with specifiable amplitude and phase. The cost of building a separate sinewave synthesizer for each harmonic becomes prohibitive and has resulted in alternate methods of synthesizing harmonic complexes (Wolf & Bilger, 1972). One method of accomplishing this is to have a "variable" length circulating buffer that stores one cycle of the desired waveform. For example, if the 9th, 10th, 11th, and 13th harmonics of 100 Hz were to be combined, the basic or fundamental period of the complex wave would be 1/100 sec or 10 msec. Therefore, a 10-msec sample of this waveform must be stored and the desired stimulus duration achieved by repeating the 10 msec waveform. If the maximum size of the buffer is to be 16,000 words, at a sample rate of 64 kHz, the maximum period obtainable is 250 msec. This period can be increased by lengthening the buffer or by decreasing the sample rate. This device can also be used as a simple buffer for delivering stimuli of arbitrary duration and shape by merely transferring data from a computer disc file to the buffer (e.g., speech, phonemes, or animal sounds). There is no restriction on the contents of the buffer, and waveforms of arbitrary complexity can be generated.
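As a sketch of the buffer-filling step, the following fills one fundamental period with a sum of specified harmonics and then repeats it to obtain the desired duration; the 64-kHz sample rate and 10-msec period come from the example above, while the amplitudes and phases are arbitrary illustrative values.

```python
# A minimal sketch of filling and recirculating the waveform buffer,
# assuming the 100-Hz fundamental and 64-kHz sample rate of the example.
import numpy as np

FS = 64_000                       # sample rate of the circulating buffer, Hz
F0 = 100                          # fundamental; one period = 10 msec = 640 samples
period_samples = FS // F0

n = np.arange(period_samples)
buffer = np.zeros(period_samples)
for harmonic, amplitude, phase in [(9, 1.0, 0.0), (10, 1.0, 0.5),
                                   (11, 1.0, 1.0), (13, 0.5, 0.0)]:
    buffer += amplitude * np.sin(2 * np.pi * harmonic * F0 * n / FS + phase)

# A stimulus of arbitrary duration is produced by cycling through the buffer:
stimulus = np.tile(buffer, 25)    # 25 repetitions = 250 msec of signal
```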

An alternate approach to the fabrication of a special device to generate waveforms is to use a computer to compute and present the waveform. If the sample rate is not beyond the capacity of the machine, this may not only be the least expensive approach but also the quickest, easiest, and most flexible. Many of the applications in the visual and somatosensory systems could be handled this way, along with many in the auditory system. As mentioned previously, the economics of computers are still undergoing rapid change. Each experimental requirement needs to be evaluated under the current technological conditions.

Other Stimulus Generation Devices. The previously mentioned devices are, in a sense, generic. They find application in investigating every sensory system at any level of the nervous system. The types of stimuli depend on the sense being explored, but the "electronic" devices are often nearly the same. The range of frequencies necessary to investigate the auditory system is from 10 to 100,000 Hz, while the tactile system does not require much more than 200 Hz. The types of waveforms are often the same. The delivery of the stimulus varies mostly between sensory systems. To give a few examples, audition requires speakers or earphones; vision, a tangent screen or TV display; touch, a mechanical stimulator; and taste, a delivery system for liquids. It should be restated that many of the digital devices for stimulus generation and control will be similar regardless of the sense system and the specifics of the experiment. An example of this is that individual DACs find application in many areas in controlling magnitudes of stimulus parameters. Also, pseudorandom noise generators (Anderson, Finnie, & Roberts, 1967) are used in a variety of applications where the properties of the noise stimulus can be utilized to aid in characterizing the system under investigation.

Data Collection Facilities

The principal data collected in our laboratory consist of trains of all-or-none neural spikes or discharges. Frequently encountered data types include averaged evoked responses and continuous physiological signals, such as EEG recordings, respiratory variables, body temperatures, and expired CO2. Several devices have been designed to collect these separate data types, including a multiple-event timer, event histogram binners, a programmable A/D system, and a digital spectrum analyzer.

Event Timer. The conversion of the all-or-none unit activity to a discrete sequence of "times of occurrence" is by far the most important activity in the lab. Each time the voltage recorded from a microelectrode exceeds a specified threshold value, the time is recorded as indicated in Fig. 9.7. The event timer designed to perform this task resolves events by using a time base that can be varied from 1 μsec to 1000 μsec under program control. The use of a 24-bit register permits an 8.3-sec period to be timed with 1 μsec resolution without overflow in the counter. The event timer can time up to 16 events and includes a 32-word buffer (a FIFO, or First-In-First-Out buffer) to store event times. This relaxes the need for rapid response to timer events by the computer system. An alternative to a FIFO is to use a direct memory access (DMA) channel to read the event times into the computer's memory.
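The event timer itself is a hardware device, but its basic operation, converting threshold crossings of the microelectrode voltage into quantized times of occurrence, can be sketched in software for a digitized record. The threshold, sampling rate, and 1-μsec resolution below are illustrative assumptions.

```python
# A minimal software analogue of the event timer, assuming a digitized
# microelectrode record; the hardware operates on the analog signal directly.
import numpy as np

def event_times(voltage, threshold, fs, resolution=1e-6):
    """Times (sec) at which the voltage first exceeds threshold,
    quantized to the timer resolution (1 usec here)."""
    above = voltage >= threshold
    crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1   # upward crossings
    times = crossings / fs
    return np.round(times / resolution) * resolution

fs = 50_000
trace = np.random.randn(fs)              # stand-in for one second of recording
spike_times = event_times(trace, threshold=3.0, fs=fs)
```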

The event timer can be turned on and off by computer command or by a selectable synchronization pulse that can arise in any of the three DSSs or an external source. The sync pulse occurs whenever the signal is first turned on. The event timer is stopped by a terminate pulse that occurs whenever all the stimuli have been presented and the repetition interval time has elapsed. The elapsed times of events such as neural discharges or bar presses are all recorded and stored. The event timer has another mode that records only the Nth event time. This is useful for determining the period, and hence the frequency, of an oscillator. The desired accuracy of the measurement determines the number of timer counts, hence the N to be used.

Analog-to-Digital Conversion System. The design objectives of the A/D converter system are simplicity of use and flexibility. It is a 16-channel system having sample rates as high as 100 kHz with 12-bit accuracy. The A/D inputs are single-ended, differential, or pseudodifferential. The channels to be sampled are program selectable, as is the sample rate and number of samples to be taken. The device is connected to the computer on a DMA channel so that the high sample rates can be accommodated by the system. No handling of the data is necessary, except to move it out of its buffer in the computer's memory.

Three sampling techniques are available in the A/D system: (1) rapid-scan mode in which conversion of each channel occurs as soon as the previous channel is converted until all selected channels have been sampled; the A/D then waits for the next clock or scan pulse, (2) uniform-interval mode in which the next channel is converted after a clock pulse occurs, and (3) fast-sample mode in which only one channel is sampled, permitting samples to be taken at a 160 kHz rate.

The rapid-scan technique allows one to sample effectively all channels at the same instant in time. This is especially advantageous for low sample rates, although it is desirable in general. An alternative to the rapid-scan mode is to use a separate sample-and-hold amplifier for every channel. Due to the recent advances in this technology, this is probably the method of choice today. The uniform-interval mode is the sampling technique most commonly employed due to ease of implementation. In this mode, the aggregate conversion rate is the per-channel sample rate multiplied by the number of channels to be sampled.

Advances in A/D technology make it feasible to use inexpensive A/Ds in dedicated applications. An example might be the continuous recording of body temperature where a simple IC timer would provide sampling times, and the A/D would be enabled when sampling is to occur. A dedicated program could collect and store all the data with no interaction with any other program currently being executed.

Digital Spectrum Analyzer. There are a number of data-collection problems that can overload the I/O capabilities of a computer. Fast-sampling A/D conversion, multiple-event timing, or event timing at a high rate can all cause problems, especially if some simultaneous processing is required. An example would be obtaining an averaged evoked response in 8 to 16 channels with a high sampling rate. A solution for this problem is a special purpose device that takes the individual samples from the A/D and adds the value to a location in its memory, then increments its address register and waits for the next sample, without intervention from the computer.

This device is named a digital spectrum analyzer because in one of its modes a periodic signal can be sampled and averaged, and subsequently operated on to obtain its Fourier spectrum after the average has been read into a computer. It can handle inputs at a 200 kHz rate and average up to one-half million 12-bit samples without overflowing its 32-bit word. It can function as a histogram computer, whereby a count is added to the current address when an event occurs.

While it has been constructed with 4-bit bipolar logic, it is entirely possible that in the near future, 16-bit bipolar processors on a single IC may be fast enough to perform these tasks. The lab of the future will undoubtedly use microcomputers quite liberally.

Event Histogram Binner. A supplement to the event timer is a histogram binner that is an extra benefit of the design of the DSS. One of the very common operations in auditory neurophysiology is the computing of a cycle histogram. This involves recording the instantaneous phase of the stimulating waveform. In experiments where two or three sinusoids are simultaneously presented, multiple cycle histograms are often desired. A fringe benefit of the digital implementation of the stimulus system is the availability of the instantaneous phase in digital form. When an event occurs, the current phase is stored in a FIFO buffer. A single event can cause all three DSSs to record their phases, thereby saving the repetition of the stimulus conditions that would be necessary if only one cycle histogram could be computed at any one time.

DATA ANALYSIS

Histograms

One of the most common techniques for analyzing spike distributions that are empirically derived is the histogram. The histogram displays the number of events in a range of the metric being used. For example, one could display the number of students with heights, weights, or ages as the abscissa of the display. For convenience, one "groups" this discrete data as a function of the independent variable. The independent variable, which could be weight, height, or age, is discretized by letting a range of the variable be defined as a bin. For example, one year, pound, or inch could be the bin width. The number of events that occur within a specified range determines the height of the bin.

The computing of a histogram is basically a counting process. The domain (independent variable, x) is divided into a contiguous set of uniform intervals of length Δx. Each of these intervals is called a bin. We define Nj to be the contents of the jth bin of x. We also define an integer function, INT, such that INT(y) = the integer portion of the argument y. Then, computing a histogram is defined as counting the number of events within each interval of the range of the histogram, where the range is NΔx. That is,

Nj = the number of events xi for which INT(xi/Δx) = j - 1,

where xi = the measure of the i'th event.

Note that the 1st bin covers the range 0 ≤ x < Δx, and the 2nd bin covers the range Δx ≤ x < 2Δx, etc. A random process is illustrated in Fig. 9.8 with two types of histograms computed from it. The form of a histogram can vary from the bar graph presentation, to an outline of the histogram, to the use of crosshatching to indicate two or more superimposed histograms.
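A sketch of this counting process follows; the data values and bin width are arbitrary illustrative choices.

```python
# A minimal sketch of histogram computation: event x_i falls in bin j
# when INT(x_i / bin_width) = j - 1.
import numpy as np

def histogram(x, bin_width, n_bins):
    counts = np.zeros(n_bins, dtype=int)
    j = (x // bin_width).astype(int)          # INT(x_i / delta_x)
    for idx in j[(j >= 0) & (j < n_bins)]:    # count events falling within range
        counts[idx] += 1
    return counts

ages = np.array([21.5, 22.1, 22.8, 25.0, 30.2])   # illustrative data
print(histogram(ages, bin_width=1.0, n_bins=35))  # one-year bins
```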

 

There are several types of histograms used in various applications, including poststimulus time histograms, latency histograms, and cycle histograms, which will be discussed in conjunction with the analysis of the discharge of neurons.

 

Neuronal Spike Trains in the Absence of Stimuli

 

Interspike Interval Histograms. It is a working hypothesis (1) that there is a wealth of information about the structure and function of the nervous system that can be derived from a study of the timing of spike events; (2) that analysis of these signals can shed light on the mechanism of spike production within the observed cell, on the presynaptic input to the cell, and on the mechanisms by which the latter is transformed into a postsynaptic output; and (3) that observations of multiple units can reveal details of interconnections and functional interactions (Glaser & Ruchkin, 1976; Perkel, Gerstein, & Moore, 1967a,b).

We are primarily concerned with information processing by the nervous system. The use of some simple statistical measures allows a relatively concise characterization of the output of the neuron, which may be useful in the description, comparison, and classification of nerve cells. The underlying neuronal processes have a degree of randomness that gives rise to their characterization as a stochastic point process and the subsequent use of statistical methods of analysis. The methods of analysis will vary depending on the experimental circumstances and whether we are concerned with intra- or interneuronal phenomena.

In a neuronal-spike train, the interspike-interval histogram serves as an estimate of the actual pdf (probability density function) of the stochastic point process. In order to construct the interval histogram, the time axis is divided into m uniform intervals of length d. Each event (as shown in Fig. 9.6) is analyzed to determine which bin of the interval histogram it should be placed in by using the following relation: interval Ti is placed in bin j when INT(Ti/d) = j - 1, that is, when (j - 1)d ≤ Ti < jd,

where Nj is the number of events in the j'th bin, d is the bin width, and Ti is the i'th interspike interval. There are N intervals when the number of spikes is N + 1, because the first interval (0, T1) is not included in the analysis, only the interspike intervals. The ratio Nj/N is a smoothed estimate of the pdf, f(t); Equation (3) gives it as the probability that the duration of a randomly chosen interval lies between (j - 1)d and jd.
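A sketch of the computation follows, assuming the spike times are already available as a sequence of numbers (for instance, from the event timer); d and m are illustrative values.

```python
# A minimal sketch of the interspike-interval histogram: N + 1 spikes give
# N intervals, and interval T_i goes in bin j when INT(T_i / d) = j - 1.
import numpy as np

def interval_histogram(spike_times, d, m):
    T = np.diff(spike_times)            # interspike intervals; (0, T1) is excluded
    j = (T // d).astype(int)
    Nj = np.bincount(j[j < m], minlength=m)
    return Nj, Nj / T.size              # counts and the smoothed estimate of the pdf

spikes = np.array([0.012, 0.019, 0.031, 0.046, 0.052])   # times in seconds
counts, pdf_estimate = interval_histogram(spikes, d=0.005, m=20)
```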

A peak in the interval histogram shows a preferred periodicity in the spike train. Certain neuronal systems will show phase locking to a stimulus that is manifest by a multimodal appearance in the interval histogram. The auditory periphery demonstrates this at low frequencies as shown in Fig. 9.9. The importance of the proper bin width to reveal the detail in the histogram is clear from the appearance of a large number of modes when the time axis was expanded.

Order-Dependent Statistical Measures. It is of interest to determine whether successive interspike intervals are independent in the statistical sense and, therefore, whether the spike train can be described as a realization of a renewal process. The joint-interval histogram, JIH, which was introduced to neurophysiology by Rodieck, Kiang, and Gerstein (1962), is displayed in the form of a scatter diagram as shown in Fig. 9.10. Each pair of adjacent intervals in the spike train is plotted as a point. The (i-1)'st interval is the distance along the abscissa; the i'th interval is the distance along the ordinate. If successive intervals are independent, then the joint-interval histogram will be symmetric about a 45° line through the origin.

Proof of statistical independence requires that the joint probability be equal to the product of the individual probabilities. That is, Eq. (4), f(Ti-1, Ti) = f(Ti-1)·f(Ti), must be satisfied.

In practice, a substitute is used for this test that involves computing the mean of each row and each column in the JIH. The column means are plotted vs. row, and the row means are plotted vs. column. If the intervals are independent then the means will be parallel to the axes. This is considered necessary but not sufficient for independence.
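A sketch of the joint-interval histogram and of this row/column-mean test follows; the bin width d and the number of bins m are again illustrative.

```python
# A minimal sketch of the JIH and the row/column-mean test for independence.
import numpy as np

def jih_means(spike_times, d, m):
    T = np.diff(spike_times)
    prev = (T[:-1] // d).astype(int)     # (i-1)'st interval -> abscissa
    curr = (T[1:] // d).astype(int)      # i'th interval -> ordinate
    keep = (prev < m) & (curr < m)
    jih = np.zeros((m, m), dtype=int)
    np.add.at(jih, (curr[keep], prev[keep]), 1)
    with np.errstate(divide="ignore", invalid="ignore"):
        col_means = (jih * np.arange(m)[:, None]).sum(0) / jih.sum(0)  # mean ordinate per column
        row_means = (jih * np.arange(m)[None, :]).sum(1) / jih.sum(1)  # mean abscissa per row
    return jih, col_means, row_means

# For an independent (renewal) spike train, the plotted means lie
# approximately parallel to the axes; this is necessary but not sufficient.
```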

Spike Trains in the Presence of Stimuli

Poststimulus Time Histograms. Much of the previous discussion involved characterizing a stochastic point process on the basis of a spike train observed in the absence of any stimuli; that is, only the spontaneous activity is observed. In experimenting, different types of stimuli with various patterns are used. The stimuli are appropriate to the sensory modality or nervous system function being investigated. The stimuli could be patterned repetitions of sounds, a spatially modulated video display, a sequence of stepped increases in temperature, a random variation in angular acceleration for vestibular stimulation, the application of a specified concentration of a chemical, or various tactile stimuli. The Poststimulus Time Histogram, or PSTH, is an average of the spike trains due to a repeated stimulus. For example, acoustic clicks, tone pips, or light flashes may be repeated hundreds of times to obtain a PSTH that is relatively smooth.

An example of the use of an acoustic click to determine a transient response (Robles, Rhode, & Geisler, 1976) involves the mechanical events that precede the generation of neural spikes in the inner ear. Hair cells in the organ of Corti are responsible for transducing acoustic or mechanical energy to an electrochemical form that eventually results in pulses or spikes in auditory nerve fibers. The excitation of the hair cells is largely determined by the motion of the basilar membrane, which is one boundary of the organ of Corti. These structures rest in a fluid environment and vibrate 10 to 100 Å at normal sound levels. A small radioactive foil (Mössbauer source) can be placed on the basilar membrane. The gamma rays emitted by the foil pass through another foil (Mössbauer absorber) and are detected with a proportional counter. The number passing through the absorber is modulated by the velocity of the membrane. A PSTH of the resulting gamma-ray activity is shown in Fig. 9.11. The acoustic click was repeated as many as 800,000 times. A computer was essential for controlling stimulus presentation and performing the data collection and analysis.

In general, the bumps and wiggles of the PSTH reflect the underlying excitatory process, and one must decide if they are significant or merely random fluctuations. The distribution of mean square deviations from the mean bin level can be computed, or a control case can be constructed using fictitious times of stimulus presentation and portions of a record where no actual stimuli were presented. In general, any features that are to be interpreted as meaningful should have a width of several bins. Note that one can always recompute the PSTH with a smaller bin width to search for greater detail. Usually, the response is obvious at all but threshold and subthreshold levels of stimulation.

Latency Histogram. The general technique using PSTHs can be used in other ways, one of which is the n'th spike-latency histogram, LH. When the first spike latency is calculated, it is used to determine the average time at which a neuron will discharge after the stimulus onset. The mean and variance of the LH distribution are usually calculated. The LH is useful for studying the travel time of spikes through the nervous system. It helps to determine whether there are any synapses interposed between two recording sites, as spikes require a significant amount of time to traverse each synapse. Neuroanatomical studies of fiber size and length are a help in estimating the expected travel time between the two sites.

Cycle Histogram. Another very common analysis technique is the use of cycle histograms, CH, which have also been called period or phase histograms. They are a type of PSTH where the synchronizing event is the zero crossing or initiation of a cycle of the stimulus. It is used to reveal phase-locking behavior in the system under investigation. Cycles could be based on diurnal patterns (1 day), traffic flow (hour, day, weeks, months, year), hamburgers eaten (day), etc. The resulting histogram allows calculation of the degree of response to some underlying variable that is the stimulus.

The CH can be developed from the distribution of events around the unit circle. The sequence of events x1, x2, x3, ... is transformed to a sequence of angles θ1, θ2, θ3, ..., with values on the unit circle (0, 2π). The mean direction, θ̄, of θ1, θ2, θ3, ... is defined to be the direction of the resultant of the unit vectors OP1, OP2, ..., OPN, as shown in Fig. 9.12. The cartesian coordinates of Pi are (cos θi, sin θi), so that the center of gravity of the points is (C̄, S̄), where

C̄ = (1/N) Σ cos θi  and  S̄ = (1/N) Σ sin θi.

Then R̄ = (C̄² + S̄²)^(1/2) is the length of the resultant, and θ̄ is the solution of Eq. (6), tan θ̄ = S̄/C̄.

A problem of interpretation of θ̄ arises in trying to assign a degree of confidence to the experimentally determined value. If N is small, we clearly cannot conclude anything about θ̄ because the sample is too small. This is overcome through the use of the Rayleigh test (Mardia, 1972). It is used to test the hypothesis that the distribution around the unit circle is uniform versus the hypothesis that it follows a von Mises distribution.
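A sketch of these calculations follows: the phase of each spike relative to the stimulus period is placed on the unit circle, the resultant is formed, and the Rayleigh statistic is computed. The large-sample approximation to the significance level, p ≈ exp(-z), is an assumption of this sketch; exact values are tabulated by Mardia (1972).

```python
# A minimal sketch of the mean direction, resultant length, and Rayleigh test
# for spike phases; p ~= exp(-z) is a large-N approximation.
import numpy as np

def mean_direction(spike_times, stim_freq):
    theta = 2 * np.pi * ((spike_times * stim_freq) % 1.0)  # phase of each spike
    C, S = np.cos(theta).mean(), np.sin(theta).mean()      # center of gravity
    R = np.hypot(C, S)                 # resultant length (synchronization index)
    mean_dir = np.arctan2(S, C)        # solution of tan(theta_bar) = S/C
    z = theta.size * R**2              # Rayleigh statistic
    p_approx = np.exp(-z)              # approximate p value for uniformity
    return mean_dir, R, p_approx
```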

The CH has been used extensively in the study of the auditory nerve fibers whose discharges "lock" to the stimulus (Hind, 1970); that is, they show a preference for a certain phase of the stimulus in their discharges, as shown in Fig. 9.13 for a stimulus frequency of 2500 Hz. This locking becomes less prominent as the stimulus frequency increases (Fig. 9.13, 3400 Hz) or as the intensity decreases. CHs have also been used to analyze the generation and distribution of distortion products in the inner ear. A system is said to be nonlinear if it does not obey the principle of superposition. When the input to a nonlinear system consists of two components at frequencies fL and fH, new components will be generated at frequencies that are combinations of these frequencies. In the auditory system, the most prominent distortion component is at the frequency 2fL - fH. It can be measured psychoacoustically and in the responses of eighth-nerve fibers (Goldstein & Kiang, 1960). The CH has been used to determine the amplitude and phase of the primary and any distortion components in the neural response. The search for the origin of these distortion products in the auditory system continues even today.

Additional Analysis

Various stimulus paradigms will require the systematic exploration of one or more independent variables, such as frequency, sound pressure level, the phase relation between two signals, or the contrast and spatial frequency of a visual display. An example of a 2-dimensional stimulus space for an auditory experiment is shown in Fig. 9.14. The coordinates of the points correspond to individual frequency and intensity combinations to be presented. They can be presented systematically or in random order. Usually, intensities are presented in a low to high sequence to avoid biasing the response to less intense stimuli by recovery effects. If the data collected at each "point" consist of the spike timings, any of the analyses previously described could be performed. An example of PSTHs for a part of the response area is shown in Fig. 9.15, where it can be seen that the nerve responded best at 4600 Hz.

Some summary statistics are often used to analyze the behavior of a neuron. A very common one is the spike rate during some interval of the stimulus sequence. The number of spikes recorded for each frequency and intensity of sound in the response area is shown in Fig. 9.16. The resulting series of curves tells us what region of the stimulus space the neural unit responds to. An alternative way of looking at these data is the isorate curve. It is formed by plotting the intensity necessary to produce a fixed rate of discharge at each frequency. It defines the filter characteristic for the auditory nerve fiber by showing what part of the stimulus space causes the unit to respond at the specified rate.

This is very important in trying to understand the process by which sound is converted from acoustic energy to neural energy in the ear. The response area and isorate curves have been compared with the results of investigation of the mechanics of the inner ear and seem to indicate the need for an additional stage of filtering to sharpen the mechanical response (Geisler, Rhode, & Kennedy, 1974). The origin of the exquisite frequency selectivity seen in auditory nerve fibers is still a puzzle to researchers.

In addition to the techniques applied to the study of single units, a number of techniques have been evolved to deal with multiple units. As the tip size of the electrode is increased, spikes from multiple neurons are received. If the tip is too large, a "hash" is seen that is useful for mapping regions of the brain but may not be useful in understanding the action of individual neurons. There is a middle ground where techniques can be applied to separate individual neural waveforms (Glaser, 1970). Additionally, multiple electrodes can be used to record from several units. The resulting multiple spike trains are then analyzed in an attempt to determine the neuronal circuitry and interactions; that is, what dependencies exist between the members of the neuronal population and what is the simplest hypothesis regarding the physiology that the data will support. There are several methods of investigating these relationships (Gerstein & Perkel, 1972; Moore, Segundo, Perkel, & Levitan, 1970; Perkel, Gerstein, & Moore, 1967b). They usually rely on some cross-correlation calculation or some graphical display of mutual interaction. Often a display is interpreted by comparing it to the results of what happens in a simulation of simple neuronal networks. Although this network simulation does not permit definitive statements to be made, it does often give the simplest available explanation.

Before leaving the subject of neural unit analysis, it is perhaps appropriate for me to apologize for the lack of coverage of other sensory systems and approaches. There are many other approaches that the reader could benefit from. The extensive work of Mountcastle (1976) in the somatosensory system and recent development of behavioral techniques is especially worthwhile. In vision an example of a particularly innovative technique is a computer-controlled TV display to search for the optimum visual stimulus for each neural unit studied (Harth & Tzanakou, 1974). The display is divided into a 32 x 32 grid. The intensity of each section is varied according to how the unit responded to the last change along with a random variation in each intensity. The final visual pattern should be the one the unit is most sensitive to. Stimulus motion could be incorporated in the technique. Another example of quantitative studies in the visual system is the work of Schiller, Finley, and Volman (1976). They used a computer-controlled apparatus and applied various statistical analyses culminating in neural models to explain the response patterns that they observed.

Other techniques of neural-signal analysis are still evolving. One in particular has its genesis in the area called systems identification. An attempt is made to describe the properties of a system in a mathematical fashion that represents the response characteristic as a series of "kernels" (McCann & Marmarelis, 1975). This is being applied in the investigation of nonlinear systems. One of the drawbacks of the technique is that the resulting model does not relate well to the physiology, therefore leaving the biologist with an uncomfortable feeling. There are many approaches to systems identification dealing with both linear and non-linear systems. The use of white noise as the stimulus is often dictated, as in the use of the reverse correlation techniques (DeBoer & Kuyper, 1968).

AER - Averaged Evoked Responses

Although the first recording of evoked potentials in mammals occurred in 1875, it was not until the introduction of the electronic amplifier that the recording of brain potentials through the unopened skull was demonstrated by Hans Berger.

There were problems in analyzing evoked responses that were obscured by the larger EEG activity. It was decided to enhance the low signal-to-noise (S/N) ratio of the evoked activity by averaging responses evoked by a repetitive stimulus.

The basic idea is very simple. Activity or electrical signals that are time locked to an event will sum according to a linear relation, whereas incoherent signals, e.g. noise, will sum in a root-mean-square (RMS) manner. This is the basis for S/N enhancement as M repetitions of the response are added. If M signals, x(t), are added, the signal grows as S = M·x(t), while the sum of M segments of noise, n(t), grows only as √M times the RMS noise level. Therefore, the S/N improvement is equal to M/√M = √M. That is, adding 4 repetitions together will improve the S/N ratio by a factor of 2. If 100 repetitions are necessary to obtain a noticeable evoked response, then 400 responses would be necessary to improve S/N by an additional factor of 2, whereas 1600 responses would have to be averaged for a second factor of 2 improvement. The point is that the fastest improvement occurs for small M. If S/N is not sufficient for some reasonable M, then further improvements will come only at great expense. The additional problem that must be faced is whether the system under investigation will be constant (stable) over the averaging period.
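The √M behavior is easy to demonstrate numerically. In the sketch below, a hypothetical 5-μV evoked response is buried in 20-μV RMS noise and averaged over M sweeps; the waveform, amplitudes, and the S/N measure used are all illustrative assumptions.

```python
# A minimal sketch of S/N improvement by averaging; signal and noise levels
# are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
fs, n = 1000, 500                                     # 1-kHz sampling, 0.5-sec epochs
t = np.arange(n) / fs
evoked = 5e-6 * np.exp(-((t - 0.1) / 0.02) ** 2)      # hypothetical 5-uV response
noise_rms = 20e-6                                     # 20-uV background activity

def snr_after_averaging(M):
    sweeps = evoked + noise_rms * rng.standard_normal((M, n))
    average = sweeps.mean(axis=0)
    residual = average - evoked
    return evoked.max() / residual.std()

for M in (1, 4, 16, 64):          # S/N should grow roughly as sqrt(M)
    print(M, round(snr_after_averaging(M), 1))
```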

The details of recording have been discussed in sufficient detail elsewhere (Goff, 1974). Whether nonpolarizable electrodes are necessary is dependent on the necessity of D.C. recording, the measurement of interference effects, and on the required ease of placement. Needle electrodes reduce skin surface potentials but exhibit an impedance that is inversely proportional to frequency.

In order to facilitate the comparison of EEGs, a standard electrode placement was proposed in 1947 and named the International 10-20 electrode placement. Some AER investigators realized the need for standard placement and have suggested the use of the 10-20 system as a reference. In general, the purpose of the experiment and the type of subject should determine electrode placement. The primary considerations are: (1) the sense modality being studied, (2) the study of the early or late response, and (3) the minimization of contamination by "extracranial potentials". Topological studies of VER (visual evoked response), AER (auditory evoked response) and SER (somatosensory evoked response) are available as guides.

SER stimuli often consist of electrical shocks to the median nerve at the wrist or legs. VER stimuli are often flashes of white light, and AER stimuli are usually clicks presented via earphones.

If short-latency AER components represent neural activity in the primary auditory receiving area, one might expect their focus to be in the temporal areas of the cortex. Several researchers (Goff, 1974) have found them to be at maximum in the vertex region. This leads to two questions: Are they of neural origin? Are they generated in the primary auditory cortex? The use of scalp and subdural electrodes at points near the vertex demonstrates that the AERs have the same waveform and latency at both sites. A similar waveform in the temporal region supports the conclusion that the early AER components are cochleoneurogenic. The latency, duration, and configuration of scalp potentials are not comparable to those recorded directly from the human primary auditory cortex. Barbiturate anesthesia suppresses early AER components at the vertex, supporting the idea that they are not primary auditory components.

Another issue in AER recording is whether monopolar or bipolar electrodes should be used. Bipolar recording looks at the algebraic difference between two electrodes. The basic problem is the interpretation of the bipolar records. By recording both bipolar and monopolar signals and varying stimulus intensity, it can be seen that monopolar recordings are important in the interpretation of the bipolar recordings because of differences in the topographic distribution of various AER components and intersubject variability. Assuming that the "indifference" of the reference electrode is good, then the interpretation of records is simpler in the monopolar case. The polarity of the signal recorded with bipolar electrodes is usually meaningless without independent assessment from monopolar recordings. In addition, if the monopolar recordings are stored, simple subtraction will produce the equivalent bipolar records whereas the opposite is not true.

The AERs are used to correlate brain activity with sensory and motor behavior. We seek information on the timing, magnitude, and location of neural events that take place in the brain during some sensory or behavioral sequence. Since scalp-recorded brain potentials provide a substantially degraded indication of intracranial processes, evidence of these events may be ambiguous. Timing information is the most unequivocal form of data. The magnitude of neural activity is much less secure, and its interpretation is ambiguous.

The recorded brain potential, V(t), can be represented as the sum of an evoked response, E(t), and the background EEG plus noise, or G(t) as in Eq. 8a.

V(t) = E(t) + G(t)     (8a)

We want to characterize E(t) and its variability and describe the statistical features of the EEG.

In sampling these signals, we are subject to the same constraints as are applicable to any signal-analysis problem; that is, the sampling rate must be at least twice the highest frequency present in the signal. The sample interval will be Δt, and V(t) will be sampled at ti = iΔt, i = 1, ..., m.

The mean and variance of V(t) are given in Eqs. 8 and 9.

The autocovariance gives a measure of correlation between points on the same record. A useful method of displaying averaged evoked responses is to plot the standard deviation, s, which is the root-mean-square deviation from the signal mean and is expressed in the same units as the data. As s decreases, we can begin to trust the bumps and wiggles present in the average. Remember, the reduction in the standard error, S, is proportional to 1/√M. A general guideline for choosing M is to reduce the fluctuation in S to 10% of the average.

There are several problems that are being addressed using AERs due to the availability of computers for data analysis. They include: (1) the detection of evoked potentials at psychophysical thresholds, which requires the application of statistical detection methods to the evaluation of the AER; a nonzero average, an increase in the RMS voltage, or a change in the autocovariance could all signal a significant change in the AER. (2) The second problem is the evaluation of differences in AER waveform in those instances where the reliability of differences in the AER is important. The two main approaches to evaluating differences are using the t-test for the significance of differences between means and assessing differences in correlation of averages obtained under different conditions. The latter technique employs a product-moment correlation that yields a single number; unfortunately, the nature of the waveform differences is then completely obscured. (3) The resolution of AERs into simple component waveforms is important since a major objective of brain-potential investigation is to define the physiologic origin and functional significance of the AER components. The problem of component identification (Donchin & Herning, 1975) is essentially a physiologic problem. (4) Waveshape sorting (Bartlett, John, Shimokochi, & Kleinman, 1975) has been used to sort single ensembles according to their likelihood of containing one or another of two predefined mean components.

Signal Analysis and the EEG

In EEG as in art, one person's signal is another person's noise. Whereas in AER the EEG was a confounding signal that interfered with the evoked response and had to be averaged out, it has been utilized to provide corroborative information for suspected clinical conditions. It is a record of more or less rhythmic fluctuations in electrical potentials that arise in the brain. These potentials are greatly attenuated by virtue of the fact that they are recorded from the scalp. The attenuation is undoubtedly greater for cortical potentials that are very localized in origin. Although there are many concerns about the recording process itself (the need to use nonpolarizable electrodes, placement of electrodes, monopolar vs. bipolar recording, patient state, movement, and a variety of artifacts that must be screened for by the EEG technician), the focus here is the analysis of the EEG. Reading EEGs is still somewhat of an art form. Certain signs are regarded as abnormal, such as the absence or reduction of rhythms, the presence of abnormal waveforms, slow waves, sharp waves, and spikes. The description of these abnormal waveforms usually includes the period, amplitude, incidence, location, sites of probable origin, reactivity to eye opening, effect of hyperventilating, and photic stimulation. The EEG has been used to classify the stage of sleep and is an indicator of the level of anesthesia in surgery. The lengthy evaluation time and lack of a quantified evaluation of the EEG has led to attempts to develop alternate, objective methods that could result in compressing the typical one-half-inch-thick stack of 14-channel recordings to a summary description. The techniques of signal analysis that have been applied are of interest in themselves and have application in other areas (Rabiner & Gold, 1975). The literature dealing with EEG analysis (Brazier & Walter, 1973; Gevins, Yeager, Diamond, Spire, Zeitlin, & Gevins, 1975) is very extensive, and the interested reader is encouraged to review it.

The analysis of EEG signals employs many fundamental techniques of signal analysis. The most fundamental is Fourier analysis, which is the determination of the frequency content of a signal and is predicated on being able to represent a signal as a sum of sine waves and cosine waves. The procedures used to analyze large amounts of data were often tedious and expensive until the relatively recent introduction of the Fast Fourier Transform or FFT (Brigham, 1974). This clever algorithm has revolutionized signal processing by permitting once infeasible computations to now be performed.

A randomly varying signal can be represented in the form of Eq. (11), a sum of sine and cosine components:

x(t) = Σk [ak cos(2πkt/T) + bk sin(2πkt/T)].     (11)

Normally, the quantity we are interested in is the magnitude of the k'th component, |Xk| = (ak² + bk²)^(1/2),

where the fundamental frequency is 1/T, and T is the length of the segment we are analyzing. All the components of x(t) are multiples of the fundamental frequency. In Fig. 9.17 a signal, x(t), and its spectrum are shown. It is clear that there is a prominent frequency component near 3 Hz. There are problems that arise with the analysis of signals such as the EEG that vary in their basic characteristics as a function of time. Signals of this type are called nonstationary; that is, their statistical properties are different depending on the interval of time analyzed. To overcome this problem, small time segments, 1 to 8 sec, are often analyzed, and the individual spectra are graphed in a 3-dimensional display called a compressed spectral array (Bickford, Fleming, & Billinger, 1971). This has the advantage of showing changes in the frequency content of a signal as a function of time. These spectra are sometimes averaged across time or are smoothed across frequency in order to reduce the variability of the estimate of the amplitudes of the frequency components. These are basic operations that may be useful in a variety of problems. One can divide the frequency axis into several bands in which the energy is summed and then plotted as a function of time. This makes it obvious when any major shift in frequency content of the signal occurs. Various statistics (Matousek & Peterson, 1973) can be developed using the sum, difference, and ratio of these energies in an attempt to discriminate between patient populations. There are many other manipulations that individuals have applied in analyzing EEGs where two channels are analyzed simultaneously. For example, a cospectrum is a correlation of the individual spectra and shows what frequency components are common to both signals along with their mutual phase relations.
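A sketch of this segment-by-segment spectral analysis follows; the 128-Hz sampling rate, 4-sec segments, and the synthetic 3-Hz signal are illustrative assumptions, and no windowing or smoothing is applied.

```python
# A minimal sketch of segment-wise magnitude spectra (the basis of a
# compressed spectral array); sampling rate and segment length are assumed.
import numpy as np

def segment_spectra(x, fs, seg_sec=4.0):
    seg_len = int(seg_sec * fs)
    n_seg = len(x) // seg_len
    segments = x[:n_seg * seg_len].reshape(n_seg, seg_len)
    spectra = np.abs(np.fft.rfft(segments, axis=1)) / seg_len
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)    # resolution = 1/T = 0.25 Hz here
    return freqs, spectra                           # one row per time segment

fs = 128
t = np.arange(60 * fs) / fs                         # one minute of signal
eeg_like = np.sin(2 * np.pi * 3 * t) + 0.5 * np.random.randn(t.size)
freqs, spectra = segment_spectra(eeg_like, fs)
```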

An important measure of signal correlation is the coherence. The power spectrum of a signal is defined as

S_{xx}(f) = Z_x(f)\, Z_x^*(f)        (13)

where Z is the Fourier transform of the time series and * denotes the complex conjugate. The cross-spectrum is defined in Eq. (14):

S_{xy}(f) = Z_x(f)\, Z_y^*(f)        (14)

The coherence, Eq. (15), gives a measure of the square of the correlation between the two time series at each frequency component:

\gamma_{xy}^2(f) = \frac{|S_{xy}(f)|^2}{S_{xx}(f)\, S_{yy}(f)}        (15)
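A sketch of the computation is given below. In practice the spectra are averaged over several short segments before the ratio is formed (without such averaging the estimate is identically 1); the segment length and test signals here are assumptions for illustration.

```python
# Sketch: coherence between two channels, Eqs. (13)-(15), with the spectra
# averaged over successive short segments.  Segment length is assumed.
import numpy as np

def coherence(x, y, fs, seg_len=256):
    n_seg = len(x) // seg_len
    Sxx = Syy = Sxy = 0.0
    for k in range(n_seg):
        sl = slice(k * seg_len, (k + 1) * seg_len)
        Zx = np.fft.rfft(x[sl])
        Zy = np.fft.rfft(y[sl])
        Sxx = Sxx + Zx * np.conj(Zx)      # Eq. (13)
        Syy = Syy + Zy * np.conj(Zy)
        Sxy = Sxy + Zx * np.conj(Zy)      # Eq. (14)
    freqs = np.fft.rfftfreq(seg_len, 1.0 / fs)
    gamma2 = np.abs(Sxy) ** 2 / (Sxx.real * Syy.real)   # Eq. (15)
    return freqs, gamma2

# Two channels sharing an 8-Hz component plus independent noise.
fs = 128
t = np.arange(0, 30, 1.0 / fs)
common = np.sin(2 * np.pi * 8 * t)
x = common + 0.5 * np.random.randn(t.size)
y = common + 0.5 * np.random.randn(t.size)
f, g2 = coherence(x, y, fs)
print(f"coherence near 8 Hz ~ {g2[np.argmin(np.abs(f - 8))]:.2f}")
```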

These are but a few of the techniques that have been applied to EEG analysis. Autocorrelation, period analysis, toposcopy, amplitude analysis, and several others have all been used. A full understanding of the EEG and its origin is yet to come. Perhaps the use of models (Wennberg & Zetterberg, 1971) will facilitate experiments that lead to a further advance in our understanding of the source and nature of the EEG.

NEUROANATOMICAL ANALYSIS

In recent years, computers have been used to assist in the analysis of neuroanatomical material in a number of ways (Lindsay, 1977). One of the simplest is the use of a graphics tablet to enter either cell counts or contours of the cell body and nucleus. An example is shown in Fig. 9.18. From this rather simple process, the cell-size distribution can be computed along with several other measures such as cell circularity and orientation. The advantage of this rather simple device is that it is easy to use and facilitates quantitative studies. One can readily test hypotheses regarding cell size, number, and distribution in studies of brain development.
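As a sketch of the kind of measure involved, assume the tablet delivers an ordered list of (x, y) vertices tracing the cell outline; the shoelace formula then gives the area, and the index 4πA/P² (equal to 1 for a circle) serves as a simple circularity measure. The data format is an assumption for illustration.

```python
# Sketch: cell area and circularity from a digitized outline, assuming the
# graphics tablet yields an ordered list of (x, y) vertices around the cell.
import math

def contour_measures(points):
    """Area (shoelace formula), perimeter, and circularity 4*pi*A/P**2."""
    n = len(points)
    area = 0.0
    perimeter = 0.0
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        area += x0 * y1 - x1 * y0
        perimeter += math.hypot(x1 - x0, y1 - y0)
    area = abs(area) / 2.0
    circularity = 4.0 * math.pi * area / perimeter ** 2
    return area, perimeter, circularity

# A square outline: circularity should come out to pi/4, about 0.785.
print(contour_measures([(0, 0), (10, 0), (10, 10), (0, 10)]))
```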

One can readily guess that it would also be very important to be able to study the entire 3-dimensional structure of neurons. One of the problems of working with anatomical material is the lack of contrast between very densely packed structures. To circumvent this, a variety of histological staining techniques have been developed. One of these, the Golgi stain, has the characteristic that only a very small percentage of the neurons in a section of brain tissue accept the stain; another is that usually the entire dendritic tree is stained. Techniques for injecting dyes into cells also permit the entire cell to be visualized. The anatomical material is sliced into thin sections, with structures of interest often distributed over several sections. Even when a cell is contained in one section, its entire 3-dimensional structure cannot be seen at any one time under the microscope. To overcome these deficiencies, techniques that allow the entry of 3-dimensional structures have been evolving over the past few years. The data can be derived from photographs and from light-microscope and electron-microscope images (Macagno, Levinthal, Tountas, Bornholdt, & Abba, 1976; Shantz, 1976), with different techniques required to handle each. The techniques are becoming quite sophisticated, and many use a graphic display to show the resulting neural structure. Stereo pairs and rotating stick figures have been used to impart a 3-dimensional impression.
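The following sketch illustrates only the stereo-pair idea, not any of the systems cited: a neuron is reduced to a list of 3-dimensional points (an assumed data format), which is rotated a few degrees about the vertical axis and projected twice to give left- and right-eye views.

```python
# Sketch: producing a stereo pair from a 3-D "stick figure" of a neuron.
# The structure is just an array of 3-D points (assumed data format).
import numpy as np

def rotate_y(points, degrees):
    a = np.radians(degrees)
    R = np.array([[np.cos(a), 0, np.sin(a)],
                  [0, 1, 0],
                  [-np.sin(a), 0, np.cos(a)]])
    return points @ R.T

def stereo_pair(points, separation_deg=4.0):
    # Rotate half the separation each way, then project onto the x-y plane.
    left = rotate_y(points, -separation_deg / 2)[:, :2]
    right = rotate_y(points, +separation_deg / 2)[:, :2]
    return left, right

# A toy dendritic branch: a few 3-D points in arbitrary units.
branch = np.array([[0, 0, 0], [5, 20, 3], [12, 45, 9], [20, 70, 18]], float)
L, R = stereo_pair(branch)
print(L.round(1), R.round(1), sep="\n")
```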

Another use of the computer in anatomical processing is to count silver grains in a photographic emulsion. The technique of autoradiography is used to study neural connectivity in the brain. It involves injecting a radioactive material that is absorbed by certain neurons and then transported along their axons. When the photographic emulsion is developed, the density and location of the silver grains are studied. The automation of this technique (Wann, Price, Cowan, & Agulnek, 1974) has helped in digesting vast amounts of data normally gathered in a very tedious way.
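A minimal sketch of the underlying idea, not the Wann et al. system: threshold the digitized field and count connected clusters of above-threshold pixels as grains.

```python
# Sketch: counting silver grains in a digitized autoradiograph field by
# thresholding and labeling connected clusters.  Illustration only.
import numpy as np
from scipy import ndimage

def count_grains(image, threshold):
    mask = image > threshold                  # grains brighter than background here
    labeled, n_grains = ndimage.label(mask)   # connected-component labeling
    sizes = ndimage.sum(mask, labeled, range(1, n_grains + 1))
    return n_grains, sizes

# A toy field containing two "grains".
field = np.zeros((20, 20))
field[3:5, 3:5] = 1.0
field[10:13, 14:16] = 1.0
print(count_grains(field, 0.5))               # -> (2, array([4., 6.]))
```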

It is worth pointing out that the data explosion that occurs in processing neuroanatomical data suggests an obvious need for computer-assisted analysis. In using an electron microscope to study tissue ultrastructure, researchers section the material into slices that are often on the order of 500 Å thick. A 1-mm block of tissue could thus be cut into 20,000 sections. If a 1-mm² area is magnified 10,000 times, it becomes an image 10 m on a side. Multiply this by 20,000 sections, and it is obvious that digitizing and storing all of this material is impossible. The material must be selectively analyzed and the data compressed for storage. These are problems that will continue to challenge us for some time.
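The arithmetic can be checked with a few lines; the 50-Å digitizing resolution used to estimate the pixel count is an assumption added for illustration.

```python
# Back-of-the-envelope version of the data-explosion arithmetic above.
# The 50-Å pixel size used for the storage estimate is assumed.
angstrom = 1e-10                      # meters
block_edge = 1e-3                     # 1-mm block (meters)
section_thickness = 500 * angstrom

n_sections = block_edge / section_thickness
print(f"sections per 1-mm block: {n_sections:,.0f}")        # 20,000

magnification = 10_000
image_edge = block_edge * magnification
print(f"magnified image: {image_edge:.0f} m on a side")      # 10 m

pixel = 50 * angstrom                 # assumed digitizing resolution
pixels_per_section = (block_edge / pixel) ** 2
total_pixels = pixels_per_section * n_sections
print(f"pixels for the whole block: {total_pixels:.1e}")     # ~8e14
```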

DATA-BASE MANAGEMENT SYSTEMS

Out of the brief considerations of neural-unit analysis and neuroanatomical structural analysis sometimes arises a feeling of inadequacy in dealing with the data. The development of large analysis systems is expensive and often a long process. It is important to use the very best tools, both hardware and software, to facilitate these developments. Computer science has produced advances in operating systems, compilers, and real-time computing, along with the area called data-base management systems, or DBMS. This latter area (Date, 1975) is worth attention because of the importance and expense of collecting, maintaining, and analyzing data. Unfortunately, one often collects vast amounts of data without giving consideration to analysis. The data then become virtually inaccessible, rendering them incomplete and useless. It is worthwhile to be aware of DBMS technology, because it may enhance one's programming-system design.

In each area, Soni and I have developed an integrated system that usually includes a nonresident command processor responsible for interpreting commands and initiating the required programs. An example of this is the response-area program (Rhode & Soni, 1976) that performs the statistical analyses described previously. We applied some DBMS concepts by using a data description language to specify the type of data and its organization within the data files. A separate description of the parameters required by each analysis program is also provided. A processor then "reads" both of these descriptions and passes data from file to program, performing any necessary formatting during the process. This results in a system that is easily modified and extended as needs arise.
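A minimal sketch of the data-description idea follows; the record layout, field names, and analysis routine are invented for illustration and do not represent the actual Rhode and Soni implementation.

```python
# Sketch of the data-description idea: one description of how the data file
# is laid out, another of what an analysis program expects, and a small
# "processor" that reads the first and hands the second only what it asks for.
# Field names, file format, and the hypothetical histogram program are invented.
import struct

# Data description: each record is a spike time (32-bit int, microseconds)
# followed by a stimulus code (16-bit int), little-endian, unpadded.
DATA_DESCRIPTION = [("spike_time_us", "i"), ("stimulus_code", "h")]

# Parameter description for a hypothetical interval-histogram program.
PROGRAM_NEEDS = ["spike_time_us"]

def read_records(path, description):
    fmt = "<" + "".join(code for _, code in description)
    size = struct.calcsize(fmt)
    names = [name for name, _ in description]
    with open(path, "rb") as f:
        while chunk := f.read(size):
            yield dict(zip(names, struct.unpack(fmt, chunk)))

def pass_to_program(path, description, needed):
    """Format each record down to the fields the analysis program asked for."""
    for rec in read_records(path, description):
        yield [rec[name] for name in needed]

# Usage: write two records, then stream just the spike times to the "program".
with open("unit001.dat", "wb") as f:
    f.write(struct.pack("<ih", 1250, 3))
    f.write(struct.pack("<ih", 2075, 3))
print(list(pass_to_program("unit001.dat", DATA_DESCRIPTION, PROGRAM_NEEDS)))
# -> [[1250], [2075]]
```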

A well-documented approach also makes it easier to transfer maintenance of the analysis systems to new programmers. The primary objective, however, is easier program development and interfacing to packages for various statistical analyses. A related consideration is the development of data files that contain summary information about the neural units investigated. That is, we often select certain features of the data, such as characteristic frequency, threshold, spontaneous discharge rate, and dynamic range, to provide a characterization of the neural unit. We may then apply statistical analyses such as cluster analysis or discriminant analysis to these data to determine whether any distinguishable populations exist.
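As an illustration of what such a summary file and analysis might look like, the sketch below builds one record per unit from the features named above and applies a simple two-cluster k-means as a generic stand-in for the cluster or discriminant analyses; all numbers are invented.

```python
# Sketch: a summary record per neural unit and a simple two-cluster k-means
# as a stand-in for the cluster analyses mentioned above.  Numbers invented.
import numpy as np

# columns: characteristic frequency (kHz), threshold (dB SPL),
#          spontaneous rate (spikes/sec), dynamic range (dB)
units = np.array([
    [1.2, 25.0, 60.0, 30.0],
    [1.4, 28.0, 55.0, 32.0],
    [8.5, 60.0,  2.0, 18.0],
    [9.1, 65.0,  1.5, 20.0],
])

def kmeans2(X, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), 2, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels

# Standardize features first so no single unit of measure dominates.
Z = (units - units.mean(axis=0)) / units.std(axis=0)
print(kmeans2(Z))        # e.g. [0 0 1 1]: two candidate populations
```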

In a comprehensive data-base system, methods of passing data to analysis programs are provided, along with data-manipulation programs and, occasionally, query programs with a few simple statistical routines. The ideas are worth studying, but their actual implementation is difficult and often expensive.

SUMMARY

Many of the hardware and software techniques used in neurophysiology have been briefly described. They are all important to an understanding of current computer use in this area. The literature references will provide a substantial entree to other uses and techniques. Unfortunately, one chapter could not cover all the important analysis and instrumentation techniques.

REFERENCES

Anderson, G. C., Finnie, B. W., & Roberts, G. T. Pseudo-random and random test signals. Hewlett-Packard Journal, September 1967, 1-12.

Bartlett, F., John, E. R., Shimokochi, M., & Kleinman, D. Electrophysiological signs of readout from memory, II. Computer classification of single evoked potential waveshapes. Journal of Behavioral Biology, 1975, 14, 409-449.

Bickford, R. G., Fleming, N. I., & Billinger, T. W. Compression of EEG data by isometric power spectral plots. Electroencephalography and Clinical Neurophysiology, 1971, 31, 631-636.

Brazier, M. A. V., & Walter, D. O. (Eds.). Evaluation of bioelectrical data from brain, nerve and muscle, 11. Handbook of electroencephalography and clinical neurophysiology. Amsterdam: Elsevier, 1973.

Brigham, E. O. The fast Fourier transform. Englewood Cliffs, N.J.: Prentice-Hall, 1974.

Brown, P. B. Computer technology in neuroscience. New York: Halsted Press, 1976.

Clark, W. A., & Molnar, C. E. A description of the LINC. In R. W. Stacy & B. D. Waxman (Eds.), Computers in biomedical research. New York: Academic Press, 1965.

Date, C. J. An introduction to database systems. Reading, Mass.: Addison-Wesley, 1975.

DeBoer, E., & Kuyper, P. Triggered correlations. Institute of Electrical and Electronics Engineers Transactions on Biomedical Engineering, 1968, 15, 169-179.

Donchin, E., & Herning, R. I. A simulation study of the efficacy of stepwise discriminant analysis in the detection and comparison of event related potentials. Electroencephalography and Clinical Neurophysiology, 1975, 38, 51-58.

Geisler, C. D., Rhode, W. S., & Kennedy, D. T. Responses to tonal stimuli of single auditory nerve fibers and their relationship to basilar membrane motion in the squirrel monkey. Journal of Neurophysiology, 1974, 37, 1156-1182.

Gerstein, G. L., & Perkel, D. H. Mutual temporal relationships among neuronal spike trains: Statistical techniques for display and analysis. Biophysical Journal, 1972, 12, 453-473.

Gevins, A. S., Yeager, C. L., Diamond, S. L., Spire, J. P., Zeitlin, H. M., & Gevins, A. H. Automated analysis of the electrical activity of the human brain (EEG): A Progress Report. Proceedings of the Institute of Electrical and Electronics Engineers, 1975, 63, 1382-1399.

Glaser, E. M. Separation of neuronal activity by waveform analysis. In S. Fine & R. M. Kenedi (Eds.), Advances in biological and medical engineering (Vol. 1). New York: Academic Press, 1970.

Glaser, E. M., & Ruchkin, D. S. Principles of neurobiological signal analysis. New York: Academic Press, 1976.

Goff, W. R. Human averaged evoked potentials: Procedures for stimulating and recording. In R. F. Thompson & M. M. Patterson (Eds.), Bioelectric recording techniques. New York: Academic Press, 1974.

Goldstein, J. L., & Kiang, N. Y. S. Neural correlates of the aural combination tone 2f1-f2. Proceedings of the Institute of Electrical and Electronics Engineers, 1968, 56, 981-992.

Harth, E., & Tzanakou, E. Alopex: A stochastic method for determining visual receptive fields. Vision Research, 1974, 14, 1475-1482.

Hind, J. E. Two-tone masking effects in squirrel monkey auditory nerve fibers. In R. P. Plomp & G. F. Smoorenburg (Eds.), Frequency analysis and periodicity detection in hearing. Leiden, The Netherlands: Sijthoff, 1970.

Lindsay, R. D. Computer analysis of neuronal structures. New York: Plenum, 1977.

Macagno, E. R., Levinthal, C., Tountas, C., Bornholdt, R., & Abba, R. Recording and analysis of 3-D information from serial section micrographs: The Cartos system. In P. B. Brown (Ed.), Computer technology in neuroscience. New York: Wiley, 1976.

Mardia, K. V. Statistics of directional data. New York: Academic Press, 1972.

Matousek, M., & Petersen, J. Automatic evaluation of EEG background activity by means of age-dependent EEG quotients. Electroencephalography and Clinical Neurophysiology, 1973, 35, 603-612.

McCann, G. D., & Marmarelis, P. Z. Proceedings of the first symposium on testing and identification of nonlinear systems. Pasadena: California Institute of Technology, 1975.

Moore, G. P., Segundo, J. P., Perkel, D. H., & Levitan, H. Statistical signs of synaptic interaction in neurons. Biophysical Journal, 1970, 10, 876-900.

Mountcastle, V. B. The world around us: Neural command functions for selective attention. Neurosciences Research Program Bulletin, 1976, 14 (Supplement).

Perkel, D. H., Gerstein, G. L., & Moore, G. P. Neuronal spike trains and stochastic point processes: I. The single spike train. Biophysical Journal, 1967, 7, 391-418. (a)

Perkel, D. H., Gerstein, G. L., & Moore, G. P. Neuronal spike trains and stochastic point processes: II. Simultaneous spike trains. Biophysical Journal, 1967, 7, 419-440. (b)

Rabiner, L. R., & Gold, B. Theory and application of digital signal processing. Englewood Cliffs, N.J.: Prentice-Hall, 1975.

Rhode, W. S., & Olson, R. E. A digital stimulus system (Monograph No. 2). Madison, Wis.: Laboratory Computer Facility, 1975.

Rhode, W. S., & Soni, V. Neural unit data analysis system. In P. B. Brown (Ed.), Current computer technology in neurobiology. Hemisphere, 1976.

Robles, L., Rhode, W. S., & Geisler, C. D. Transient response of the basilar membrane measured in squirrel monkeys using the Mössbauer effect. Journal of the Acoustical Society of America, 1976, 59, 926-939.

Rodieck, R. W., Kiang, N. Y. S., & Gerstein, G. L. Some quantitative methods for the study of spontaneous activity of single neurons. Biophysical Journal, 1962, 2, 351-367.

Schiller, P. H., Finlay, B. L., & Volman, S. F. Quantitative studies of single-cell properties in monkey striate cortex. Journal of Neurophysiology, 1976, 39, 1288-1374.

Shantz, M. J. A minicomputer-based image analysis system. In P. B. Brown (Ed.), Computer technology in neuroscience. New York: Wiley, 1976.

Szentágothai, J., & Arbib, M. A. Conceptual models of neural organization. Cambridge, Mass.: The MIT Press, 1975.

Wann, D. F., Price, J. L., Cowan, W. M., & Agulnek, M. A. An automated system for counting silver grains in autoradiographs. Brain Research, 1974, 81, 31-58.

Wennberg, A., & Zetterberg, L. H. Application of a computer-based model for EEG analysis. Electroencephalography and Clinical Neurophysiology, 1971, 31, 457-468.

Wolf, V. R., & Bilger, R. C. Generating complex waveforms. Behavior Research Methods & Instrumentation, 1972, 4, 250-256.