Who is responsible for operant conditioning?

There are two types of consequences: positive (sometimes called pleasant) and negative (sometimes called aversive). These can be added to or taken away from the environment in order to change the probability that a given response will occur again. Crossing the two types of consequences with the two operations yields the four major techniques of operant conditioning: positive reinforcement, negative reinforcement, punishment, and response cost. Stimuli are presented in the environment according to a schedule, of which there are two basic categories: continuous and intermittent.

Continuous reinforcement simply means that the behavior is followed by a consequence each time it occurs. Intermittent schedules are based either on the passage of time (interval schedules) or on the number of correct responses emitted (ratio schedules). The consequence can be delivered after the same amount of time or the same number of correct responses each time (fixed), or after an amount of time or number of correct responses that varies around a particular average (variable).

This results in four classes of intermittent schedules. Fixed interval -- the first correct response after a set amount of time has passed is reinforced; the time period required is always the same. Notice that, in the context of positive reinforcement, this schedule produces a scalloping effect during learning: a dramatic drop-off in responding immediately after reinforcement.

Also notice the number of behaviors observed in a 30-minute time period. Variable interval -- the first correct response after a set amount of time has passed is reinforced. After the reinforcement, a new time period (shorter or longer) is set, with the average equaling a specific value over the sum of trials.

Notice that this schedule reduces the scalloping effect and that the number of behaviors observed in the 30-minute time period is slightly increased. Fixed ratio -- a reinforcer is given after a specified number of correct responses. This schedule is best for learning a new behavior.

Notice that behavior is relatively stable between reinforcements, with a slight delay after a reinforcement is given. Also notice that the number of behaviors observed during the 30-minute time period is larger than that seen under either of the interval schedules. Variable ratio -- a reinforcer is given after a certain number of correct responses.

After reinforcement, the number of correct responses necessary for the next reinforcement changes. This schedule is best for maintaining behavior. Notice that the number of responses per time period increases as the schedule is changed from fixed interval to variable interval, and from fixed ratio to variable ratio. In summary, schedules of consequences are often called schedules of reinforcement because only one schedule is appropriate for administering response cost and punishment: continuous (a fixed ratio of one).
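
To make the four contingency rules concrete, here is a minimal Python sketch (our own illustration; the function names are hypothetical, not from the source). It assumes an idealized subject that emits one correct response per second for 30 minutes, so any difference in the number of reinforcers delivered comes from the schedule rule alone; in a real experiment, of course, the response rate itself changes with the schedule.

```python
import random

def fixed_ratio(n):
    """Reinforce every n-th correct response."""
    count = 0
    def rule(elapsed_s):
        nonlocal count
        count += 1
        if count >= n:
            count = 0
            return True
        return False
    return rule

def variable_ratio(mean_n):
    """Reinforce after a number of responses that varies around mean_n."""
    count, target = 0, random.randint(1, 2 * mean_n - 1)
    def rule(elapsed_s):
        nonlocal count, target
        count += 1
        if count >= target:
            count, target = 0, random.randint(1, 2 * mean_n - 1)
            return True
        return False
    return rule

def fixed_interval(period_s):
    """Reinforce the first response emitted after period_s has elapsed."""
    last = 0.0
    def rule(elapsed_s):
        nonlocal last
        if elapsed_s - last >= period_s:
            last = elapsed_s
            return True
        return False
    return rule

def variable_interval(mean_s):
    """Reinforce the first response after a wait that varies around mean_s."""
    last, wait = 0.0, random.uniform(0, 2 * mean_s)
    def rule(elapsed_s):
        nonlocal last, wait
        if elapsed_s - last >= wait:
            last, wait = elapsed_s, random.uniform(0, 2 * mean_s)
            return True
        return False
    return rule

# One correct response per second for 30 simulated minutes, as in the
# text's 30-minute observation periods.
for name, rule in [("FR 10", fixed_ratio(10)),
                   ("VR 10", variable_ratio(10)),
                   ("FI 60 s", fixed_interval(60)),
                   ("VI 60 s", variable_interval(60))]:
    reinforcers = sum(rule(t) for t in range(1, 30 * 60 + 1))
    print(f"{name}: {reinforcers} reinforcers in 30 min")
```

At a constant response rate, the ratio rules deliver far more reinforcers than the interval rules, which are capped by the clock; this is one way to see why ratio schedules sustain higher response rates.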

In fact, certainty of application is the most important aspect of using response cost and punishment. Using an intermittent schedule when attempting to reduce a behavior may actually strengthen the behavior, certainly an unwanted end result. The Premack Principle, often called "grandma's rule," states that a high-frequency activity can be used to reinforce a low-frequency behavior.

Access to the preferred activity is contingent on completing the low-frequency behavior. The high-frequency behavior to use as a reinforcer can be determined by asking students what they would like to do, observing students during their free time, or determining what might be expected behavior for a particular age group.

Analyzing Examples of Operant Conditioning

There are five basic processes in operant conditioning: positive and negative reinforcement strengthen behavior; punishment, response cost, and extinction weaken behavior.

The stimulus used to reinforce a certain behavior can be either primary or secondary. A primary reinforcer, also called an unconditioned reinforcer, is a stimulus that has innate reinforcing qualities.

These kinds of reinforcers are not learned. Water, food, sleep, shelter, sex, touch, and pleasure are all examples of primary reinforcers: organisms do not lose their drive for these things. Some primary reinforcers, such as drugs and alcohol, merely mimic the effects of other reinforcers. For most people, jumping into a cool lake on a very hot day would be reinforcing, and the cool lake would be innately reinforcing: the water would cool the person off (a physical need), as well as provide pleasure.

A secondary reinforcer, also called a conditioned reinforcer, has no inherent value and only has reinforcing qualities when linked or paired with a primary reinforcer. Before pairing, the secondary reinforcer has no meaningful effect on a subject. Money is one of the best examples of a secondary reinforcer: it is only worth something because you can use it to buy other things, either things that satisfy basic needs (food, water, shelter, all primary reinforcers) or other secondary reinforcers.

A schedule of reinforcement is a tactic used in operant conditioning that influences how an operant response is learned and maintained. Each type of schedule imposes a rule or program that determines how and when a desired behavior is reinforced. Behaviors are encouraged through the use of reinforcers, discouraged through the use of punishments, and rendered extinct by the complete removal of the reinforcing stimulus.

Schedules vary from simple ratio- and interval-based schedules to more complicated compound schedules that combine one or more simple strategies to manipulate behavior. Continuous schedules reward a behavior after every performance of the desired behavior. This reinforcement schedule is the quickest way to teach someone a behavior, and it is especially effective in teaching a new behavior. Simple intermittent schedules (sometimes referred to as partial schedules), on the other hand, reward the behavior only after certain ratios or intervals of responses.

There are several different types of intermittent reinforcement schedules. These schedules are described as either fixed or variable and as either interval or ratio. Fixed refers to when the number of responses between reinforcements, or the amount of time between reinforcements, is set and unchanging.

Variable refers to when the number of responses or amount of time between reinforcements varies or changes. Interval means the schedule is based on the time between reinforcements, and ratio means the schedule is based on the number of responses between reinforcements.

Simple intermittent schedules are a combination of these terms, creating the following four types of schedules: fixed interval, variable interval, fixed ratio, and variable ratio. All of these schedules have different advantages.

In general, ratio schedules consistently elicit higher response rates than interval schedules, because the rate of reinforcement depends directly on the rate of responding rather than on the clock. For example, a factory worker who is paid per item manufactured is motivated to manufacture those items quickly and consistently. Variable schedules are categorically less predictable, so they tend to resist extinction and encourage continued responding.
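
A bit of arithmetic makes the contrast explicit. In the sketch below (the pay numbers and function names are invented for illustration), reinforcers earned under a ratio rule grow in proportion to the response rate, while under an interval rule they are capped by the clock no matter how fast the worker responds.

```python
def reinforcers_per_hour_ratio(responses_per_hour, ratio):
    # Fixed ratio: every `ratio`-th response pays, so payoff scales
    # directly with response rate.
    return responses_per_hour / ratio

def reinforcers_per_hour_interval(responses_per_hour, interval_s):
    # Fixed interval: at most one payoff per interval, provided at least
    # one response occurs; responding faster earns nothing extra.
    max_by_clock = 3600 / interval_s
    return min(responses_per_hour, max_by_clock)

for rate in (60, 120, 240):  # responses per hour
    print(rate,
          reinforcers_per_hour_ratio(rate, ratio=10),
          reinforcers_per_hour_interval(rate, interval_s=60))
# Doubling the response rate doubles the ratio-schedule earnings,
# but interval-schedule earnings stay pinned at 60 per hour.
```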

Gamblers and fishermen alike know the feeling that one more pull on the slot-machine lever, or one more hour on the lake, will change their luck and deliver the awaited reward.

Thus, they continue to gamble and fish, regardless of previously unsuccessful feedback.

Simple reinforcement-schedule responses: the four reinforcement schedules yield different response patterns, summarized below.

The variable-ratio schedule is unpredictable and yields high and steady response rates, with little if any pause after reinforcement (e.g., the gambler at the slot machine). A fixed-ratio schedule is predictable and produces a high response rate, with a short pause after reinforcement (e.g., the factory worker paid per item). The variable-interval schedule is unpredictable and produces a moderate, steady response rate (e.g., the fisherman waiting for a bite). The fixed-interval schedule yields a scallop-shaped response pattern, reflecting a significant pause after reinforcement (e.g., an employee paid by the hour). Extinction of a reinforced behavior occurs at some point after reinforcement stops, and the speed at which this happens depends on the reinforcement schedule.

Among the reinforcement schedules, variable ratio is the most resistant to extinction, while fixed interval is the easiest to extinguish. All of the schedules described above are referred to as simple schedules. Compound schedules combine at least two simple schedules and use the same reinforcer for the same behavior. Compound schedules are often seen in the workplace: for example, if you are paid at an hourly rate (fixed interval) but also receive a small commission for certain sales (fixed ratio), you are being reinforced by a compound schedule, as the sketch below illustrates.
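
As a toy illustration of that workplace example (all pay figures are invented), the two component schedules simply contribute two independent streams of reinforcement:

```python
def compound_pay(hours_worked, sales_made,
                 hourly_rate=15.0,         # fixed-interval component: paid per hour
                 commission_per_sale=2.0,  # fixed-ratio component: paid per sale
                 sales_per_commission=1):
    wage = hours_worked * hourly_rate
    commission = (sales_made // sales_per_commission) * commission_per_sale
    return wage + commission

print(compound_pay(hours_worked=8, sales_made=12))  # 8*15 + 12*2 = 144.0
```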

There are many possibilities for compound schedules: for example, superimposed schedules use at least two simple schedules simultaneously.

Concurrent schedules, on the other hand, provide two possible simple schedules simultaneously, but allow the participant to respond on either schedule at will.

All combinations and kinds of reinforcement schedules are intended to elicit a specific target behavior.

Operant Conditioning: Key Points

The law of effect states that responses that produce a satisfying effect in a particular situation become more likely to occur again, while responses that produce a discomforting effect are less likely to be repeated.

Edward L. Thorndike first studied the law of effect by placing hungry cats inside puzzle boxes and observing their actions. He quickly realized that cats could learn the efficacy of certain behaviors and would repeat those behaviors that allowed them to escape faster. The law of effect is at work in every human behavior as well. From a young age, we learn which actions are beneficial and which are detrimental through a similar trial and error process.

While the law of effect explains behavior from an external, observable point of view, it does not account for internal, unobservable processes that also affect the behavior patterns of human beings.

In one line of research, reinforcement was electrical stimulation applied directly to the medial forebrain bundle (MFB), which links the ventral tegmental area and nucleus accumbens. The animals learned the task very well, based only on internal stimulation serving as the S^D (discriminative stimulus) and S^R (reinforcing stimulus). Thus, the functions of an external S^D and appetitive S^R were simulated within the organism.

Also, the reinforced response can itself be neural. In an experiment by Nicolelis and Chapin, rats and monkeys obtained appetitive reinforcers contingent on the emission of a neural activity pattern from the motor cortex that had been correlated with a previously shaped operant motor response.

A noticeable increase in the frequency of such patterns was recorded, which demonstrated reinforcement. Another relevant datum from this experiment is that capturing activity from only a small neuronal population was sufficient for an algorithm to decode real-time neural activity correlated with the operant motor response and to transmit that information so the system could release reinforcement contingent on the neural pattern.
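
The decoding step can be pictured as template matching on population firing rates. The sketch below is a deliberately simplified stand-in for the actual algorithm, which the source does not specify: it stores the mean firing pattern that accompanied the shaped motor response and releases reinforcement whenever ongoing activity correlates strongly with that template. The population size and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N_NEURONS = 40  # "a few dozen" neurons; the exact count is hypothetical

# Template: mean firing-rate vector (spikes/s) recorded while the animal
# performed the previously shaped operant motor response.
template = rng.uniform(5.0, 50.0, size=N_NEURONS)

def matches_template(rates, threshold=0.9):
    """Signal reinforcement when the current population vector correlates
    strongly with the stored motor-response pattern."""
    r = np.corrcoef(rates, template)[0, 1]
    return r >= threshold

# Stream of population vectors: mostly unrelated activity, occasionally
# a noisy copy of the learned pattern.
for t in range(10):
    if t % 4 == 0:
        rates = template + rng.normal(0.0, 2.0, size=N_NEURONS)
    else:
        rates = rng.uniform(5.0, 50.0, size=N_NEURONS)
    if matches_template(rates):
        print(f"t={t}: pattern detected -> deliver reinforcer")
```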

The precision of the algorithm was so great that it could anticipate the topography of the arm movements performed by the monkeys. Therefore, neural responses can be reinforced similarly to movements, and even the activity of a few dozen neurons can serve as the response unit to be reinforced. When an associative cortical area lies several synapses away from the neural output of the operant response, its activity record tends to reveal aspects of response planning rather than performance (Hoshi; Scherberger and Andersen). Among the research on antecedent stimulus function and motor planning, the experiment by Musallam, Corneil, Greger, Scherberger, and Andersen must be highlighted.

These authors implanted electrodes in a parietal area of the monkeys' brains that mediates between the visual and premotor cortices and precedes reaching movements by several synapses. In their procedure, each time an S^D was shown on a screen, its specific position had to be touched after a brief delay. An algorithm decoded neural activity prior to specific movements and thus could foresee, during the delay, which position would be touched. Thus, real movement could be dispensed with, and the animal's "intention" could be reinforced.

Subsequently, two S^Ds indicated different quantities, qualities, and probabilities of the reinforcer, and the algorithm's prediction became even more accurate when the S^D signaled a preferred variation of the reinforcer.

In summary, operant neural responses can reliably precede operant motor responses, similar to the findings of Schoenbaum et al. Moreover, neural responses also indicate the value of positive reinforcers. According to Musallam et al., the assumption that such recordings capture the neural substrate of "thoughts" should be seriously considered, because information from the parietal cortex must still traverse a long path to produce a motor response, passing at least through the premotor and primary motor cortices.

A possible application derived from these methods of reinforcing or reading neural responses is the development of equipment to assist people with motor dysfunction, who generally have vision-related areas preserved.

These areas can therefore provide signals of intended movement. Neural research is thus turning to complex aspects of behavior, as in this case of concurrent coding of visual, motor, and motivational information in associative cerebral areas (Musallam et al.).

As discussed below, other associative areas, such as the hippocampus and entorhinal cortex, also play a role in learning complex relations involved in cognition. The stimulus equivalence paradigm involves creating arbitrary categories formed by stimuli that do not have physical similarities. The paradigm is thus considered a basis for symbolic stimulus association and complex stimulus control.

In equivalence class formation, different pairs of sample and comparison stimuli (e.g., A1-B1 and A2-B2) are related through training. Conditional on the presentation of a single sample stimulus in each trial (e.g., A1), the subject chooses among comparison stimuli; each sample presentation offers at least two comparisons (B1 and B2).

Only the choice of the correct comparison is reinforced: if the sample is A1, then the choice of B1 is reinforced, and the choice of B2 is not. Certain stimuli, called nodes, are the links that allow grouping of stimuli that were never paired during reinforced training (after reinforced AB and AC training, A is the node between B and C). After the training, tests are conducted under extinction conditions, and the establishment of pairs that were not directly reinforced is expected.

Through such a procedure, arbitrary stimuli of the same class become interchangeable and share behavioral functions. An event may then be referred to through its substitute, and symbolic knowledge is said to emerge. In studies of neuroscience and symbolic behavior, much has been investigated about the function of the hippocampus and connected associative areas, because they are involved in memory and appear to play a role in establishing symbolic relations between stimuli (Mesulam; Miyashita). Relations emerging from symmetry and transitivity in rats with an injured hippocampus were studied by Bunsey and Eichenbaum. Only olfactory stimuli were used because this modality is naturally involved when rats search for food.
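
The logic of the training and test contingencies can be summarized in a short sketch (the class labels and helper names are illustrative, not the authors' materials): AB and AC relations are explicitly reinforced, and the symmetry (BA, CA) and transitivity/equivalence (BC, CB) relations are expected to emerge without ever being trained.

```python
import random

# Explicitly trained conditional relations: the A stimuli are the nodes
# linking B and C into two classes, {A1,B1,C1} and {A2,B2,C2}.
trained = {("A1", "B1"), ("A2", "B2"), ("A1", "C1"), ("A2", "C2")}

def derived(trained):
    """Relations expected to emerge in testing without direct reinforcement."""
    symmetry = {(b, a) for (a, b) in trained}
    # Transitivity through the node: AB and AC training implies BC (and CB).
    transitivity = {(b, c)
                    for (a, b) in trained
                    for (a2, c) in trained
                    if a == a2 and b[0] != c[0]}
    return symmetry | transitivity

def trial(sample, comparisons, relations):
    """One matching-to-sample trial: the choice counts as correct if the
    chosen comparison is related to the sample. During training a correct
    choice is reinforced; during testing no reinforcement is delivered."""
    choice = random.choice(comparisons)   # stand-in for the subject's choice
    return (sample, choice) in relations

print(sorted(derived(trained)))
# [('B1', 'A1'), ('B1', 'C1'), ('B2', 'A2'), ('B2', 'C2'),
#  ('C1', 'A1'), ('C1', 'B1'), ('C2', 'A2'), ('C2', 'B2')]
```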

During every trial, after digging in a cup of sand scented with the sample odor to retrieve a buried cereal reward, the rat was presented with two comparison cups scented with other odors.

The odor of the first comparison cup in which the rat began to dig was considered the chosen comparison stimulus. In the extinction tests, injured rats learned neither symmetry (CB) nor transitivity (AC), although control rats performed accurately in tests of these emergent relations.

Other ways to alter hippocampal function were also studied by H. Eichenbaum and colleagues, and their results highlight the importance of the entorhinal cortex (which has extensive interconnections with the hippocampus) in conditional discrimination procedures. Results similar to those described by Bunsey and Eichenbaum were obtained by destroying the cholinergic afferents of the entorhinal cortex, thereby suppressing the information it transmits to the hippocampus.

McGaughy, Koene, Eichenbaum, and Hasselmo studied conditional discrimination with odorized stimuli and verified that an already-learned conditional discrimination was maintained after lesions of the cholinergic afferents, even at intervals from 15 min to 3 h after training.

However, no learning occurred when new odors were presented in non-reinforced tests performed after the surgical lesion, reiterating the importance of hippocampal pathways for recent memories. Results consistent with those of Bunsey and Eichenbaum and McGaughy et al. were also obtained by Buckmaster, Eichenbaum, Amaral, Suzuki, and Rapp, who trained monkeys in conditional discriminations AB and AC, in which the sample and comparison stimuli were cookies of different colors and shapes.

The cookies used as samples and correct comparisons had the same appetitive flavor, whereas incorrect choices had a bitter flavor. Thus, the visual modality of the cookies defined the antecedent stimuli (sample and comparisons), whereas the gustatory modality served as the positive or negative reinforcer for choices made during both training and testing, because flavor was inherent to the stimuli that the monkeys received and ingested.

The authors verified that monkeys with an injured entorhinal cortex required longer training and also did not show learning of transitive relations. Confirming the importance of the entorhinal cortex in conditional discrimination, Coutureau et al. trained rats in an acquired-equivalence procedure in which classes were formed across stimuli of the visual and other modalities.

Therefore, one class contained a visual chamber stimulus (V1), a temperature chamber stimulus (T1), and an auditory stimulus (A1), and the other class was formed by V2, T2, and A2. The auditory stimuli were the nodal stimuli of their respective classes. A substantial amount of free food was then provided in chamber V1, but not in chamber V2, and greater activity was observed in the chamber associated with plenty of food. When rats with an injured hippocampus were placed in chambers T1 and T2, they behaved as if they were in V1 and V2, respectively.

However, rats with an entorhinal injury were not sensitive to the differential reinforcement and therefore did not respond in the T chambers as if they were in the corresponding V chambers. Regarding cellular measures, if the learning of a conditional discrimination corresponds to differential neuronal responses, specific neural pathways or processes can be suggested to codify meanings.

Sakai and Miyashita recorded the responses of temporal cortex neurons in two monkeys during a matching-to-sample procedure, presenting arbitrary visual stimuli (geometric patterns) on a computer screen.

Because the temporal cortex is intimately involved in memory processes, a 4-s delay was established between the end of the sample presentation and the presentation of the comparison stimuli. A correct comparison choice released fruit juice as the S^R.

Relations among 12 pairs of stimuli were reinforced (pairs 1-1' to 12-12'), as well as the respective symmetric relations (pairs 1'-1 to 12'-12). Two patterns of neuronal electrical activity appeared in the records after training. In the first pattern, some neurons responded consistently to both members of certain stimulus pairs; for example, neuron X responded to pairs 12-12' and 12'-12, and neuron Y responded to pairs 5-5' and 5'-5 and also to 6-6' and 6'-6. In the second pattern, other neurons responded better to one of the members of the pair.

If, for example, the activity of neuron Z was greater for stimulus 7', then both response elicitation as soon as this stimulus was presented in pair 7'-7 and a gradual increase in the neuronal response during the delay of pair 7-7' were verified. Such activity during the delay between the presentation of the sample and the comparison stimuli was not attributable to anticipation of motor activity, because the monkeys could not foresee the position where the comparison stimulus would appear on the screen.

Discriminative responses occurred for various pairs of stimuli because individual neurons codified each element of a pair independently of the function it assumed in the contingency, whether as sample or as correct comparison: the neurons correctly signaled, in advance, the presence of a particular member of the stimulus pair.

Citing these results and indicating that behavior analysis cannot renounce neural analysis, depending on the research problem, Donahoe suggested that "direct effects of stimulus-stimulus relations can be observed only at the neural level." Future research might well record the activity of dopaminergic mesencephalic neurons of monkeys during matching-to-sample. Generally, better accuracy in behavioral performance during training implies better prediction of which comparison signals reinforcement.

Therefore, the behavioral discrepancy would be smaller, and the response of dopaminergic neurons should decrease. Additionally, verifying the activity of these neurons in a test of emergent relations would be even more interesting. Would neural activity indicate that new relations between stimuli cause surprise and a large discrepancy? Or, conversely, would emergent relations show the same neural pattern observed for trained relations, revealing the absence of discrepancy?
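
This prediction about dopaminergic responding follows the standard prediction-error account. As a hedged illustration (a Rescorla-Wagner-style update, not a claim about the circuit's actual computation), the discrepancy term shrinks as training makes the reinforcer predictable, which is why well-trained relations should evoke little dopaminergic response:

```python
def prediction_errors(trials=20, alpha=0.3, reward=1.0):
    """Rescorla-Wagner-style updating: v is the learned prediction of the
    reinforcer; delta is the trial-by-trial discrepancy, the quantity that
    dopaminergic firing is thought to track."""
    v = 0.0
    for t in range(1, trials + 1):
        delta = reward - v          # large early in training, near zero later
        v += alpha * delta
        print(f"trial {t:2d}: prediction error = {delta:.3f}")

prediction_errors()
# If an emergent (untrained) relation behaved like a trained one, its first
# test trial would already show a near-zero error; a large error would
# instead signal "surprise" at the new stimulus pairing.
```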

A traditional question in the equivalence area concerns "whether emergent behavior exists before we actually see it" (Sidman). Does it already exist after training but before the test of the emergence of new relations, or does it emerge only because of the variables present in the test? For some behavioral theorists (among them M. Sidman himself), the assumption that classes of stimuli existed before the test contingency could dangerously slide into cognitivism.

With regard to this issue, Haimson, Wilkinson, Rosenquist, Ouimet, and McIlvane reported an interesting datum that strengthens Sidman's view of the need for testing. As is typical with humans, after the arbitrary matching-to-sample training, the participants of that study successfully performed the matching-to-sample test for emergent relations.

Before or after the test phase, however, related and unrelated stimulus pairs were alternately presented in non-reinforced trials. These pairs involved the potentially related and unrelated stimuli that would appear in the equivalence test.

The participants were asked to silently judge whether each pair was related. Haimson and colleagues measured a brain-wave pattern called the N400, a negative voltage deflection that typically appears approximately 400 ms after a person notices that phrases or words are semantically mismatched.

The N400 pattern was immediately seen for unrelated pairs after the matching-to-sample testing phase. However, when the electrophysiological data were collected prior to the matching-to-sample test, the N400 tended to develop over the trials with unrelated pairs. Once the N400 pattern was clearly established, the participants immediately showed accuracy in the ensuing matching-to-sample test. This finding indicated that responding (silently judging) to stimulus pairs favored equivalence class formation.

Thus, following Haimson et al., the existence of a testing context, regardless of whether it involves matching-to-sample, is probably necessary for the emergence of equivalence relations, as Sidman argued. In practice, the N400 waveform might be a good neural candidate for predicting accuracy in symbolic learning. The area of equivalence class formation thus opens multiple possibilities for research on the behavioral-neural interface.

This is especially true when considering, as suggested by Matos and Sidman, that antecedent stimuli, reinforcing stimuli, covert events such as drug effects, elicited responses such as skin conductance, and operant responses determined by reinforcement schedules may all become part of a stimulus class.

Such inclusion suggests that all of these elements may be part of environment-behavior units selected by reinforcement, according to the proposal of Donahoe and Palmer. A potentially fertile field is revealed here for investigating the confluence of neurobiological and behavioral variables.

The neural processes involved in stimulus control described in most of the articles selected for this paper parallel the two conditioning paradigms. Integrating the paradigms demands careful analysis of the literature, because almost all of the papers found used either the classical or the operant paradigm; they did not have to address, for example, the comparable features of experimental design.

Few of the cited authors dealt with both paradigms. Apparently, even W. Schultz, one of the most-cited researchers in the present work, showed no concern with comparing paradigms. His analyses are simply descriptions of relations in terms of predictor stimuli (CS or S^D) and predicted stimuli (US or S^R), without mention of any possible interference between classical and operant contingencies.

For example, in classical procedures, Schultz neither suggested nor controlled for the possibility that superstitious operant responses (temporally contiguous with, but not causally contingent on, the consequence) interfered in the intervals between presentations of the CS and US (Fiorillo et al.). Schultz also did not consider that, although the antecedent stimulus normally serves as an S^D in operant procedures, it is impossible to know with assurance, at the neural level, whether the pathway of the antecedent stimulus might not actually elicit the processes that culminate in the measured neural activity (Hassani et al.).

The S^D for an operant task could, in fact, also be a CS for neuronal activity. A summary of the discussed findings will now be presented to clarify the similarities and differences between the mechanisms of the two conditioning paradigms. The present paper discussed similarities with regard to the following points.

The CS also evokes neuronal discriminative responses that vary according to US probability. Neurons in the relevant areas respond according to which stimuli precede the reinforcers preferred by monkeys (Hassani et al.). The data reviewed so far with respect to stimulus control suggest that the boundaries between reflexive and operant behavior are tenuous.

Nevertheless, an important difference was found by Lorenzetti et al.: the plasticity shown by neuron B51 of Aplysia took distinct courses in operant and classical conditioning. In the former, neuronal excitability increased, whereas in the latter it diminished. The classical conditioning case appears incongruent at first, because diminished excitability should produce weaker CR elicitation.

However, a concomitant increase was found in the excitatory input from the presynaptic neuron, which compensated for the diminished excitability of B51 in classical conditioning. The study of neural variables may in fact be applied to the entire research program of behavior analysis. For example, stimulus equivalence is an obvious field ripe for exploration. In addition to the promising use of waveforms such as the N400 in anticipating accuracy in complex learning (Haimson et al.), the neural pathways engaged by emergent relations could be sought.

If such pathways are found, then it becomes possible to study how neural convergence is created for the different stimuli assembled in a class. Some important methodologies were not included in this paper because they would add an untenable volume of text.

Among them are studies on biofeedback and neuroimaging techniques. Finally, we hope the presented analysis has caused some positive behavioral discrepancy in the reader. If so, then some theoretical progress in understanding the biology of reinforcement has been achieved.

Received 13 September; received in revised form 21 November; accepted 22 November; available online 28 December.

Sections of the full article, with their opening summaries:

Introduction.

A proposal for a unified principle of conditioning: Donahoe and Palmer suggested that the UR produced by the presentation of an intense or biologically relevant US sensitizes the organism to new sources of learning. The unified principle of conditioning rests on a neural basis; Donahoe and Palmer, and Donahoe, Palmer, and Burgos, inferred that antecedent stimuli always have a causal function in evoked responses.

Role of neural events in behavioral contingencies: the traditional view of behavior analysis avoids intraorganic relations and considers only interrelations between the organism and the external environment as truly behavioral phenomena.

Description and analysis of neural events in conditioning (brief description of the reinforcement circuit): in the historic experiment by Olds and Milner, non-deprived rats tirelessly worked for electrical stimulation of the limbic septal area.

Dopaminergic mesencephalic afferents of the nucleus accumbens and conditioning: knowledge of dopaminergic function in conditioning processes has substantially improved with the electrophysiological research of W. Schultz. In addition to supposedly acting as a global signal of unexpected reinforcement, dopamine has other important functions in conditioning, demonstrated by experiments that measured, in real time, the activity of individual neurons of monkeys; for example, dopaminergic mesencephalic neurons showed an increased firing rate in response to an unpredicted appetitive US during the initial trials of classical conditioning.

Cortical afferents of the nucleus accumbens and conditioning: the prefrontal cortex intervenes extensively in behavioral processes because of its important projections to the nucleus accumbens and massive interconnections with associative cortical areas.

In vitro operant conditioning in mammalian neurons: the operant response extensively studied in intact organisms may also be emitted by single, isolated neurons.

In vitro and in vivo operant and classical conditioning in mollusk neurons: the cellular basis of learning has begun to be well established.

Neural events can replace behavioral events in operant contingency: technical developments in the neurosciences have simulated, directly in the brain, the elements of a contingency.

Cerebral structures and neural events in symbolic relations (see above).

Final Considerations (see above).

References

Alves, C.
Barbas, D. An Aplysia dopamine-1-like receptor: molecular and functional characterization. Journal of Neurochemistry, 96.
Baum, W. Understanding behaviorism: science, behavior, and culture. New York: HarperCollins.
Baxter, D. Feeding behavior of Aplysia: a model system for comparing cellular mechanisms of classical and operant conditioning. Learning and Memory, 13.
Brembs, B. Operant reward learning in Aplysia: neuronal correlates and mechanisms. Science.
Brembs, B. Extending in vitro conditioning in Aplysia to analyze operant and classical processes in the same preparation. Learning and Memory, 11.
Buckmaster, C. Entorhinal cortex lesions disrupt the relational organization of memory in monkeys. Journal of Neuroscience, 24.
Bunsey, M. Conservation of hippocampal memory function in rats and humans. Nature.
Butter, C. Perseveration in extinction and in discrimination reversal tasks following selective frontal ablations in Macaca mulatta. Physiology and Behavior, 4.
Chiesa, M. Radical behaviorism: the philosophy and the science. Boston: Authors Cooperative.
Chilingaryan, L. Pavlov's theory of higher nervous activity: landmarks and developmental trends. Neuroscience and Behavioral Physiology, 31.
Chudasama, Y. Dissociable contributions of the orbitofrontal and infralimbic cortex to Pavlovian autoshaping and discrimination reversal learning: further evidence for the functional heterogeneity of the rodent frontal cortex. Journal of Neuroscience, 23.
Cools, R. Neuropsychopharmacology, 32.
Coutureau, E. Acquired equivalence and distinctiveness of cues: II. Neural manipulations and their implications.
DeLong, M. The basal ganglia. In E. Kandel, J. Schwartz, & T. Jessell (Eds.). New York: McGraw-Hill.
Deutch, A. The nucleus accumbens core and shell: accumbal compartments and their functional attributes. In P. Barnes (Eds.). Boca Raton, FL.
In M. Zigmond, F. Bloom, S. Landis, & J. Squire (Eds.). San Diego, CA.
DiFiore, A., III, & Wilkinson, K. Studies of brain activity correlates of behavior in individuals with and without developmental disabilities. Experimental Analysis of Human Behavior Bulletin, 18.
Donahoe, J. On the relation between behavior analysis and biology. Behavior Analyst, 19.
Donahoe, J. In K. Chase (Eds.).
Donahoe, J. Learning and complex behavior. Boston: Allyn and Bacon.


