The Psychology of Learning: A Dictionary of Terms

Frank A. Logan

The vocabulary approach to the study of a subject such as the Psychology of Learning is based on the proposition that learning is best reflected in the ability to participate actively and cogently in a conversation about the subject. Such behavior would provide the best possible evidence that a person really understands that subject. The principal prerequisite to being able to do this is knowledge of the meanings of the terms that are used in that field of study by people who are knowledgeable about it.

There are two classes of terms in science: empirical concepts and theoretical constructs. Empirical concepts refer to real objects or events, and their meaning is given by operational definitions. Such definitions specify the things you have to do (the operations) in order to observe the object or event. For example, a light stimulus can be defined by physical measurement of its color and brightness. Theoretical constructs refer to imaginary objects or events, constructed by a theorist in an effort to explain empirical phenomena, and their meaning is given by anchoring definitions. Such definitions relate (anchor) the theoretical constructs to empirical concepts. For example, a theorist might contend that the aforementioned light stimulus initiates some kind of a sensory trace inside the organism that decays over time.

There is a second sense in which scientific terms have meaning. This is the relationship that a term has with other terms, and the more interrelations a term has, the greater its functional meaningfulness. For example, 'classical conditioning' is formally (operationally) defined as a paradigm in which two stimuli are presented in temporal sequence without regard to the organism's behavior. What makes the concept interesting is that it results in an association being formed between the stimuli.
What makes the concept especially interesting scientifically is that the strength of the association and the various manifestations of it depend on various particulars about the nature of the stimuli and the temporal parameters employed. And what makes the concept really interesting, in a functional sense, is that comparable situations often arise in everyday life.

Accordingly, this dictionary has been constructed with the intention of presenting the meanings of all of the terms that one is likely to encounter in the context of the Psychology of Learning. It goes beyond formal definitions to include elaborations and examples. Frequently, terms used in a definition are also defined in this dictionary; where appropriate, such terms are marked with an asterisk to facilitate cross-referencing. The objective of this dictionary is to provide more than a reference for looking up unfamiliar words. By systematically studying the information in this dictionary, one can acquire a basic understanding of the Psychology of Learning.

DICTIONARY

Act A way of defining responses in terms of their consequences without regard to topographical details. Almost all of our conventional response terms, from 'bar-pressing' in rats to 'watching TV' in humans, are defined by what they accomplish and are not concerned with the actual postures and muscle movements used in achieving those ends. An important issue in the Psychology of Learning is whether organisms learn acts or whether learning is restricted to the particular movements that are practiced.

Act, receptor-orienting The response of orienting one's receptor(s) so as to maximize exposure to some stimulus. Receptor-orienting acts are common in discrimination learning because the organism may first have to learn to look at the relevant part of the stimulus before being able to respond appropriately. You have probably learned to sniff briskly in trying to detect an odor, and to roll a substance around your tongue in order to savor its flavor.
Adaptation (see Habituation)

Adaptation, sensory A basically physiological process in which a stimulus energy temporarily loses its effectiveness as a result of continued exposure. Light adaptation (as when going outside into the sunlight) and dark adaptation (as when going into a theater) are among the many familiar instances.

Association A hypothetical relationship between two events such that the occurrence of the first tends to call forth the second. In general there are two kinds of associations, S-S (association of a stimulus with another stimulus) and S-R (association of a stimulus with a response). Most theories posit only one or the other of these kinds of associations, attempting to subsume all learning in terms of association formation. An example of an S-S association is when the sound of the music of a song makes you think of the setting in which you first heard the song. An example of an S-R association is when the sound of the music of a song elicits emotional reactions experienced when you first heard the song.

Attention A response which may be overt in the form of receptor-orienting acts*, or covert and hence only inferred from the fact that only some of the potential stimuli in a learning situation may actually gain control over behavior. This is presumably a result of the greater intensity, salience, or perspicuity of certain stimuli that command the organism's attention.

Attention, selective As with the more general concept of attention*, selective attention is inferred from the fact that only part of a total stimulus complex may gain control over behavior. But in this case, attention cannot be attributed to conspicuous features of the stimulus, and hence it is presumed that the organism has selected particular stimuli to attend to. In this context, an instructive exercise is to press the fingernail of one index finger into the palm of the other hand.
You will note that you can selectively attend either to the mild pain produced by the finger on the palm, or to the mild pressure produced by the palm on the finger.

Avoidance (see Conditioning, avoidance and Response, avoidance)

Avoidance, passive An expression sometimes used to refer to punishment because, in that paradigm, the organism can avoid the aversive event (punishment) passively by not making the punished response.

Awareness A descriptive term that applies to a human subject who is able to verbalize the contingencies prevailing in an experiment. In our attempt to get at the basic processes, we frequently attempt to deceive the subjects about the true purpose of an experiment. Even so, some subjects figure out what the contingencies are; indeed, there are some theorists who contend that awareness is necessary for learning to occur.

Behavior chain An integrated sequence of responses that comprise a molar act*. Responses are integrated when the feedback from earlier responses sets the occasion for the next response in the sequence. In a homogeneous chain, the responses are the same but must be repeated some number of times. Walking, viewed as repetitively putting one foot in front of the other, is a homogeneous behavior chain. In contrast, a heterogeneous chain involves a sequence of different responses. Speaking, viewed as uttering sequences of sounds, is a heterogeneous behavior chain because each sound is at once a response and also a cue for the emission of the next sound.

Behavior modification Although the expression would be appropriate in any situation in which learning principles are used to change an organism's behavior, behavior modification typically refers to clinical treatment procedures based on established laboratory methods. The basic goal of behavior modification is to extinguish or otherwise eliminate undesirable, maladaptive behavior and replace it with newly learned desirable, adaptive behavior.
Opponents of such techniques contend that they are only symptomatic treatments, without dealing with the underlying cause of the original behavior.

Blocking A phenomenon observed when a stimulus is combined with another stimulus that has already been conditioned. The added stimulus may gain little or no control even though it would normally be an effective stimulus with the amount of training given. The pretrained stimulus apparently overshadows the added stimulus and blocks it from becoming conditioned. Blocking does not occur if, at the time the stimulus is added, the conditions of reinforcement are changed. Such a change apparently unblocks the added stimulus from being overshadowed by the trained stimulus.

Chain, behavior (See Behavior chain)

Cognition Symbolic interaction with the environment; in humans, the symbolism is predominantly verbal. Although we are all intimately familiar with our own cognitive processes (thinking, reasoning, planning, coding, interpreting, daydreaming, contemplating . . . one's mental life), we cannot directly observe covert activities of another organism (animal or human). We might, of course, accept a person's verbal report as accurately describing his or her thoughts, but such subjective reports are often suspect. However, there are objective types of behavior situations from which we can reasonably infer the occurrence of cognition in the subject.

Conditioning, avoidance A learning paradigm in which the occurrence of an aversive event can be prevented by the occurrence of an avoidance response. In cued avoidance a warning signal precedes the aversive event, and typically, the avoidance response also terminates the warning signal. In noncued avoidance, there is no warning signal although the aversive event typically occurs with some temporal regularity.

Conditioning, backward A classical conditioning paradigm in which the US is presented before the CS.
Although there have been a few reports of 'reverse connections', the majority of studies have found that there is no conditioning of a CR to the CS when the stimuli are presented in 'backward' order.

Conditioning, classical A learning paradigm in which two stimuli are repeatedly presented in a fixed temporal order and where the second of the stimuli (the Unconditioned Stimulus, US) reliably elicits a reflex response (the Unconditioned Response, UR). The first stimulus (the Conditioned Stimulus, CS) can be any event within the sensory capacity of the organism, but it is normally one that is initially neutral with respect to the UR. The outcome of this operation is that a response (the Conditioned Response, CR) usually bearing at least a family resemblance to the UR is elicited by the CS.

Conditioning, delayed A classical conditioning procedure distinguished by the fact that the CS persists throughout the CS-US interval. Hence, in the delayed conditioning procedure, the CS is physically contiguous in time with the onset of the US.

Conditioning, differential A learning paradigm involving the separate presentation of two (or more) stimuli with different contingencies prevailing depending on which stimulus is presented. Differential conditioning can occur in classical, operant, or instrumental procedures and in each case leads to differential responding that is more-or-less appropriate to the contingencies associated with each stimulus. Most commonly, one stimulus (called S+) is reinforced and the other stimulus (S-) is nonreinforced, in which case the organism learns to respond in the presence of S+ and not in the presence of S-.

Conditioning, higher-order A variation on the classical conditioning theme in which, in place of a US with an unlearned reflexive tendency to produce a UR, a previously-conditioned CS is used. So long as the 'old' CS reliably elicits a CR, it can be used instead of a US to condition the response to a 'new' CS.
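The acquisition of a CR and the blocking* phenomenon defined above are often illustrated with the Rescorla-Wagner model, a standard associative learning rule that is not itself part of this dictionary's vocabulary. The sketch below is only illustrative: the stimulus names, learning rate, and trial counts are arbitrary assumptions.

```python
# A minimal Rescorla-Wagner sketch (an assumed standard model, not
# Logan's own formulation): the associative strength V of each stimulus
# present on a trial changes in proportion to the "surprise", i.e. the
# difference between the US magnitude (lam) and the summed strength of
# all stimuli present.

def rw_trial(V, present, lam, alpha=0.3):
    """Update associative strengths V (a dict) for the stimuli present."""
    total = sum(V[s] for s in present)
    for s in present:
        V[s] += alpha * (lam - total)
    return V

V = {"light": 0.0, "tone": 0.0}

# Phase 1: the light alone is paired with the US until well conditioned.
for _ in range(30):
    rw_trial(V, ["light"], lam=1.0)

# Phase 2: a light+tone compound is paired with the same US.
for _ in range(30):
    rw_trial(V, ["light", "tone"], lam=1.0)

# The pretrained light "blocks" the added tone: because the light already
# predicts the US, there is no surprise left to condition the tone.
print(round(V["light"], 2), round(V["tone"], 2))  # → 1.0 0.0
```

Changing `lam` at the start of Phase 2 (i.e., changing the conditions of reinforcement when the stimulus is added) reintroduces surprise, which corresponds to the unblocking noted in the Blocking entry.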
Conditioning, instrumental A learning paradigm in which a response is aperiodically enabled, and its emission is shortly followed by a reinforcing state of affairs. The defining feature of instrumental conditioning is that the response is in fact instrumental in obtaining the reinforcement, and it is distinguished from operant conditioning by the fact that a response can only occur on discrete trials. Hence, a hungry rat running down a short straight alley for food reward is prototypical of instrumental conditioning.

Conditioning, operant A learning paradigm in which a response is freely available to the organism and its emission is closely followed by reinforcement. The defining feature of operant conditioning is that the response operates on the environment to produce the reinforcement, and the distinguishing feature is that the response is continuously available. Hence, a hungry rat pressing on a freely-available bar for food reward is prototypical of operant conditioning.

Conditioning, sensory A variation on the classical conditioning theme in which both of the stimuli are neutral. It involves the pairing of two stimuli in a regular order, but neither stimulus is a US with a reflex UR. As a result, there is no overt CR to monitor the occurrence of sensory conditioning. If the subjects are human adults, we can obtain verbal reports that the presentation of one stimulus tends to elicit an 'image' of the other stimulus. An objective technique for evaluating sensory conditioning involves a three-stage experimental design called sensory preconditioning*.

Conditioning, simultaneous A conditioning paradigm in which the CS and the US begin at the same time (although they may end at different times). Typically, little or no conditioning, in the sense of an overt response to the CS presented alone, is observed with the simultaneous procedure.
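The delayed, trace, simultaneous, and backward procedures defined in the entries above differ only in the timing of the CS relative to the US. Those definitions can be restated as a small decision rule; the function name and the representation of events as onset/offset times are illustrative assumptions, not part of the dictionary.

```python
def classify_paradigm(cs_onset, cs_offset, us_onset):
    """Label a classical conditioning procedure from event times
    (arbitrary time units), following the dictionary definitions."""
    if us_onset < cs_onset:
        return "backward"      # US presented before the CS
    if cs_onset == us_onset:
        return "simultaneous"  # CS and US begin at the same time
    if cs_offset < us_onset:
        return "trace"         # CS terminates before the US occurs
    return "delayed"           # CS persists through the CS-US interval

print(classify_paradigm(0, 10, 5))  # → delayed
print(classify_paradigm(0, 2, 5))   # → trace
print(classify_paradigm(0, 2, 0))   # → simultaneous
print(classify_paradigm(5, 7, 0))   # → backward
```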
Conditioning, temporal A conditioning paradigm in which a US is presented at regular intervals without regard to the organism's behavior and without any explicit antedating CS. Provided the intervals are not too long (up to several minutes), a CR occurs shortly before each US occurrence. The interpretation of temporal conditioning is that each US serves not only to elicit a UR but also as a (trace) CS for the next occurrence of the US. Perfect examples of temporal conditioning are not very common in everyday life because there are usually regular CS's associated with temporal regularities in the environment. But if you have had occasion to listen to a leaky faucet dripping in the sink, you will be familiar with temporally conditioned anticipations of the next drip.

Conditioning, trace A classical conditioning paradigm in which the CS is brief and terminates before the US occurs. Because the CR tends to occur just before the US, the CS is not physically present at the time the CR occurs. Hence, the CR is presumably associated with the (memory) trace of the CS. In fact, conditioning is typically found to be just as good with the trace procedure as with the delayed procedure. Actually, most everyday instances of classical conditioning fit the trace paradigm. For example, if a child is given candy after being told his or her behavior was good, the actual sound of the word 'good' is no longer present when the candy is received. Nevertheless, the word 'good' will acquire secondary reinforcing properties as a result of this experience.

Context The background stimulus environment in which a learning experience occurs. Context includes the interoceptive as well as the exteroceptive environment, and also includes stimulus events in the recent past. It is clear that an organism's response to a stimulus can be made conditional on the context in which it occurs.
For example, a dog can learn to salivate to a buzzer in one room but not in another room, just as you have learned to react differently to words such as 'foot', 'light', 'turkey', etc., depending on the verbal context. Of greater practical significance is the fact that learning necessarily takes place in some context and is somewhat specific to that context.

Contiguity, S-S Two stimulus events that occur reasonably close together in space and/or time. Insofar as learning involves S-S associations, S-S contiguity is a necessary condition for such learning, and most S-S theorists also contend that it is sufficient.

Contiguity, S-R A response occurring reasonably close in space and/or time to a stimulus. Insofar as learning involves S-R associations, S-R contiguity is a necessary condition for such learning, and most S-R theorists also contend that it is sufficient.

Contrast In general, contrast is a phenomenon in which the reaction to a stimulus is influenced by other stimuli in the environment. When the reaction is stronger than it would normally be, we speak of positive contrast; conversely, when the reaction is weaker than it would normally be, we speak of negative contrast.

Contrast, behavioral In differential operant conditioning (a multiple reinforcement schedule), response rate in the two components may differ from that which is generated by each of the schedules if experienced separately. Response rate may be higher in the preferred component (positive behavioral contrast) and lower in the nonpreferred component (negative behavioral contrast). The latter is sometimes concealed by a floor effect*; for example, if the nonpreferred schedule is extinction, the subject can exhibit no lower than a zero response rate. Behavioral contrast may also be concealed when the schedule exerts strong control over response rate. For example, if two DRL schedules are combined in a multiple schedule, the requirement for spaced responding may prevail over contrast effects.
Contrast, incentive In differential instrumental conditioning, performance to the two stimuli may differ from that which would obtain were the conditions of reinforcement experienced separately. The response may be stronger (faster, more vigorous) to the stimulus associated with the preferred conditions (positive incentive contrast) and weaker (slower, less vigorous) to the stimulus associated with the less preferred condition (negative incentive contrast).

Contrast, simultaneous When contrast effects are observed during differential conditioning*, in which the two stimuli are repeatedly presented, we refer to simultaneous contrast. The effects are not strictly simultaneous since only one stimulus is presented at any one time, but they are recoverable over a number of repetitions and hence are conceptually overlapping.

Contrast, successive A contrast effect is frequently observed when an organism has had extensive training with one schedule/condition of reinforcement and then is shifted to a new schedule/condition for continued training. In successive contrast, there is typically no change in any discriminative stimulus, and there is a single shift and hence, a single opportunity to observe contrast effects.

Control, stimulus Behavior is said to be under stimulus control when a change (offset, onset, increase or decrease in intensity, or a change in some qualitative dimension) in a specified stimulus event is reliably shortly followed by a change in some aspect of behavior, and there is some specificity of the effect to that stimulus. Insofar as the goal of Psychology is the prediction and control of behavior, a major objective is to discover how to bring behavior under stimulus control. The Principles of Learning are a major component of that objective.

Control, self When behavior is controlled by stimuli emitted by the organism itself, we can say that the organism has self control over that behavior.
A special and especially interesting case is a person's control of his or her own behavior, usually by means of verbal self-instructions. Although we commonly think of self-control in terms of restraining response tendencies, the more general concept also includes response emission. (See Volition.)

Cue A discriminative stimulus. It is conventional to use the term 'cue' in discrimination learning contexts and then to distinguish between relevant cues (those that are explicitly correlated with reinforcement) and irrelevant cues. In many learning situations, a major aspect of the task is learning to attend to the relevant cues.

Decrement, stimulus generalization The decrease in the level of performance that results if the test situation is different from the original learning situation. Although learning tends to generalize to similar stimuli, the response is weaker, and more so the less the similarity.

Differentiation, response Learning to emit a response with particular quantitative and qualitative properties. Response differentiation applies to all of our skills. For example, the pianist must not simply strike the right keys but must do so with the right timing and force. Similarly, the ballet dancer must execute the steps rhythmically and gracefully. Even a lecturer should adjust the speed and intensity of his or her voice in relation to the nature and size of the audience. In general, response differentiation is learned as a result of differential reinforcement of the relevant response dimensions. Hence, where maximal speed is desired, maximal reward should be given for faster and faster speeds.

Discrimination, stimulus Learning to select from among several simultaneously-presented stimuli on the basis of differential reinforcement associated with those stimuli. In the laboratory, a rat might be reinforced for choosing the black rather than the white stimulus regardless of position.
Among the multitude of discriminations that we all make every day, choosing the correct answer on multiple-choice tests is especially relevant for students. Whatever the similarity of the stimuli, organisms can learn finer discriminations when the stimuli are presented simultaneously than when they are presented separately in differential conditioning*.

Disinhibition An increase in the strength of a response occasioned by the presentation of a novel stimulus in the situation. Typically, novel stimuli lead to external inhibition*, i.e., a decrease in the strength of the response. If, however, the response strength is low because it is being inhibited as a result of extinction*, the novel stimulus may lead to a reappearance of the response. The term itself suggests that the novel stimulus is able, somehow, to remove the inhibition. For example, a person may have stopped using double negatives (e.g., "I don't have no money"), but slip up now and then under pressure.

Drive A hypothetical process given the role of energizing the organism into performance of learned responses. "Drive" and "motivation" are often used interchangeably, although drive is typically a more specific construct (e.g., the hunger drive, the thirst drive, etc.). However, "drive" is to be distinguished from "need" . . . a need is a physiological necessity for survival whereas a drive is a psychological force. Thus, we may feel hungry when we really do not need more food (especially if an appealing dessert is offered), and we may not feel hungry even when we actually need food (especially particular vitamins and minerals).

Drive, boredom (See Curiosity)

Drive, exploratory (See Curiosity)

Drive, irrelevant A motivational state that is not appropriate for the nature of the reinforcement. For example, fear is an irrelevant drive when taking an exam. Such drives are, of course, not irrelevant to the organism; one might just as well refer to irrelevant reinforcement.
Drive, primary Drives that are unlearned. The primary drives are based on deprivation of commodities necessary for survival, such as food and water, or presentation of injurious events, such as electric shock. (Same as Motivation, primary.)

Drive, secondary A drive that has been acquired as a result of learning experience. The least controversial secondary drive is fear, which can become associated through classical conditioning to any of a variety of originally neutral stimuli. For example, pairing the word "bad" with punishment will result in the word having effective secondary motivating properties. (Same as Motivation, secondary.)

Drive, stimulus A hypothetical cue aroused in conjunction with any drive state. Many theorists appeal to the construct of drive stimulus to enable the organism to detect which drive is present and in what degree. Thus, you can discriminate being hungry from being thirsty, and being hungry from being famished. The drive stimulus is postulated to enable the organism to select a relevant response.

Dynamic transmission A phenomenon with human subjects in which a response conditioned to a physical object can be elicited by the name of that object, or vice versa. For example, a person might be conditioned to blink when a picture of a dog is flashed on a screen; subsequently the person may blink if the word "dog" is played through a speaker. There is much everyday evidence of dynamic transmission. For example, the word "snake" is enough to make many people cringe, and the effectiveness of calling people names (even though they ostensibly "never hurt") is dramatic testimony to the generality of the phenomenon.

Effect, easy-to-hard Learning a hard discrimination between very similar stimuli is facilitated by preliminary training on an easy discrimination between stimuli that are more different along the relevant stimulus dimension.
For example, in training an apprentice gemologist, one begins with stones that are conspicuously different in quality and gradually develops a 'trained eye'. In some laboratory studies, the hard discrimination could not even be learned unless preceded by exposure to easier problems of the same type.

Effect, frustration The increased vigor of responses that occur shortly after an organism has experienced frustrative nonreward. If an organism expects reward because of past experiences, nonreward is frustrating, and the motivational property of frustration may be revealed in an over-reaction to subsequent stimuli. Specifically in the laboratory, a rat that has failed to receive an accustomed reward for one response will run faster if immediately placed in a runway. In analogous fashion, if you fail to get the good grade you expected for a paper, you may find that, for a while, you talk louder and act more vigorously than usual.

Effect, Law of There are two parts to the original Law of Effect: Responses that are followed by a 'satisfier' are likely to be repeated (stamped in), and responses that are followed by an 'annoyer' are likely not to be repeated (stamped out). As an empirical generalization about performance, the Law of Effect is generally accepted although there may be limitations about whether all rewards and punishers are equally effective for all responses. The Law of Effect has also been interpreted as a theoretical proposition, that learning is effected by reinforcement. That proposition has been at the center of many controversies in the Psychology of Learning. (See Theory, reinforcement.)

Effect, overlearning reversal (ORE) The reversal of a discrimination proceeds faster if training on the original discrimination has been extended well beyond the point where there is no further measurable improvement in performance.
Effect, overlearning extinction (OEE) The extinction of a response proceeds faster if training has been extended well beyond the point where there is no further measurable improvement in performance.

Effect, partial reinforcement acquisition (PREA) The terminal performance level of an instrumental response (such as a rat running down a straight alley) that has received PRF is superior to (faster than) that following CRF in the early and intermediate parts of the response, although performance is inferior in the last part of the response.

Effect, partial reinforcement extinction (PREE) The resistance to extinction of a response is greater if that response has received PRF than if it has received CRF. In general, the lower the percentage of times that the response was reinforced during acquisition, the more persistent the response will be when no further reinforcements are given (i.e., extinction). The PREE occurs in all conditioning paradigms. It also tends to generalize to some extent from one situation to similar situations. That is to say, PRF for responding to one stimulus will increase the resistance to extinction of that response to other stimuli that have been the occasion of CRF. Indeed, it is possible that early exposure to stringent schedules of reinforcement promotes later persistence in the face of frustration (failure).

Empiricism A scientific strategy in which all terms refer to observable events and all relationships are empirically verified by experimental analysis. Empiricism is atheoretical; it disclaims any appeal to hidden causes, imaginary processes, that is, to hypothetical constructs. A purely empirical Psychology of Learning seeks to discover the variables of which learning is a function and, at most, to attempt to interrelate those functions systematically.
All experimental psychologists are empiricists and accept the laboratory as the ultimate court of appeal, but only some of them presume to go beyond the purely empirical level and create theories that purport to explain the empirical phenomena.

Expectancy A hypothetical cognitive process in which an organism expects or anticipates the occurrence of a stimulus event. For example, an expectancy analysis of Pavlovian conditioning is that an antecedent stimulus such as a metronome makes the dog think about (anticipate, expect) the delivery of the food.

Extinction, experimental A decrease in the strength of a response as a result of nonreinforcement.

Fear Theoretically, an emotionally-negative state associated with responses that are innately elicited by painful, aversive stimuli*. According to fear theory, the fear response (rf-sf) can also become conditioned to originally neutral stimuli and is the mechanism of secondary motivation*. An important implication of this theory is that, when an acquired fear is inappropriate, such as fear of social situations, rf-sf can be extinguished by exposure to the feared stimulus without aversive consequences.

Feedback, negative When the effect of response-produced feedback is to decrease the tendency to emit the response, it is described as negative feedback. Negative feedback is not necessarily emotionally-negative. An especially important instance of negative feedback is error information leading to corrections.

Feedback, response-produced At the most primitive level, feedback refers to the way a response feels, or looks, or sounds when it is being performed. Thus, when you move your arm, you feel your arm moving (technically known as kinesthetic and proprioceptive sensations from the muscles and joints of your arm) as well as any changes in the pressure of your clothing or any objects that you may touch as a result.
Similarly, when you talk you feel your mouth and tongue moving and you hear the words you are uttering. Such feedback is essential for the normal performance of the response. If the feedback is delayed or distorted in some way, performance deteriorates dramatically. The term "feedback" is also used more generally for any kind of differential stimulation that is contingent on the emission of different responses. Thus, knowledge-of-results and reinforcement are forms of feedback because they inform the organism about the adequacy of the response.

Fixation A phenomenon in which a response is extremely resistant to extinction, typically resulting from the fact that the response was not only partially reinforced during acquisition, but was also punished. Insufficient punishment, that is, punishment that does not actually eliminate the behavior, may fixate the response so that it persists even when it is not effective.

Fixed action pattern An unlearned behavior chain which, once initiated by a releasing stimulus*, runs through to completion without any further exteroceptive stimulus support. These are frequently observed in lower animals in conjunction with mating behavior.

Frustration A hypothetical response (rf) occasioned innately by the failure to receive an anticipated reward. Frustration is assumed to have motivational (potentiating) properties, and it is also assumed to control behavior by the innate or learned responses associated with its response-produced sf.

Generalization, cross-modal The generalization of a learned association from one sense modality, such as vision, to another, such as hearing. The most positive evidence of cross-modal generalization comes from training people with a visual shape discrimination which they can then perform tactually (feeling the stimuli). Another approach is to use non-specific stimulus features such as rhythm; Morse code can be presented in flashing lights or in audible clicks.
Generalization, response The tendency to emit responses that are similar to the reinforced response if it is for some reason blocked. A particular instance of response generalization is bilateral transfer. You probably make most of your responses with your preferred hand but you typically can, albeit clumsily, make many of those responses with your nonpreferred hand if required to do so. Conceptually, response generalization is based on the similarity of response feedback*. You will do much more poorly with your nonpreferred hand if deprived of visual feedback so that you cannot see and correct errors as they occur. Generalization, stimulus A response acquired in one stimulus situation will tend also to occur in similar stimulus situations. The strength of a generalized response tendency depends on the strength of conditioning to the original stimulus; generalized responses are naturally weaker than the response to the original stimulus. Within that limit, the generalized response tendency is stronger the more similar a new test situation is to the original learning situation. Although stimulus generalization is typically adaptive (similar situations usually require similar behavior), many common difficulties result from over-generalization. Prejudices and stereotypes are among them. Gradient, generalization The gradient depicts graphically the way in which the stimulus generalization of a response varies with the difference between the training and test situations. A 'steep' gradient indicates that the response is narrowly confined to the training stimulus; a 'flat' gradient means that the response generalizes broadly to a wide range of stimuli. Among the variables that affect the slope of a generalization gradient is the amount of training. As depicted in the figure, the range of generalization increases as training progresses from a low to intermediate amount, but the gradient gets steeper with continued training.
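The relation between gradient slope and stimulus similarity can be made concrete numerically. The Gaussian shape and the function and parameter names below are illustrative assumptions, not part of the dictionary entry; they merely show how a 'steep' and a 'flat' gradient differ:

```python
import math

def generalized_strength(distance, peak=1.0, width=1.0):
    """Response strength at a test stimulus `distance` units from the
    training stimulus, assuming a Gaussian-shaped gradient.
    `width` controls the slope: small width = steep, large width = flat."""
    return peak * math.exp(-(distance / width) ** 2)

# A 'steep' gradient confines responding to the training stimulus;
# a 'flat' gradient generalizes broadly to a wide range of stimuli.
steep = [generalized_strength(d, width=0.5) for d in range(4)]
flat = [generalized_strength(d, width=3.0) for d in range(4)]
```

Both gradients peak at the training stimulus (distance 0) and decline with distance, but the flat gradient declines far more slowly, mirroring broader generalization.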
Gradient, goal In a spatial or temporal behavior chain culminating in a reward, both learning and performance tend to increase the closer one gets to the goal. The goal gradient is presumably based on the within-chain delay of reinforcement because, the closer one gets to the goal, the shorter the delay before the reward is received. However, the empirical goal gradient of performance measures such as speed typically reaches a peak somewhere beyond the middle of the response chain and then decreases. One interpretation of the terminal decrease is that the end of the behavior chain is typically followed by performing some other, incompatible response (such as consuming the reward) which becomes anticipatory. For whatever reason, the terminal decrease in performance near the end of a behavior chain is greater with partial reinforcement than with continuous reinforcement. (See Effect, partial reinforcement acquisition) Gradient, postdiscrimination The stimulus generalization gradient obtained after differential reinforcement of two stimuli along the stimulus continuum. The postdiscrimination gradient is steeper than the gradient obtained without prior differential reinforcement, and more so the closer the stimuli given differential reinforcement are to each other. (See also Peak shift) Habituation A basic psychological process whereby responsiveness to a stimulus is reduced as a result of repeated exposures without any significant consequences. Habituation is initially revealed in a decrease in the orienting response to a stimulus; habituated stimuli are also relatively ineffective in a learning context. (Sometimes called adaptation*.) Helplessness, learned A descriptive term characterizing the non-adaptive behavior of an organism who, having previously been exposed to unpredictable, unavoidable, inescapable aversive events, fails to learn in a new situation in which it is possible to avoid an aversive event.
It is as if, in the former environment, the organism learns that nothing can be done to control what happens, and this response of 'giving up' generalizes to one of not even trying in a new situation. If an unfortunate child early on encounters a teacher who is not responsive to the child's efforts, the child may learn to feel helpless in all school situations. And many is the child who has said, "What difference does it make? No matter what I do, my parents are never satisfied." Hope Theoretically, an emotionally-positive state associated with the anticipation of reinforcement*. Hope functions as a "go" mechanism in incentive theories*, which may be opposed by the "no-go" mechanisms of frustration (anticipation of nonreinforcement) or fear (anticipation of punishment*). Hypothesis An empirical hypothesis is a prediction about the outcome of an experiment, ideally derived from a systematic analysis. A theoretical hypothesis is a proposition about the possible conceptual basis for an established phenomenon. In both cases, the singular criterion of a good hypothesis is that it be empirically testable. A hypothesis can never be proven true; the best that can happen is that the result of the test is consistent with the hypothesis and hence supports it. If the result is inconsistent with the hypothesis, it can be rejected. Hypothesis, aftereffects An analysis of the PREE based on the assumption that there are persisting aftereffects (memories) of the reinforcement or nonreinforcement from the preceding trial(s). The proposition is that partially reinforced organisms are sometimes reinforced during acquisition for responding in spite of having been nonreinforced on the immediately preceding one or more trials. Hence, such organisms should persist in responding when they are nonreinforced in extinction. In contrast, organisms who have received CRF encounter nonreinforcement for the first time during extinction.
Hence, such organisms show a large stimulus generalization decrement on the second extinction trial. Hypothesis, discrimination An analysis of the PREE based on the assumption that an organism must discriminate between acquisition and extinction in order for behavior to extinguish. The difference between continuous reinforcement and continuous nonreinforcement in extinction is greater than the difference between partial reinforcement (which is, of course, equally well described as partial nonreinforcement) and continuous nonreinforcement. In short, organisms trained with PRF persist longer because it takes them longer to detect that the schedule of reinforcement has changed to extinction. Hypothesis, drive-reduction The strong form of the drive-reduction hypothesis is that drive reduction is both a necessary and a sufficient condition for reinforcement . . . all reinforcers entail the reduction of some drive state. The weak form of the drive-reduction hypothesis is that drive reduction is sufficient, but not necessary, for reinforcement. Hence, all drive-reductions are reinforcements but not all reinforcements are drive-reductions. Hypothesis, frustration An analysis of both the PREA and the PREE*. The basic assumptions are that (1) the failure to receive an expected reward is frustrating, that (2) frustration (rf) has motivational properties, that (3) rf is a learnable response and becomes anticipatory in a behavior chain followed by frustrative nonreward*, and that (4) the innate response to rf-sf is incompatible with the instrumental response but that new compatible responses can become associated with rf-sf. Hence, to account for the PREA, occasional nonreinforcement conditions rf-sf to cues early in the behavior chain, and compatible responses become associated with rf-sf because making the instrumental response in spite of anticipatory frustration is sometimes reinforced.
This means that rf-sf actually potentiates the instrumental response through its motivational properties. Later, in extinction, because partially reinforced organisms have had compatible responses associated with rf-sf, they persist in making the instrumental response whereas continuously reinforced organisms, who experience frustrative nonreinforcement for the first time during extinction, make the innate incompatible response which interferes with the instrumental response. Although somewhat complicated, the frustration hypothesis has a certain intuitive appeal. If you sometimes, but not always, receive a good grade for your school projects you are likely to embark on a new project with more vigor than the person who always or never received good grades. Past successes have taught you to give it another try and past failures have taught you to put out extra effort in your new venture. The person who always gets good grades doesn't have the added impetus of the fear of failure, and the person who always fails is disinclined to even start on another inadequate project. Hypothesis, need-reduction The contention that all primary reinforcements entail a decrease in a biological survival need of the organism. The need-reduction hypothesis antedated the drive-reduction hypothesis*, and was clearly refuted by evidence that a substance such as saccharin, which is nondigestible and hence cannot affect the organism's need state, nevertheless can serve as a very effective reinforcer. Hypothesis, prepotent response The hypothesis that operant/instrumental reinforcement results when a high-probability (prepotent) response follows a lower-probability response. For example, eating is a high-probability behavior for a hungry organism so enabling the organism to eat by giving food is reinforcing of lower-probability behaviors such as running or pressing bars.
More generally, letting a person (including yourself) do something he or she wants to do can reinforce doing things that should be done. Imitation The attempt to emit the behavior observed in another organism. Many species appear to have an innate tendency to imitate the behavior of similar organisms, but imitation is also a learnable response. The imitated behavior must already be substantially available in the organism's repertoire for imitation to occur. (See Learning, vicarious.) Imprinting A phenomenon observed in various species of fowl, such that an infant organism develops a strong attachment to whatever object it encounters during a particular time (the critical period) after hatching. The infant's attachment is revealed in following the object around and in emitting distress calls if the object is removed. Stronger evidence of imprinting appears at maturity, when courtship may be directed toward the kind of object encountered during the critical period. Imprinting reflects a powerful effect of experience, but does not readily fit into any conventional learning paradigm*. Inhibition A general term for a hypothetical process that opposes positive excitatory tendency. Inhibition is usually said to develop as a result of nonreinforcement. Inhibition, conditioned An inhibitory (decremental) property of an initially neutral stimulus that is acquired as a result of signaling nonreinforcement when it is combined with a normally positive (reinforced) stimulus. Verbal stimuli, such as a parental threat to deny children some fun experience unless they behave, are common conditioned inhibitors with humans. Inhibition, external If something unusual happens shortly before the presentation of the stimulus for a learned response, there will be a decrement in the performance of that response. As might be expected, the closer in time the unusual event is to the conditioned stimulus and the more distracting the unusual event is, the greater the disruption in performance.
This phenomenon can be observed in any of the learning paradigms and can be understood as a special case of a stimulus generalization decrement*. The unusual event naturally changes the context, thus changing the total stimulus situation and leading to poorer performance. Inhibition, latent The decremental effect of pre-exposing the organism to a stimulus before it is used in a conditioning paradigm. If the orienting response to a stimulus has been habituated as a result of repeated exposure in isolation, future learning with respect to that stimulus is retarded and not very persistent. Innate A result of genetic factors, inherited, unlearned. Reflexes and instincts illustrate innate behavioral tendencies. Instinct An unlearned disposition to respond in a particular way when exposed to appropriate (releasing*) stimuli. The word 'instinct' is preferred to 'reflex' when the behavior in question is of a molar nature. The most elaborate instincts in animal behavior revolve around reproduction but even in that context instinctive behavior can be shaped to some extent by practice. Intensity, stimulus A quantitative dimension of a stimulus event, e.g., the brightness of a light, the loudness of a tone, etc. Interval, interstimulus (ISI) In classical conditioning*, the interval of time between the onset of the CS and the onset of the US*. For many response systems (such as the eyeblink), the optimum ISI is about one second or less, but for other systems (such as salivation), conditioning can be obtained with ISIs of several minutes. In all cases the ISI can be too long to be effective. Learning A relatively persistent hypothetical process resulting from experience and reflected in a change in behavior under appropriate circumstances. By 'hypothetical' we mean that learning itself cannot be directly observed; we can only infer that learning has occurred if we see some change in an organism's behavior.
Thus, I cannot see what you've learned about the Psychology of Learning, but if you do better on the final exam than you could have done without studying the subject, I can infer that you have learned something and, presumably, the better you do on the exam, the more you have learned. To be a genuine reflection of learning, the change in behavior must be relatively persistent; indeed there are good reasons to believe that most learning is, for all practical purposes, permanent. It is also necessary to rule out other possible reasons for the change such as improved muscle tonicity, fatigue, or motivational factors. Learning, differentiation (See Differentiation, response) Learning, discrimination (See Discrimination, stimulus) Learning, escape A paradigm in which an organism must learn a response in order to terminate an aversive state of affairs. Aversive stimuli typically elicit unlearned responses and these may be successful; for example, an untrained person who has fallen overboard from a boat might manage to stay afloat by hectic movement of arms and legs. Escape learning involves acquiring new responses to cope with an aversive situation, at least reducing its aversiveness. Although swimming is also an enjoyable sport, it is initially learned as an escape from the aversive experience of sinking under water. Learning, latent Learning that is not apparent in an organism's behavior. There may not be any immediate change in behavior in spite of exposure to some potential learning situation, but an appropriate change may later be observed when the organism is adequately motivated to display what has been learned. Learning, observational (See Learning, vicarious) Learning, vicarious A change in behavior as a result of observing another organism being exposed to the contingencies that prevail in a learning paradigm*. 
The use of the term, "vicarious," implies the presumption that the observer reacts emotionally to the emotionally-significant events experienced by the model. Learning set If a primate is exposed to a series of learning tasks of the same type (e.g., discrimination between objects, matching-to-sample*, PA learning, etc.), the number of trials required to learn decreases. This is true even though the actual stimuli involved in each task are different. Functionally, the subject has learned how to learn that particular type of task. A learning set is specific to the type of task practiced. However, educated adult humans have learned to learn many types of tasks. Mediation Making one or more responses to an initial stimulus whose response-produced feedback cues the designated response. With humans, natural-language verbal mediators are most common. For example, the designated association black-Christmas might use "white" as a mediator (black-white, white-Christmas). However, any response including an image can serve a mediating function, and there may be a fairly long chain of responses involved in the mediation of a single association. Memory A hypothetical process enabling the recall of information to which an organism was previously exposed. Learning and memory are often said to be opposite sides of the same coin, learning being the acquisition of associations and memory being their utilization. Motivation The generic term for any temporary state of the organism that serves to arouse it and energize (potentiate) its behavior. Motivation, opponent process A state, resulting from the occurrence of an emotionally-significant event, that is opposite in hedonic (affective, emotional) value from the original event. For example, the "high" resulting from drug use is followed by a "low" that motivates further consumption of the drug. Motivation, primary Motivational states that arise without learning.
There are two classes of primary motivational states, those produced by deprivation of some commodity and those produced by aversive stimulation. (Same as Drive, primary.) Motivation, secondary Motivational states that depend on appropriate learning experiences for their existence. The principal basis of secondary motivation is classical conditioning*; an originally neutral stimulus, if paired with a primary aversive stimulus, acquires secondary motivating (fear*) properties. (Same as Drive, secondary.) Need, biological A condition arising from deprivation of any commodity that is necessary for survival of the organism. Omission training A learning paradigm in which an event, usually an unconditioned stimulus, is scheduled to occur provided the organism does not make a response. (Avoidance conditioning is a special case of omission training where the scheduled event is aversive.) Operant level The rate at which a freely-available response is emitted without any explicit reinforcement*. For example, you might watch for meaningless mannerisms made by a lecturer and count how frequently they occur. Overshadowing When a compound of two stimuli is used as a CS*, only one of them may become conditioned; it is usually the more intense, conspicuous stimulus that is conditioned, presumably because it overshadows perception of the weaker stimulus. Preconditioning, sensory A method for determining whether an association can be formed between two neutral stimuli. First, two neutral stimuli such as a tone and a light are paired; this is the preconditioning phase. Then one of them is conditioned with an effective unconditioned stimulus such as food. Finally, the other stimulus is presented. If a CR occurs to the test stimulus that was never paired with the US, it is inferred that the two neutral stimuli became associated during the preconditioning phase. Predifferentiation, stimulus Exposing an organism to stimuli later to be used in a discrimination learning paradigm.
During predifferentiation, no explicit contingencies of reinforcement are involved, but preexposure may nevertheless facilitate the subsequent discrimination. Punishment The occurrence of an event shortly following a response that leads to a decrease in the probability of recurrence of that response. (The word 'punishment' is also frequently used simply because the intent is to eradicate some undesirable behavior. The student must be careful in deciding whether the word is being used in this more casual sense.) As with reinforcement*, there is no implication that the response actually produced the punisher or that the organism be aware of the contingency. Punishment, primary negative Following a response with the removal of an innately emotionally-positive event. Candy is such an event for most children, so taking candy away from a child for misbehaving would be an instance of primary negative punishment. Note that the operation involves taking away something that the organism already has; the operation of not giving an emotionally-positive event is extinction*. Punishment, primary positive Following a response with the presentation of an innately emotionally-negative event. Electric shock is the most common punisher used in the laboratory, but any painful event (such as a physical blow, a bite, a very hot object, etc.) can be used as a positive punisher. Generally speaking, any events that function as positive punishers can, by their removal, serve as negative reinforcers*. Punishment, secondary negative Following a response with the removal of an event that has acquired emotionally-positive properties. Such events are secondary reinforcers*, having acquired their properties by being paired with primary reinforcers. Hence, taking money away from a person, as is done with fines for breaking laws, constitutes secondary negative punishment.
Punishment, secondary positive Following a response with the presentation of an event that has acquired emotionally-negative properties. The event is a secondary motivator as a result of having been paired with primary aversive events. Thus, after the word "bad" has been paired with painful consequences, the word itself can be used as a secondary positive punisher. Punishment, varied Although little research has been done on the topic, it should be clear that, logically, there are as many schedules and conditions of punishment as there are schedules and conditions of reinforcement. Varied punishment means that some dimension of the punishment, such as its intensity, is varied from occasion to occasion. Varied punishment is less effective in suppressing behavior than is constant, intense punishment, and varied punishment may actually increase the persistence of the behavior. Reflex A simple, unlearned response elicited automatically by an appropriate stimulus. The knee-jerk to a tap on the kneecap is a familiar reflex in humans. Reflex, orienting/investigatory The unlearned response of turning toward a source of stimulation. Orienting responses are most commonly visual, and are elicited by novel stimuli in direct proportion to their intensity. Reinforcement (Classical conditioning) The occurrence of the US. Reinforcement (Operant/instrumental conditioning) The occurrence of an event shortly after a response that leads to an increase in the probability of recurrence of that response in the future. (Note 1. The event need not be produced by the response for reinforcement to occur.) (Note 2. The organism need not be aware of the contingency for reinforcement to occur.) Reinforcement, amount of The quantity of reinforcer, such as the number of pellets of food given as reward. In general, operant/instrumental performance is an increasing function of the amount of reinforcement.
This includes secondary reinforcement*; the more lavish your praise for another person's good performance, the more likely that person is to continue performing well. Reinforcement, conditioned (See Reinforcement, secondary) Reinforcement, correlated A condition in which some dimension of the reinforcer (amount, delay, etc.) is systematically related to some dimension of the response (speed, vigor, etc.). Presumably, a student's grades are correlated with the effort expended in studying. In general, organisms tend to maximize reinforcement when exposed to conditions of correlated reinforcement, but may not expend the extra effort if the greater reward is not "worth it." Reinforcement, delay of The time after an operant/instrumental response has been made before a reinforcer is given. In general, performance is lower the longer the delay of reinforcement, although the effect depends importantly on what the organism does during the delay interval. Delay is most detrimental if incompatible responses occur during the delay whereas delay may have little or no effect if the organism can keep repeating the response during the delay. Reinforcement, nondifferential Providing two stimuli with the same schedule and condition of reinforcement. Not only does nondifferential reinforcement lead to nondifferential behavior but it also tends to retard any future occasions for learning with differential reinforcement*. Reinforcement, partial (PRF) A reinforcement schedule in discrete-trial paradigms (classical and instrumental conditioning) according to which reinforcement occurs on only part of the trials. (Note: It is not a case of giving only part of the reinforcement but rather of giving reinforcement part of the time.) The most common schedule is 50% PRF, meaning that half of the trials are reinforced, and normally, the trials to be reinforced are randomly determined.
However, reinforcing every other trial (alternating 50% PRF) is a partial reinforcement schedule leading, with sufficient training, to appropriately alternating levels of performance. Reinforcement, primary negative The termination of a stimulus event that is innately aversive*. Thus, the offset of an electric shock in escape learning is a common instance of primary negative reinforcement in the laboratory, as is the removal of some foreign object in your eye a common everyday example. Reinforcement, primary positive The presentation of a stimulus that is innately positive, pleasant, or desirable. Giving food to a hungry rat is a common laboratory example, as is giving candy to a child a familiar everyday example. Reinforcement, secondary negative The termination of a stimulus that has acquired aversive properties by having been paired with a primary aversive event. Turning off a light that signals an impending shock in an avoidance conditioning context is a typical laboratory example, as is the sight of a patrol car turning off into another street a familiar everyday instance. Reinforcement, secondary positive The presentation of a stimulus that has acquired positive properties by having been paired with a primary positive event. Turning on a light that has previously signalled food delivery is a laboratory instance, as is receiving money a common everyday example. Reinforcement, varied A condition in which some dimension of the reinforcer (amount, delay, etc.) is varied from occasion to occasion. Behavior that has received varied reinforcement is more persistent than continuously reinforced behavior. (Note: PRF is a special case of varied reinforcement.) Relearning Reacquisition of a response that has been extinguished*. Relearning is much more rapid than original learning, possibly because reintroduction of reinforcement reinstates part of the stimulus context that prevailed during original acquisition.
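The distinction drawn in the partial reinforcement entry above, between random 50% PRF and alternating 50% PRF, can be sketched as two trial-schedule generators. This is a minimal illustration; the function names and the seeded random generator are assumptions for the sketch, not part of the text:

```python
import random

def random_prf(n_trials, p=0.5, seed=0):
    """Random partial reinforcement: each trial is independently
    reinforced with probability p (True = reinforced)."""
    rng = random.Random(seed)  # seeded so a schedule is reproducible
    return [rng.random() < p for _ in range(n_trials)]

def alternating_prf(n_trials):
    """Single-alternation 50% PRF: every other trial is reinforced."""
    return [t % 2 == 0 for t in range(n_trials)]
```

In the alternating schedule the outcome of each trial perfectly signals the outcome of the next, which is why, with sufficient training, performance itself comes to alternate; in the random schedule no such cue is available.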
A person who has acquired the smoking habit may quit but will relearn all of the habitual responses almost immediately upon resuming the behavior. Respondent A type of behavior that can be elicited reflexively by an appropriate stimulus. Respondents are especially amenable to classical conditioning*. Response (R) Formally, we define a response as any glandular or muscular activity of an organism that can be reliably observed by an experimenter. Although this definition is a good one, in principle, it may be too inclusive for our present purposes because not all glandular/muscular activities of all organisms are learnable. Accordingly, we actually work with a FUNCTIONAL definition: A response is any activity of an organism that obeys the principles of learning. This is, of course, a circular approach, but it is useful because it bypasses some very knotty problems. For example, we frequently refer to the "bar-press" response of rats, by which we mean that a switch connected to a bar protruding into the rat's environment was somehow closed by the rat. Now actually, the bar-pressing response is a rather long behavior chain involving approaching the bar from some direction, raising up on the hind legs, placing one or both front paws on the bar, pressing down with at least the minimum required force, releasing the bar, and turning toward the reward hopper. It is not entirely clear when the response begins and ends, but the functional approach simply finesses this question altogether. Since we obtain lawful relationships by counting the closure of the switch as being a response, we can proceed with an experimental analysis directly. Response, alimentary Behaviors associated with ingestion such as sucking, chewing, salivating, and swallowing. Response, anticipatory A response that occurs before its original time as a result of being regularly preceded by a stimulus to which it has become conditioned*.
The CR in classical conditioning is an instance of an anticipatory response but such responses also occur in many other situations. A racer "jumping the gun" is a common instance of anticipatory responding. Response, avoidance (AR) A response that precludes or postpones the occurrence of an aversive event. Avoidance responses fall within the larger categories of operant or instrumental responses because they affect the organism's environment, but rather than causing an event to happen, they cause a scheduled event not to happen. (See Conditioning, avoidance) Response, conditioned (CR) A learned response occasioned by an originally neutral stimulus. The term can legitimately be used in the context of any conditioning procedure but it is preferable to restrict its usage to classical conditioning where the CR is elicited by the CS prior to or in the absence of the US. Response, conditioned emotional (CER) A hypothetical CR presumed to result from pairing an originally neutral stimulus with a noxious, painful stimulus. Although the CER cannot be directly observed, it can be monitored by its effect on other behavior. Specifically, if an organism is emitting a reinforced operant response, and a CS paired with an aversive stimulus is presented, the rate of operant responding is suppressed. This conditioned suppression is observed even if the CS has been paired with an aversive stimulus in another situation and without the operant response occurring at the time. The CER is essentially the same as the fear response except that no additional properties, such as motivation, are ascribed to the CER. Response, consummatory The terminal response appropriate to deprivation drives. Eating, copulating, and drinking are specific consummatory responses. Response, defense An unconditioned defense response is the unlearned, reflexive response elicited naturally by an aversive stimulus.
A blink to a puff of air to an eye, and a tear to a cinder in your eye, are familiar unconditioned defense responses. Conditioned defense responses are ones elicited by a CS that antedates the occurrence of an aversive stimulus*. Such responses do not avoid the aversive stimulus, but may to some extent reduce its aversiveness. For example, a conditioned eyeblink reduces the minor discomfort from a puff of air used as the US in a classical defense conditioning situation. Response, escape (ER) A response that successfully terminates an aversive stimulus*. Many escape responses are unlearned, consisting of withdrawal (e.g., jerking the hand away from a hot stove) or flight (e.g., running away from a spreading fire). In some situations, however, escape responses must be learned. For example, you have probably learned not to rub your eye if there is a cinder in it, and have learned other maneuvers designed to remove the painful object. (See Learning, escape.) Response, incompatible Responses are said to be incompatible when they cannot be performed at the same time. Typically, we think of incompatibility as being physical; you cannot read and watch television concurrently. However, incompatibility can also be psychological; you cannot concentrate on your studies and worry about your personal problems simultaneously. People sometimes try to break undesirable habits by substituting an incompatible response such as eating or chewing gum instead of smoking. Response, observing A response that has no direct instrumental value but that obtains information that is relevant to the prevailing contingencies. For example, looking up the program schedule in the TV section of the newspaper is an observing response that may or may not lead to watching TV. Response, unconditioned (UR) An overt reflex elicited by an unconditioned stimulus*. In this context, the term "unconditioned" can be translated as "unlearned," in the sense of not being acquired through experience.
Note, however, that being "unconditioned" is not a property of the response itself . . . it is a property of the association of a response with a US. Hence, an eyeblink is a UR when it is elicited by a US such as a puff of air, but an eyeblink can also be a CR or a voluntary response. (Also called "unconditional.")

Reward A class of reinforcers that includes desirable objects of some kind. We sometimes loosely use "reward" and "reinforcement" interchangeably, but not all reinforcers are rewards, and "reward" is better used to refer to the object itself.

Schedule of reinforcement A schedule is a rule or program determining when a designated event will occur. In general, schedules can be based on time (as in an airplane schedule) or on enumeration (as in a tax schedule). Schedules of reinforcement are based on the time elapsed since the preceding reinforcement*, or on the number of responses emitted since the preceding reinforcement.

Schedule of reinforcement, continuous (CRF) A schedule in which every response is reinforced.

Schedule of reinforcement, fixed interval (FI) When the availability of reinforcement is determined by elapsed time since the preceding reinforcement, it is an interval schedule, and when that interval is constant, it is a fixed interval schedule. A person who likes to hear a grandfather clock chime the hour is on an FI 1-hour schedule.

Schedule of reinforcement, fixed ratio (FR) When the availability of reinforcement is determined by the number of responses emitted since the preceding reinforcement, it is a ratio schedule, and when that number is constant, it is a fixed ratio schedule. A person on piece-rate wages is working on an FR of some specified number of pieces.
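The fixed interval and fixed ratio rules can be sketched as simple availability checks. This is only an illustrative sketch; the function names and parameter values are mine, not from the text.

```python
# Hypothetical sketch of when reinforcement becomes available under the
# two fixed schedules defined above. All names and values are illustrative.

def fi_available(seconds_since_last_reinforcement, interval):
    """Fixed interval (FI): reinforcement becomes available once a fixed
    amount of time has elapsed since the preceding reinforcement."""
    return seconds_since_last_reinforcement >= interval

def fr_available(responses_since_last_reinforcement, ratio):
    """Fixed ratio (FR): reinforcement becomes available once a fixed
    number of responses has been emitted since the preceding reinforcement."""
    return responses_since_last_reinforcement >= ratio

# The grandfather-clock example: an FI 1-hour (3600-second) schedule.
print(fi_available(3599, 3600))  # False: the hour has not yet elapsed
print(fi_available(3600, 3600))  # True: the chime is now "available"

# Piece-rate wages: an FR schedule, here assumed to pay after every tenth piece.
print(fr_available(9, 10))   # False
print(fr_available(10, 10))  # True
```

Note that elapsed time and the response count each reset to zero at the moment of reinforcement; the variable (VI, VR) schedules differ only in that the threshold changes from occasion to occasion.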
Schedule of reinforcement, varied interval (VI) When the availability of reinforcement is determined by elapsed time since the preceding reinforcement, an interval schedule is in force, and if that interval varies from occasion to occasion, it is a varied (or variable) interval schedule. The amount of time a hitch-hiker must wait before being offered a ride illustrates a VI schedule.

Schedule of reinforcement, varied ratio (VR) When the availability of reinforcement is determined by the number of responses emitted since the preceding reinforcement, it is a ratio schedule, and if that number varies from occasion to occasion, it is a varied (or variable) ratio schedule. The number of times that you might have to pump the accelerator of an old car in order to get it started illustrates a VR schedule.

Setting operation The motivating and instructing operations performed by an experimenter before actually beginning an experiment. With animals, setting operations typically include deprivation of some commodity, familiarization with the apparatus, and gentling to the experience of being handled. With humans, setting operations are typically verbal: telling the subject about the nature of the task and the stimuli to be experienced, giving assurances that no painful events are involved, and soliciting cooperation.

Shaping A technique for obtaining an operant response by reinforcing successive approximations to the desired behavior. Shaping is like the game of "you're getting warmer," and is based on the Principle of Response Generalization. Reinforcing one response will lead the organism to emit a variety of behaviors of a similar kind, and the objective is to reward another response that is still closer to the one designated for study.

Similarity, response A hypothetical dimension of "sameness" of responses. Presumably, response similarity is related to the similarity of the response-produced cues (feedback*) coincident with the response.
Hence, responses that "feel" similar, or sound or look similar, or have comparable effects on the environment, are conceptually similar responses.

Similarity, stimulus Stimulus similarity can often be measured in physical terms, such as the pitch of tones or the colors of lights. Similarity is sometimes measured in terms of the number of elements that complex stimuli have in common. But in the last analysis, stimulus similarity is measured functionally . . . events to which organisms respond similarly are similar. Normally dissimilar events can acquire functional similarity if the organism learns to make the same response to them. The analysis is that the feedback from the common response mediates the acquired similarity of cues. For example, learning a concept such as "dog" increases stimulus generalization of responses learned with one dog to other dogs (Fido-dog-bites, so Spot-dog-bites).

Spontaneous recovery If a rest interval away from the situation is given following experimental extinction, there is usually a reappearance of the conditioned response*. Spontaneous recovery is an increasing function of the amount of recovery time, with a usual practical limit of about half the performance level that prevailed before extinction. However, with very prolonged intervals, almost complete recovery has been reported, leading to the interpretation that extinction does not eradicate the excitatory process built up during acquisition.

Stimulus Any event acting on a suitable receptor.

Stimulus, antecedent A stimulus that precedes an unconditioned stimulus in classical conditioning*, or occurs before the response in instrumental conditioning*. The more conventional term is "conditioned stimulus*," and although that term has frequently been used in this book, it seems gratuitous to identify a stimulus as a CS before it has even been presented in a learning context.

Stimulus, compound A combination of two (or more) identifiable stimulus events serving as the antecedent (or conditioned*) stimulus.
Typically, the elements of a compound are presented simultaneously and are also coterminous (end at the same time), but other timing arrangements are possible in temporal compounds.

Stimulus, conditioned (CS) A stimulus that precedes an unconditioned stimulus in classical conditioning, or that antedates the response in instrumental conditioning. The term "conditioned" can roughly be translated as "learned in this situation." That is to say, the CS is the event that is expected to acquire (or has acquired) the capacity to evoke the CR. (Also called "conditional.")

Stimulus, drive Hypothetical cue properties associated with each drive state. It is presumed that interoceptive stimuli inform the organism whether a drive is present and, if so, how intense it is. For example, you generally know if you are anxious about an upcoming exam. The particular importance of drive stimuli is that they are a part of the stimulus complex in which learning occurs and subsequently help guide the organism toward adaptive behavior.

Stimulus, exteroceptive A stimulus event that originates outside the organism and stimulates an appropriate receptor at the surface of the body. In addition to sights and sounds, tastes and smells are exteroceptive stimuli, even though the receptors are inside the mouth.

Stimulus, functional The stimulus as it is perceived by the organism. As distinct from the nominal stimulus, which is the event as it actually occurs in the world, the functional stimulus depends on the orientation of the organism's receptors and also on any attention effects.

Stimulus, interoceptive Stimuli arising from within the body. These include kinesthetic and proprioceptive feedback from the muscles and joints, and also various "gut" stimuli associated with various states of the organism (e.g., hunger pangs, headache, etc.).

Stimulus, neutral A stimulus that initially has no tendency to elicit the response of interest. (If "emotionally neutral" is intended, that is normally stated explicitly.)
Stimulus, unconditioned (US) A stimulus event that automatically elicits a reflexive response, the unconditioned response*. (Also called "unconditional.")

Stimulus asynchronism The separation in time between the onsets of two stimuli, usually the antecedent (conditioned*) stimulus and the unconditioned stimulus in the classical conditioning paradigm*.

Stimulus trace A hypothetical process that results from the occurrence of a stimulus and that persists with decaying strength for some period of time even after the stimulus has terminated.

Superstition The continued performance of a response that chanced to be followed by reinforcement*. Because reinforcement increases the likelihood of antedating responses without regard to whether they actually produced the reinforcement, adventitious reinforcement is just as effective as deliberate reinforcement.

Suppression, conditioned A decrease in the rate of emission of a positively-reinforced operant response during the time of action of a stimulus event that signals the impending occurrence of an aversive event. A dramatic example occurred during World War II, when the sound of an approaching rocket suppressed talking by people in an air-raid shelter in England.

Theory, continuity The proposition that learning is a gradual, cumulative process, with some learning occurring on every trial. Specifically, in a discrimination learning situation, the presumption is that the organism learns about the correctness of the relevant cues*, little by little, on every trial, even if performance appears to be hovering at the chance level. A common assumption is that learning progresses on each trial at a rate that is a constant fraction of the difference between the level of learning before the trial and the upper limit. The following table illustrates this assumption.

Theory, noncontinuity A theory that learning is not a gradual, cumulative, continuous process but instead occurs in a sudden, insightful manner.
Noncontinuity analyses are typically applied to discrimination learning, where it is assumed that the organism tests hypotheses about the solution.

Theory, reinforcement The proposition that reinforcement*, whatever its nature, is necessary for learning to occur. Such a theory may contend that learning occurs so long as some reinforcement occurs, but a pure reinforcement theory postulates that the amount of reinforcement directly determines the amount of learning. (Note that reinforcement theory pertains to the effect of reinforcement on learning; all theories accept the role of reinforcement in determining performance.) Reinforcement theory has generally been abandoned by contemporary learning theorists but is still practiced in many everyday contexts.

Transposition Responding to a new set of stimuli on the basis of the same relation that was learned with another set of stimuli. Having been reinforced for choosing the larger (or brighter, heavier, etc.) of a pair of stimuli, the subject is likely to choose the larger of a new pair of stimuli. Transposition may be thought of as a special case of stimulus generalization*, except that several stimuli are involved on each presentation and the choice response is relative. Transposition has been observed in many species. You are probably most familiar with transposition in music, since you recognize a particular melody when played in a different key.

Tropism An unlearned tendency to approach (positive tropism) or move away from (negative tropism) some natural stimulus event. Moths are positively phototropic (they approach light), as are many insects. Other common tropisms are geotropic (approach the earth) and heliotropic (approach the sun).
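As an aside, the constant-fraction assumption stated under Theory, continuity above can also be illustrated numerically. The sketch below uses illustrative parameter values of my own choosing (a learning fraction of 0.2 and an upper limit of 1.0); it is not drawn from the text.

```python
# Hypothetical sketch of the constant-fraction assumption: on each trial,
# the level of learning increases by a constant fraction of the distance
# remaining to the upper limit. The fraction and limit are illustrative.

def learning_curve(trials, fraction=0.2, limit=1.0, start=0.0):
    """Return the level of learning after each of the given trials."""
    levels = []
    level = start
    for _ in range(trials):
        # Increment is a constant fraction of the remaining distance.
        level = level + fraction * (limit - level)
        levels.append(level)
    return levels

for trial, level in enumerate(learning_curve(5), start=1):
    print(f"trial {trial}: {level:.3f}")
# Produces the familiar negatively accelerated curve:
# 0.200, 0.360, 0.488, 0.590, 0.672
```

Each trial adds a smaller increment than the last, so performance approaches the upper limit ever more slowly without quite reaching it, which is why early learning can hover near chance while still accumulating.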