The Basic Learning Processes: A Glossary of Terms
Frank A. Logan

There are two general classes of scientific terms: empirical concepts and theoretical constructs. Empirical CONCEPTS refer to real objects or events and their meaning is given by operational definitions. Such definitions specify the things you have to do (the operations) in order to observe the object or event. For example, a light stimulus can be defined by physical measurement of its color and brightness. Empirical terms may be ABSTRACT in the sense of subsuming specific instances (e.g., "color" is an abstract term that subsumes such stimuli as "red," "blue," etc.) but they remain reducible directly to observable (point-at-able) events. Theoretical CONSTRUCTS refer to imaginary objects or events, constructed by a theorist in an effort to explain empirical phenomena, and their meaning is given by anchoring definitions. Such definitions relate (anchor) the theoretical constructs to empirical concepts. For example, a theorist might contend that the aforementioned light stimulus initiates some kind of a sensory trace (an "image") inside the organism that decays over time.

There is a second sense in which scientific terms have meaning. This is the relationship that a term has with other terms, and the more interrelations a term has, the greater its functional meaningfulness. For example, 'classical conditioning' is formally (operationally) defined as a paradigm in which two stimuli are presented in temporal sequence without regard to the organism's behavior. What makes the concept interesting is that it results in an association being formed between the stimuli. What makes the concept especially interesting scientifically is that the strength of the association and the various manifestations of it depend on various particulars about the nature of the stimuli and the temporal parameters employed. And what makes the concept really interesting, in a functional sense, is that comparable situations often arise in everyday life.

Accordingly, this glossary has been constructed with the intention of presenting the meanings of all of the terms that one is likely to encounter in the context of the basic learning processes. It goes beyond formal definitions to include elaborations and examples. Frequently, terms used in a definition are also defined in this glossary; where appropriate, such terms are marked with an asterisk to facilitate cross-referencing. The objective of this glossary is to provide more than a reference for looking up unfamiliar words. By systematically studying the information in this glossary, one can acquire a basic understanding of the basic learning processes.

The following glossary includes information about the location of relevant root abstracts in the "Laws" of the Quad-L analysis. (See the preface to the laws for further instructions.)

NOTE: To locate a term in the glossary, precede the term by an underline if you wish to preclude locating that term in the context of other terms.

_Abnormal Organism Behavior Indep Current O-Var (C-E)(D-E) Some humans are identified as being "abnormal" psychologically (e.g., manic-depressive, paranoid, etc.) and one may study whether the basic principles apply to them differently from "normal" people.
_Acquisition (acq) Systematic Analyses: Processes Acquisition In(ter)dep Current C-Var (All paradigms-E) Browsing the Quad-L Database: A_ Systematic Analyses: zA_ and ZA_ A generic term for those learning processes that result in an increase in the probability of the occurrence of some response. All of the basic paradigms include acquisition.
Acquisition, Easy-to-Hard Interdep Historic C-Var (D-E) See Effect, Easy-to-Hard
Acquisition, Latent Indep Historic S-Var (S-E) See Latent Learning
_Acquisition, w/out Overt R Indep Historic S-Var (S-E) A form of latent learning in which the organism is exposed to the contingencies but is prevented from performing the requisite response during such exposure. Subsequently, when the response is enabled, learning may be demonstrated.
_Act See Response, Nature of A way of defining responses in terms of their consequences without regard to topographical details. Almost all of our conventional response terms, from 'bar-pressing' in rats to 'watching TV' in humans, are defined by what they accomplish and are not concerned with the actual postures and muscle movements used in achieving those ends. An important issue in the Psychology of Learning is whether organisms learn acts or whether learning is restricted to the particular movements that are practiced.
_Act, receptor-orienting Interdep Current R-Var (D-E) The response of orienting one's receptor(s) so as to maximize exposure to some stimulus. Receptor-orienting acts are common in visual discrimination learning because the organism may have first to learn to look at the relevant part of the stimulus before being able to respond appropriately. For example, you look at the title of a book or record when deciding whether to buy it.
_Activity (acty) Indep Current R-Var (F-E) Behavior that occurs without any explicit contingencies. Many organisms, when placed inside a wheel, will run without getting anywhere. Most organisms, when placed in an open field, move around the space and engage in grooming, defecating, and other observable behaviors. Pure activity is to be distinguished from exploration*.
_Adaptation, sensory Adaptation (see Habituation) Indep Historic S-Var (C-E) A basically physiological process in which a stimulus energy temporarily loses its effectiveness as a result of continued exposure. Light adaptation (as when going outside into the sunlight) and dark adaptation (as when going into a theater) are among the many familiar instances.
_Age, Effects of Indep Current O-Var (Many paradigms-E/I) Browsing the Quad-L Database: Early Development: AC_ Human Infant BC_ Older Organisms OC_ Studies of early development are identified by lower-case letters in the third column of the code letters before the root abstracts. However, age effects include the life span.
_Aggression Interdep Current R-Var (F-E)
_Aggressive Behavior Indep Current R-Var (R-E) Behaviors typically directed toward another organism, but may occur with regard to an inanimate object, indicative of injurious intent such as sparring, biting, mounting, etc.
_Alcohol (ALC) In(ter)dep Current O-Var (Many paradigms-E) In(ter)dep Historic O-Var (Many paradigms-E) Browsing the Quad-L Database: 1_ See Instructions for finer categories Independent effects are those where alcohol has been administered to an organism; interdependent effects are those where the alcohol was voluntarily consumed. Current effects are when the organism is under the influence of alcohol; historic effects are those persisting after the organism is sober.
_Alternation (altern)
Alternation, Spontaneous Indep Current R-Var (S-E) The tendency of many species of organisms in a choice situation to switch alternatives from one occasion to the next. Alternation may be based on the stimulus or the response chosen on the preceding occasion and the tendency typically decreases with time before the next choice. When people guess tosses of a coin, they typically switch their guess frequently even though the odds of the coin coming up heads or tails do not change (if it is an honest coin).
Alternation, Single Interdep Historic S-Var (D-E)(E-E)(S-E) Browsing the Quad-L Database: aH_ In behavioral paradigms that enable two alternatives, the contingencies may be arranged so as to alternate between them from one occasion to the next. For example, a maze can be so constructed as to require the sequence LRLRLRLR en route to the goal. In a Pavlovian context, the US may occur on alternate trials. In dividing something with a friend, you may say "one for you, one for me, one for you, etc."
Alternation, Double Interdep Historic S-Var (D-E)(I-E)(S-E) Browsing the Quad-L Database: bH_ All contexts enabling single alternation can be modified to expose the organism to alternation by twos. For example, the maze could require the sequence LLRRLLRR en route to the goal. Historically, many psychologists have argued that successful double alternation performance requires some kind of cognitive (symbolic) process such as counting.
_Altruism Indep Current R-Var (F-E) Performance of a response with no obvious reinforcement to the performing organism but of value to another organism. Stories of dolphins helping drowning humans back to shore would definitely be altruistic.
_Amount of Training In(ter)dep Current C-Var (All paradigms-E) See Overtraining A general term for the number of trials or the amount of time devoted to the acquisition of some behavior of interest. Historically, considerable interest centered on determining the 'true' shape of the 'learning curve.'
_Anxiety, Manifest Indep Current O-Var (C-E)(D-E)(C-I) A human personality trait reflected in a high level of readiness to become anxious (fearful) in even moderately threatening situations. Anxiety has been conceptualized as an irrelevant* drive that may help or hinder appropriate behavior.
_Association (assn) A hypothetical relationship between two events such that the occurrence of one tends to call forth the other. In principle, there are four possible kinds of associations, S-S, S-R, R-S, and R-R (where S = stimulus and R = response). Most early theories posited only one or the other of these kinds of associations, attempting to subsume all learning in terms of association formation. An example of an S-S association is when the sound of the music of a song makes you think of the setting in which you first heard the song. An example of an S-R association is when the sound of the music of a song elicits emotional reactions experienced when you first heard the song.
Association, Backward If two stimuli are presented in temporal sequence, S1 then S2, the classic association is forward (S1-->S2). However, in some contexts, there is also evidence of the reverse association (S2-->S1). Typically, backward associations are weaker than forward ones. For example, if you study a foreign language memorizing English-Spanish, you may also learn something about Spanish-English, but you are better advised to study in both directions.
_Attention (attn) A response which may be overt in the form of receptor orienting acts*, or covert and hence only inferred from the fact that only some of the potential stimuli in a learning situation may actually gain control over behavior. In general, stimuli that are conspicuous as a result of being intense, distinctive, or unexpected tend to command an organism's attention.
Attention, selective As with the more general concept of attention*, selective attention is inferred from the fact that only part of a total stimulus complex may gain control over behavior. But in this case, attention cannot be attributed to conspicuous features of the stimulus, and hence it is presumed that the organism has selected particular stimuli to which to attend. In this context, an instructive exercise is to press the fingernail of one index finger into the palm of the other hand. You will note that you can selectively attend either to the mild pain produced by the finger on the palm, OR to the mild pressure produced by the palm on the finger.
_Attribution Indep Current R-Var (F-E) Evidence that one organism has inferred something about the motivational state of another organism. An organism attempting to "comfort" another organism suggests attribution of suffering. (It should be noted that attribution is itself being attributed to another organism.)
_Audiogenic Seizure Indep Current S-Var (F-E) Intense, high-pitched noise may cause some rodents to display radical, non-directed behavior. Audiogenic seizures are temporary and are usually harmless to the rodent.
_Autobiography Systematic Analyses: Systematic
_Autoshaping (AuSh) Indep Current C-Var (O-E) See Shaping and Sign-Tracking If a reinforcer* is regularly delivered to an organism without any explicit contingency, the organism may spontaneously begin to emit the behavior of interest. This is especially true if a stimulus is presented shortly before each delivery of a reinforcer. If the procedure is continued, persistence of the behavior is called 'automaintenance.'
_Aversion, Conditioned (CTA) Interdep Historic C-Var (F-E) If an organism experiences a novel event, and this is followed by a highly traumatic experience, the organism may subsequently avoid any contact with the novel event. For example, if a novel taste is followed by becoming "sick-to-your-stomach," you are likely to avoid eating things with that taste. Wild animals typically show "bait-shyness," avoiding any new taste that is followed by discomfort. The noteworthy feature of conditioned aversions is that the traumatic experience may occur several hours after experiencing the novel event.
_Avoidance Conditioning (avcond) Systematic Analyses, Paradigms See Conditioning, avoidance and Response, avoidance
Avoidance, Passive Interdep Current C-Var (I-E) See Punishment An expression sometimes used to refer to punishment because, in that paradigm, the organism can avoid the aversive event (punishment) passively by not making the punished response. The term is most appropriate when the behavior occurs spontaneously without any other special contingencies. For example, a person may avoid social disapproval by not engaging in some irritating behavior such as picking one's nose.
_Awareness In(ter)dep Current C-Var (C-E)(D-E)(O-E) A descriptive term that applies to a human subject who is able to verbalize the contingencies prevailing in an experiment. In order to get at the basic processes, we frequently attempt to deceive the subjects about the true purpose of an experiment.
Even so, some subjects figure out what the contingencies are and indeed, there are some theorists who contend that awareness is necessary for learning to occur.
Awareness, on Extinction In(ter)dep Current C-Var (C-I)
Awareness, on Generalization Interdep Current S-Var (C-I)
_Backward Conditioning (see Conditioning, Backward)
Behavior Chain (see Chain, Behavior)
Behavior Modification (See Modification, Behavior)
Bibliographies Systematic Analyses
_Bilateral Generalization Indep Current S-Var (C-I)(D-I) Bilateral Transfer See Generalization, Bilateral
Biography Systematic Analyses: Systematic
_Blocking Indep Current S-Var (C-E) A phenomenon observed when a stimulus is combined with another stimulus that has already been conditioned. The added stimulus may gain little or no control even though it would normally be an effective stimulus with the amount of training given. The pretrained stimulus apparently overshadows the added stimulus and blocks it from becoming conditioned. Blocking does not occur if, at the time the stimulus is added, the conditions of reinforcement are changed. Such a change apparently unblocks the added stimulus from being overshadowed by the trained stimulus.
_Breeding Indep Current O-Var (Many paradigms) Browsing the Quad-L Database: bC_ The behavioral effects of selectively breeding animals with specific characteristics. Early interest was on the possibility of breeding "intelligent" organisms.
_Chain, Behavior Indep Current R-Var (R-E) An integrated sequence of responses that comprise a molar act*. Responses are integrated when the feedback from earlier responses sets the occasion for the next response in the sequence. In a homogeneous chain, the responses are the same but must be repeated some number of times. Walking, viewed as repetitively putting one foot in front of the other, is a homogeneous behavior chain. In contrast, a heterogeneous chain involves a sequence of different responses. Speaking, viewed as uttering sequences of sounds, is a heterogeneous behavior chain because each sound is at once a response and also a cue for the emission of the next sounds.
Change, Effect of In(ter)dep Current C-Var (Many paradigms-I)
_Choice Behavior (_Preference) Interdep Current S-Var (D-E)(S-E)
Choice Behavior Interdep Current R-Var (I-I)(O-I) Browsing the Quad-L Database: L_
General, Emotionally-Positive Events Interdep Current R-Var (F-E)
General, Emotionally-Negative Events Interdep Current R-Var (F-E)
Reinforcement Variables Browsing the Quad-L Database: kL_
Amount of Reinforcement Browsing the Quad-L Database: lL_
Delay of Reinforcement Browsing the Quad-L Database: mL_
Varied/Fixed Reinforcement Browsing the Quad-L Database: nL_
Partial/Fixed Reinforcement Browsing the Quad-L Database: oL_
Probability Learning Browsing the Quad-L Database: pL_
Conditional Outcome Choice Interdep Current C-Var (S-E)
"Choice" is used in contexts in which the organism has two (or more) alternatives available and the alternatives differ in one way. It measures the organism's preference between (among) the outcomes. For example, a rat might be given a choice between tap water and water containing alcohol to determine whether the rat likes alcohol. Choice behavior is distinguished from decision-making behavior by the fact that the latter involves two (or more) differences in the outcomes.
_Classical Conditioning Systematic Analyses: Paradigms See Conditioning, Classical
_Cognition (cog) Symbolic interaction with the environment; in humans, the symbolism is predominantly verbal.
Although we are all intimately familiar with our own cognitive processes (thinking, reasoning, planning, coding, interpreting, daydreaming, contemplating . . . one's mental life), we cannot directly observe covert activities of another organism (animal or human). We might, of course, accept a person's verbal report as accurately describing his or her thoughts, but such subjective reports are often suspect. However, there are objective types of behavior situations from which we can reasonably infer the occurrence of cognition in the subject.
Cognitive Map (See Map, Cognitive)
_Collateral Behavior Indep Current R-Var (O-E) In contexts involving a delay in the relevant behavior, organisms may fill the time by engaging in some alternative behavior that is available. Organisms are always doing something, but interest centers on cases in which the prevailing schedule induces an excessive amount of some particular activity. An especially interesting instance is that organisms may drink profusely, including drinking alcohol, while waiting to be fed.
_Communication Systematic Analyses: Systematic Indep Current R-Var (F-E)
_Comparative Systematic Analyses: Systematic
Comparative Behavior Indep Current O-Var (Many paradigms-E/I) Browsing the Quad-L Database: C_ Age (see Age) Breeding (see Breeding) Gender (see Gender) Species (see Species)
_Complexity Interdep Current C-Var (R-E) An attribute of a stimulus or a context determined by the number and variety of features available.
_Compound, Temporal Indep Current S-Var (A-E) Instead of a single stimulus event, or a stimulus compound, two (or more) stimuli may occur sequentially as a cue for responding. In starting a race, the cues are "ready, set, go!"
Compounding, Stimulus Indep Current S-Var (C-E) Control of behavior by a compound stimulus concurrently with no control by any of the elements of the compound presented separately.
Concepts Systematic Analyses: Systematic
_Concept Learning Indep Current S-Var (D-E) An abstraction which may refer to some concrete feature of an event. An organism may learn to discriminate red from another color, but if it learned the concept, 'red,' it can select a novel red object from among several of different colors. Some animals have learned the more abstract concept of oddity by choosing the odd of three objects.
Conceptual Propositions Systematic Analyses
Conditional Discrimination Indep Historic S-Var (D-E) See Discrimination, Conditional
_Conditional Outcome Choice Interdep Current C-Var (S-E) A choice paradigm in which at least one of the outcomes depends on the subsequent performance of the organism. For example, a gambler has a choice between a slot machine, where the outcome is entirely a matter of chance, and playing poker, where the outcome depends to some extent on his/her skill at playing the game.
Conditioned Emotional Response (CER) See Response, Conditioned Emotional
_Conditioning (cond) Classic texts in the basic learning processes distinguished 'conditioning' from 'learning.' Conditioning is used for contexts in which there is only one response available to the organism (of interest to the observer) and one observes whether or not that response occurs. If the response occurs, other measures such as rate, speed, amplitude, force, etc., may be recorded. Conditioning is used for avoidance, classical, instrumental, and operant contexts. Learning is used when there are at least two alternatives available to the organism thus permitting a choice.
Typically, the alternatives require different effort or lead to different outcomes and one observes the percent preference for one alternative over the other. Learning is used for discrimination, differentiation, and spatial contexts.
Conditioning, avoidance Systematic Analyses, Paradigms A learning paradigm in which the occurrence of an aversive event can be prevented by the occurrence of an avoidance response. In cued avoidance a warning signal precedes the aversive event, and typically, the avoidance response also terminates the warning signal. In noncued avoidance, there is no warning signal although the aversive event typically occurs with some temporal regularity. See Omission Training
Conditioning, backward Interdep Current C-Var (A-E) A classical conditioning paradigm in which the US is presented before the S. Although there have been a few reports of 'reverse connections,' the majority of studies have found that there is no 'conditioning' of an observable R to the S when the stimuli are presented in 'backward' order but some backward association* may nevertheless be formed.
Conditioning, classical Systematic Analyses, Paradigms A paradigm in which two stimuli are repeatedly presented in a fixed temporal order and where the second of the stimuli (the Unconditioned Stimulus, US) reliably elicits a reflex response (the Unconditioned Response, UR). The first stimulus (S) can be any event within the sensory capacity of the organism, but it is normally one that is initially neutral with respect to the UR. The outcome of this operation is that a response (the Conditioned Response, CR) usually bearing at least a family resemblance to the UR is elicited by the now conditioned stimulus (CS).
Conditioning, delayed A classical conditioning procedure distinguished by the fact that the S persists throughout the S-US interval. Hence, in the delayed conditioning procedure, the S is physically contiguous in time with the onset of the US. (This is distinguished from trace conditioning where the S terminates before the onset of the US.)
Conditioning, differential In(ter)dep Current R-Var (Many paradigms-I) A paradigm involving the separate presentation of 2 (or more) stimuli with different contingencies prevailing depending on which stimulus is presented. Differential conditioning can occur in classical, operant, or instrumental procedures and in each case leads to differential responding that is more-or-less appropriate to the contingencies associated with each stimulus. Most commonly, one stimulus (called S+) is reinforced and the other stimulus (S-) is nonreinforced, in which case the organism learns to respond in the presence of S+ and not in the presence of S-.
Conditioning, higher-order Indep Historic C-Var (C-E) A variation on the classical conditioning theme in which, in place of a US with an unlearned reflexive tendency to produce a UR, a previously-conditioned CS is used. So long as the 'old' CS reliably elicits a CR, it can be used instead of a US to condition the response to a 'new' CS. For example, once the words "No-no" have been conditioned to aversive consequences, a child can be restrained from a new behavior by saying "No-no."
Conditioning, instrumental Systematic Analyses, Paradigms A learning paradigm in which a response is aperiodically enabled, and its emission is shortly followed by a reinforcing state of affairs.
The defining feature of instrumental conditioning is that the response is in fact instrumental in obtaining the reinforcement, and it is distinguished from operant conditioning by the fact that a response can only occur on discrete trials. Hence, a hungry rat running down a short straight alley for food reward is prototypical of instrumental conditioning in the lab and a student taking an exam for the reward of a good grade is an instrumental response in college life.
Conditioning, operant Systematic Analyses: Paradigms A learning paradigm in which a response is freely available to the organism and its emission is closely followed by reinforcement. The defining feature of operant conditioning is that the response operates on the environment to produce the reinforcement, and the distinguishing feature is that the response is continuously available. Hence, a hungry rat pressing on a freely-available bar for food reward is prototypical of operant conditioning in the lab and a student studying in order to do well on an exam is an operant response in college life.
Conditioning, Pavlovian A special case of classical conditioning* in which the observed conditioned response (CR) is physically similar to the unconditioned response (UR). In Pavlov's lab, salivation by dogs was both the CR and the UR. The effect of pairing two stimuli may be monitored in other ways, such as conditioned suppression*.
Conditioning, sensory Indep Current S-Var (C-E) A variation on the classical conditioning theme in which both of the stimuli are neutral. It involves the pairing of two stimuli in a regular order, but neither stimulus is a US with a reflex UR. As a result, there is no overt CR to monitor the occurrence of sensory conditioning. If the subjects are human adults, we can obtain verbal reports that the presentation of one stimulus tends to elicit an 'image' of the other stimulus. An objective technique for evaluating sensory conditioning involves a three-stage experimental design called sensory preconditioning*.
Conditioning, simultaneous A conditioning paradigm in which the S and the US begin at the same time (although they may end at different times). Typically, little or no conditioning, in the sense of an overt response to the S presented alone, is observed with the simultaneous procedure.
Conditioning, temporal Indep Current C-Var (C-E) A conditioning paradigm in which a US is presented at regular intervals without regard to the organism's behavior and without any explicit antedating S. Provided the intervals are not too long (up to several minutes), a CR occurs shortly before each US occurrence. The interpretation of temporal conditioning is that each US serves not only to elicit a UR but also as a (Trace) CS for the next occurrence of the US. Perfect examples of temporal conditioning are not very common in everyday life because there are usually regular CS's associated with temporal regularities in the environment. But if you have had occasion to listen to a leaky faucet dripping in the sink, you will be familiar with temporally conditioned anticipations of the next drip.
Conditioning, trace Indep Current C-Var (C-E)(A-E) A classical conditioning paradigm in which the S is brief and terminates before the US occurs. Because the CR tends to occur just before the US, the CS is not physically present at the time the CR occurs. Hence, the CR is presumably associated with the (memory) trace of the CS.
In fact, conditioning is typically found to be just as good with the trace procedure as with the delayed procedure. Most everyday instances of classical conditioning fit the trace paradigm. For example, if a child is given candy after being told his or her behavior was good, the actual sound of the word 'good' is no longer present when the candy is received. Nevertheless, the word 'good' will acquire secondary reinforcing properties as a result of this experience.
Conferences Systematic Analyses
_Conflict Interdep Current C-Var (I-E) An approach-avoidance conflict obtains when there is a tendency to respond because of anticipated reward but also a tendency not to respond because of anticipated punishment. There may also be conflict when forced to decide between two attractive alternatives (approach-approach conflict) or between two unattractive alternatives (avoidance-avoidance conflict).
_Consolidation Indep Historic C-Var (C-E) Evidence that the biochemical process involved in learning is not instantaneous but requires time after the experience to consolidate. A traumatic event shortly after a learning trial may obliterate the effects of that trial.
_Context The background stimulus environment in which a learning experience occurs. Context includes the interoceptive as well as the exteroceptive environment, and also includes stimulus events in the recent past. It is clear that an organism's response to a stimulus can be made conditional on the context in which it occurs. For example, a dog can learn to salivate to a buzzer in one room but not in another room, just as you have learned to react differently to words such as 'foot' (body part or length), 'light' (illumination or weight), 'turkey' (food or sucker), etc., depending on the verbal context. Of greater practical significance is the fact that learning necessarily takes place in some context and is somewhat specific to that context.
Context, Adjustment to Indep Current S-Var (F-E)
Context Effects Indep Current S-Var (Many paradigms-E/I) Interdep Historic C-Var (F-E)
_Contiguity, S-S Two stimulus events that occur reasonably close together in space and/or time. For example, a husband and wife may often be seen close together in space, and lightning and thunder occur close together in time. Insofar as learning involves S-S associations, S-S contiguity is a necessary condition for such learning and most S-S theorists also contend that it is sufficient.
Contiguity, S-R A response occurring reasonably close in space and/or time to a stimulus. If you get up at the same time every morning, waking up at that time will become associated with that time of day. Insofar as learning involves S-R associations, S-R contiguity is a necessary condition for such learning, and most S-R theorists also contend that it is sufficient.
Contiguity, S-R-Reinf Indep Current S-Var (D-E)
_Contrast In general, contrast is a phenomenon in which the reaction to a stimulus is influenced by other stimuli in the environment. When the reaction is stronger than it would normally be, we speak of positive contrast; conversely, when the reaction is weaker than it would normally be, we speak of negative contrast. For example, a grade of 'B' may look good to a person accustomed to getting grades of 'C' but look bad to a person accustomed to getting grades of 'A'.
Contrast, behavioral In differential operant conditioning (a multiple reinforcement schedule) response rate in the two components may differ from that which is generated by each of the schedules if experienced separately. Response rate may be higher in the preferred component (positive behavioral contrast) and lower in the nonpreferred component (negative behavioral contrast). The latter is sometimes concealed by a 'floor effect'*; for example, if the nonpreferred schedule is extinction, the subject can exhibit no lower than a zero response rate. Behavioral contrast may also be concealed when the schedule exerts strong control over response rate. For example, if two DRL schedules are combined in a multiple schedule, the requirement for spaced responding may prevail over contrast effects.
Contrast, incentive In differential instrumental conditioning performance to the two stimuli may differ from that which would obtain were the conditions of reinforcement experienced separately. The response may be stronger (faster, more vigorous) to the stimulus associated with the preferred conditions (positive incentive contrast) and weaker (slower, less vigorous) to the stimulus associated with the less preferred condition (negative incentive contrast). These effects may be obscured by 'ceiling' effects (an organism can run only so fast) and 'floor' effects (an organism can run no slower than not run at all).
Contrast, simultaneous When contrast effects are observed during differential conditioning*, so that the two stimuli are repeatedly presented, we refer to simultaneous contrast. The effects are not strictly simultaneous since only one stimulus is presented at any one time, but they are recoverable over a number of repetitions and hence are conceptually overlapping.
Contrast, successive A contrast effect is frequently observed when an organism has had extensive training with one schedule/condition of reinforcement and then is shifted to a new schedule/condition for continued training. In successive contrast, there is typically no change in any discriminative stimulus, and there is a single shift and hence, a single opportunity to observe contrast effects.
_Control, stimulus Behavior is said to be under stimulus control when a change (offset, onset, increase or decrease in intensity, or a change in some qualitative dimension) in a specified stimulus event is reliably followed shortly by a change in some aspect of behavior, and there is some specificity of the effect to that stimulus. Insofar as the goal of Psychology is the prediction and control of behavior, a major objective is to discover how to bring behavior under stimulus control. The Principles of Learning are a major component of that objective.
Control, self When behavior is controlled by stimuli emitted by the organism itself, we can say that the organism has self control over that behavior. A special and especially interesting case is a person's control of his or her own behavior, usually by means of verbal self-instructions. Although we commonly think of self-control in terms of restraining response tendencies (for example, not eating too much), the more general concept also includes response emission (for example, doing an unpleasant job). (See Volition).
Control Procedures In(ter)dep Current C-Var (All paradigms-E)
_Cooperative Behavior Indep Current R-Var (F-E)
_Correction Procedure Interdep Current C-Var (S-E) In a choice situation with one correct alternative, the correction procedure enables the organism to return to the choice point and take another alternative until the trial ends with reinforcement. If you make a wrong turn while driving, you make a U-turn and correct your route.
Correlated Punishment Interdep Current C-Var (I-E)
Correlated Reinforcement Interdep Current S-Var (I-E)(O-E)
_Counterconditioning Indep Current C-Var (C-I) The procedure of conditioning a response to a stimulus that is incompatible with the response already conditioned to that stimulus. Counterconditioning is often used to help people overcome unrealistic fears.
_Counting Systematic Analyses: Systematic Indep Current S-Var (D-E)(I-E)(R-E)
_Cross-modal Generalization Cross-modal Transfer See Generalization, Cross-modal
_Cue A discriminative stimulus, an event that conveys information of potential significance to the organism. For example, when walking in the country, a rattle sound may cue the presence of a dangerous snake.
Cue, Aversive Interdep Current S-Var (D-E)
Cue, Irrelevant Indep Current S-Var (D-E) It is conventional to use the term 'cue' in discrimination learning contexts and then to distinguish between relevant cues (those that are explicitly correlated with reinforcement) and irrelevant cues. In many learning situations, a major aspect of the task is learning which cues are relevant and the larger the number of irrelevant cues, the harder this task.
_Curare Indep Current O-Var (C-E)
_Decision-Making Interdep Current S-Var (D-E)(S-E) Although all choice situations could be considered to require a decision, the term is typically restricted to situations in which there are several alternatives, with some feature(s) favoring one alternative and some other feature(s) favoring the other alternative. The choice between foods in which the preferred food is more expensive requires a decision.
_Decrement, stimulus generalization The decrease in the level of performance that results if the test situation is different from the original learning situation. Although learning tends to generalize to similar stimuli, the response is weaker, and more so the less the similarity.
_Delay, Non-Chaining Indep Current S-Var (S-E)
Delay, Within-Chain Indep Current S-Var (I-E)(S-E)
_Delayed Response Indep Historic S-Var (D-E)(R-E) Restraining an organism for some period of time after it is exposed to a lure being hidden, before it is released to retrieve the lure. The delayed response may be based on spatial cues (the lure is hidden under the left or right of two identical objects) or non-spatial cues (the lure is hidden under one of two different objects). In the former case, it is necessary to prevent the organism from maintaining orientation toward the correct direction. Successful performance of a delayed response requires that the organism has learned 'object permanence' (objects continue to exist even though out of sight) and memory.
_Detour Performance Indep Current R-Var (S-E)
_Differential Conditioning See Conditioning, Differential
Differential Discrimination Indep Current R-Var (D-I)
_Differentiation Learning Systematic Analyses: Paradigms
Differentiation, response Learning to emit a response with particular quantitative and qualitative properties. Response differentiation applies to all of our skills.
For example, the pianist must not simply strike the right keys but must do so with the right timing and force. Similarly, the ballet dancer must execute the steps rhythmically and gracefully. Even a lecturer should adjust the speed and intensity of his or her voice in relation to the nature and size of the audience. In general, response differentiation is learned as a result of differential reinforcement of the relevant response dimensions. Hence, where maximal speed is desired, maximal reward should be given for faster and faster speeds.
_Discrimination Learning Systematic Analyses: Paradigms
Discrimination, stimulus Learning to select from among several simultaneously-presented stimuli on the basis of differential reinforcement associated with those stimuli. In the laboratory, a rat might be reinforced for choosing the black rather than the white stimulus regardless of position. Among the multitude of discriminations that we all make every day, choosing the correct answer on multiple-choice tests is especially relevant for students. The difficulty of learning a discrimination depends on the similarity of the stimuli, and organisms can learn finer discriminations when the stimuli are presented simultaneously than when they are presented separately in differential conditioning*.
_Discriminated Operant Interdep Current C-Var (O-E)
_Disinhibition Indep Current S-Var (Many paradigms-I) An increase in the strength of a response occasioned by the presentation of a novel stimulus in the situation. Typically, novel stimuli lead to external inhibition*, i.e., a decrease in the strength of the response. If, however, the response strength is low because it is being inhibited as a result of extinction*, the novel stimulus may lead to a reappearance of the response. The term itself suggests that the novel stimulus is able, somehow, to remove the inhibition. For example, a person may have stopped using double negatives (e.g., "I don't have no money"), but slip up now and then under pressure, and many is the soldier who has wet his pants when a bomb explodes nearby.
_DRL/DRH See Correlated Reinforcement
_Double Alternation Interdep Historic S-Var (D-E)
_Drive Motivation See Motivation, Primary Drive A hypothetical process given the role of energizing the organism into performance of learned responses. "Drive" and "motivation" are often used interchangeably, although drive is typically a more specific construct (e.g., the hunger drive, the thirst drive, etc.). "Drive" is to be distinguished from "need" . . . a need is a physiological necessity for survival whereas a drive is a psychological force. We may eat when we really do not need more food (especially if an appealing dessert is offered) and we may not feel hungry even when we actually need food (especially particular vitamins and minerals).
Drive, boredom (See Curiosity)
Drive, exploratory (See Curiosity)
Drive, irrelevant A motivational state that is not appropriate for the nature of the reinforcement. For example, fear is an irrelevant drive when taking an exam. Such drives are, of course, not irrelevant to the organism; one might just as well refer to irrelevant reinforcement.
Drive, primary Drives that are unlearned. The primary drives are based on deprivation of commodities necessary for survival, such as food and water, or presentation of injurious events, such as electric shock. (Same as Motivation, primary).
Drive, secondary A drive that has been acquired as a result of learning.
The least controversial secondary drive is fear which can become associated through classical conditioning to any of a variety of originally neutral stimuli. For example, pairing the word "bad" with punishment will result in the word having effective secondary motivating properties. (Same as Motivation, secondary).
Drive Stimulus A hypothetical cue aroused in conjunction with any drive state. Many theorists appeal to the construct of drive stimulus to enable the organism to detect which drive is present and in what degree. Thus, you can discriminate being hungry from being thirsty, and being hungry from being famished. The drive stimulus enables the organism to select a relevant response.
Drive Reduction Hypothesis Interdep Current S-Var (C-E)
_Drug Effects Indep Current O-Var (Many paradigms-E)
Drugs, Addictive, Paradigmatic Effects
Drugs, Addictive Behavioral Effects In(ter)dep Current O-Var (Many paradigms-E) In(ter)dep Historic O-Var (Many paradigms-E) Browsing the Quad-L Database: 2_
Paradigmatic Effects
Associative Processes Indep Current C-Var (C-E)
Intake Processes Interdep Current R-Var (F-E)
Contingencies Interdep Current S-Var (O-E) Browsing the Quad-L Database: 3_
_Dynamic Transmission Interdep Current S-Var (C-I) A phenomenon with human subjects in which a response conditioned to a physical object can be elicited by the name of that object, or vice versa. For example, a person might be conditioned to blink when a picture of a dog is flashed on a screen; subsequently the person may blink if the word "dog" is played through a speaker. There is much everyday evidence of dynamic transmission. For example, the word "snake" is enough to make many people cringe, and the effectiveness of calling people names (even though they ostensibly "never hurt") is dramatic testimony to the generality of the phenomenon.
Early Experience Indep Historic O-Var (A-E)(S-E)
_Economy, Open/Closed Interdep Current C-Var (F-E)
_Effect, easy-to-hard Learning a hard discrimination between very similar stimuli is facilitated by preliminary training on an easy discrimination between stimuli that are more different along the relevant stimulus dimension. For example, in training an apprentice gemologist, one begins with stones that are conspicuously different in quality and gradually develops a 'trained eye'. In some laboratory studies, the hard discrimination could not even be learned unless preceded by exposure to easier problems of the same type.
Effect, frustration The increased vigor of responses that occur shortly after an organism has experienced frustrative nonreward. If an organism expects reward because of past experiences, nonreward is frustrating and the motivational property of frustration may be revealed in an over-reaction to subsequent stimuli. Specifically in the laboratory, a rat that has failed to receive an accustomed reward for one response will run faster if immediately placed in a runway. In analogous fashion, if you fail to get the good grade you expected for a paper, you may find that, for a while, you talk louder and act more vigorously than usual (for example, you may slam the door shut).
Effect, Law of There are two parts to the original Law of Effect: Responses that are followed by a 'satisfier' are likely to be repeated (stamped in), and responses that are followed by an 'annoyer' are likely not to be repeated (stamped out).
As an empirical generalization about performance, the Law of Effect is generally accepted although there may be limitations about whether all rewards and punishers are equally effective for all responses. The Law of Effect has also been interpreted as a theoretical proposition, that learning is effected by reinforcement. That proposition has been at the center of many controversies in the Psychology of Learning. (See Theory, reinforcement.)
Effect, overlearning reversal (ORE) The reversal of a discrimination proceeds faster if training on the original discrimination has been extended well beyond the point where there is no further measurable improvement in performance.
Effect, overlearning extinction (OEE) The extinction of a response proceeds faster if training has been extended well beyond the point where there is no further measurable improvement in performance. The OEE occurs following CRF training but not after PRF.
Effect, partial reinforcement acquisition (PREA) The terminal performance level of an instrumental response (such as a rat running down a straight alley) that has received PRF is superior to (faster than) that following CRF in the early and intermediate parts of the response although performance may be inferior in the last part of the response. People often start projects with optimistic enthusiasm and then become more and more pessimistic as they get closer to finishing.
Effect, partial reinforcement extinction (PREE) The resistance to extinction of a response is greater if that response has received PRF than if it has received CRF. In general, the lower the percentage of times that the response was reinforced during acquisition the more persistent the response will be when no further reinforcements are given (i.e., extinction). However, the effect of PRF also depends on the length of runs of nonreinforced responses that occur before a reinforced response. The PREE occurs in all conditioning paradigms.
Effect, generalized partial reinforcement extinction (GPRE) The PREE tends to generalize to some extent from one situation to similar situations. That is to say, PRF for responding to one stimulus will increase the resistance to extinction of that response to other stimuli that have been the occasion of CRF. Indeed, it is possible that early exposure to stringent schedules of reinforcement promotes later persistence in the face of frustration (failure).
_Effort Indep Current R-Var (O-E)
Effort, on Extinction Interdep Current C-Var (I-I)(O-I)
_Electro-Convulsive Shock (ECS) Interdep Historic C-Var (F-E) Indep Current O-Var (D-E)(R-E)(S-E)
Empirical Analyses Systematic Analyses
Empiricism A scientific strategy in which all terms refer to observable events and all relationships are empirically verified by experimental analysis. Empiricism is atheoretical; it disclaims any appeal to hidden causes, imaginary processes, that is, to hypothetical constructs. A purely empirical Psychology of Learning seeks to discover the variables of which learning is a function and, at most, to attempt to interrelate those functions systematically. All experimental psychologists are empiricists and accept the laboratory as the ultimate court of appeal but only some of them presume to go beyond the purely empirical level and create theories that purport to explain the empirical phenomena.
_Enriched Environment Indep Historic O-Var (D-E)(S-E)
_Errors, in Typewriting/Speech Indep Current R-Var (R-E)
Errors, Elimination of Indep Current R-Var (S-E)
Escape Conditioning (escond) Systematic Analyses: Paradigms
_Expectancy Systematic Analyses: Systematic A hypothetical cognitive process in which an organism expects or anticipates the occurrence of a stimulus event. For example, an expectancy analysis of Pavlovian conditioning is that the antecedent stimulus such as a metronome makes the dog think about (anticipate, expect) the delivery of the food. You will be disappointed if you do not get the 'A' grade that you expected on your paper.
_Exploratory Behavior Indep Current R-Var (F-E)
Extended Training Interdep Current C-Var (Many paradigms-E)
_Extinction, experimental Systematic Analyses: Processes In(ter)dep Current C-Var (Many paradigms-I) A decrease in the strength of a response as a result of nonreinforcement.
_Fear Theoretically, an emotionally-negative state associated with responses that are innately elicited by painful, aversive stimuli*. According to fear theory, the fear response (rf-sf) can also become conditioned to originally neutral stimuli and is the mechanism of secondary motivation*. An important implication of this theory is that, when an acquired fear is inappropriate, such as fear of social situations, rf-sf can be extinguished by exposure to the feared stimulus without aversive consequences.
_Feature Learning Indep Current S-Var (D-E)
_Feedback, negative When the effect of response-produced feedback is to decrease the tendency to emit the response, it is described as negative feedback. Negative feedback is not necessarily emotionally-negative. An especially important instance of negative feedback is error information leading to corrections.
Feedback, R-produced In(ter)dep Current S-Var (Many paradigms-E) At the most primitive level, feedback refers to the way a response feels, or looks, or sounds when it is being performed. Thus, when you move your arm, you feel your arm moving (technically known as kinesthetic and proprioceptive sensations from the muscles and joints of your arm) as well as any changes in the pressure of your clothing or any objects that you may touch as a result. Similarly, when you talk you feel your mouth and tongue moving and you hear the words you are uttering. Such feedback is essential for the normal performance of the response. If the feedback is delayed or distorted in some way, performance deteriorates dramatically. The term "feedback" is also used more generally for any kind of differential stimulation that is contingent on the emission of different responses. Thus, knowledge-of-results and reinforcement are forms of feedback because they inform the organism about the adequacy of the response.
Feedback, in Behavior Chains Indep Current R-Var (R-E)
_Fixation Interdep Current C-Var (D-E) A phenomenon in which a response is extremely resistant to extinction, typically resulting from the fact that the response was not only partially reinforced during acquisition, but was also punished. Insufficient punishment, that is, punishment that does not actually eliminate the behavior, may fixate the response so that it persists even when it is not effective.
_Fixed action pattern An unlearned behavior chain which, once initiated by a releasing stimulus*, runs through to completion without any further exteroceptive stimulus support. These are frequently observed in lower animals in conjunction with mating behavior.
Fixed Interval Schedule Interdep Current C-Var (O-E)
Fixed Ratio Schedule Interdep Current C-Var (O-E)
_Flooding Interdep Current C-Var (A-I)
_Foraging Interdep Current C-Var (F-E)
_Force-Correlated Reinforcement Interdep Current S-Var (O-E)
_Free Behavior Systematic Analyses: Paradigms
Free Behavior Situation Interdep Current C-Var (F-E)
_Frustration A hypothetical response (rf) occasioned innately by the failure to receive an anticipated reward. Frustration is assumed to have motivational (potentiating) properties and it is also assumed to control behavior by the innate or learned response associated with its response-produced sf. A very common response to frustration is aggression.
Frustration Effect Interdep Historic C-Var (I-E)
_Gender, Effects of Indep Current O-Var (Many paradigms-E) Browsing the Quad-L Database: gC_ Observed differential effects of variables on males/females.
Generality
Organisms Indep Current O-Var (Many paradigms-E)
Responses Indep Current R-Var (Many paradigms-E)
Stimuli Indep Current S-Var (Many paradigms-E)
_Generalization (genz) Systematic Analyses: Processes
Generalization, Bilateral Indep Current S-Var (C-I)(D-I)
Generalization, Compound Stimuli Indep Current R-Var (C-I)
Generalization, cross-modal Indep Current S-Var (C-I)(D-I) The generalization of a learned association from one sense modality, such as vision, to another, such as hearing. The most positive evidence of cross-modal generalization comes from training people with a visual shape discrimination which they can then perform tactually (feeling the stimuli). Another approach is to use non-specific stimulus features such as rhythm; Morse code can be presented in flashing lights or in audible clicks.
Generalization, Interocular
Generalization, Mediated Interdep Current S-Var (C-I)
Generalization, response The tendency to emit responses that are similar to the reinforced response if it is for some reason blocked. A particular instance of response generalization is bilateral transfer. You probably make most of your responses with your preferred hand but you typically can, albeit clumsily, make many of those responses with your nonpreferred hand if required to do so. Conceptually, response generalization is based on the similarity of response feedback*. You will do much more poorly with your nonpreferred hand if deprived of visual feedback so that you cannot see and correct errors as they occur.
Generalization, Semantic Interdep Current S-Var (C-I)
Generalization, stimulus (S/Genz) Indep Current S-Var (Many paradigms-I) A response acquired in one stimulus situation will tend also to occur in similar stimulus situations. The strength of a generalized response tendency depends on the strength of conditioning to the original stimulus; generalized responses are naturally weaker than the response to the original stimulus. Within that limit, the generalized response tendency is stronger the more similar a new test situation is to the original learning situation. Although stimulus generalization is typically adaptive (similar situations usually require similar behavior), many common difficulties result from over-generalization. Prejudices and stereotypes are among them.
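The relationship just described can be made concrete with a small numerical sketch. The following Python fragment is illustrative only and is not part of the original glossary: it assumes, purely for the sake of the example, that generalized response strength falls off as a Gaussian function of the difference between the training and test stimuli, and all stimulus values and strengths are hypothetical. It shows how a 'steep' versus a 'flat' generalization gradient would look (see Gradient, generalization, below).

    import math

    def generalized_strength(test_value, training_value, trained_strength, width):
        # Response strength at a test stimulus: equal to the trained strength at the
        # training stimulus itself, and weaker the farther the test stimulus is from it.
        distance = test_value - training_value
        return trained_strength * math.exp(-(distance ** 2) / (2 * width ** 2))

    training_value = 550      # hypothetical training stimulus (e.g., a 550 nm light)
    trained_strength = 100.0  # arbitrary units of response strength

    for width, label in [(10.0, "narrow width -> steep gradient"),
                         (40.0, "broad width -> flat gradient")]:
        print(label)
        for test_value in range(490, 611, 20):
            strength = generalized_strength(test_value, training_value, trained_strength, width)
            print("  test stimulus", test_value, "-> strength", round(strength, 1))

Running the sketch prints two tables: with the narrow width the generalized strength drops off quickly away from the training value (a steep gradient), while the broad width yields a flat gradient; in neither case does any test stimulus exceed the strength at the training stimulus itself.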
_Genetic Factors Indep Current O-Var (Many paradigms-E/I)
_Goal-box Detention Indep Current S-Var (I-E)(S-E)
Goal-box Effects Indep Historic C-Var (S-E)
Goal-box Placement Indep Historic S-Var (S-E)
_Gradient, generalization The gradient depicts graphically the way in which the stimulus generalization of a response varies with the difference between the training and test situations. A 'steep' gradient indicates that the response is narrowly confined to the training stimulus; a 'flat' gradient means that the response generalizes broadly to a wide range of stimuli. Among the variables that affect the slope of the generalization gradient is the amount of training: the range of generalization increases as training progresses from a low to intermediate amount, but the gradient gets steeper with continued training.
Gradient, goal In a spatial or temporal behavior chain culminating in a reward, both learning and performance tend to increase the closer one gets to the goal. The goal gradient is presumably based on the within-chain delay of reinforcement because, the closer one gets to the goal, the shorter the delay before the reward is received. However, the empirical goal gradient of performance measures such as speed typically reaches a peak somewhere beyond the middle of the response chain and then decreases. One interpretation of the terminal decrease is that the end of the behavior chain is typically followed by performing some other, incompatible response (such as consuming the reward) which becomes anticipatory. For whatever reason, the terminal decrease in performance near the end of a behavior chain is greater with partial reinforcement than with continuous reinforcement. (See Effect, partial reinforcement acquisition)
Gradient, postdiscrimination The stimulus generalization gradient obtained after differential reinforcement of two stimuli along the stimulus continuum. The postdiscrimination gradient is steeper than the gradient obtained without prior differential reinforcement, and more so the closer the stimuli given differential reinforcement are to each other. (See also Peak shift)
_Habituation Indep Current S-Var (F-E) A basically psychological process whereby responsiveness to a stimulus is reduced as a result of repeated exposures without any significant consequences. Habituation is initially revealed in a decrease in the orienting response to a stimulus; adapted stimuli are also relatively ineffective in a learning context. (Sometimes called adaptation*.)
Habituation, US Indep Historic S-Var (A-E)(C-E)(E-E)
_Helplessness, learned A descriptive term characterizing the non-adaptive behavior of an organism who, having previously been exposed to unpredictable, unavoidable, inescapable aversive events, fails to learn in a new situation in which it is possible to avoid an aversive event. It is as if, in the former environment, the organism learns that nothing can be done to control what happens, and this response of 'giving up' generalizes to one of not even trying in a new situation. If an unfortunate child early on encounters a teacher who is not responsive to the child's efforts, the child may learn to feel helpless in all school situations. And many is the child who has said, "What difference does it make? No matter what I do, my parents are never satisfied."
_Heart-Rate Conditioning Indep Current R-Var (C-E)
_Higher Mental Processes Systematic Analyses: Systematic
_Higher-Order Conditioning Indep Historic C-Var (C-E) Interdep Historic C-Var (E-E)(A-E)
_Hoarding Behavior Indep Current R-Var (F-E)
_Hope Theoretically, an emotionally-positive state associated with the anticipation of reinforcement*. Hope functions as a "go" mechanism in incentive theories*, which may be opposed by the "no-go" mechanisms of frustration (anticipation of nonreinforcement) or fear (anticipation of punishment*).
_Hormones Indep Current O-Var (I-E)
_Hypothesis An empirical hypothesis is a prediction about the outcome of an experiment, ideally derived from a systematic analysis. A theoretical hypothesis is a proposition about the possible conceptual basis for an established phenomenon. In both cases, the singular criterion of a good hypothesis is that it be empirically testable. An hypothesis can never be proven true; the best that can happen is that the result of the test is consistent with the hypothesis and hence supports it. If the result is inconsistent with the hypothesis, it can be rejected.
Hypothesis, aftereffects An analysis of the PREE based on the assumption that there are persisting aftereffects (memories) of the reinforcement or nonreinforcement from the preceding trial(s). The proposition is that partially reinforced organisms are sometimes reinforced during acquisition for responding in spite of having been nonreinforced on the immediately preceding one or more trials. Hence, such organisms should persist in responding when they are nonreinforced in extinction. In contrast, organisms who have received CRF encounter nonreinforcement for the first time during extinction. Hence, such organisms show a large stimulus generalization decrement on the second extinction trial.
Hypothesis, discrimination An analysis of the PREE based on the assumption that an organism must discriminate between acquisition and extinction in order for behavior to extinguish. The difference between continuous reinforcement and the continuous nonreinforcement in extinction is greater than the difference between partial reinforcement (which is, of course, equally well described as partial nonreinforcement) and continuous nonreinforcement. In short, organisms trained with PRF persist longer because it takes them longer to detect that the schedule of reinforcement has changed to extinction.
Hypothesis, drive-reduction The strong form of the drive-reduction hypothesis is that drive reduction is both a necessary and a sufficient condition for reinforcement . . . all reinforcers entail the reduction of some drive state. The weak form of the drive-reduction hypothesis is that drive reduction is sufficient, but not necessary, for reinforcement. Hence, all drive-reductions are reinforcements but not all reinforcements are drive-reductions.
Hypothesis, frustration An analysis of both the PREA and the PREE*. The basic assumptions are (1) that the failure to receive an expected reward is frustrating, (2) that frustration (rf) has motivational properties, (3) that rf is a learnable response and becomes anticipatory in a behavior chain followed by frustrative nonreward*, and (4) that the innate response to rf-sf is incompatible with the instrumental response but that new, compatible responses can become associated with rf-sf.
Hence, to account for the PREA, occasional nonreinforcement conditions rf-sf to cues early in the behavior chain, and compatible responses become associated with rf-sf because making the instrumental response in spite of anticipatory frustration is sometimes reinforced. This means that rf-sf actually potentiates the instrumental response through its motivational properties. Later, in extinction, because partially reinforced organisms have had compatible responses associated with rf-sf, they persist in making the instrumental response, whereas continuously reinforced organisms, who experience frustrative nonreinforcement for the first time during extinction, make the innate incompatible response, which interferes with the instrumental response. Although somewhat complicated, the frustration hypothesis has a certain intuitive appeal. If you sometimes, but not always, receive a good grade for your school projects, you are likely to embark on a new project with more vigor than the person who always or never received good grades. Past successes have taught you to give it another try, and past failures have taught you to put out extra effort in your new venture. The person who always gets good grades doesn't have the added impetus of the fear of failure, and the person who always fails is disinclined even to start on another, presumably inadequate, project.
Hypothesis, need-reduction The contention that all primary reinforcements entail a decrease in a biological survival need of the organism. The need-reduction hypothesis antedated the drive-reduction hypothesis*, and was clearly refuted by evidence that a substance such as saccharine, which is nondigestible and hence cannot affect the organism's need state, nevertheless can serve as a very effective reinforcer.
Hypothesis, prepotent response The hypothesis that operant/instrumental reinforcement results when a high-probability (prepotent) response follows a lower-probability response. For example, eating is a high-probability behavior for a hungry organism, so enabling the organism to eat by giving food reinforces lower-probability behaviors such as running or pressing bars. More generally, letting a person (including yourself) do something he or she wants to do can reinforce doing things that should be done.
_Imitation The attempt to emit the behavior observed in another organism. Many species appear to have an innate tendency to imitate the behavior of similar organisms, but imitation is also a learnable response. The imitated behavior must already be substantially available in the organism's repertoire for imitation to occur. (See Learning, vicarious.)
_Imprinting Indep Historic S-Var (F-E) A phenomenon observed in various species of fowl, such that an infant organism develops a strong attachment to whatever object it encounters during a particular time (the critical period) after hatching. The infant's attachment is revealed in following the object around and in emitting distress calls if the object is removed. Stronger evidence of imprinting appears at maturity. Imprinting reflects a powerful effect of experience, but does not readily fit into any conventional learning paradigm*.
_Individual Differences Indep Current O-Var (Many paradigms-E)
_Induction Indep Historic C-Var (C-I)
Inference Systematic Analyses: Systematic
_Information Value of S Indep Current S-Var (C-E)
_Inhibition (inhib) A general term for a hypothetical process that opposes a positive excitatory tendency. Inhibition is usually said to develop as a result of nonreinforcement.
Inhibition, conditioned Indep Current R-Var (C-I) An inhibitory (decremental) property of an initially neutral stimulus that is acquired as a result of signaling nonreinforcement when it is combined with a normally positive (reinforced) stimulus. Verbal stimuli, such as a parental threat to deny children some fun experience unless they behave, are common conditioned inhibitors with humans.
Inhibition, external Indep Current S-Var (Many paradigms-I) If something unusual happens shortly before the presentation of the stimulus for a learned response, there will be a decrement in the performance of that response. As might be expected, the closer in time the unusual event is to the conditioned stimulus and the more distracting the unusual event is, the greater the disruption in performance. This phenomenon can be observed in any of the learning paradigms and can be understood as a special case of a stimulus generalization decrement*. The unusual event naturally changes the context, thus changing the total stimulus situation and leading to poorer performance.
Inhibition, latent Indep Historic S-Var (C-I) The decremental effect of pre-exposing the organism to a stimulus before it is used in a conditioning paradigm. If the orienting response to a stimulus has been habituated as a result of repeated exposure in isolation, future learning with respect to that stimulus is retarded and not very persistent.
_Innate Performance Factors Indep Current R-Var (R-E) A result of genetic factors; inherited, unlearned. Reflexes and instincts illustrate innate behavioral tendencies.
_Inner Speech Indep Current R-Var (R-E)
_Insight Indep Current R-Var (F-E)(R-E)(S-E)
_Insoluble Discrimination Interdep Current C-Var (E-E)
_Instinctive Behavior Indep Current R-Var (F-E) An unlearned disposition to respond in a particular way when exposed to appropriate (releasing*) stimuli. The word 'instinct' is preferred to 'reflex' when the behavior in question is of a molar nature. The most elaborate instincts in animal behavior revolve around reproduction, but even in that context instinctive behavior can be shaped to some extent by practice.
_Instructions, Effects of Indep Current O-Var (C-E)(I-E)
Instructions, on Extinction Indep Current O-Var (C-I)
_Instrumental Conditioning Systematic Analyses: Paradigms
_Intake, Food/Water Indep Current R-Var (F-E)
_Intensity, stimulus A quantitative dimension of a stimulus event, e.g., the brightness of a light, the loudness of a tone, etc.
_Interaction Systematic Analyses: Processes
  Classical-Avoidance Indep Historic C-Var (A-E)
  Classical-Classical Indep Current C-Var (C-E)
  Classical-Discrimination Indep Historic C-Var (D-E)
  Classical-Escape Indep Historic C-Var (E-E)
  Classical-Operant Indep Current C-Var (O-E)
  Instrumental-Classical Indep Current S-Var (I-E)
  Operant-Discrimination Interdep Historic C-Var (D-E)
  Operant-Escape Indep Historic C-Var (E-E)
  Within-Paradigm Indep Current S-Var (Many paradigms-E)
_Intertrial Behavior Indep Current R-Var (A-E)
_Interval, interstimulus (ISI) Indep Current C-Var (C-E) Interdep Historic C-Var (F-E) In classical conditioning*, the interval of time between the onset of the CS and the onset of the US*. For many response systems (such as the eyeblink), the optimum ISI is about one second or less, but for other systems (such as salivation), conditioning can be obtained with ISIs of several minutes. In all cases the ISI can be too long to be effective.
Interval, Intertrial In(ter)dep Current C-Var (Many paradigms-E/I)
Interval, Shock-Shock Interdep Current C-Var (A-E)
_Irrelevant Incentive Learning Indep Historic S-Var (S-E)
_Language Systematic Analyses: Processes
_Latent Learning Indep Historic S-Var (S-E)
_Learning A relatively persistent hypothetical process resulting from experience and reflected in a change in behavior under appropriate circumstances. By 'hypothetical' we mean that learning itself cannot be directly observed; we can only infer that learning has occurred if we see some change in an organism's behavior. Thus, I cannot see what you've learned about the Psychology of Learning, but if you do better on the final exam than you could have done without studying the subject, I can infer that you have learned something and, presumably, the better you do on the exam, the more you have learned. To be a genuine reflection of learning, the change in behavior must be relatively persistent; indeed there are good reasons to believe that most learning is, for all practical purposes, permanent. It is also necessary to rule out other possible reasons for the change such as improved muscle tonicity, fatigue, or motivational factors.
Learning, differentiation (See Differentiation, response)
Learning, discrimination (See Discrimination, stimulus)
Learning, escape A paradigm in which an organism must learn a response in order to terminate an aversive state of affairs. Aversive stimuli typically elicit unlearned responses and these may be successful; for example, an untrained person who has fallen overboard from a boat might manage to stay afloat by hectic movement of arms and legs. Escape learning involves acquiring new responses to cope with an aversive situation, at least reducing its aversiveness. Although swimming is also an enjoyable sport, it is initially learned as an escape from the aversive experience of sinking under water.
Learning, latent Learning that is not apparent in an organism's behavior. There may not be any immediate change in behavior in spite of exposure to some potential learning situation, but an appropriate change may later be observed when the organism is adequately motivated to display what has been learned.
Learning, observational (See Learning, vicarious)
Learning, vicarious A change in behavior as a result of observing another organism being exposed to the contingencies that prevail in a learning paradigm*. The use of the term "vicarious" implies the presumption that the observer reacts emotionally to the emotionally-significant events experienced by the model.
Learning set If a primate is exposed to a series of learning tasks of the same type (e.g., discrimination between objects, matching-to-sample*, PA learning, etc.), the number of trials required to learn decreases. This is true even though the actual stimuli involved in each task are different. Functionally, the subject has learned how to learn that particular type of task. A learning set is specific to the type of task practiced. However, educated adult humans have learned to learn many types of tasks.
_Linguistic Behavior Indep Current R-Var (O-E)
_Manipulative Behavior Indep Current R-Var (R-E)
_Map, Cognitive Indep Current R-Var (R-E) A hypothetical cognitive process in which the organism is assumed to use a mental map of the environment when responding with respect to the spatial context. A major historical controversy was whether organisms learn mazes in terms of a cognitive map or in terms of a sequence of responses.
_Matching-to-Sample See Conditional Discrimination
_Maze "Bright"/"Dull" Indep Current O-Var (S-E)
_Mediation Making one or more responses to an initial stimulus whose response-produced feedback cues the designated response. With humans, natural-language verbal mediators are most common. For example, the designated association black-Christmas might use "white" as a mediator (black-white, white-Christmas). However, any response, including an image, can serve a mediating function, and there may be a fairly long chain of responses involved in the mediation of a single association.
_Memory Systematic Analyses: Systematic See Retention A hypothetical process enabling the recall of information to which an organism was previously exposed. Learning and memory are often said to be opposite sides of the same coin, learning being the acquisition of associations and memory being their utilization.
Models Systematic Analyses
_Modification, Behavior Although the expression would be appropriate in any situation in which learning principles are used to change an organism's behavior, behavior modification typically refers to clinical treatment procedures based on established laboratory methods. The basic goal of behavior modification is to extinguish or otherwise eliminate undesirable, maladaptive behavior and replace it with newly learned desirable, adaptive behavior. Opponents of such techniques contend that they are only symptomatic treatments that do not deal with the underlying cause of the original behavior.
_Motivation (motiv) Systematic Analyses: Processes Motivation is the generic term for any temporary state property of the organism that serves to arouse the organism and energize (potentiate) its behavior.
Motivation, Irrelevant Indep Current O-Var (I-E)(S-E)
Motivation, opponent process A state, resulting from the occurrence of an emotionally-significant event, that is opposite in hedonic (affective, emotional) value from the original event. For example, the "high" resulting from drug use is followed by a "low" that motivates further consumption of the drug.
Motivation, primary Indep Current O-Var (Many paradigms-E) Motivational states that arise without learning. There are two classes of primary motivational states, those produced by deprivation of some commodity and those produced by aversive stimulation. (Same as Drive, primary.)
Motivation, secondary Motivational states that depend on appropriate learning experiences for their existence. The principal basis of secondary motivation is classical conditioning*; an originally neutral stimulus, if paired with a primary aversive stimulus, acquires secondary motivating (fear*) properties. (Same as Drive, secondary.)
_Motor Programs Indep Current R-Var (R-E)
_Need, biological A condition arising from deprivation of any commodity that is necessary for survival of the organism.
_Neophobia
_Neural Concomitants Indep Current R-Var (C-E/I)(I-E)(O-E)
Neural Conditioning Indep Current R-Var (C-E)
Neurological Insults Indep Current O-Var (Most paradigms-E/I)
_Novelty/Complexity Indep Current S-Var (S-E)
Obituary Systematic Analyses: Systematic
_Observational Learning Indep Historic S-Var (Many paradigms-E)
_Observing Response Interdep Current S-Var (D-E)
_Oddity Learning Indep Current S-Var (D-E)
_Odor Cues Indep Current S-Var (S-E)
_Omission Training Interdep Current C-Var (A-E) A learning paradigm in which an event, usually an unconditioned stimulus, is scheduled to occur provided the organism does not make a response.
(Avoidance conditioning is a special case of omission training where the scheduled event is aversive.)
_Operant Conditioning (opcond) Systematic Analyses: Paradigms
Operant Level Indep Current R-Var (O-E) The rate at which a freely-available response is emitted without any explicit reinforcement*. For example, you might watch for meaningless mannerisms made by a lecturer and count how frequently they occur.
_Overshadowing Indep Current S-Var (C-E) When a compound of two stimuli is used as a CS*, only one of them may become conditioned. It is usually the more intense, conspicuous stimulus that is conditioned, presumably because it overshadows perception of the weaker stimulus.
_Overtraining Interdep Current C-Var (Many paradigms-E)
Overtraining, on Extinction Interdep Current C-Var (Many paradigms-I)
_Paired Performance Interdep Current C-Var (A-E) Indep Current S-Var (D-E)
_Paramecia Indep Current O-Var (C-E)
_Partial Reinforcement
  Acquisition Indep Current S-Var (Many paradigms-E)
  Extinction Interdep Historic C-Var (Many paradigms-I)
_Passive Avoidance Interdep Current C-Var (I-E)
_Pattern String Problem Indep Current R-Var (R-E)
_Perception Systematic Analyses: Systematic
_Place/Response Learning Indep Current R-Var (S-E)
_Placement, Direct Indep Historic S-Var (S-E)
_Planaria Indep Current O-Var (S-E)
_Polydipsia, Schedule Induced Indep Current R-Var (O-E)
_Post-Conditioning Treatments Indep Historic S-Var (C-E)
_Preconditioning, sensory A method for determining whether sensory conditioning can occur in animals. First, two neutral stimuli such as a tone and a light are paired; this is the preconditioning phase. Then one of them is conditioned with an effective unconditioned stimulus such as food. Finally, the other stimulus is presented. If a CR occurs to the test stimulus that was never paired with the US, it is inferred that the two neutral stimuli became associated during the preconditioning phase.
_Predifferentiation, stimulus Exposing an organism to stimuli later to be used in a discrimination learning paradigm. During predifferentiation, no explicit contingencies of reinforcement are involved, but preexposure may nevertheless facilitate the subsequent discrimination.
_Pre-exposure, Stimulus Indep Historic S-Var (Many paradigms-E/I)
_Prefeeding Indep Historic S-Var (S-E)
_Preference, Initial Indep Current S-Var (D-E)
_Preparedness Hypothesis Indep Current S-Var (C-E)
_Proprioception Indep Current S-Var (S-E)
_Priming Indep Current S-Var (C-E)
_Punishment (pun) Systematic Analyses: Processes Interdep Current C-Var (Many Paradigms-E/I)
Punishment The occurrence of an event shortly following a response that leads to a decrease in the probability of recurrence of that response. (The word 'punishment' is also frequently used simply because the intent is to eradicate some undesirable behavior. The student must be careful in deciding whether the word is being used in this more casual sense.) As with reinforcement*, there is no implication that the response actually produced the punisher or that the organism be aware of the contingency.
Punishment, primary negative Following a response with the removal of an innately emotionally-positive event. Candy is such an event for most children, so taking candy away from a child for misbehaving would be an instance of primary negative punishment. Note that the operation involves taking away something that the organism already has; the operation of not giving an emotionally-positive event is extinction*.
Punishment, primary positive Following a response with the presentation of an innately emotionally-negative event. Electric shock is the most common punisher used in the laboratory, but any painful event (such as a physical blow, a bite, a very hot object, etc.) can be used as a positive punisher. Generally speaking, any events that function as positive punishers can, by their removal, serve as negative reinforcers*.
Punishment, secondary negative Following a response with the removal of an event that has acquired emotionally-positive properties. Such events are secondary reinforcers*, having acquired their properties by being paired with primary reinforcers. Hence, taking money away from a person, as is done with fines for breaking laws, constitutes secondary negative punishment.
Punishment, secondary positive Following a response with the presentation of an event that has acquired emotionally-negative properties. The event is a secondary motivator as a result of having been paired with primary aversive events. Thus, after the word "bad" has been paired with painful consequences, the word itself can be used as a secondary positive punisher.
Punishment, varied Although little research has been done on the topic, it should be clear that, logically, there are as many schedules and conditions of punishment as there are schedules and conditions of reinforcement. Varied punishment means that some dimension of the punishment, such as its intensity, is varied from occasion to occasion. Varied punishment is less effective in suppressing behavior than is constant, intense punishment, and varied punishment may actually increase the persistence of the behavior.
_Pupillary Conditioning Indep Current R-Var (C-E)
_Radial Maze Performance Interdep Current C-Var (S-E)
_Reacquisition In(ter)dep Historic C-Var (Many paradigms-I)
_Reaction Time (RT) Indep Current R-Var (R-E)
_Ready Signal Indep Current S-Var (C-E)
_Rearing Conditions Indep Historic O-Var (D-E/I)
_Reasoning Indep Current R-Var (S-E)
_Recognition Systematic Analyses: Systematic Indep Current R-Var (F-E)
_Redintegration Indep Current S-Var (C-E)
Reflexive Behavior Indep Current R-Var (R-E)
Reflex A simple, unlearned response elicited automatically by an appropriate stimulus. The knee-jerk to a tap on the kneecap is a familiar reflex in humans.
Reflex, orienting/investigatory The unlearned response of turning toward a source of stimulation. Orienting responses are most commonly visual, and are elicited by novel stimuli in direct proportion to their intensity.
Reinforcement (reinf/rf) Systematic Analyses: Processes Interdep Current C-Var (I-E)
  Amount, Varied Indep Current S-Var (I-E)
  Conditions Indep Current S-Var (S-E)
  Delay, Varied Indep Current S-Var (I-E)
  Delay, Within-Chain Indep Current S-Var (I-E)
  Drive-Reduction Interdep Current S-Var (C-E)
  Effectiveness Interdep Current C-Var (I-E)
  Intertrial Indep Current S-Var (I-E)
  Irrelevant Indep Historic S-Var
  Quality Indep Current S-Var (I-E)
  Partial See Partial Reinforcement
  Schedule Indep Current S-Var (Many paradigms-E)
Reinforcement (Classical conditioning) The occurrence of the US.
Reinforcement (Operant/instrumental conditioning) The occurrence of an event shortly after a response that leads to an increase in the probability of recurrence of that response in the future. (Note 1. The event need not be produced by the response for reinforcement to occur.) (Note 2. The organism need not be aware of the contingency for reinforcement to occur.)
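The punishment entries above and the reinforcement entries that follow all turn on two distinctions: whether an event is presented or removed following a response, and whether that event is emotionally positive or aversive. (The primary/secondary distinction, concerning whether the event's value is innate or learned, is orthogonal and omitted here.) The sketch below is offered only as a summary device, not as part of the glossary itself; the function name and labels are hypothetical.

    def classify(operation, event):
        # Illustrative summary: 'operation' is "present" or "remove";
        # 'event' is "positive" (emotionally positive) or "aversive".
        if operation == "present":
            return "positive punishment" if event == "aversive" else "positive reinforcement"
        if operation == "remove":
            return "negative punishment" if event == "positive" else "negative reinforcement"
        raise ValueError("operation must be 'present' or 'remove'")

    # Examples drawn from the entries: shock presented, candy taken away,
    # and shock terminated (as in escape learning).
    print(classify("present", "aversive"))   # positive punishment
    print(classify("remove", "positive"))    # negative punishment
    print(classify("remove", "aversive"))    # negative reinforcement

Read against the entries: presenting food is positive reinforcement, presenting shock is positive punishment, removing candy is negative punishment, and terminating shock is negative reinforcement.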
Reinforcement, amount of Indep Current S-Var (Many paradigms-E) The quantity of reinforcer, such as the number of pellets of food given as reward. In general, operant/instrumental performance is an increasing function of the amount of reinforcement. This includes secondary reinforcement*; the more lavishly you praise another person's good performance, the more likely that person is to continue performing well.
Reinforcement, conditioned (See Reinforcement, secondary)
Reinforcement, correlated A condition in which some dimension of the reinforcer (amount, delay, etc.) is systematically related to some dimension of the response (speed, vigor, etc.). Presumably, a student's grades are correlated with the effort expended in studying. In general, organisms tend to maximize reinforcement when exposed to conditions of correlated reinforcement, but may not expend the extra effort if the greater reward is not "worth it."
Reinforcement, delay of Indep Current S-Var (Many paradigms-E) The time after an operant/instrumental response has been made before a reinforcer is given. In general, performance is lower the longer the delay of reinforcement, although the effect depends importantly on what the organism does during the delay interval. Delay is most detrimental if incompatible responses occur during the delay, whereas delay may have little or no effect if the organism can keep repeating the response during the delay.
Reinforcement, nondifferential Providing two stimuli with the same schedule and condition of reinforcement. Not only does nondifferential reinforcement lead to nondifferential behavior, but it also tends to retard future learning when differential reinforcement* is introduced.
Reinforcement, partial (PRF) A reinforcement schedule in discrete-trial paradigms (classical and instrumental conditioning) according to which reinforcement occurs on only part of the trials. (Note: It is not a case of giving only part of the reinforcement but rather of giving reinforcement part of the time.) The most common schedule is 50% PRF, meaning that half of the trials are reinforced; normally, the trials to be reinforced are randomly determined. However, reinforcing every other trial (alternating 50% PRF) is a partial reinforcement schedule that leads, with sufficient training, to appropriately alternating levels of performance.
Reinforcement, primary negative The termination of a stimulus event that is innately aversive*. Thus, the offset of an electric shock in escape learning is a common instance of primary negative reinforcement in the laboratory, as is the removal of a foreign object from your eye a common everyday example.
Reinforcement, primary positive The presentation of a stimulus that is innately positive, pleasant, or desirable. Giving food to a hungry rat is a common laboratory example, as is giving candy to a child a familiar everyday example.
Reinforcement, secondary negative The termination of a stimulus that has acquired aversive properties by having been paired with a primary aversive event. Turning off a light that signals an impending shock in an avoidance conditioning context is a typical laboratory example, as is the sight of a patrol car turning off into another street a familiar everyday instance.
Reinforcement, secondary positive The presentation of a stimulus that has acquired positive properties by having been paired with a primary positive event.
Turning on a light that has previously signalled food delivery is a laboratory instance, as is receiving money a common everyday example.
Reinforcement, varied A condition in which some dimension of the reinforcer (amount, delay, etc.) is varied from occasion to occasion. Behavior that has received varied reinforcement is more persistent than continuously reinforced behavior. (Note: PRF is a special case of varied reinforcement.)
_Relearning Interdep Historic C-Var (Many paradigms-I) Reacquisition of a response that has been extinguished*. Relearning is much more rapid than original learning, possibly because reintroduction of reinforcement reinstates part of the stimulus context that prevailed during original acquisition. A person who has acquired the smoking habit may quit but will relearn all of the habitual responses almost immediately upon resuming the behavior.
_Respondent A type of behavior that can be elicited reflexively by an appropriate stimulus. Respondents are especially well suited to classical conditioning*.
Response (R) Systematic Analyses: Processes
  Amplitude Indep Current R-Var (C-E)
  Conditioned Emotional Indep Current C-Var (O-E)
  Force Interdep Current S-Var (O-E)
  Incompatible In(ter)dep Current C-Var (Many paradigms-I)
  Latency Indep Current R-Var (Many paradigms-E)
  Nature of Indep Current R-Var (Many paradigms-E)
  Observing Indep Current R-Var (D-E)(O-E)
  Sequential re PRF Indep Current S-Var (I-E)
  Visceral Indep Current R-Var (O-E)
_Response (R) Systematic Analyses: Processes Formally, we define a response as any glandular or muscular activity of an organism that can be reliably observed by an experimenter. Although this definition is a good one in principle, it may be too inclusive for our present purposes because not all glandular/muscular activities of all organisms are learnable. Accordingly, we actually work with a FUNCTIONAL definition: a response is any activity of an organism that obeys the principles of learning. This is, of course, a circular approach, but it is useful because it bypasses some very knotty problems. For example, we frequently refer to the "bar-press" response of rats, by which we mean that a switch connected to a bar protruding into the rat's environment was somehow closed by the rat. Now actually, the bar-pressing response is a rather long behavior chain involving approaching the bar from some direction, raising up on the hind legs, placing one or both front paws on the bar, pressing down with at least the minimum required force, releasing the bar, and turning toward the reward hopper. It is not entirely clear when the response begins and ends, but the functional approach simply finesses this question altogether. Since we obtain lawful relationships by counting the closure of the switch as a response, we can proceed directly with an experimental analysis.
Response, alimentary Behaviors associated with ingestion such as sucking, chewing, salivating, and swallowing.
Response, anticipatory A response that occurs before its original time as a result of being regularly preceded by a stimulus to which it has become conditioned*. The CR in classical conditioning is an instance of an anticipatory response, but such responses also occur in many other situations. A racer "jumping the gun" is a common instance of anticipatory responding.
Response, avoidance (AR) A response that precludes or postpones the occurrence of an aversive event.
Avoidance responses fall within the larger categories of operant or instrumental responses because they affect the organism's environment, but rather than causing an event to happen, they cause a scheduled event not to happen. (See Conditioning, avoidance)
Response, conditioned (CR) A learned response occasioned by an originally neutral stimulus. The term can legitimately be used in the context of any conditioning procedure, but it is preferable to restrict its usage to classical conditioning, where the CR is elicited by the CS prior to or in the absence of the US.
Response, conditioned emotional (CER) A hypothetical CR presumed to result from pairing an originally neutral stimulus with a noxious, painful stimulus. Although the CER cannot be directly observed, it can be monitored by its effect on other behavior. Specifically, if an organism is emitting a reinforced operant response, and a CS paired with an aversive stimulus is presented, the rate of operant responding is suppressed. This conditioned suppression is observed even if the CS has been paired with the aversive stimulus in another situation and without the operant response occurring at the time. The CER is essentially the same as the fear response except that no additional properties, such as motivation, are ascribed to the CER.
Response, consummatory The terminal response appropriate to deprivation drives. Eating, copulating, and drinking are specific consummatory responses.
Response, defense An unconditioned defense response is the unlearned, reflexive response elicited naturally by an aversive stimulus. A blink to a puff of air to the eye, and a tear to a cinder in your eye, are familiar unconditioned defense responses. Conditioned defense responses are ones elicited by a CS that antedates the occurrence of an aversive stimulus*. Such responses do not avoid the aversive stimulus, but may to some extent reduce its aversiveness. For example, a conditioned eyeblink reduces the minor discomfort from a puff of air used as the US in a classical defense conditioning situation.
Response, Delayed Indep Historic S-Var (D-E) See Delayed Response
Response, escape (ER) A response that successfully terminates an aversive stimulus*. Many escape responses are unlearned, consisting of withdrawal (e.g., jerking the hand away from a hot stove) or flight (e.g., running away from a spreading fire). In some situations, however, escape responses must be learned. For example, you have probably learned not to rub your eye if there is a cinder in it, and have learned other maneuvers designed to remove the painful object. (See Learning, escape.)
Response, incompatible Responses are said to be incompatible when they cannot be performed at the same time. Typically, we think of incompatibility as being physical; you cannot read and watch television concurrently. However, incompatibility can also be psychological; you cannot concentrate on your studies and worry about your personal problems simultaneously. People sometimes try to break undesirable habits by substituting an incompatible response, such as eating or chewing gum instead of smoking.
Response, observing A response that has no direct instrumental value but that obtains information relevant to the prevailing contingencies. For example, looking up the program schedule in the T.V. section of the newspaper is an observing response that may or may not lead on to watching T.V.
Response, unconditioned (UR) An overt reflex elicited by an unconditioned stimulus*. In this context, the term "unconditioned" can be translated as "unlearned," in the sense of not being acquired through experience. Note, however, that being "unconditioned" is not a property of the response itself . . . it is a property of the association of a response with a US. Hence, an eyeblink is a UR when it is elicited by a US such as a puff of air, but an eyeblink can also be a CR or a voluntary response. (Sometimes called "unconditional.")
Response/Place Learning Indep Current R-Var (S-E)
_Retention In(ter)dep Historic C-Var (Many Paradigms-E)
_Reversal Indep Historic R-Var (C-I)
Reversal Learning Interdep Current R-Var (D-I)(I-I)(O-I)
Reversal/Nonreversal Shifts Interdep Current C-Var (D-I)
Reversal, Presolution Interdep Current C-Var (D-E)
_Reward A class of reinforcers that includes desirable objects of some kind. We sometimes loosely use 'reward' and 'reinforcement' interchangeably, but not all reinforcers are rewards, and 'reward' is better used to refer to the object itself.
_Schedules of Reinforcement Interdep Current C-Var (O-E) A schedule is a rule or a program determining when a designated event will occur. In general, schedules can be based on time (as in an airplane schedule) or on enumeration (as in a tax schedule). Schedules of reinforcement are based on the time elapsed since the preceding reinforcement*, or on the number of responses emitted since the preceding reinforcement. (An illustrative sketch of these schedule rules appears below, following the "Sensory Preconditioning" entry.)
Schedule of reinforcement, continuous (CRF) A schedule in which every response is reinforced.
Schedule of reinforcement, fixed interval (FI) When the availability of reinforcement is determined by elapsed time since the preceding reinforcement, it is an interval schedule, and when that interval is constant, it is a fixed interval schedule. A person who likes to hear a grandfather clock chime the hour is on an FI 1-hour schedule.
Schedule of reinforcement, fixed ratio (FR) When the availability of reinforcement is determined by the number of responses emitted since the preceding reinforcement, it is a ratio schedule, and when that number is constant, it is a fixed ratio schedule. A person on piece-rate wages is working on an FR of some specified number of pieces.
Schedule of reinforcement, varied interval (VI) When the availability of reinforcement is determined by elapsed time since the preceding reinforcement, an interval schedule is in force, and if that interval varies from occasion to occasion, it is a varied (or variable) interval schedule. The amount of time a hitch-hiker must wait before being offered a ride illustrates a VI schedule.
Schedule of reinforcement, varied ratio (VR) When the availability of reinforcement is determined by the number of responses emitted since the preceding reinforcement, it is a ratio schedule, and if that number varies from occasion to occasion, it is a varied (or variable) ratio schedule. The number of times that you might have to pump the accelerator of an old car in order to get it started illustrates a VR schedule.
_Secondary Reinforcement Indep Historic C-Var (Many paradigms-E)
_Self-Control See Decision Making See Control, self
_Self-Recognition Indep Current R-Var (R-E)
_Sensory Conditioning Indep Current S-Var (C-E)
Sensory Preconditioning Indep Current S-Var (C-E)
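As promised under "Schedules of Reinforcement" above, the following sketch illustrates the interval/ratio and fixed/varied distinctions by checking whether reinforcement has become available under each rule. It is offered only as a clarifying illustration, not as a laboratory procedure from the glossary; the function names and parameters are hypothetical, and the exponential sampling used for the varied schedules is merely one convenient assumption.

    import random

    def fixed_interval_available(seconds_since_reinf, interval):
        # FI: reinforcement becomes available once a constant amount of time
        # has elapsed since the preceding reinforcement.
        return seconds_since_reinf >= interval

    def fixed_ratio_available(responses_since_reinf, ratio):
        # FR: reinforcement becomes available after a constant number of
        # responses since the preceding reinforcement.
        return responses_since_reinf >= ratio

    def next_varied_interval(mean_interval):
        # VI: the required interval changes from occasion to occasion;
        # here it is drawn from an exponential distribution (an assumption).
        return random.expovariate(1.0 / mean_interval)

    def next_varied_ratio(mean_ratio):
        # VR: the required count changes from occasion to occasion;
        # again, the sampling distribution is only illustrative.
        return max(1, round(random.expovariate(1.0 / mean_ratio)))

    # FI 60 s: has a full minute passed since the last reinforcement?
    print(fixed_interval_available(75, 60))   # True
    # FR 20: have twenty responses been emitted since the last reinforcement?
    print(fixed_ratio_available(12, 20))      # False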
Setting Operations Indep Current O-Var (Most paradigms-E) The motivating and instructing operations performed by an experimenter before actually beginning an experiment. With animals, setting operations typically include deprivation of some commodity, familiarization with the apparatus, and gentling to the experience of being handled. With humans, setting operations are typically verbal: telling the subject about the nature of the task and the stimuli to be experienced, giving assurances that no painful events are involved, and soliciting cooperation.
_Sex, Effects of (See Gender)
Sexual Behavior
_Shaping A technique for obtaining an operant response by reinforcing successive approximations to the desired behavior. Shaping is like the game of "you're getting warmer," and is based on the Principle of Response Generalization. Reinforcing one response will lead the organism to emit a variety of behaviors of a similar kind, and the objective is to reward another response that is still closer to the one designated for study.
_Shock-Right See Cue, Aversive
_Sign-Tracking
_Similarity, response A hypothetical dimension of "sameness" of responses. Presumably, response similarity is related to the similarity of the response-produced cues (feedback*) coincident with the response. Hence responses that "feel" similar, or sound or look similar, or have comparable effects on the environment are conceptually similar responses.
Similarity, stimulus Stimulus similarity can often be measured in physical terms such as the pitch of tones or the colors of lights. Similarity is sometimes measured in terms of the number of elements that complex stimuli have in common. But in the last analysis, stimulus similarity is measured functionally . . . events to which organisms respond similarly are similar. Normally dissimilar events can acquire functional similarity if the organism learns to make the same response to them. The analysis is that the feedback from the common response mediates acquired similarity of cues. For example, learning a concept such as "dog" increases stimulus generalization of responses learned with one dog to other dogs (Fido-dog-bites so Spot-dog-bites).
_Simultaneous-Successive Interdep Current C-Var (D-E)
_Skill Indep Current R-Var (R-E)
_Sleep Indep Current R-Var (F-E)
_Social Behavior Systematic Analyses: Systematic
_Spatial Learning Systematic Analyses: Paradigms
Spatial Differentiation Interdep Current R-Var (S-I)
_Species, Effects of Indep Current O-Var (Many paradigms-E) Browsing the Quad-L Database: sC_ Comparison of experimental variables on different species.
_Speech Indep Current R-Var (R-E)
_Speed, Response Interdep Current C-Var (I-E) Indep Current R-Var (I-E)
_Spinal Organism Indep Current O-Var (C-E)
Spontaneous Recovery In(ter)dep Historic C-Var (Many paradigms-I) If a rest interval away from the situation is given following experimental extinction, there is usually a reappearance of the conditioned response*. Spontaneous recovery is an increasing function of the amount of recovery time, with a usual practical limit of about half the performance level that prevailed before extinction. However, with very prolonged intervals, almost complete recovery has been reported, leading to the interpretation that extinction does not eradicate the excitatory process built up during acquisition.
_State-dependent Learning Indep Current O-Var (C-E)(S-I)
Stimulation, Post-trial Indep Historic C-Var (C-E)
_Stimulus (S) Systematic Analyses: Processes Indep Current S-Var (Many paradigms-E) Browsing the Quad-L Database: S_ Any event acting on a suitable receptor. There is no way to know, in advance, whether the organism possesses a 'suitable' receptor for the event in question.
The human ear, for example, cannot hear sounds at high pitches that other organisms (e.g., dogs) can hear. Hence, it is an empirical question whether the event of interest is indeed a stimulus for the organism of interest. A long-standing controversy concerns the reality of 'extra-sensory perception,' the contention that some humans can 'read' the mind of another human. For this reason, it is preferable to refer to a '_putative' stimulus, an event to which the organism is exposed and which may or may not actually function as a stimulus.
  Adaptation Indep Historic S-Var (A-E)(C-E)
  Compound, temporal Indep Current S-Var (A-E)(C-E)
  Conditions Indep Current S-Var (I-E)
  Decrease Indep Current S-Var (Many paradigms-E)
  Duration Indep Current S-Var (A-E)(C-E)
  Effectiveness Indep Current S-Var (E-E)(S-E)
  Effectiveness Interdep Historic C-Var (F-E)
  Generality Indep Current S-Var (C-E)
  Habituation Indep Historic S-Var (A-E)(C-E)
  Intensity Interdep Historic C-Var (F-E)
  Novelty/Complexity Indep Current S-Var (S-E)
  Pre-Exposure Interdep Historic C-Var (F-E)
  Pre-Exposure Indep Historic S-Var (D-E)
  Pre-Training Interdep Historic C-Var (D-E)
  Preference Indep Current S-Var (D-E)
  Quality Indep Current S-Var (D-E)
  Subliminal Indep Current S-Var (C-E)(C-I)
  Verbal Indep Current S-Var (C-E)
Stimulus, Unconditioned
  Duration Indep Current S-Var (C-E)
  Effectiveness Indep Current S-Var (C-E)
  Effectiveness Interdep Historic C-Var (F-E)
  Intensity Indep Current S-Var (Many paradigms-E)
  Intensity Interdep Historic C-Var (F-E)
  Varied Indep Current S-Var (C-E)
Stimulus, antecedent A stimulus that precedes an unconditioned stimulus in classical conditioning*, or occurs before the response in instrumental conditioning*. The more conventional term is "conditioned stimulus*," and although that term is frequently used, it is gratuitous to identify a stimulus as a CS before it has even been presented in a learning context.
Stimulus, compound, simultaneous Indep Current S-Var (A-E)(C-E)(D-E) A combination of two (or more) identifiable stimulus events as the antecedent (or conditioned*) stimulus. Typically, the elements of a compound are presented simultaneously and are also coterminous (end at the same time), but other timing arrangements are possible in temporal compounds.
Stimulus, conditioned (CS) A stimulus that precedes an unconditioned stimulus in classical conditioning, or that antedates the response in instrumental conditioning. The term "conditioned" can roughly be translated as "learned in this situation." That is to say, the CS is the event that is expected to acquire (or has acquired) the capacity to evoke the CR. A better translation of Pavlov's original term is '_conditional.' This word better captures the fact that the effectiveness of the stimulus in producing a CR is conditional on its being followed by the US.
Stimulus, drive Hypothetical cue properties associated with each drive state. It is presumed that interoceptive stimuli inform the organism whether a drive is present and, if so, how intense it is. For example, you generally know if you are anxious about an upcoming exam. The particular importance of drive stimuli is that they are a part of the stimulus complex in which learning occurs and subsequently help guide the organism toward adaptive behavior.
Stimulus, exteroceptive A stimulus event originating outside the organism that stimulates an appropriate receptor on the surface of the body. In addition to sights and sounds, tastes and smells are exteroceptive stimuli even though the receptors are inside the mouth and nose.
Stimulus, functional The stimulus as it is perceived by the organism. As distinct from the nominal stimulus, which is the event as it actually occurs in the world, the functional stimulus depends on the orientation of the organism's receptors and also on any attentional effects.
Stimulus, interoceptive Stimuli arising from within the body. These include kinesthetic and proprioceptive feedback from the muscles and joints and also various "gut" stimuli associated with various states of the organism (e.g., hunger pangs, headache, etc.).
Stimulus, neutral A stimulus that initially has no tendency to elicit the response of interest. (If the intended meaning is 'emotionally neutral,' that is normally stated.)
Stimulus, unconditioned (US) A stimulus event that automatically elicits a reflexive response, the unconditioned response*. (Also called "unconditional.")
Stimulus, asynchronism The separation in time between the onsets of two stimuli, usually the antecedent (conditioned*) stimulus and the unconditioned stimulus in the classical conditioning paradigm*.
Stimulus trace A hypothetical process that results from the occurrence of a stimulus and that persists with decaying strength for some period of time even after the stimulus has terminated.
Stimulus Order Indep Current C-Var (C-E)(A-E)
_Superstition The continued performance of a response that chanced to be followed by reinforcement*. Because reinforcement increases the likelihood of antedating responses without regard to whether they actually produced the reinforcement, adventitious reinforcement is just as effective as deliberate reinforcement.
_Suppression, Conditioned (CER) See Response, Conditioned Emotional A decrease in the rate of emission of a positively-reinforced operant response during the time of action of a stimulus event that signals the impending occurrence of an aversive event. A dramatic example occurred during World War II when the sound of an approaching rocket suppressed talking by people in an air-raid shelter in England.
_Symbolic Behavior Systematic Analyses: Systematic
Symposia Systematic Analyses
_Taste Aversion See Aversion, Conditioned
_Temporal Conditioning Indep Current C-Var (C-E)
Textbooks Systematic Analyses: Systematic
Theoretical Analyses Systematic Analyses
_Theory, continuity The proposition that learning is a gradual, cumulative process, with some learning occurring on every trial. Specifically, in a discrimination learning situation, the presumption is that the organism learns about the correctness of the relevant cues*, little by little, on every trial even if performance appears to be hovering at the chance level. A common assumption is that learning progresses on each trial at a rate that is a constant fraction of the difference between the level of learning before the trial and the upper limit. (A worked illustration of this assumption appears at the end of this glossary.)
Theory, noncontinuity A theory that learning is not a gradual, cumulative, continuous process but instead occurs in a sudden, insightful manner. Noncontinuity analyses are typically applied to discrimination learning, where it is assumed that the organism tests hypotheses about the solution.
Theory, reinforcement The proposition that reinforcement*, whatever its nature, is necessary for learning to occur. Such a theory may contend that learning occurs so long as some reinforcement occurs, but a pure reinforcement theory postulates that the amount of reinforcement directly determines the amount of learning.
(Note that reinforcement theory pertains to the effect of reinforcement on learning; all theories accept the role of reinforcement in determining performance.) Reinforcement theory has generally been abandoned by contemporary learning theorists, but reinforcement is still widely practiced in many everyday contexts.
_Timing Systematic Analyses: Systematic
_Tool-Use Behavior Systematic Analyses: Systematic Indep Current R-Var (R-E) Browsing the Quad-L Database: tH_ Early efforts to identify uniquely human behavior focused on our use of tools as instruments for obtaining reinforcers. An even more demanding criterion is making or modifying an object so it can be used as a tool.
_Trace Conditioning Indep Current C-Var (C-E)(A-E) See Conditioning, Trace
_Transfer Interdep Historic C-Var (S-E) Interdep Current C-Var (Many paradigms-I)
_Transposition Indep Current S-Var (D-I) Responding to a new set of stimuli on the basis of the same relation that was learned with another set of stimuli. Having been reinforced for choosing the larger (or brighter, heavier, etc.) of a pair of stimuli, the subject is likely to choose the larger of a new pair of stimuli. Transposition may be thought of as a special case of stimulus generalization*, except that several stimuli are involved on each presentation and the choice response is relative. Transposition has been observed in many species. You are probably most familiar with transposition in music, since you recognize a particular melody when it is played in a different key.
_Tropism An unlearned tendency to approach (positive tropism) or move away from (negative tropism) some natural stimulus event. Moths are positively phototropic (they approach light), as are many insects. Other common tropisms are geotropic (approach the earth) and heliotropic (approach the sun).
_Typewriting Indep Current R-Var (R-E)
_Unconscious Conditioning Indep Current C-Var (C-E)
_Variable Interval Schedule Interdep Current C-Var (O-E)
Variable Ratio Schedule Interdep Current C-Var (O-E)
_Variability Systematic Analyses: Processes
_Varied Reinforcement Indep Current S-Var (I-E)
Varied Reinf, on Extinction Interdep Historic C-Var (I-I)
_Vicarious Learning See Observational Learning
Vicarious Trial-and-Error Indep Current R-Var (D-E)(S-E)
_Voluntary Responses Indep Current C-Var (C-E)
_Volition Indep Current R-Var (R-E)
_Warning Signal Interdep Current C-Var (A-E)
_Within-Session Effects Indep Current C-Var (C-E) Interdep Current C-Var (A-E)
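Worked illustration promised under the "Theory, continuity" entry above. The constant-fraction assumption stated there can be written as a simple recursion; the fraction of one-half, upper limit of 1.0, and starting level of 0.0 used below are arbitrary illustrative values rather than figures taken from the glossary, and the small routine is only a sketch of the assumption.

    # Constant-fraction form of the continuity assumption: on each trial,
    # the level of learning moves a fixed fraction of the remaining
    # distance toward the upper limit.
    def continuity_curve(fraction=0.5, limit=1.0, start=0.0, trials=5):
        level = start
        history = []
        for _ in range(trials):
            level = level + fraction * (limit - level)
            history.append(round(level, 3))
        return history

    print(continuity_curve())   # [0.5, 0.75, 0.875, 0.938, 0.969]

Each successive gain is smaller than the last, yielding the gradual, negatively accelerated growth that the continuity position describes.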