Introduction To Learning And Behavior 4th Edition Russell A. Powell, P. Lynne Honey, Diane G. Symbaluk - Solutions
4. If you drink five soda pops each day and only one glass of orange juice, then the opportunity to drink _______ can likely be used as a reinforcer for drinking _______.
3. According to the Premack principle, if you crack your knuckles 3 times per hour and burp 20 times per hour, then the opportunity to _______ can probably be used as a reinforcer for _______.
2. The Premack principle states that a _______ behavior can be used as a reinforcer for a _______ behavior.
1. The Premack principle holds that reinforcers can often be viewed as _______ rather than stimuli. For example, rather than saying that the rat’s lever pressing was reinforced with food, we could say that it was reinforced with _______ food.
5. Research has shown that hungry rats will perform more effectively in a T-maze when the reinforcer for a correct response (right turn versus left turn) consists of several small pellets as opposed to one large pellet (Capaldi, Miller, & Alptekin, 1989). Chickens will also run faster down a runway
4. The motivation that is derived from some property of the reinforcer is called _______ motivation.
3. A major problem with drive reduction theory is that _______.
2. According to this theory, a s_______ reinforcer is one that has been associated with a p_______ reinforcer.
1. According to drive reduction theory, an event is reinforcing if it is associated with a reduction in some type of p_______ drive.
3. One suggestion for enhancing our behavior in the early part of a long response chain is to make the completion of each link more s_______, thereby enhancing its value as a s_______ reinforcer.
2. An efficient way to train a complex chain, especially in animals, is through b_______ chaining, in which the (first/last) link of the chain is trained first. However, this type of procedure usually is not required with verbally proficient humans, with whom behavior chains can be quickly established
1. Responding tends to be weaker in the (earlier/later) links of a chain. This is an example of the g_______ g_______ effect, in which the strength and/or efficiency of responding (increases/decreases) as the organism approaches the goal.
2. Within a chain, completion of each of the early links ends in a(n) s_______ reinforcer, which also functions as the _______ for the next link of the chain.
1. A chained schedule consists of a sequence of two or more simple schedules, each of which has its own _______ and the last of which results in a t_______ r_______.
3. To the extent that a gymnast is trying to improve his performance, he is likely on a(n) _______ schedule of reinforcement; to the extent that his performance is judged according to both the form and quickness of his moves, he is on a(n) _______ schedule.
2. In a(n) _______ schedule, the response requirement changes as a function of the organism’s performance while responding for the previous reinforcer, while in a(n) _______ schedule, the requirements of two or more simple schedules must be met before the reinforcer is delivered.
1. A complex schedule is one that consists of _______.
3. A child who is often hugged during the course of the day, regardless of what he is doing, is in humanistic terms receiving unconditional positive regard. In behavioral terms, he is receiving a form of non_______ social reinforcement. As a result, this child may be (more/less) likely to act out in order
2. In many mixed martial arts matches, each fighter typically receives a guaranteed purse, regardless of the outcome. In the Ultimate Fighter series, the winner of the final match is awarded a major contract in the UFC while the loser receives nothing. As a result, Dana is not surprised when he
1. During the time that a rat is responding for food on a VR 100 schedule, we begin delivering additional food on a VT 60-second schedule. As a result, the rate of response on the VR schedule is likely to (increase/decrease/remain unchanged).
3. As shown by the kinds of situations in which superstitious behaviors develop in humans, such behaviors seem most likely to develop on a(n) (VT/FT) schedule of reinforcement.
2. Herrnstein (1966) noted that superstitious behaviors can sometimes develop as a by-product of c_______ reinforcement for some other behavior.
1. When noncontingent reinforcement happens to follow a particular behavior, that behavior may (increase/decrease) in strength. Such behavior is referred to as s_______ behavior.
3. For farmers, rainfall is an example of a noncontingent reinforcer that is typically delivered on a _______ schedule (abbreviated _______).
2. Every morning at 7:00 a.m. a robin perches outside Marilyn’s bedroom window and begins singing. Given that Marilyn very much enjoys the robin’s song, this is an example of a _______ 24-hour schedule of reinforcement (abbreviated _______).
1. On a non_______ schedule of reinforcement, a response is not required to obtain a reinforcer. Such a schedule is also called a response i_______ schedule of reinforcement.
5. Frank discovers that his golf shots are much more accurate when he swings the club with a nice, even rhythm that is neither too fast nor too slow. This is an example of _______ reinforcement of _______ behavior (abbreviated _______).
4. On a video game, the faster you destroy all the targets, the more bonus points you obtain. This is an example of _______ reinforcement of _______ behavior (abbreviated _______).
3. In practicing the slow-motion form of exercise known as tai chi, Tung noticed that the more slowly he moved, the more thoroughly his muscles relaxed. This is an example of d_______ reinforcement of _______ behavior (abbreviated _______).
2. As Tessa sits quietly, her mother occasionally gives her a hug as a reward. This is an example of a _______ schedule.
1. On a (VD/VI) schedule, reinforcement is contingent upon responding continuously for a varying period of time; on an (FI/FD) schedule, reinforcement is contingent upon the first response after a fixed period of time.
4. In general, _______ schedules produce postreinforcement pauses because obtaining one reinforcer means that the next reinforcer is necessarily quite (distant/close).
3. In general, (variable/fixed) schedules produce little or no postreinforcement pausing because such schedules often provide the possibility of relatively i_______ reinforcement, even if one has just obtained a reinforcer.
2. On _______ schedules, the reinforcer is largely time contingent, meaning that the rapidity with which responses are emitted has (little/considerable) effect on how quickly the reinforcer is obtained.
1. In general, (ratio/interval) schedules tend to produce a high rate of response. This is because the reinforcer in such schedules is entirely r_______ contingent, meaning that the rapidity with which responses are emitted (does/does not) greatly affect how soon the reinforcer is obtained.
3. In general, variable interval schedules produce a (low/moderate/high) and (steady/fluctuating) rate of response with little or no _______.
2. You find that by frequently switching stations on your radio, you are able to hear your favorite song an average of once every 20 minutes. Your behavior of switching stations is thus being reinforced on a _______ schedule.
1. On a variable interval schedule, reinforcement is contingent upon the _______ response following a _______, un_______ period of _______.
5. On a pure FI schedule, any response that occurs (during/following) the interval is irrelevant.
4. Responding on an FI schedule is often characterized by a sc_______ pattern of responding consisting of a p_______ p_______ followed by a gradually (increasing/decreasing) rate of behavior as the interval draws to a close.
3. In the example in question 2, I will probably engage in (few/frequent) glances at the start of the interval, followed by a gradually (increasing/decreasing) rate of glancing as time passes.
2. If I have just missed the bus when I get to the bus stop, I know that I have to wait 15 minutes for the next one to come along. Given that it is absolutely freezing out, I snuggle into my parka as best I can and grimly wait out the interval. Every once in a while, though, I emerge from my cocoon
1. On a fixed interval schedule, reinforcement is contingent upon the _______ response following a _______, pr_______ period of _______.
4. As with an FR schedule, an extremely lean VR schedule can result in r_______ s_______.
3. An average of 1 in 10 people approached by a panhandler actually gives him money. His behavior of panhandling is on a _______ schedule of reinforcement.
2. A variable ratio schedule typically produces a (high/low) rate of behavior (with/without) a postreinforcement pause.
1. On a variable ratio schedule, reinforcement is contingent upon a _______, un_______ _______ of responses.
11. Graduate students often have to complete an enormous amount of work in the initial year of their program. For some students, the workload involved is far beyond anything they have previously encountered. As a result, their study behavior may become increasingly (erratic/stereotyped) throughout
10. Over a period of a few months, Aaron changed from complying with each of his mother’s requests to complying with every other request, then with every third request, and so on. The mother’s behavior of making requests has been subjected to a procedure known as “s_______ the r_______.”
9. A very dense schedule of reinforcement can also be referred to as a very r_______ schedule.
8. An FR 12 schedule of reinforcement is (denser/leaner) than an FR 75 schedule.
7. The typical FR pattern is sometimes called a b_______-and-r_______ pattern, with a pause that is followed immediately by a (high/low) rate of response.
6. An FR 200 schedule of reinforcement will result in a (longer/shorter) pause than an FR 50 schedule.
5. A fixed ratio schedule tends to produce a (high/low) rate of response, along with a p_______ p_______.
4. An FR 1 schedule of reinforcement can also be called a _______ schedule.
3. A mother finds that she always has to make the same request three times before her child complies. The mother’s behavior of making requests is on an _______ schedule of reinforcement.
2. A schedule in which 15 responses are required for each reinforcer is abbreviated _______.
1. On a(n) _______ schedule, reinforcement is contingent upon a fixed number of responses.
5. S_______ e_______ are the different effects on behavior produced by different response requirements. These are the stable patterns of behavior that emerge once the organism has had sufficient exposure to the schedule. Such stable patterns are known as st_______-st_______ behaviors.
4. When the weather is very cold, you are sometimes unable to start your car. The behavior of starting your car in very cold weather is on a(n) _______ schedule of reinforcement.
3. Each time you flick the light switch, the light comes on. The behavior of flicking the light switch is on a(n) _______ schedule of reinforcement.
2. On a c_______ reinforcement schedule (abbreviated _______), each response is reinforced, whereas on an i_______ reinforcement schedule, only some responses are reinforced. The latter is also called a p_______ reinforcement schedule.
1. A s_______ of reinforcement is the r_______ requirement that must be met in order to obtain reinforcement.
14. Define shaping. What are two advantages of using a secondary reinforcer, such as a sound, as an aid to shaping?
13. Define natural and contrived reinforcers, and provide an example of each.
12. Under what three conditions does extrinsic reinforcement undermine intrinsic interest? Under what two conditions does extrinsic reinforcement enhance intrinsic interest?
11. Define intrinsic and extrinsic reinforcement, and provide an example of each.
10. What is a generalized reinforcer? What are two examples of such reinforcers?
9. Distinguish between primary and secondary reinforcers, and give an example of each.
8. How does immediacy affect the strength of a reinforcer? How does this often lead to difficulties for students in their academic studies?
7. What are similarities and differences between negative reinforcement and positive punishment?
6. Define positive punishment and diagram an example. Define negative punishment and diagram an example. Be sure to include the appropriate symbols for each component.
5. Define positive reinforcement and diagram an example. Define negative reinforcement and diagram an example. Be sure to include the appropriate symbols for each component.
4. What is a discriminative stimulus? Define the three-term contingency and diagram an example. Be sure to include the appropriate symbols for each component.
3. Define the terms reinforcer and punisher. How do those terms differ from the terms reinforcement and punishment?
2. Explain why operant behaviors are said to be emitted and why they are defined as a “class” of responses.
1. State Thorndike’s law of effect. What is operant conditioning (as defined by Skinner), and how does this definition differ from Thorndike’s law of effect?
3. The advantages of using the click as a reinforcer are that it can be delivered i_______. It can also prevent the animal from becoming s_______.
2. In clicker training with dogs, the click is a s_______ reinforcer that has been established by first pairing it with f_______, which is a p_______ reinforcer.
1. Shaping is the creation of operant behavior through the reinforcement of s_______ a_______ to that behavior.
5. In most cases, the most important consequence in developing a highly effective slapshot in hockey will be the (contrived/natural) consequence of where the puck goes and how fast it travels.
4. In applied behavior analysis, although one might initially use (contrived/natural) consequences to first develop a behavior, the hope is that, if possible, the behavior will become tr_______ by the n_______ c_______ associated with that behavior.
3. You thank your roommate for helping out with the housework in an attempt to motivate her to help out more often. To the extent that this works, the thank-you is an example of a(n) (contrived/natural) reinforcer; it is also an example of an (intrinsic/extrinsic) reinforcer.
2. You flip the switch and the light comes on. The light coming on is an example of a(n) (contrived/natural) reinforcer; in general, it is also an example of an (intrinsic/extrinsic) reinforcer.
1. A(n) _______ reinforcer is a reinforcer that typically occurs for that behavior in that setting; a(n) _______ reinforcer is one that typically does not occur for that behavior in that setting.
4. They also found that extrinsic rewards generally increased intrinsic motivation when the rewards were (tangible/verbal), and that tangible rewards increased intrinsic motivation when they were delivered contingent upon (high/low) quality performance.
3. In their meta-analysis of relevant research, Cameron and Pierce (1994) found that extrinsic rewards decrease intrinsic motivation only when they are (expected/unexpected), (tangible/verbal), and given for (performing well / merely engaging in the behavior).
2. Running to lose weight is an example of an _______ motivated activity; running because it “feels good” is an example of an _______ motivated activity.
1. An _______ motivated activity is one in which the activity is itself reinforcing; an _______ motivated activity is one in which the reinforcer for the activity consists of some type of additional consequence that is external to the activity.
7. Behavior modification programs in institutional settings often utilize generalized reinforcers in the form of t_______. This type of arrangement is known as a t_______ e_______.
6. Two generalized secondary reinforcers that have strong effects on human behavior are _______ and _______.
5. A generalized reinforcer (or generalized secondary reinforcer) is a secondary reinforcer that has been associated with _______.
4. A (CS/US) that has been associated with an appetitive (CS/US) can serve as a secondary reinforcer for an operant response. As well, a stimulus that serves as a(n) _______ for an operant response can also serve as a secondary reinforcer for some other response.
3. Honey is for most people an example of a _______ reinforcer, while a coupon that is used to purchase the honey is an example of a _______ reinforcer.
2. Events that become reinforcers through their association with other reinforcers are called s_______ reinforcers. They are sometimes also called _______ reinforcers.
1. Events that are innately reinforcing are called p_______ reinforcers. They are sometimes also called un_______ reinforcers.
3. It has been suggested that delayed reinforcers (do/do not) function in the same manner as immediate reinforcers. Rather, the effectiveness of delayed reinforcers in humans is largely dependent on the use of i_______ or r_______ to bridge the gap between the behavior and the delay.
2. It is sometimes difficult for students to study in that the reinforcers for studying are _______ and therefore w_______, whereas the reinforcers for alternative activities are _______ and therefore s_______.
1. In general, the more _______ the reinforcer, the stronger its effect on behavior.
5. When Tenzing shared his toys with his brother, his mother stopped criticizing him. Tenzing now shares his toys with his brother quite often. The consequence for sharing the toys was the _______ of a stimulus, and the behavior of sharing the toys subsequently _______ in frequency; therefore, this is an example of _______.