  • Publication
    A comparison of the effects of delay to reinforcement, amount of reinforcer and quality of reinforcer on essential value of demand with hens
    (New Zealand Association for Behaviour Analysis (NZABA), 2013)
    Foster, T Mary
    ;
    Jackson, Surrey
    ;
    McEwan, James
    ;
    Stuart, Stacey
    Hursh and Silberberg (2008) proposed an exponential function to describe the curvilinear demand functions obtained in much animal research. An advantage of this function is that it gives a single measure of the value of the reinforcer, alpha, which they called essential value. This measure has scalar invariance and should not be affected by dose size, amount, or duration of the reinforcer. This paper examines the essential value measures obtained from studies with hens. In each study fixed-ratio schedules were used to generate demand functions. The properties of the reinforcer differed both within and across studies. Foster et al. (2009) and Lim (2010) varied food quality using 40-min sessions. Both found that essential value was larger for the less preferred reinforcer when consumption was measured as number of reinforcers. Jackson (2011), using sessions terminated after 40 reinforcers and with body weight strictly controlled, found essential value (based on reinforcer rate) was the same for these same two foods. Lim (2010) found the preferred food had the greater essential value when consumption was measured as weight of food consumed. Grant's (2005) data showed longer reinforcer durations were associated with lower essential values when consumption was measured as number of reinforcers; for these data, measuring consumption as weight of food consumed generally gave the longest durations the highest essential value. Harris (2011) varied delay to the reinforcer and found longer delays normally gave lower essential values. Stuart (2013) compared delays to the reinforcer and inter-trial intervals (ITIs). She found essential value was lower with the longer intervals for all hens with ITIs and, for some hens, with delays, and was lower with delays than with ITIs. Thus the measure of essential value has been found to vary in circumstances where this would not be predicted, to be the reverse of what might be expected in some cases, and to be affected by the procedure used. The present data show that essential value does not provide an easily interpretable measure.
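For reference, the exponential demand equation proposed by Hursh and Silberberg (2008), from which the essential value parameter alpha discussed above is taken:

```latex
\log Q = \log Q_0 + k\left(e^{-\alpha Q_0 C} - 1\right)
```

where \(Q\) is consumption at cost \(C\), \(Q_0\) is consumption at zero cost, \(k\) is a constant setting the range of the data, and \(\alpha\) (essential value) governs how rapidly consumption declines as cost increases. Because \(Q_0\) appears alongside \(\alpha\) in the exponent, the measure is claimed to be invariant across reinforcer magnitudes, which is the property the studies above put to the test.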
  • Publication
    A Microanalysis of the Effect of Bodyweight on Operant Behaviour With Hens
    (Association for Behavior Analysis International (ABAI), 2015)
    Jackson, Surrey
    ;
    Foster, T Mary
    ;
    McEwan, James
    Motivating Operations (MOs) are frequently manipulated (by altering access to commodities and manipulating other variables such as body weight) in order to change responding. This study had two aims: first, to investigate the effect of altering body weight on the concurrent-schedule performance of hens; second, to investigate the effect of altering body weight on the duration of each component of the hens' pecks under these schedules, analysed from high-speed videos filmed at 240 fps. Six hens (at 85% ± 5%) were shaped (three via the method of successive approximations and three via autoshaping) to respond for food reinforcers on an infra-red screen. Hens then responded under a range of concurrent VI VI schedules, with body weight held at 85% ± 5%, 95% ± 5% and 100% ± 5% over conditions. Applying the Generalised Matching Law to the data did not reveal any consistent differences in responding at the three body weights. However, response rates, inter-response times and video analysis of the individual components of the hens' pecking responses did show consistent differences between responding at the three weights.
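The Generalised Matching Law applied to the concurrent-schedule data above takes the standard logarithmic form:

```latex
\log\left(\frac{B_1}{B_2}\right) = a \log\left(\frac{R_1}{R_2}\right) + \log c
```

where \(B_1\) and \(B_2\) are the response rates on the two alternatives, \(R_1\) and \(R_2\) are the obtained reinforcer rates, \(a\) is sensitivity to the reinforcer ratio, and \(c\) is response bias. Consistent effects of body weight would be expected to appear as changes in the fitted \(a\) or \(\log c\) across conditions; the abstract reports that no such consistent changes were found.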
  • Publication
    Reinforced Behavioral Variability and Sequence Learning Across Species
    (Association for Behavior Analysis International (ABAI), 2012)
    Doolan, Kathleen
    ;
    McEwan, James
    Previous research shows that reinforcement of variable responding will facilitate sequence learning in rats (Neuringer, Deiss & Olson, 2000) but may interfere with sequence learning in humans (Maes & van der Goot, 2006). The present study aimed to replicate and extend previous research by assessing the role of behavioral variability in the learning of difficult target sequences across three species: humans (n = 60), hens (n = 18) and possums (n = 6). Participants were randomly allocated to one of three experimental conditions (Control, Variable, Any). In the Control condition sequences were reinforced only if they were the target sequence; in the Variability condition sequences were concurrently reinforced on a Variable Interval 60-s schedule if the just-entered sequence met a variability criterion; and in the Any condition sequences were concurrently reinforced on a Variable Interval 60-s schedule for any sequence entered. The results support previous findings with animals and humans: hens and possums were more likely to learn the target sequence in the Variability condition, and human participants were more likely to learn the target sequence in the Control condition. Possible explanations for differences between the performance of humans and animals on this task will be discussed.
  • Publication
    The Role of a Variability Contingency on Sequence Learning in Humans
    (Association for Behavior Analysis International (ABAI), 2015)
    Doolan, Kathleen
    ;
    ter Veer-Burke, Stacey
    ;
    McEwan, James
    Research shows that reinforcement of variable responding facilitates sequence learning in rats but may interfere with sequence learning in humans. Experiment 1 examined sequence difficulty in humans by manipulating sequence length and task instruction. Experiment 2 investigated the effect of removing or adding a variability contingency within the experimental session for a 6-item sequence. Participants were allocated to either a Control or a Variability group. The Control group received reinforcement only for production of the target sequences. The Variability group received reinforcers on a Variable Interval 60-s schedule if the sequence met a variability criterion, and for production of the target sequence. In Experiment 2, after 10 reinforcer deliveries the variability contingency was either removed or added. In Experiment 1, the Control group produced more target sequences in the 6-digit conditions, the Variability group produced more target sequences in the 9-digit condition, and there was no difference between groups in the 12-digit condition. Task instructions had little impact on the results. In Experiment 2 the Control group performed better than the Variability group; addition or removal of the variability contingency had little effect on performance. Results will be discussed in relation to previously published research on sequence learning with animals and humans.
  • Publication
    Response resurgence in the peak procedure
    (New Zealand Association for Behaviour Analysis (NZABA), 2012)
    Lockhart, Rachael
    ;
    McHugh, Mark
    ;
    Foster, T Mary
    ;
    McEwan, James
    In separate experiments the timing abilities of brushtail possums and domestic hens on the peak procedure were investigated. This procedure involved animals responding on two trial types within an experimental session. On some trials responding was reinforced according to a Fixed Interval (FI) schedule; on the other trials, Peak Interval (PI) trials, responding was not reinforced with food. Possums lever pressed and hens key pecked for food reinforcers on different FI schedules, and the duration of the PI was varied across a range. PI trials made up 20% of trials and were longer than the FI schedule in effect on the other 80% of trials, on which responding was reinforced. Response rates typically increased to a maximum at about the time responses were normally reinforced and then decreased after the time that food would normally be delivered, before increasing again towards the end of the PI regardless of the duration of the PI trial. When relative response rates were plotted as a function of relative time the functions typically superposed for the ascending, but not the descending, portions. The results are discussed in terms of Weber's law and various quantitative models of timing.
  • Publication
    Reinforcing Behavioural Variability: An Examination of Its Generalisability and an Analysis of U-Value as a Measure of Variability
    (Association for Behavior Analysis International (ABAI), 2015)
    Kong, Xiuyan
    ;
    McEwan, James
    ;
    Foster, Therese Mary
    Two experiments with college students examined whether learned variability on two dimensions of a behaviour would generalise to a third dimension that occurred simultaneously, using Ross and Neuringer's (2002) rectangle-drawing task. The dimensions measured were the sizes, shapes and on-screen locations of the rectangles. Performance of a group receiving reinforcement independent of the variability of all three dimensions was compared with that of a group receiving reinforcement contingent on the variability of two of the three dimensions. Overall, variability in the shapes and locations of the rectangles was higher when these dimensions occurred alongside two dimensions that were required to vary; however, no difference in the variability of sizes was found between the two groups. The results suggest there was likely generalisation from reinforcing variability on sizes and locations to shape, and from reinforcing variability on sizes and shapes to locations. U-value as a measure of variability was also examined, with simulated data and data collected from one of the experiments. Limitations of the measure were identified. Considerations when reporting U-values will be discussed, and cautions needed when interpreting U-values as a measure of variability will be highlighted.
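The U-value examined above is standardly computed as entropy normalised by its maximum. A minimal sketch in Python (the function name and the example responses are illustrative, not taken from the study):

```python
import math
from collections import Counter

def u_value(responses, n_alternatives):
    """Normalised entropy of a set of responses.

    1.0 means all alternatives were used equally often (maximal
    variability); 0.0 means a single alternative was used throughout.
    """
    total = len(responses)
    counts = Counter(responses)
    # Shannon entropy of the observed distribution, in bits.
    entropy = sum(-(c / total) * math.log2(c / total) for c in counts.values())
    # Normalise by the maximum possible entropy, log2 of the alternatives.
    return entropy / math.log2(n_alternatives)

# Uniform use of four alternatives gives the maximum U-value:
print(u_value(["a", "b", "c", "d"], 4))  # → 1.0
# Repeating one alternative gives the minimum:
print(u_value(["a"] * 10, 4))            # → 0.0
```

One limitation noted in the literature, consistent with the cautions above, is that U-value reflects only the relative frequencies of alternatives, not their sequential order, so highly patterned responding can still score near 1.0.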
  • Publication
    Reinforced variability and sequence learning in hens, possums and humans
    (New Zealand Association for Behaviour Analysis (NZABA), 2012)
    Doolan, Kathleen
    ;
    McEwan, James
    Previous research shows that reinforcement of variable responding will facilitate sequence learning in rats (Neuringer, Deiss & Olson, 2000) but may interfere with sequence learning in humans (Maes & van der Goot, 2006). The present study aimed to replicate and extend previous research by assessing the role of behavioural variability in the learning of difficult target sequences across three species: humans (n = 60), hens (n = 18) and possums (n = 6). Participants were randomly allocated to one of three experimental conditions (Control, Variable, Any). In the Control condition sequences were reinforced only if they were the target sequence; in the Variability condition sequences were concurrently reinforced on a Variable Interval 60-s schedule if the just-entered sequence met a variability criterion; and in the Any condition sequences were concurrently reinforced on a Variable Interval 60-s schedule for any sequence entered. The results support previous findings with animals and humans: hens and possums were more likely to learn the target sequence in the Variability condition, and human participants were more likely to learn the target sequence in the Control condition. Possible explanations for differences between the performance of humans and animals on this task will be discussed.
  • Publication
    Effects of Schedules of Reinforcement on Behavioural Variability
    (Association for Behavior Analysis International (ABAI), 2012)
    Neshausen, Leanne
    ;
    McEwan, James
    As an extension of Boren, Moerschbaecher and Whyte (1978), Experiment 1 compared schedules of reinforcement on location variability. Hens worked in an operant chamber with five keys arranged horizontally; a peck to any key was equally effective. Interval schedules were yoked to ratio schedules. Eight schedules were examined: FR 40, FR 10, FI y-40, FI y-10, VR 40, VR 10, VI y-40 and VI y-10. Location variability was measured as the percentage of switching across keys within trials and between trials (from the reinforced peck location to the first peck location of the following trial), and as the number of keys used. It was hypothesised that schedules with long inter-reinforcer intervals would produce more variation than those with short intervals, and that interval schedules would produce more variation than ratio schedules. These hypotheses were not upheld; however, a correlation between response rate and variability was found. In Experiment 2, six new hens worked on a series of DRL schedules incremented across sessions, from DRL 0.5-s to DRL 19.2-s. No correlation between response rate and variability was found. However, far more within-trial switches were observed in Experiment 2 than in Experiment 1, suggesting the need for further study.
  • Publication
    Response Resurgence in the Peak Procedure
    (Association for Behavior Analysis International (ABAI), 2013)
    Lockhart, Rachael Anne
    ;
    McHugh, Mark
    ;
    Stanley, Christopher D
    ;
    Foster, Mary
    ;
    McEwan, James
    In three separate experiments the timing abilities of brushtail possums and domestic hens on the peak procedure were investigated. This procedure involved animals responding on two trial types within an experimental session. On some trials responding was reinforced according to a Fixed Interval (FI) schedule (in effect on 80% of trials); on the other 20% of trials, Peak Interval (PI) trials, responding was not reinforced with food. Possums lever pressed, and hens key pecked, for food reinforcers on different FI schedules, and the duration of the PI was varied across a range. Response rates typically increased to a maximum at about the time responses were normally reinforced and then decreased after the time that food would normally be delivered, before increasing again towards the end of the PI regardless of the duration of the PI trial, if that duration was fixed. When the PI was of variable rather than fixed duration, however, the rate of responding on PI trials decreased towards the end of the PI. When relative response rates were plotted as a function of relative time the functions typically superposed for the ascending, but not the descending, portions. The results are discussed in terms of Weber's law and various quantitative models of timing.
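The superposition result described above is the signature of scalar timing: when the spread of the response-rate function grows in proportion to the timed interval (Weber's law), curves obtained at different FI values collapse onto a single function once both axes are rescaled. A minimal sketch under an idealised Gaussian model (the 15% Weber fraction, the FI values and the curve shape are illustrative assumptions, not fitted to these data):

```python
import numpy as np

def peak_curve(t, fi, weber=0.15):
    """Idealised peak-procedure response-rate curve: a Gaussian peaking
    at the FI value with spread proportional to it (scalar timing)."""
    return np.exp(-0.5 * ((t - fi) / (weber * fi)) ** 2)

fis = [10.0, 20.0, 40.0]              # hypothetical FI values, in seconds
rel_time = np.linspace(0.2, 2.0, 200)  # time expressed as a fraction of the FI

# Evaluate each curve on its own rescaled time axis; under pure scalar
# timing every curve collapses onto the same function of relative time.
curves = [peak_curve(rel_time * fi, fi) for fi in fis]
assert all(np.allclose(curves[0], c) for c in curves[1:])
```

The abstract reports superposition only for the ascending portion of the function; the idealised model above superposes everywhere, so the departure on the descending limb is exactly where the data diverge from the simplest scalar account.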
  • Publication
    An Analysis of the Impact of Reinforcement on Behavioral Variability Across Multiple Dimensions
    (Association for Behavior Analysis International (ABAI), 2012)
    Kong, Xiuyan Kitt
    ;
    McEwan, James
    ;
    Foster, Therese Mary
    The independence of dimensions of operant responses by humans was investigated in two experiments using a computerized rectangle-drawing task from Ross and Neuringer (2002). Variability on the dimensions of area, shape and location was required for reinforcement for one group (VAR); variability was not required for the other (YOKE). For all three dimensions, U-values, a measure of variability, were higher for the VAR group than for the YOKE group, and the number of trials that met the criteria for reinforcement was higher for the VAR group than for the YOKE group. In Experiment 2, reinforcement was contingent on variability on two dimensions regardless of variability on the third. Participants were divided into three groups; each group had one dimension that was not required to vary. U-values were higher when reinforcement was contingent on varying shape and location, or area and location. However, U-values did not differ significantly across dimensions when reinforcement was contingent on varying just area and shape. The results of Experiments 1 and 2 are broadly consistent with those of Ross and Neuringer (2002). The importance of the orthogonality of dimensions on this task will be discussed.