  • Publication
    A comparison of the effects of delay to reinforcement, amount of reinforcer and quality of reinforcer on essential value of demand with hens
    (New Zealand Association for Behaviour Analysis (NZABA), 2013)
    Foster, T Mary; Jackson, Surrey; McEwan, James; Stuart, Stacey
    Hursh and Silberberg (2008) proposed an exponential function to describe the curvilinear demand functions obtained in much animal research. An advantage of this was that it gave a single measure of the value of the reinforcer, alpha, which they called essential value. This measure has scalar invariance and should not be affected by dose size, amount, or duration of the reinforcer. This paper examines the essential value measures obtained from studies with hens. In each study fixed-ratio schedules were used to generate demand functions. The properties of the reinforcer differed both within and across studies. Foster et al. (2009) and Lim (2010) varied food quality using 40-min sessions. Both found that essential value was larger for the less preferred reinforcer when consumption was measured as number of reinforcers. Jackson (2011), using sessions terminated after 40 reinforcers and with body weight strictly controlled, found essential value (based on reinforcer rate) was the same for these same two foods. Lim (2010) found the preferred food had the greater essential value when consumption was measured as weight of food consumed. Grant's (2005) data showed longer reinforcer durations were associated with lower essential values when consumption was measured as number of reinforcers; when consumption was measured as weight of food consumed, the longest durations generally had the highest essential values. Harris (2011) varied delay to the reinforcer and found longer delays normally gave lower essential values. Stuart (2013) compared delays to the reinforcer and inter-trial intervals (ITIs). She found essential value was lower with the longer intervals for all hens with ITIs and, for some hens, with delays, and was lower with delays than with ITIs. Thus the measure of essential value has been found to vary in circumstances where this would not be predicted, to be the reverse of what might be expected in some cases, and to be affected by the procedure used. The present data show that essential value does not provide an easily interpretable measure.
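    For reference, the exponential demand equation proposed by Hursh and Silberberg (2008) is conventionally written as follows (this is the standard form from that literature, not an equation reproduced from the abstract):

        \log Q = \log Q_0 + k \left( e^{-\alpha Q_0 C} - 1 \right)

    where Q is consumption at cost C, Q_0 is consumption at zero cost, k is a constant setting the range of consumption in logarithmic units, and \alpha is the rate-of-change parameter underlying the essential-value measure discussed above.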
  • Publication
    A Microanalysis of the Effect of Bodyweight on Operant Behaviour With Hens
    (Association for Behavior Analysis International (ABAI), 2015)
    Jackson, Surrey; Foster, T Mary; McEwan, James
    Motivating Operations (MOs) are frequently manipulated (by altering access to commodities and manipulating other variables such as body weight) in order to change responding. This study had two aims: first, to investigate the effect of altering body weight on the concurrent-schedule performance of hens; second, to investigate the effect of altering body weight on the duration of each component of the hens' pecks under these schedules, analysed from high-speed videos filmed at 240 fps. Six hens (at 85% ± 5%) were shaped (three via the method of successive approximations and three via autoshaping) to respond for food reinforcers on an infra-red screen. Hens then responded under a range of concurrent VI VI schedules, with body weight held at 85% ± 5%, 95% ± 5% and 100% ± 5% over conditions. Applying the Generalised Matching Law to the data did not reveal any consistent differences in responding across the three body weights. However, response rates, inter-response times and video analysis of the individual components of the hens' pecking responses did show consistent differences between responding at the three weights.
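    The Generalised Matching Law applied here is conventionally written (Baum, 1974) as

        \log \left( \frac{B_1}{B_2} \right) = a \, \log \left( \frac{R_1}{R_2} \right) + \log c

    where B_1 and B_2 are the response rates on the two alternatives, R_1 and R_2 are the obtained reinforcer rates, a is sensitivity to the reinforcer-rate ratio, and \log c is bias. This is the standard form from the matching literature, with the comparison across body weights presumably made on the fitted estimates of a and \log c.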
  • Publication
    Reinforced Behavioral Variability and Sequence Learning Across Species
    (Association for Behavior Analysis International (ABAI), 2012)
    Doolan, Kathleen; McEwan, James
    Previous research shows that reinforcement of variable responding will facilitate sequence learning in rats (Neuringer, Deiss & Olson, 2000) but may interfere with sequence learning in humans (Maes & van der Goot, 2006). The present study aimed to replicate and extend previous research by assessing the role of behavioral variability in the learning of difficult target sequences across 3 species: humans (n = 60), hens (n = 18) and possums (n = 6). Participants were randomly allocated to one of three experimental conditions (Control, Variability, Any). In the Control condition sequences were reinforced only if they were the target sequence; in the Variability condition sequences were concurrently reinforced on a Variable Interval 60-s schedule if the just-entered sequence met a variability criterion; and in the Any condition sequences were concurrently reinforced on a Variable Interval 60-s schedule for any sequence entered. The results support previous findings with animals and humans: hens and possums were more likely to learn the target sequence in the Variability condition, and human participants were more likely to learn the target sequence in the Control condition. Possible explanations for differences between the performance of humans and animals on this task will be discussed.
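    The variability criterion itself is not specified in the abstract. As an illustration only, a threshold contingency of the general kind described by Neuringer, Deiss and Olson (2000) can be sketched in Python: a just-entered sequence qualifies when its recency-weighted relative frequency falls below a threshold. All names and parameter values below are assumptions for the sketch, not the authors' procedure.

        from collections import defaultdict

        class ThresholdVariabilityCriterion:
            """Sketch of a frequency-dependent variability criterion."""

            def __init__(self, threshold=0.05, decay=0.95):
                self.threshold = threshold  # assumed cut-off for "rare enough"
                self.decay = decay          # discounts older occurrences
                self.weights = defaultdict(float)

            def meets_criterion(self, sequence):
                seq = tuple(sequence)
                total = sum(self.weights.values())
                rel_freq = self.weights[seq] / total if total > 0 else 0.0
                # Decay all past counts, then record the current sequence.
                for key in self.weights:
                    self.weights[key] *= self.decay
                self.weights[seq] += 1.0
                return rel_freq <= self.threshold

        # Example: successive left/right key-peck sequences.
        criterion = ThresholdVariabilityCriterion()
        for s in ["LLR", "LLR", "RLL", "LLR"]:
            print(s, criterion.meets_criterion(s))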
  • Publication
    The Role of a Variability Contingency on Sequence Learning in Humans
    (Association for Behavior Analysis International (ABAI), 2015)
    Doolan, Kathleen; ter Veer-Burke, Stacey; McEwan, James
    Research shows that reinforcement of variable responding facilitates sequence learning in rats but may interfere with sequence learning in humans. Experiment 1 examined sequence difficulty in humans by manipulating sequence length and task instruction. Experiment 2 investigated the effect of removing or adding a variability contingency within the experimental session for a 6-item sequence. Participants were allocated to either a Control or a Variability group. The Control group received reinforcement only for production of the target sequences. The Variability group received reinforcers on a Variable Interval 60-s schedule if the sequence met a variability criterion and for production of the target sequence. In Experiment 2, after 10 reinforcer deliveries the variability contingency was either removed or added. In Experiment 1, the Control group produced more target sequences in the 6-digit conditions, the Variability group produced more target sequences in the 9-digit condition, and there was no difference between groups in the 12-digit condition. Task instructions had little impact on the results. In Experiment 2, the Control group performed better than the Variability group; addition or removal of the variability contingency had little effect on performance. Results will be discussed in relation to previously published research on sequence learning with animals and humans.
  • Publication
    An Exploration of Reinforcing Behavioural Variability in Discrete Dimensions
    (Association for Behavior Analysis International (ABAI), 2014)
    Kong, Xiuyan; McEwan, James; Foster, Therese Mary
    In Experiment 1, 48 participants using a computer created 300 combinations of shapes, colours and patterns. Half received points when they varied on these three dimensions (VAR) and the other half received the same number of points regardless (YOKE). Responses were more variable for the VAR group, but only for colour. In Experiment 2, 114 participants were asked to fill 220 shapes with one of 135 colours. During the first and last 60 trials they received no feedback, while for the remaining trials they received reinforcement when they used a colour that had never been used previously. Overall, the number of colours used increased when reinforcement was provided. Participants used more colours in the last 60 trials than in the first; 60% of the colours used were never used during the first 60 trials. That is, the variability in the use of colours increased after participants had been reinforced for varying.
  • Publication
    Behavioural Variability and Sequence Learning Across Species: Hens, Possums, and Humans
    (University of New England, 2020-07-24)
    Doolan, Kathleen Elizabeth; McEwan, James

    Understanding how reinforced variability contributes to both animal and human learning is critical in contexts where behavioural variability is an essential attribute of the operant behaviour. Reinforced variability may prove to have benefits not evident in traditional operant learning procedures, such as its ability to promote generalisation of the operant to new contexts (e.g., Kong, McEwan, Bizo, & Foster, 2019; Neuringer, Deiss & Olson, 2000) and to increase its resistance to extinction (Neuringer, Kornell & Olufs, 2001). While inconsistencies exist between results from animal and human studies, there is evidence to suggest that reinforced variability may prove to be beneficial as a learning tool for humans in areas such as creativity and skill acquisition, as well as in the development of more productive treatments for some areas of psychopathology (e.g., Hopkinson & Neuringer, 2003; Saldana & Neuringer, 1998). Both empirical and applied studies provide evidence for the importance of understanding reinforced variability, as a deeper understanding may allow further development of learning technologies for promoting and maintaining variable responding in contexts where that is a desired characteristic of behaviour.

    The series of experiments in this dissertation addressed methodological concerns that have been raised by others in previous studies on reinforced variability (e.g., Doolan & Bizo, 2013; Maes & van der Goot, 2006; Neuringer et al., 2000) in an attempt to identify those factors that may moderate the learning of a response sequence by humans and non-humans. These experiments have explored the role reinforced variability plays in the learning of target sequences by modifying the methodology of previous studies to more closely replicate the work of Neuringer et al. (2000) with three species (humans, hens & possums).

    For the human component of the dissertation, three experiments explored the role of reinforced variability in sequence learning. In Experiment 1, in separate conditions, participants either had a visual record of the sequence components as they were selected and displayed on a computer screen, or had no record of the sequence components; in both conditions, participants were given feedback after the last component was entered. Participants earned points for producing the target sequence. In conditions where variability in some aspect of the operant was a contingent requirement for reinforcement, participants experienced a secondary contingency through which they could earn points for producing sequences that met a variability criterion. In Experiment 2, the sequence length was manipulated and was either nine or 12 digits long. Experiment 3 was a partial replication of Experiments 1 and 2 but with minimal task instruction. For the shorter six-digit sequences used in the No-Record condition of Experiment 1, direct reinforcement of the target sequence promoted higher production of the target sequence than did reinforcement of sequence variability. For a nine-digit sequence, the added requirement of variability promoted better learning of the target sequence than did direct reinforcement of the target sequence alone. There was no difference between the groups for the 12-digit sequence. The results of Experiment 1 replicate previous findings with humans on this procedure, while the results from Experiment 2, where a nine-digit sequence was required, were more consistent with reports from studies using a similar procedure with animals rather than human participants. The removal of detailed instructions in Experiment 3 appeared to increase the difference in the pattern of responding between the two groups for the six-digit sequence condition, suggesting that the difference between animal and human studies on behavioural variability is a function of both the instructions and the display of the just-completed sequence.

    For the non-human component of the dissertation, five separate experiments explored factors that affect behavioural variability and learning by hens (Experiments 4, 6, 7, & 8) and possums (Experiment 5). Experiments 4 and 5 were partial replications of Neuringer et al. (2000) and explored the role of reinforced variability in sequence learning in non-human animals. For Experiment 4, eighteen Shaver Starcross hens (Gallus gallus domesticus) served as subjects. The experiment consisted of the same five experimental phases as described by Neuringer et al. (2000). The target sequences consisted of Left (L) and Right (R) key pecks and were the same for all hens and experimental groups across each phase (RLL, LLR, RRLR, LR, & RLLRL). The hens in the Control group could earn reinforcement for emitting the target sequence only. The hens in the Variable group could earn reinforcement for emitting the target sequence and for producing a sequence that met the variability criteria. The hens in the Any group could also earn reinforcement for emitting the target sequence, and on a variable interval (VI) 60-s schedule for any sequence they entered after the time interval had passed.

    Six Brushtail possums (Trichosurus vulpecula) served as the subjects for Experiment 5. The general procedure and first three phases of the experiment were the same as described above for Experiment 4, and the remaining phases were replaced with the remaining possible three-component sequences, RRL, LRL, RRR, LLL, RLR, LLR. In Experiment 4, the variability criterion for secondary reinforcement facilitated greater production of the target sequence than secondary reinforcement of any sequence only in the first phase. There was no difference in target sequence production between the possums that received only direct reinforcement of the target sequence and those possums that were exposed to the secondary variability schedule in any of the five phases. In Phases 1-5 of Experiment 5, the pattern of responding was consistent with that reported by Neuringer et al. (2000) for their rats, with possums exposed to the secondary variability schedule producing more target sequences than the other experimental groups; however, the difference between groups was not significant. For the remaining phases, the Control group (i.e., direct reinforcement of the target sequence) produced the target sequence more frequently than the other experimental groups, although the difference was not significant.

    In studies with humans, it has been suggested that responding may not be under the control of the reinforcement contingencies, and that rules may influence responding such that behaviour must be considered rule-governed rather than contingency-shaped. However, the comparable response patterns across species within this series of experiments suggest that there may be other individual differences, not yet considered in previous research, that affect the influence reinforced variability has on the learning of a target sequence.

    A final series of experiments with hens explored the role that reinforced behavioural variability may play in the learning of a different, non-sequence target behaviour. Eighteen Shaver Starcross hens served as subjects in Experiments 6 - 8 and were required to make two screen pecks within a set target distance (distance bin) on a touchscreen to earn reinforcement. Experiment 6 was used as a baseline phase to ensure that the varying distance requirements were physically possible for the hens to complete. In Experiment 7, reinforcement was available for producing two consecutive pecks within the target distance bin; hens in the variability group could also earn reinforcement if their two consecutive pecks met a variability criterion. The hens could be exposed to both experimental conditions over the course of the experiment, as they were randomly allocated at the start of each phase. Experiment 8 compared six naive hens to six experienced hens from Experiment 7, to assess the role that previous exposure to the variability contingency may have on learning the target behaviour.

    The original article by Neuringer et al. (2000) has been cited numerous times as a potential 'game-changer' in both experimental and applied psychology; however, the findings of this series of experiments suggest that the benefits of reinforced variability in promoting the acquisition of a novel behaviour that Neuringer et al. reported with rats do not readily generalise across species or behavioural tasks. This calls into question the utility and potential benefits that might result from the application of these general principles in applied settings. It also highlights that limiting factors such as the nature of the operant, the difficulty of the task, and the instructions given to participants are important moderators of the impact of reinforcing behavioural variability on learning.

  • Publication
    Response resurgence in the peak procedure
    (New Zealand Association for Behaviour Analysis (NZABA), 2012)
    Lockhart, Rachael; McHugh, Mark; Foster, T Mary; McEwan, James
    In separate experiments, the timing abilities of brushtail possums and domestic hens on the peak procedure were investigated. This procedure involved animals responding on two trial types within an experimental session. On some trials responding was reinforced according to a Fixed Interval (FI) schedule; on the other trials, peak interval (PI) trials, responding was not reinforced with food. Possums lever pressed and hens key pecked for food reinforcers on different FI schedules, and the duration of the PI was varied across a range. On the 20% of trials that were PI trials, responding was not reinforced and the trials lasted longer than the FI schedule in effect on the other 80% of trials, on which responding was reinforced. Response rates typically increased to a maximum at about the time responses were normally reinforced and then decreased after the time that food would normally be delivered, before increasing again towards the end of the PI trial, regardless of its duration. When relative response rates were plotted as a function of relative time, the functions typically superposed for the ascending, but not the descending, portions. The results are discussed in terms of Weber's law and various quantitative models of timing.
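    The superposition noted above is the signature of the scalar property of timing: curves from different FI values overlap when rescaled only if the spread of the timing distribution grows in proportion to the interval being timed, which is a form of Weber's law. In standard notation (not a formula reported in the abstract),

        \frac{\sigma(T)}{\mu(T)} \approx k

    where \mu(T) and \sigma(T) are the mean and standard deviation of the peak-time distribution for a timed interval T, and k is an approximately constant Weber fraction.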
  • Publication
    Reinforcing Behavioural Variability: An Examination of its Generalisability and an Analysis of U-Value as a Measure of Variability
    (Association for Behavior Analysis International (ABAI), 2015)
    Kong, Xiuyan; McEwan, James; Foster, Therese Mary
    Two experiments with college students were carried out to examine whether learned variability on two dimensions of a behaviour would generalise to a third dimension that occurred simultaneously, using Ross and Neuringer's (2002) rectangle-drawing task. The dimensions measured were the sizes, shapes and on-screen locations of the rectangles. The performance of a group receiving reinforcement independent of the variability of all three dimensions was compared with that of a group receiving reinforcement contingent on the variability of two of the three dimensions. Results showed that, overall, the variability in the shapes and locations of the rectangles was higher when each occurred with the other two dimensions that were required to vary; however, no difference was found between the two groups for the variability in sizes. The results suggest there was likely generalisation from reinforcing variability on sizes and locations to shapes, and from reinforcing variability on sizes and shapes to locations. U-value as a measure of variability was also examined, using simulated data and data collected from one of the experiments. Limitations of the measure were identified. The care needed when reporting U-values will be discussed, and cautions needed when interpreting U-values as a measure of variability will be highlighted.
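    For context, the U-value discussed here is usually computed from the relative frequencies of the possible response classes (this is the standard definition from the behavioural-variability literature, not a formula given in the abstract):

        U = \frac{-\sum_{i=1}^{n} p_i \log_2 p_i}{\log_2 n}

    where p_i is the relative frequency of the i-th of n possible response classes (e.g., bins of rectangle size, shape or location); U ranges from 0 when a single class is repeated to 1 when all classes occur equally often.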
  • Publication
    The Effect of Body Weight on Concurrent Schedule Performance and the Pecking Response With Hens
    (Association for Behavior Analysis International (ABAI), 2014)
    Jackson, Surrey; Foster, Therese Mary; McEwan, James
    Motivating Operations (MOs) are frequently manipulated (by changing access to commodities and manipulating other variables such as body weight) in order to change responding. Manipulations of body weight have been found to alter behaviour; for example, obese and lean rats display differential sensitivity to reinforcers on concurrent schedules of reinforcement. What is not known is the effect that altering MOs may have on the topography of the response related to obtaining the reinforcer. This study had two aims: to investigate the effect of altering body weight on concurrent-schedule performance, and to investigate the effect that altering body weight may have on the duration of each component of the hens' peck response. Three hens held at 85% ± 5% were shaped via the method of successive approximations, and three via autoshaping, to respond for food reinforcers on a touch screen. Analysis of video footage breaking pecks into individual components (head fixation to beak contact to no movement) showed that the duration of each component remained stable across both groups. Hens then worked for the same reinforcer under concurrent VI VI schedules with a range of reinforcer ratios, with body weight held at 85% ± 5%, then 95% ± 5%.
  • Publication
    Reinforced variability and sequence learning in hens, possums and humans
    (New Zealand Association for Behaviour Analysis (NZABA), 2012)
    Doolan, Kathleen; McEwan, James
    Previous research shows that reinforcement of variable responding will facilitate sequence learning in rats (Neuringer, Deiss & Olson, 2000) but may interfere with sequence learning in humans (Maes & van der Goot, 2006). The present study aimed to replicate and extend previous research by assessing the role of behavioural variability in the learning of difficult target sequences across 3 species: humans (n = 60), hens (n = 18) and possums (n = 6). Participants were randomly allocated to one of three experimental conditions (Control, Variability, Any). In the Control condition sequences were reinforced only if they were the target sequence; in the Variability condition sequences were concurrently reinforced on a Variable Interval 60-s schedule if the just-entered sequence met a variability criterion; and in the Any condition sequences were concurrently reinforced on a Variable Interval 60-s schedule for any sequence entered. The results support previous findings with animals and humans: hens and possums were more likely to learn the target sequence in the Variability condition, and human participants were more likely to learn the target sequence in the Control condition. Possible explanations for differences between the performance of humans and animals on this task will be discussed.