Animals do have the ability to make decisions, owing to neurotransmitters such as dopamine and the development of the prefrontal cortex.

This paper will discuss experiments conducted to determine whether animals are capable of making simple and more complex decisions. I will go through articles about the choices animals make under different controlled and uncontrolled conditions. I will conclude by restating the thesis, evaluating whether it was supported by the evidence in the articles I chose, and stating the limitations found in those articles. I will also discuss future research that is needed in the field of animal decision making.

Animals have been compared to humans for many years. Some theories hold that we began as animals and evolved into what we are today. With our brains being very similar to those of animals such as chimpanzees and bottlenose dolphins, it is hard not to accept the evidence that animals, like us, are able to make logical decisions.

Bardgett and colleagues (2009) conducted a study to determine the individual contributions of different dopamine receptors to effort-based decision making in rats (Rattus norvegicus). Just as in humans, different chemicals in the brain can influence decision making. Nine adult rats were housed three per cage, with free access to food and water and a standard day/night lighting cycle. The rats were trained in a T-maze to choose a large-reward arm containing 8 food pellets over a small-reward arm containing 2 pellets, and then to climb progressively higher barriers to obtain the food from the large-reward arm. A discounting procedure of up to nine trials was used on each test day, and the rats had 3 days of testing with the discounting procedure before any drug test; during these trials they could eat the rewards from both the large- and small-reward arms. Testing then returned to the regular procedure, in which the rats had to pick one arm and received either the large or the small reward. The rats were more likely to choose the small-reward arm after treatment with a D1 antagonist. On the first training day with a barrier on the large-reward arm, the number of rats choosing that arm decreased; on the second and third days, without the barrier, the number choosing the large-reward arm increased. Dopamine has a lot to do with the everyday choices that we as humans make, and the same holds for animals: the rats were offered large food rewards, but with the barrier in the way they often did not want to work for them and settled for the small reward. The next article also looks at dopamine, this time in relation to novelty seeking.

Costa and colleagues (2014) looked at whether dopamine promotes the choice of novel over familiar options in monkeys (Macaca mulatta). They administered systemic injections of either saline or a selective dopamine transporter (DAT) inhibitor to the monkeys and observed their novelty-seeking behavior during a decision-making task. The task involved pseudorandom introductions of novel choice options, giving the monkeys the opportunity to explore novel or familiar options. After testing, they found that increased dopamine availability increased the monkeys' preference for novel options. A reinforcement learning (RL) model was fit to the monkeys' choices, and the data showed that the increased novelty seeking after DAT blockade was driven by an increase in the initial value the monkeys assigned to novel options (Costa et al., 2014). The test used 3 male rhesus monkeys aged 5-6 years. On test days, the monkeys were injected with either the DAT inhibitor or saline and waited at least half an hour before testing. The monkeys completed 6 blocks of tests, each consisting of 650 trials with 3 images on a screen. When a novel item was introduced, it replaced one of the three images shown; a total of 32 novel items appeared during the test. The monkeys chose the novel item over the familiar items almost every time. With dopamine affecting the pursuit of novel items, we can add it to the list of factors that shape animal decisions. The next article I read looks at responses in both humans and
rats.

Another article I looked at was called Optimal Response Rates in Humans and Rats. The study included 60 rats and 15 humans. The rats had to press a lever to obtain food, and the humans had to press the spacebar on a computer to earn points that were later converted into money. Both were rewarded only if the time between two consecutive responses was greater than the target interval. The humans saw a white square on a black screen and had to press the spacebar when they thought the target interval had passed; once they pressed, the square disappeared. A white vertical line indicated the target interval, and a red line indicated the reproduced interval on that trial. After the reproduction block, they were tested in eight 5-minute blocks of differential reinforcement of low rates (DRL) testing: a white square was shown and they could respond at any time. If the inter-response time (IRT) was greater than or equal to the target, a green square was shown; if the response came earlier than the target interval, a red square was shown and a buzzer sounded. Of the 60 rats, 48 were tested daily for 1 hour and the other 12 were tested at night in three 1-hour sessions per night. The rats were placed in an operant chamber and rewarded for spacing their lever presses by at least the target interval; presses that came sooner earned no food. The results showed that both humans and rats came close to optimizing the reinforcement rate, but they responded faster than initially expected. This article compared humans and rats directly. I think that when we are rewarded and reinforced for doing something, we will continue to do it, and the same is true for animals: many experiments show animals performing a desired task to receive some sort of reward, whether food, a toy, or simply praise. The next article looks at the choices of pigeons based on their prior investments.
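The DRL contingency used for both species can be sketched in a few lines of code; the response times and the 5-second target below are made-up values for illustration, not figures from the article.

```python
def drl_reinforced(response_times, target_interval):
    """For each response after the first, return whether it earns a reward
    under a DRL schedule: only inter-response times (IRTs) greater than or
    equal to the target interval are reinforced."""
    rewarded = []
    for prev, cur in zip(response_times, response_times[1:]):
        rewarded.append(cur - prev >= target_interval)
    return rewarded

# With responses at t = 0, 3, 10, and 12 seconds and a 5-second target,
# only the response at t = 10 (a 7-second gap) is reinforced.
```

Responding too quickly resets the clock with no payoff, which is why both species had to learn to space their responses.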

The article, The effect of a prior investment on choice: The sunk cost effect, looked at this effect in pigeons (Columba livia). The sunk cost effect is the tendency to continue an endeavor once a prior investment has been made, despite a better option being available. The pigeons were tested in 5 identical chambers, each containing a row of 3 keys that were 2.1 cm wide, placed 21 cm above the floor and 6 cm apart. The pigeons pecked at the key that was lit up red, green, or white, and if the peck was hard enough they were rewarded with wheat. Naive birds were first trained to peck using auto-shaping; after this, they were trained 7 days a week. Sessions lasted for 60 reinforcers (30 red and 30 green) or until 1 hour had passed. There were 4 experiments. The first, with 5 pigeons, was designed to demonstrate the sunk cost effect. Experiment 1a used matching in a standard concurrent-chains procedure with schedules as outcomes, but with no prior investment before the choice phase. In Experiment 1b, a 20-peck prior investment was introduced before the choice; in red components the prior investment was on the left key, and in green components on the right. In both, the number of responses in the choice phase matched the log inverse effort ratio in the outcome phase. In Experiment 1a, where the conditions in red and green components were identical, there were no differences in preference between the left and right keys across components. In Experiment 1b there was a tendency to choose the left key in red components and the right key in green components. This result follows the sunk cost effect, because the prior investment was on the left in red components and on the right in green components. Experiment 2, with 3 pigeons, examined whether greater investments lead to a greater sunk cost effect in this concurrent-chains procedure. The procedure was the same as in Experiment 1b, except that the size of the prior investment was manipulated differentially within sessions between red and green components. Each pigeon completed six conditions, and the pigeons matched their responses in the choice phase to the log inverse effort ratio in the outcome phase. Experiment 3 had 4 pigeons that completed 6 conditions, the same as in Experiment 1, and confirmed, like Experiments 1b and 2, that pigeons commit the sunk cost effect. Experiment 4 had 8 pigeons. The procedure was the same as in Experiment 1 except that the keys were white instead of red and green, and there was a 1-second delay between the prior investment and the choice phase; the birds completed at least 15 sessions. This experiment confirmed the results of Experiments 1b, 2, and 3 by showing an effect of prior investment on current choice, the sunk cost effect. I think these experiments show that animals are more likely to do something they already know rather than take on another task with the risk of the unknown. Even if the other task is a better option, sticking with the original investment feels safer. The next article also looks at pigeons, focusing again on sunk costs.
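The matching relation the authors report, in which choice responding tracks the log inverse effort ratio, can be illustrated with the generalized-matching form. The sensitivity and bias parameters and the peck counts below are illustrative assumptions, not values estimated in the article.

```python
import math

def predicted_log_choice_ratio(effort_left, effort_right,
                               sensitivity=1.0, bias=0.0):
    """Generalized-matching sketch: the log ratio of left/right choice
    responses tracks the log *inverse* ratio of the response requirements,
    so the arm demanding less effort draws more responses."""
    return sensitivity * math.log10(effort_right / effort_left) + bias

# A 10-peck left outcome versus a 40-peck right outcome predicts a
# positive log choice ratio, i.e. a preference for the left key.
```

Equal requirements predict indifference (a log ratio of zero), which is what the identical red and green components of Experiment 1a showed.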

In the article Sunk cost: Pigeons (Columba livia), too, show bias to complete a task rather than shift to another, pigeons were tested to see whether they would complete a task or switch to another one. There were 4 experiments in the study. Experiment 1 had 8 pigeons, which were kept in cages. All the pigeons had previous experience in unrelated studies involving simple simultaneous discriminations and matching-to-sample discriminations (Pattinson & Zentall, 2012). The procedure took place in an operant chamber. The pigeons received pre-training in which they had to peck a colored key 30 times for reinforcement. In each 90-trial session, each color (green or red) was presented 15 times at each of the three response-key locations. In regular training, after 5, 10, 15, 20, or 25 pecks to the center key, that key was turned off and one of the side keys was presented with the 30-peck color; completing this requirement produced reinforcement. On the test trials, the pigeons were first presented with the high-response color on the center key. After up to 25 pecks, it was turned off, and this time the keys on both sides were lit up. The experiment found that the pigeons tended to complete the 30-peck requirement even when that choice was not optimal and required investing more work and time than switching to the fixed 15-peck alternative would have. This suggests that pigeons, like humans, show a bias to stay with an initial investment. In Experiment 2 the subjects and apparatus were the same as in the first experiment. The procedure involved 30 trials each of two trial types: forced and choice. In choice trials the pigeons were presented with the 30-peck color on one of the two side keys and the 15-peck color on the other; there was no initial investment on the center key as in Experiment 1. Reinforcement occurred after the same number of pecks as in the first experiment. After a single peck to either the required key or the 15-peck key, the keys were no longer lit. On forced trials, only one side key was lit. This went on for 7 sessions. The results showed that when no initial investment was required, the pigeons initially demonstrated a preference for the 15-peck alternative. Experiment 3 used 4 pigeons tested in a chamber with 3 circular keys separated by a grain feeder. The left and right keys could be red or green, and the center key white only. Each pigeon was trained to peck the left (green) and right (red) response keys, and the number of pecks required for reinforcement was gradually increased to 30. There were six kinds of training trials in this experiment: 30-peck trials, 10-peck trials, choice trials with no investment, choice trials with a fixed 10-peck investment, choice trials with a fixed 15-peck investment, and choice trials with a fixed 20-peck investment. The results showed that with no initial investment, all the pigeons preferred the fixed 10-peck alternative, whereas with an investment they preferred the 30-peck requirement (Pattinson & Zentall, 2012). For the last experiment, the number of pigeons and the procedure were the same as in Experiment 3, and the results were also the same. This article likewise shows that pigeons would rather stick with their existing investment than shift to another task. This decision tells us that some animals do better staying with a task instead of switching. Cache decision making is the topic of my next article.

Preston and Jacobs (2015) examined the effect of a dominant competitor species on the caching behavior of Merriam's kangaroo rat. In the first experiment, there were 8 male rats. The test was conducted in an arena with 4 white walls; a dish of seeds was placed at the boundary between the two sides of the arena, one of which was bare while the other was decorated. There were three caching trials: premanipulation, manipulation, and postmanipulation. In the first, each subject was given 100 shelled sunflower seeds to cache; the experimenter released each subject into the arena on a randomly determined side and left the room. After caching, the subject was removed and returned to its home cage with a new supply of oats and lettuce. The position of all seeds was recorded, and all seeds were replaced for the next trial. In the second, subjects were given no new seeds, but all seeds from the premanipulation were available in their prior locations; each subject was released on the side opposite the one used in the previous trial and given time to continue the caching session. In the third, the procedure was identical to the premanipulation: all previous caches were removed, 100 new seeds were given to each subject, and the arena was cleaned to eliminate any cues from the competitor's prior presence. On average, subjects cached 14% of their seeds on the bare side in the premanipulation, 33% in the manipulation, and 56% in the postmanipulation (Preston & Jacobs, 2015). In Experiment 2 the methods and procedure were the same as in Experiment 1. In the results, most subjects preferred to cache on the rich side of the arena in the premanipulation, and only one control subject cached exclusively on the bare side. With the majority of the caching happening in the postmanipulation trial, this tells us that after learning something, the animals were more comfortable with the task. Like several of the articles I have discussed, the next one also concerns dopamine and its effects on decision making.

According to Salamone's group of researchers, forebrain dopamine (DA) systems are thought to be a critical component of the brain circuitry regulating behavioral activation, work output during instrumental behavior, and effort-related decision making (Salamone et al., 2009). The article describes a novel effort-discounting task that modifies a previously developed T-maze choice procedure (Salamone et al., 2009). Each arm of the maze contained a different amount of food as reinforcement, and the rats had to climb a barrier to reach the larger reward. With the choice between climbing the higher barrier for more food or taking a smaller amount with no barrier, the researchers could assess the effects of dopaminergic drugs. This article's experiments were similar to those in my first article. Dopamine, as an important neurotransmitter, has a great effect on animals' decisions. The rats had a choice, and with food as the end goal, they had to decide whether to go for the easy food or work for the larger reward. Decisions like these can show how large a part dopamine plays in the final decision.

Another article was called Age Differences in Strategy Selection and Risk Preference During Risk-based Decision Making. This study used 22 rats housed individually in guinea pig containers, and the experiment lasted about 4 months. The rats were tested in a chamber with white noise playing; there was a food cup with a lever on either side of it and a light above each lever. The reward was liquid vanilla Ensure delivered into the food cup. The rats were first trained on hippocampus-dependent and hippocampus-independent versions of the Morris swim task for 4 consecutive days, with 6 trials a day separated into 2 blocks with a 20-minute break in between. There were 7 different locations from which the rats were released, and a small platform in the middle of the water, colored so that the rats could not see it through the water. The rats then went through 6 trials in which the platform was clearly visible and was moved for each trial; all the rats could find the platform within 20 seconds of being released. The rats were then tested in the chambers. They were given vanilla Ensure in their food beforehand to help reduce neophobia. They had 3 days of magazine training in which both levers were available and food rewards were given when the rats poked their noses into the food cup. The rats learned to press the lever within 5 days of training. After the lever-pressing task, there was a reward task in which pressing one lever delivered the vanilla Ensure and pressing the other delivered a reward 4 times the regular size. Each session contained 36 trials and was given once per day. The trials were broken into 2 blocks of 12 forced trials, in which only one lever was available, and 10 free-choice trials, in which both levers were available. A light appeared above the designated lever and went off when the rat responded to that lever, with food given right after. It was found that the aged rats learned or modified their choices toward the large-reward lever more slowly than the young rats did, although both young and aged rats learned to discriminate between the lever associated with the small reward and the one associated with the large reward (Samson et al., 2015). Age has a lot to do with making decisions: older animals are less likely to work hard for rewards. Although pressing a lever was not a strenuous task, the old rats were still slower than the younger ones. My next article is about discounting functions studied with concurrent-chains procedures.

Researchers used a concurrent-chains procedure within sessions, combined with an adjusting-amount procedure across sessions, to determine the present subjective values of food reinforcers to be obtained after a delay (Vanderveldt et al., 2016). The study used 10 male pigeons that had previous experience with discounting procedures. The experiments were conducted in two chambers, each with two response keys mounted on a panel above the floor; the keys could light up white, red, or green, and a clicker provided auditory feedback. A triple cue light in the middle of the panel could light up green, yellow, or red, and a pellet dispenser behind the panel delivered the food. The procedure had two control conditions and two experimental conditions, all using a concurrent-chains procedure. During the initial link of the chain, both keys were illuminated with white light; the red and green keys were associated with either the smaller, immediate reinforcer or the larger, delayed reinforcer. In the first control condition, the pigeons chose between 32 pellets delivered immediately and 32 pellets delivered after a 10-s delay. In the second control condition, the pigeons chose between 16 pellets and 32 pellets, both delivered immediately; this condition ran for 14 sessions. The experimental condition had two phases, each consisting of five conditions. In the first phase the delayed reinforcer was 32 food pellets, and in the second phase it was 16 food pellets. Each of the two delayed amounts was studied at five delays, and in both phases each pigeon experienced the five delays in a different order. It was found that as the delay to a reinforcer increased, its present subjective value decreased, and the data were well fitted by a hyperbolic discounting function (Mazur, 1987). This result supports the combination of concurrent chains with an adjusting-amount procedure as appropriate for studying delay discounting. The article concluded that even with a delay, the value of the food reward remains high. Animals like pigeons would not work for something like praise, so food is the best way to ensure they will do the task.
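The hyperbolic discounting function attributed to Mazur (1987), V = A / (1 + kD), can be sketched directly; the discount-rate parameter k below is an illustrative assumption, not a value estimated in the article.

```python
def hyperbolic_value(amount, delay, k=0.5):
    """Mazur's hyperbolic discounting: present subjective value
    V = A / (1 + k * D) for an amount A delivered after delay D."""
    return amount / (1.0 + k * delay)

# The subjective value of 32 pellets shrinks as the delay grows,
# while an immediate reward keeps its full value.
```

This is the shape the adjusting-amount procedure traces out: at each delay, the procedure finds the immediate amount judged equal in value to the delayed one.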

Effects of environmental enrichment on decision-making behavior in pigs (Sus scrofa) was another article; it looked at an animal's emotional state, potentially modulated by environmental conditions, and its effects on cognitive processes such as interpretation, judgement, and decision-making behavior (van der Staay, 2017). The pigs chose between two alternatives and could make advantageous or disadvantageous choices, where advantageous, low-risk choices deliver smaller but more frequent rewards, whereas disadvantageous, high-risk choices yield larger but less frequent rewards (van der Staay, 2017). The study had 20 male piglets, housed in either a barren or an enriched environment, that were tested in a pig gambling test apparatus. There were two goal boxes, each with a food bowl covered by a plastic ball. The pigs first had to be habituated to the apparatus and to M&M's, which served as their reward, over 3 sessions per day. The M&M's were scattered in the corners, and the pigs were encouraged to explore to get used to the space. A plastic ball was hung above the food bowl containing the M&M's, and the pigs were trained to push the ball out of the way to get the food; the ball was lowered in each session until it eventually covered the goal bowl completely. The pigs were then rewarded only in the central food bowl. Next, both goal boxes were open and the pigs could choose either box; 10-trial sessions were held for 6 days. It was found that housing conditions affected performance: barren-housed pigs performed better and made more advantageous choices than enriched-housed pigs, whereas no differences between the two groups were seen during the retention phase. The barren-housed pigs also had higher hair cortisol levels, which suggested that the barren environment was more stressful than the enriched one. Environmental enrichment is a huge part of animals' overall well-being, and evidence of this has appeared in many articles. This particular article shows that housing conditions and the stress that goes with them, indexed by cortisol, shaped the pigs' choices. Not having an enriched environment may force an animal to make riskier decisions and fend for itself, whereas animals in an enriched environment most likely have everything they need and are fed, bathed, and treated with care. This can make it harder for them when facing a decision like the ones in the article, because they are not used to doing things like this on their own.
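The contrast between advantageous and disadvantageous options in the gambling task comes down to expected value; the reward sizes and probabilities below are hypothetical numbers chosen to illustrate the structure, not figures from the study.

```python
def expected_value(reward, probability):
    """Long-run average payoff of an option paying `reward` with `probability`."""
    return reward * probability

# Hypothetical options: a low-risk choice paying 1 M&M on 80% of trials
# outearns a high-risk choice paying 3 M&M's on 20% of trials.
low_risk = expected_value(1, 0.8)
high_risk = expected_value(3, 0.2)
```

Under numbers like these, consistently picking the low-risk option is the advantageous strategy, even though the high-risk option occasionally pays more.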

With animals having the ability to make decisions due to certain drugs and neurotransmitters like dopamine and the development of the prefrontal cortex, my initial hypothesis was supported by the evidence in these articles. When animals are tested to see whether they can make their own choices, for the most part they can. More studies should be done to see whether other factors, such as particular chemicals or developmental processes, go into making decisions.
