Seventy first-year psychology students took part in the experiment, all from the University of Bath, with ages ranging from 18 to 40. The sample comprised nine males and 61 females. Twenty-three had no previous psychology knowledge, in that they had not taken psychology at A-level, compared to the 47 who had, although all had completed background reading on the subject before the course.
Design and Materials
A quasi-experimental design was used, as there was no control over whether or not participants had taken psychology A-level. In order to test the relationship between previous psychology knowledge, confidence, and performance in a psychology knowledge test, participants completed a 40-question multiple-choice test (Appendix 1) based on one devised by McCutcheon (1986). The test used in this study was a replication of the version used by Furnham (1996). It should be noted that certain questions from the original Furnham test were removed because they were angled at an American audience, were ambiguous, or were out of date.
The 40 questions covered a wide variety of psychology in order to test psychological knowledge fairly. Performance on the test was one of the dependent variables in the quasi-experimental hypotheses; the other was participants' judgment of confidence. Each participant was asked to write their expected score on the test paper, i.e. give a judgment of how many answers they thought they had got correct, so that the confidence of each participant could be measured. In order to ascertain whether or not a participant had previous psychology knowledge, each student was required to note on the test paper whether or not he/she had taken psychology A-level. This was the independent variable.
Statistical analysis (Pearson's correlation for the correlational hypotheses and independent t-tests for the quasi-experimental hypotheses) was used to test the various correlations and relationships between previous knowledge, judgment of confidence, and score on the psychology test.
Once briefed, participants were given the answer sheet for the 40-item multiple-choice test, on which to record their answers (each question had four possible answers to choose from). They were first instructed to write on a blank section of the paper whether or not they had taken psychology A-level, by writing simply “did psychology A-level” or “didn’t do psychology A-level”. The questions were then read out to the group; around twenty seconds was given for participants to think, after which the question was read out again. Participants then had around ten seconds to decide on an answer.
They marked either a, b, c or d as their answer on the test sheet. Once all questions had been asked, participants were asked to write their expected score on the paper before receiving their real mark. The papers were then self-marked as the correct answers were read out, and the total mark was written on the sheet before collection for data analysis. Pearson's correlation was used to test hypotheses 1, 2 and 3, and independent t-tests were used to test hypotheses 4 and 5.
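The analysis procedure described above can be sketched in Python using SciPy. All scores and variable names below are invented for illustration; they are not the study's data.

```python
# Hypothetical sketch of the analysis: Pearson's correlation for the
# correlational hypotheses and an independent t-test for the
# quasi-experimental hypotheses. The data below are invented.
from scipy import stats

# 1 = took psychology A-level, 0 = did not (hypothetical coding)
a_level   = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
estimated = [16, 18, 15, 14, 13, 17, 15, 12, 19, 14]  # expected scores
real      = [18, 20, 16, 14, 13, 19, 15, 13, 21, 15]  # actual scores

# Correlation between judgment of confidence and performance
r_conf_perf, p_conf_perf = stats.pearsonr(estimated, real)

# Independent t-test: real scores of A-level vs non-A-level students
a_level_scores    = [s for g, s in zip(a_level, real) if g == 1]
no_a_level_scores = [s for g, s in zip(a_level, real) if g == 0]
t_stat, p_ttest = stats.ttest_ind(a_level_scores, no_a_level_scores)
```

A p value below the chosen significance level (e.g. .01) would indicate that the observed correlation or difference between means is unlikely to be due to chance.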
Table 1 (shown below) presents descriptive statistics for the entire group: the minimum and maximum estimated and real scores, the mean of each, and the standard deviation of each. The results must now be explained in non-statistical terms in order to understand their psychological implications. The first results to consider are the overall statistics for the entire group (see Table 1 in the results section).
From these, it can be seen that the mean real score (16.9) was higher than the mean estimate (15.6), indicating that judgments of confidence were, on average, too low. In other words, participants on average scored higher than they thought they would, showing a lack of confidence. The standard deviations of the estimated and real scores were roughly the same, so no conclusion can be drawn about differences in the spread of the two sets of scores.
Yet does this apply to the entire group, or do confidence levels vary according to whether or not the participant took psychology A-level? Table 2 in the results section makes this question easier to answer. Those who did not take A-level psychology estimated, on average, a score of 15.0, and the mean of their real scores was also 15.0. The standard deviation was low in each case, indicating that scores were not widely dispersed around the mean.
Participants who took psychology A-level had a mean estimated score of 15.9 and a mean real score of 17.8. The standard deviations of both estimated and real scores were higher than those of participants who did not take A-level, showing greater dispersal of results around the mean; in other words, the scores (estimated and real) of those who took psychology A-level were more variable than those of participants who did not. Those who took A-level were also less accurate in their estimates: their mean estimate was 15.9 (higher than the 15.0 mean for those who did not take A-level), yet their mean real score was 17.8. This suggests, on first impression, a less accurate judgment of confidence.
Figure 1 (see results) shows the statistical analysis of the relationship between having taken A-level psychology and performance on the test. It is first necessary to clarify the different values: r is the Pearson's correlation coefficient (strength of correlation), n is the number of entries, and p is the probability of obtaining the results by chance. The correlation (r = .319) is positive but weak. This means that having taken A-level psychology is associated with performance on the test, but only to a limited extent, as the correlation is weak. Since p < .01, the correlation is statistically significant, meaning it is unlikely to have occurred by chance.
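Correlating a dichotomous variable (A-level: yes/no) with a continuous one (test score) amounts to a point-biserial correlation, which can be computed as an ordinary Pearson's r on 0/1-coded group membership. A minimal sketch, with invented data:

```python
# Hypothetical sketch: Pearson's r between 0/1-coded A-level status and
# test score (equivalent to a point-biserial correlation). Data invented.
from scipy import stats

a_level = [1, 1, 0, 0, 1, 0, 1, 0]          # 1 = took A-level
score   = [18, 17, 14, 15, 19, 13, 16, 14]  # test score out of 40

r, p = stats.pearsonr(a_level, score)
# A positive r means A-level students tended to score higher; a p value
# below the significance level suggests the result is not due to chance.
```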
Figure 2 shows the correlation between judgments of confidence and performance on the test. There is a strong positive relationship between the two variables (r = .600), meaning that, other things being equal, the higher the estimated score, the higher the performance on the test. This correlation is statistically significant (p < .001), so the strong positive relationship is unlikely to be due to chance.
To investigate whether there was a correlation between having taken A-level psychology and judgments of confidence, Pearson's correlation was again used, giving the results shown in Figure 3 in the results section. There is almost no relationship between the two variables (r = .095), and the correlation is not statistically significant.
In order to test the two remaining (quasi-experimental) hypotheses, an independent t-test was carried out for each. Figure 4 (see results section) shows the t-test for the difference in scores between A-level psychology students and non-A-level students. This set out to determine whether the difference between the means of the two groups was significant or occurred by chance. The difference between the means is indeed significant.
Figure 5 shows the t-test for differences in judgments of confidence between A-level psychology students and non-psychology students; from it, it can be seen that the difference between the means is not significant.
Are these results directly due to the tests carried out, or did certain design features of the test distort them? Firstly, McCutcheon’s test, although covering a wide variety of topics, claims to cover all areas of psychology but does not; clinical psychology, for example, was not represented. The test also did not allow for varying levels of correctness: participants could only be right or wrong, which does not allow for varying levels of knowledge within a topic. A test that could assess varying levels of knowledge would therefore be useful, e.g. one in which participants could receive partial credit for a nearly correct answer.
The independent variable for the experiment was whether or not students had taken psychology at A-level, hence dividing those who had “previous knowledge” from those who did not. Yet there are other factors to take into account here. Firstly, participants who had taken psychology A-level had experienced different teaching methods and followed different course outlines. This means that those who did take psychology A-level had varying levels of knowledge, yet the design included no test for this.
Secondly, all students were required to do background reading on the subject before arriving on the course. This means that those who did not take psychology A-level, and so had “no previous knowledge”, would in fact have had a certain level of knowledge. Similarly, all participants had completed a semester of psychology at the University before taking the test, so all would have had some degree of knowledge. Selecting participants who had never studied or read psychology would perhaps be a better way to identify those with no previous knowledge.