Incentivizing Excellence: How Rewards Can Boost Year 7 NAPLAN Performance
What happens when you pay Year 7 students to do better on NAPLAN? Jayanta Sarkar and Dipanwita Sarkar found out.
Next month, we are expecting the results from the annual NAPLAN tests, which students in years 3, 5, 7 and 9 sat earlier this year.
Each year, the tests are widely promoted as a marker of student progress and are used to inform decisions about what is needed in Australian schools.
There has been increasing concern about students’ performance in these tests. Last year, for example, only two-thirds of students met the national standards, prompting headlines about “failed NAPLAN expectations”.
But what if students weren’t trying as hard as they could in these tests? Our research suggests that may be the case.
In our new study, Year 7 students were given small financial rewards if they reached personalised goals in their NAPLAN tests. We found this improved their results.
Who did we study?
Rewarding students for doing better on tests is not a new concept, and it is unsurprisingly controversial in education policy.
But we wanted to explore this to better understand students’ motivation and effort while taking a test.
In our study, we used data on real students taking NAPLAN tests. We selected this test because it is a national test where performance improvement is vitally important for schools, yet the stakes are low for students. NAPLAN results do not impact their final grades.
We looked at three cohorts of Year 7 students, in 2016, 2017 and 2018, observing 537, 637 and 730 tests in those years respectively.
The students were at a coeducational public high school in South East Queensland and mostly came from socio-educationally disadvantaged backgrounds.
How were the students rewarded?
In each year, students sat four tests. Before each test, students were given a personalised target score based on their Year 5 NAPLAN results. There was then a different approach for each test:
- in the conventions of language (spelling, grammar and punctuation) test, students were given no incentive to reach their target score. This provided the benchmark for comparison in our study.
- in the writing test, students were given a “fixed” incentive. This was a canteen voucher worth A$20 if they reached their target score.
- in the reading and numeracy tests, students were given either a “proportional” incentive or a “social” incentive.
For the proportional incentive, they got a $4 voucher for every percentage point over and above their target score, up to a maximum of $20. For the social incentive, they were organised into groups of about 25. They would each get a $20 voucher if their group had the highest average gain in scores between Year 5 and Year 7 of all the groups.
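To make the three payout rules concrete, here is a minimal sketch in Python. The function names, signatures and example scores are our own illustration of the rules as described above, not anything taken from the study itself:

```python
# Illustrative sketch of the three payout rules described above.
# Function names and example numbers are hypothetical; the study
# reports the rules, not an implementation.

def fixed_incentive(score: float, target: float) -> float:
    """Writing test: a flat A$20 canteen voucher for reaching the target."""
    return 20.0 if score >= target else 0.0


def proportional_incentive(score: float, target: float) -> float:
    """Reading/numeracy: $4 per percentage point above target, capped at $20."""
    points_above = max(0.0, score - target)
    return min(20.0, 4.0 * points_above)


def social_incentive(group_gains: list[float], rival_best_avg: float) -> float:
    """Each group member gets $20 if their group's average Year 5 to
    Year 7 gain is the highest of all the groups."""
    group_avg = sum(group_gains) / len(group_gains)
    return 20.0 if group_avg > rival_best_avg else 0.0


# A student finishing 3 percentage points above their target:
print(fixed_incentive(73, 70))         # 20.0
print(proportional_incentive(73, 70))  # 12.0
```

Note that under the proportional rule, a student five or more percentage points above target hits the $20 cap, so its maximum payout matches the fixed voucher.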
To ensure that any improvement in test scores could only occur through increased effort (and not increased preparation), the school principal announced the incentives in a prerecorded message just minutes before the start of each test.
The school handed out the rewards 12 weeks after the tests, once the results were released.
We found scores improved with rewards
Our research found scores improved when students were offered a reward, particularly for tests done in 2017 and 2018.
When compared with the gains in conventions of language tests (where students were not given any incentive), the average scores improved by as much as 1.37% in writing, 0.81% in reading, and 0.28% in numeracy.
While these may not seem like huge gains, our analysis showed the rewards produced improvements above and beyond those seen at similar schools that didn’t offer incentives. The results are also statistically precise, giving us a high level of confidence these gains were due to the rewards.
What reward worked best?
We saw the largest impact when students had a fixed incentive, followed by the proportional incentive and then the group incentive.
This is perhaps not surprising, as the fixed incentive paid the highest reward for an individual’s effort. But it is interesting the biggest gain was in writing, given this is an area where Australian students’ performance has dropped over the past decade, with a “pronounced” drop in high school.
It is also surprising that students’ individual efforts still increased when the reward depended on other students’ performances. All the test takers were new to high school and would not have had long to establish the kind of group cohesion that typically makes group rewards effective.
Another surprise is that the rewards had an impact even when they were delayed by 12 weeks. Previous research suggests that rewards would need to be given to students immediately or soon after their efforts in order to have an effect.
What does this mean?
Our findings suggest that small monetary incentives can motivate students to increase their test-taking effort in multiple subjects.
This is not to say we should pay students for their test performances more broadly. But it does suggest that poor performance in low-stakes tests may reflect students’ efforts rather than their ability or their learning.
These findings raise questions about the extent to which we use tests such as NAPLAN – and whether parents, teachers and policy-makers need to look further if they are making important decisions based on these test results.
Jayanta Sarkar, Associate Professor of Economics, and Dipanwita Sarkar, Associate Professor of Economics
This article is republished from The Conversation under a Creative Commons license. Read the original article.