“Shy Prayut” Voters or “Scared Prayut” Voters? Election Polls: Accuracy and Bias

by Joel Sawat Selway

Over the last month, a flurry of outlets have released their predictions for the upcoming Thai elections, scheduled for May 14, 2023. All but one put Pheu Thai as the frontrunner, though the predicted percentage of the vote varies from 35.75% to 49.85%. One poll had the Move Forward Party in front with 50.29% of the vote, 14% higher than the next highest prediction. Which of these various sources should you trust? In the first of this two-part series, I explore the performance of pollsters in the last elections, in 2019. Specifically, I explore whether Thai polling agencies fell prey to a phenomenon similar to that seen in the 2016 US elections: “Shy Prayut” voters, that is, respondents who did not want to admit that they were going to vote for the leader of the 2014 military coup.

Figure 1. Average of Thai polling in the lead-up to the 2023 Thai General Election

Public opinion surveys are imperfect attempts to measure the voting intention of the nation. Even in countries with completely free and fair elections and stable party systems, polls can differ. Pollsters in Thailand’s 2019 elections were criticized for their inability to predict the outcome, but how far off were the predictions? Political scientists measure this with something called the average absolute error (AAE). They take the absolute difference between the percentage of votes a party actually received in the election and the percentage predicted by the poll, then average this across all of the parties.

Given the high number of parties in Thailand’s last elections, I calculated the AAE across only the top four parties: Palang Pracharath (PPRP), Pheu Thai (PT), Future Forward (FFP), and the Democrat Party (DP).
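For readers who want to reproduce the measure, a minimal sketch of the AAE calculation in Python follows. The party labels match the four parties above, but the vote shares are hypothetical placeholders, not actual 2019 results or figures from any of the polls discussed here.

```python
# Minimal sketch of the average absolute error (AAE) calculation.
# The percentages below are hypothetical placeholders, not real 2019 figures.

actual = {"PPRP": 23.0, "PT": 22.0, "FFP": 17.0, "DP": 11.0}      # election result (%)
predicted = {"PPRP": 20.0, "PT": 30.0, "FFP": 10.0, "DP": 14.0}   # poll prediction (%)

# Absolute error for each party, then the average across the parties.
errors = [abs(actual[party] - predicted[party]) for party in actual]
aae = sum(errors) / len(errors)

print(f"AAE = {aae:.2f} percentage points")
```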

What is a good AAE score? In the 2016 US presidential elections, which produced a shock victory for Donald Trump and whose polling was regarded as the worst in recent memory, the AAE was a little over 3%. The best Thai polling company, Bangkok Poll, came close to that, but was still higher, with other polling agencies producing even higher AAEs. Why did they perform so badly?

Media Reporting and Undecided Voters

Numerous analyses of the 2016 US polls argued that the media’s interpretation of the polls was just as much to blame as the technical aspects of the polls themselves. Eight states, with more than a third of the electoral votes needed to win the 2016 US presidency, had polls showing a Clinton lead of three points or less: in other words, the media should have presented these states as having a vote differential between Trump and Clinton indistinguishable from zero. Likewise, a review of the Thai media’s coverage of the 2019 pre-election polls revealed that the huge number of undecided voters identified in the surveys was under-emphasized relative to the more concrete voting intentions of decided voters. The problem? Some of these polls showed an undecided rate of over 50% even a month before the election. A survey by Bangkok Poll administered on March 4–6 still found that 21.6% of voters were undecided!

I calculate the AAE of four survey firms in the months preceding the election. Table 1 displays the final poll results released by each firm. We can see that the survey closest to election day, by Bangkok Poll, had an AAE just a little higher than that of US pollsters in their predictions of the 2016 US presidential elections: impressive, given that Thai pollsters faced many more sources of uncertainty.

Table 1 also demonstrates that the higher the proportion of undecided voters in a survey, the higher the AAE. Bangkok Poll’s final survey, conducted four days before election day, reported no undecided voters, while NIDA, polling in the previous month, reported 4.78% of respondents undecided. This correlation makes sense and suggests that timing and late deciders might be a good explanation for “poorer” predictions.

Table 1. Poll Accuracy for 2019 Thai Pollsters, by Average Absolute Error (AAE).

Listed polls are each pollster’s survey released closest to the March 24, 2019 general election. Figures are adjusted to calculate party percentages excluding undecided voters.
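As an illustration of that adjustment, the sketch below (again in Python, and again using hypothetical figures rather than numbers from any of the listed polls) renormalizes the decided-voter shares after excluding undecided respondents.

```python
# Minimal sketch of the undecided-voter adjustment described in the table note.
# Raw shares are hypothetical placeholders, not figures from any listed poll.

raw = {"PPRP": 18.0, "PT": 28.0, "FFP": 12.0, "DP": 10.0}  # decided respondents only (%)
undecided = 32.0                                            # undecided respondents (%)

decided_total = 100.0 - undecided  # equals sum(raw.values()) in this example
adjusted = {party: share / decided_total * 100 for party, share in raw.items()}

for party, share in adjusted.items():
    print(f"{party}: {share:.2f}% of decided voters")
```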

An analysis of Bangkok Poll’s five surveys (Table 2) reveals that their earliest poll, administered on February 12–13, was off by 5.20% on average, a larger error than NIDA, Rangsit, and the Financial Times produced in their own February polls. Importantly, we can see that Bangkok Poll improved over time and that this improvement is strongly related to a decrease in undecided voters in successive surveys. Three of the four pollsters in Table 1 did not administer a survey on vote choice after mid-February, over a month before the elections took place. Whatever their reasons for that decision, it seems that Thai voters simply hadn’t made up their minds yet. It is possible, then, that had these other firms taken polls in late February or early March, they too could have improved their predictions.

However, Bangkok Poll also seemed to be a lot better at capturing undecided voters. In Table 2 they report that two-thirds of voters had not made up their minds. Despite this, Bangkok Poll’s AAE is still better than those of NIDA and Rangsit, whose surveys reported much lower proportions of undecided voters.

Table 2. Comparison of Bangkok Poll Average Absolute Error (AAE) over time

Source: Bangkok Poll, 2019

Sensitivity bias

Another common source of bias in surveys is sensitivity bias. Referred to as the “Shy Trump” hypothesis in the US, the historically high error of US polls in 2016 was put down to the fact that some voters were simply too embarrassed to admit in pre-election surveys that they liked Trump. Thus, even after undecideds are accounted for, polls could not accurately predict which candidate voters preferred. Can we similarly assess whether there was a “Shy Prayut” effect going on in Thailand? Were respondents embarrassed to admit that they preferred the military-aligned party, PPRP, and its nominee for prime minister, the leader of the 2014 coup, Prayut Chan-o-cha?

If there were a “Shy Prayut” effect, we would expect the polls to report a lower percentage for PPRP than the party actually received, even after all undecideds are accounted for. However, this is only the case for the Financial Times poll, in which just 9% of voters said they preferred PPRP in mid-February. The final polls for both Rangsit and NIDA are quite close to PPRP’s final vote percentage, especially when taking undecided voters into consideration. In Bangkok Poll’s last three surveys (see Table 2), by contrast, the vote proportion for PPRP was actually higher than what the party polled on election day. This suggests respondents may instead have been afraid NOT to show support for Prayut. Given the authoritarian nature of the political environment at the time, this is highly possible. The predicted vote percentage for the Democrats (the other conservative party) is also consistently more generous than their actual vote tally.

The Uncertain Political Environment

Of course, the emergence of a military-aligned party with the 2014 junta leader as its prime ministerial candidate was just one element of uncertainty in the 2019 elections. New electoral rules, including an opaque formula for allocating party-list MPs; a military-appointed senate that jointly elected the prime minister; and other new entrants to the party system, such as the progressive FFP, all served to make the 2019 elections highly uncertain. How did this political uncertainty affect pollsters’ ability to accurately predict the vote?

First, the surveys clearly had a hard time picking up support for small parties. Notice that every single one of Bangkok Poll’s surveys overestimates support for Pheu Thai, whereas the pollsters had a much harder time assessing support for Future Forward, the newest of the larger parties: NIDA and Rangsit are off by 9–10% for that party. Of the four largest parties, this contributes the most to pollsters’ AAE. This inability to accurately assess small parties may be due to the uncertain nature of the political environment, but it may also have something to do with the procedural quality of the various polling agencies, a topic I consider in the next post.

Second, the political environment was heavily stacked against a Pheu Thai victory. Respondents in the surveys might have wanted to express their preference for Pheu Thai even if circumstances meant they were not going to vote for them. Some voters simply could not vote for Pheu Thai. Afraid of being dissolved and shut out of running altogether, Pheu Thai made the strategic decision to essentially split into two parties prior to the 2019 elections. They thus organized a pre-electoral pact with the Thai Raksa Chart party not to contest over 100 constituencies. When Thai Raksa Chart was disbanded in February for nominating Princess Ubolratana as its prime ministerial candidate, voters in those constituencies were left unable to vote for Pheu Thai. This may also partially explain the higher-than-predicted support for Future Forward and the Bhumjaithai Party.

Other voters may have been resigned to the reality of unfair elections but wanted to express support for Pheu Thai in these surveys. Changes to the electoral system meant that voters could not split their vote as in past elections, casting a party-list vote for Pheu Thai while voting for a candidate of a different party in their local constituency. Perhaps more importantly, the post of prime minister was to be decided in a joint session with the military-appointed senate. In short, even if Pheu Thai polled the highest number of votes, it was not guaranteed to control the government. Voters may have still preferred Pheu Thai, and seen the surveys as a vehicle to express that, but ultimately voted for another party.

Conclusion

Compared to their US counterparts, Thai pollsters seem to have done okay in the face of some pretty extreme circumstances. The fact that only one survey firm released results after mid-February perhaps indicates their own ambivalence about the value of polls at all. The one firm that did persist, Bangkok Poll, releasing results four days before the election, did not perform too badly at all. But how much of this was due to its setup and methodology? Was Bangkok Poll simply better prepared to handle the 2019 elections? Indeed, polling methodology does have some well-worn good practices that differed amongst Thai pollsters. Do they explain why Bangkok Poll performed so well in 2019? Unfortunately, Bangkok Poll has not published any poll predictions for the 2023 elections.

Note: I excluded the two polls released on the day of the election, which simply predicted the number of seats. Neither of these outlets published any procedural information, such as the sample size or estimated error. If that information appeared on their websites at the time of the election, it is no longer available. In terms of seat predictions, these two polls were significantly off, mis-predicting the number of seats by an average of 37.5% (Suan Dusit) and 33.2% (SuperPoll). Comparing predicted numbers of seats is a different exercise from predicting vote proportions, so these are left out of this analysis.
