The physics and maths of keeping elections fair and representative | Explained

About 60 national elections involving two billion people are being held in 2024, including the biggest of them all, the national election underway in India, and the election to the U.S. presidency. Across the world, elections are a volatile mix of emotions, aspirations, competing ideologies, and sometimes even violence. It might then surprise many that, despite the cacophony, there is science behind electoral processes.

About 2,500 years ago, the earliest form of election in ancient Athens was a system that ultimately depended on a candidate's luck: among all the suitable candidates, one was chosen at random. Since the outcome rested on chance, campaigning or influence couldn't help a candidate.

Tenth-century Chola inscriptions at Uthiramerur in Tamil Nadu reveal the practice of choosing village representatives through a ‘Kudavolai’ system. The final choice was made by randomly picking one among the candidates the people had voted for.

What is the ‘first past the post’ system?

Today, social choice theorists and mathematicians who study elections call this the approval voting system followed by a random choice. As a means of electing candidates, this process fails to reflect the will of the people. If this is a flawed process, what would be the right way to elect candidates? Surprisingly, mathematics tells us that there is no simple answer to this question.

The first-past-the-post (FPTP) system followed in India, the U.S., the U.K., and several other countries has many drawbacks. Critics have pointed out the disproportionate difference between the popular vote share and the seat share in many Parliaments. For example, in the 2015 Delhi Assembly elections, the Aam Aadmi Party received 54% of the popular vote but won 96% of the seats, whereas the Bharatiya Janata Party won 32% and 4%, respectively.

Second, winners in the FPTP system often secure far less than 50% of the vote share. No government in India, irrespective of its strength in the Lok Sabha (i.e. number of seats), has ever surpassed a 50% vote share. In the U.K., only one government since 1918, the one elected in 1931, has commanded more than 50% of the vote. So by the vote-share metric, rather than by parliamentary seats, India and the U.K. have always been ruled by "minority" governments. Expectedly, social choice theorists disfavour the FPTP system, though it continues to find wide use for its simplicity.
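The gap between vote share and seat share follows directly from how FPTP decides winners seat by seat: whoever polls the most votes in a constituency takes it, whether or not they cross 50%. A minimal sketch, using made-up three-constituency results (all names and numbers hypothetical), shows how a party with roughly 40% of the overall vote can take every seat.

```python
from collections import Counter

def fptp_results(constituencies):
    """Tally seats and overall vote share under first past the post.

    `constituencies` is a list of dicts mapping party -> votes in one seat.
    The party with the most votes in a constituency wins that seat,
    regardless of whether it crossed 50%.
    """
    seats = Counter()
    votes = Counter()
    for result in constituencies:
        winner = max(result, key=result.get)
        seats[winner] += 1
        for party, v in result.items():
            votes[party] += v
    total_votes = sum(votes.values())
    total_seats = len(constituencies)
    return {
        party: (votes[party] / total_votes, seats[party] / total_seats)
        for party in votes
    }

# Hypothetical three-seat election: party A wins every seat with ~40% of the vote.
toy = [
    {"A": 40, "B": 35, "C": 25},
    {"A": 41, "B": 38, "C": 21},
    {"A": 39, "B": 33, "C": 28},
]
for party, (vote_share, seat_share) in fptp_results(toy).items():
    print(f"{party}: {vote_share:.0%} of votes, {seat_share:.0%} of seats")
```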

What are the Condorcet and Borda systems?

Are there better alternatives? Mathematical analysis aimed at designing better electoral systems dates back to the 13th century and the work of Ramon Llull, a missionary and theologian. His book 'De Arte Eleccionis', written in Catalan, gives a detailed algorithm for a two-stage election of church officials. It ensures that the winner, when pitted against each of the other contenders, receives more than 50% of the votes and is thus the most preferred candidate. The work was lost for centuries until it was discovered in the late 1980s.

Today, Llull’s method is called the Condorcet system after the 18th-century French mathematician Nicolas de Condorcet, who rediscovered it in the 1780s. While better than FPTP, the Condorcet system can be difficult to understand and isn’t used in any national election, not least because its mechanism allows participants to prevent the election of a particular candidate. Some smaller organisations use it to elect their leaders and board members, however.
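The core idea behind Llull's and Condorcet's method is a round of head-to-head comparisons on ranked ballots, looking for a candidate who beats every other. A hedged sketch of that pairwise count follows; the candidate names and ballots are hypothetical.

```python
from itertools import combinations

def condorcet_winner(ballots, candidates):
    """Return the candidate who beats every other head-to-head, or None.

    Each ballot is a list ranking candidates from most to least preferred.
    A ballot counts for X over Y if X appears before Y on it.
    """
    beats = {c: set() for c in candidates}
    for x, y in combinations(candidates, 2):
        x_wins = sum(1 for b in ballots if b.index(x) < b.index(y))
        y_wins = len(ballots) - x_wins
        if x_wins > y_wins:
            beats[x].add(y)
        elif y_wins > x_wins:
            beats[y].add(x)
    for c in candidates:
        if beats[c] == set(candidates) - {c}:
            return c
    return None  # a cycle: no candidate beats all others

# Hypothetical ballots: B beats both A and C in pairwise contests,
# even though A has the most first-preference votes.
ballots = [["A", "B", "C"]] * 4 + [["B", "C", "A"]] * 3 + [["B", "A", "C"]] * 2
print(condorcet_winner(ballots, ["A", "B", "C"]))  # -> "B"
```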

The Borda electoral process, proposed by the French mathematician Jean-Charles de Borda in 1784 (though first described by the 15th-century German astronomer Nicholas of Cusa), is a rank-based voting system (RVS), similar in spirit to the points table of sporting tournaments like the Indian Premier League. Voters rank every candidate on the ballot paper, and through a process of vote redistribution the winner is guaranteed at least 50% of the vote. Redistribution can take several forms; the most common is to add second and even third preference votes until one of the candidates crosses a 50% vote share.
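The exact redistribution rules vary from election to election. The sketch below implements one common variant, an elimination-and-transfer count, under the simplifying assumption that every ballot ranks all candidates; the names and numbers are made up.

```python
from collections import Counter

def redistribute_until_majority(ballots):
    """Count ranked ballots; if no candidate has more than 50% of first
    preferences, eliminate the weakest candidate, transfer those ballots
    to their next surviving preference, and repeat."""
    remaining = {c for b in ballots for c in b}
    while True:
        # Each ballot counts for its highest-ranked surviving candidate.
        tally = Counter(next(c for c in b if c in remaining) for b in ballots)
        leader, leader_votes = tally.most_common(1)[0]
        if leader_votes * 2 > len(ballots):
            return leader, leader_votes / len(ballots)
        # No majority yet: drop the candidate with the fewest votes.
        weakest = min(remaining, key=lambda c: tally.get(c, 0))
        remaining.discard(weakest)

# Hypothetical ranked ballots for candidates X, Y, Z: X leads on first
# preferences, but Y crosses 50% once Z's ballots are transferred.
ballots = [["X", "Y", "Z"]] * 5 + [["Y", "Z", "X"]] * 4 + [["Z", "Y", "X"]] * 3
print(redistribute_until_majority(ballots))
```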

Are there problems with RVS?

The President of India is elected through an RVS. In 1969, none of the 15 presidential candidates secured 50% of the first-preference votes. After second-preference votes were added, V.V. Giri (who had 48% of the first-preference votes) reached 50.8% and was declared the winner, defeating Neelam Sanjiva Reddy. Like the Condorcet system, the original Borda method is complex and challenging to implement in large elections such as those in India.

In 1951, the American economist and Nobel laureate Kenneth Arrow proved that RVS can conflict with certain fairness criteria required of elections. This doesn’t imply such systems are unfair, even if occasionally the most popular candidate may fail to get elected.

Consider an RVS election with three candidates, A, B and C, and nine voters ranking their preferences. Four voters prefer B over C and prefer A over both B and C; this can be written A > B > C (4). The remaining ballots yield the combinations B > C > A (3) and C > A > B (2). This distribution means A received the most first-preference votes and C the fewest. Now suppose B withdraws from the election. In a fair election we would expect the result to remain unaffected, but that isn't the case with RVS. With the same ballots, the count now reads A > C (4 votes) and C > A (5 votes), so C has the most first-preference votes and wins. Arrow's theorem asserts that such outcomes are unavoidable in an RVS election.
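The reversal can be checked in a few lines. This sketch simply re-counts the first preferences from the profile above, before and after B withdraws.

```python
from collections import Counter

# The vote profile from the example: 4 ballots A > B > C,
# 3 ballots B > C > A, 2 ballots C > A > B (9 voters, 3 candidates).
ballots = [["A", "B", "C"]] * 4 + [["B", "C", "A"]] * 3 + [["C", "A", "B"]] * 2

def first_preference_counts(ballots, withdrawn=()):
    """Top-choice tallies after removing any withdrawn candidates."""
    return Counter(next(c for c in b if c not in withdrawn) for b in ballots)

print(first_preference_counts(ballots))                   # A: 4, B: 3, C: 2 -> A leads
print(first_preference_counts(ballots, withdrawn={"B"}))  # C: 5, A: 4 -> C leads
```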

How can maths, physics help keep elections fair?

Ironically, while the cold rigour of mathematics sheds light on the inherently boisterous election process, more grounded physics approaches draw on this very lack of order to seek universal patterns that hold irrespective of the electoral system. This is not unusual in physics.

For example, inside a balloon, billions of molecules, moving randomly and bumping against one another, conspire to produce a constant pressure that keeps the balloon puffed up. This is the central lesson of statistical physics: order can emerge at large scales even when the smaller scales are dominated by disorder.
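The balloon analogy is, at heart, a statement about averages over many random contributions: they barely fluctuate once the number of contributions is large. The toy illustration below (not a molecular simulation; the numbers are in arbitrary units) shows the spread between repeated "measurements" shrinking as the number of contributing "molecules" grows.

```python
import random

# Each "molecule" contributes a random amount; the average over many of
# them, a crude stand-in for pressure, becomes steadier as N grows.
random.seed(1)
for n_molecules in (10, 1_000, 100_000):
    averages = [
        sum(random.uniform(0.0, 1.0) for _ in range(n_molecules)) / n_molecules
        for _ in range(5)  # five independent "measurements"
    ]
    spread = max(averages) - min(averages)
    print(f"N = {n_molecules:>7}: spread between measurements = {spread:.4f}")
```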

Two decades of election data analysis have revealed emergent patterns in the distributions of quantities that matter to an electoral process. Despite the superficial chaos surrounding elections, these patterns are robust and independent of finer details such as where the election was held, the voting paradigm, or the cultural context. Conversely, the absence of such order would suggest that an election was not fair, and this can be used to diagnose and flag electoral malpractice.
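The article does not spell out which distributions are examined. One common device in this kind of analysis is to rescale a raw quantity, such as the votes received by each candidate, by its average, so that elections of very different sizes can be compared on the same axis. The sketch below only illustrates that device, using entirely synthetic data.

```python
import random
from collections import Counter

def rescaled_distribution(votes, n_bins=10, upper=5.0):
    """Histogram of votes/mean(votes) on fixed bins from 0 to `upper`,
    so that elections of very different sizes share one axis."""
    mean = sum(votes) / len(votes)
    width = upper / n_bins
    bins = Counter(min(int((v / mean) / width), n_bins - 1) for v in votes)
    return [round(bins.get(i, 0) / len(votes), 3) for i in range(n_bins)]

# Two synthetic "elections" of very different sizes drawn from the same
# underlying process: their rescaled distributions come out nearly identical.
random.seed(7)
small = [random.lognormvariate(0.0, 1.0) * 1_000 for _ in range(500)]
large = [random.lognormvariate(0.0, 1.0) * 50_000 for _ in range(5_000)]
print(rescaled_distribution(small))
print(rescaled_distribution(large))
```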

In short, while mathematical analysis helps sharpen an algorithm for the election process, a physics perspective serves to diagnose if the algorithm is fairly implemented in practice. The science of elections has a long way to go, but for millions of people across the world, the elections of 2024 provide hope that the future is in their hands.

M.S. Santhanam is a professor of physics, and Aanjaneya Kumar and Ritam Pal are doctoral students, all at the Indian Institute of Science Education and Research, Pune.
