University rankings and academic metrics that raise eyebrows


The Times Higher Education (THE) World University Rankings (WUR) 2025, released in early October 2024, have shaken public faith in the credibility of ranking Indian institutions by research quality. The Indian Institute of Science (IISc) Bangalore, undisputedly India’s top research institution for decades, has been ranked 50th with a score of 51.5. Surprisingly, institutions such as Chitkara University Chandigarh, Saveetha Institute Chennai, Shoolini University Solan, Lovely Professional University Phagwara, and Thapar Institute Patiala occupy the top five positions with scores of 88.9, 88.6, 87.2, 84.7, and 83.3, respectively. Notably, four of these five institutions are concentrated within a small geographical region spanning Chandigarh, Punjab, and Himachal Pradesh, raising questions about the methodology and fairness of the rankings.

The announcement has sparked controversy over the assessment of Indian institutions, with doubts raised about the objectivity and credibility of the ranking system. Exacerbating these concerns, several of India’s premier institutions, including the older and some of the newer Indian Institutes of Technology (IITs), chose not to participate in the rankings. Their abstention was driven by apprehensions about the transparency and fairness of THE’s evaluation processes, which they believe may lead to skewed and arbitrary rankings. This collective withdrawal underscores the broader scepticism surrounding the efficacy and accuracy of global university ranking systems.

Contradictions

The THE World University Rankings 2025 evaluated over 2,000 institutions from 115 countries and regions. The methodology assesses performance across five key areas: teaching (the learning environment); research environment (volume, income, and reputation); research quality; international outlook (staff, students, and research); and industry (knowledge transfer). The first three categories each carry a weight of approximately 30%, while the fourth and fifth contribute 7.5% and 4%, respectively.

The research ranking rests on two key areas: research environment and research quality. The research environment covers research reputation, income, and productivity, while research quality is evaluated through citation impact, research strength, research excellence, and research influence. Research reputation carries a weight of 18% and citation impact 15%, while the remaining factors (research strength, excellence, and influence) account for approximately 5% each.
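To make the arithmetic concrete, here is a minimal sketch of how such a weighted overall score is computed. The pillar weights below follow the published WUR methodology figures that round to roughly 30% each (29.5% teaching, 29% research environment, 30% research quality, 7.5% international outlook, 4% industry); the pillar scores themselves are hypothetical.

```python
# Minimal sketch of a THE-style weighted overall score.
# Weights are the published pillar weights; scores are hypothetical.
weights = {
    "teaching": 0.295,
    "research_environment": 0.29,
    "research_quality": 0.30,
    "international_outlook": 0.075,
    "industry": 0.04,
}
scores = {  # hypothetical institution, each pillar scored 0-100
    "teaching": 45.0,
    "research_environment": 20.0,
    "research_quality": 85.0,
    "international_outlook": 30.0,
    "industry": 40.0,
}
overall = sum(weights[k] * scores[k] for k in weights)
print(f"overall: {overall:.1f}")  # 48.4
```

As the sketch shows, because research quality alone carries about 30% of the total, a very high research quality score can offset a weak research environment almost one for one.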

The fundamental premise is that a robust research environment is the foundation for achieving research quality and ensures overall excellence; the two aspects should be strongly correlated. However, this premise comes under question when examining the research environment scores of the same top five universities, which are surprisingly low: 11.4, 16.0, 21.3, 14.6, and 13.9, respectively.

In contrast, IISc, ranked 50th in research quality, holds the top position in the research environment with a score of 50. This position is undisputed, as IISc is recognised as a leader in fostering a robust research environment. The low research environment scores of the top five universities further highlight that none of these institutions is well known for its research environment.
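A quick back-of-the-envelope check makes the contradiction explicit. Pairing the research environment and research quality scores quoted above for these six institutions, the correlation is strongly negative, the opposite of what the premise predicts. A minimal sketch, using only the figures cited in this article:

```python
from statistics import correlation  # standard library, Python 3.10+

# Research environment vs. research quality scores for the six
# institutions quoted in this article (the top five plus IISc).
environment = [11.4, 16.0, 21.3, 14.6, 13.9, 50.0]
quality     = [88.9, 88.6, 87.2, 84.7, 83.3, 51.5]

# If a strong environment underpinned quality, r would be positive.
r = correlation(environment, quality)
print(f"Pearson r = {r:.2f}")  # roughly -0.96 for this sample
```

Six data points prove nothing statistically, but they do show the two pillars pulling in opposite directions for precisely the institutions at issue.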

These observations suggest that achieving high research quality without robust research environments undermines the credibility of the THE ranking system, rendering it a subject of ridicule among the public and relevant stakeholders. This situation raises significant questions about the system’s objectivity, credibility, and fairness. Further, the metrics used to measure research quality appear flawed and susceptible to manipulation.

Focus on publications, citation impact

The primary factors contributing to research quality are the quantity and quality of publications. Publication output is thus the main basis for research rankings, and it is susceptible to manipulation through several practices, including the following.

Many journals now accept content generated by generative Artificial Intelligence, enabling easy publication at little to no cost.

Publication in Scopus-indexed journals can often be secured by paying a few thousand rupees, with numerous such journals guaranteeing acceptance even for papers that lack credible research.

Another method for inflating publication counts involves plagiarising existing papers and submitting them to Tier 1 and Tier 2 journals, where acceptance is frequently achieved by paying the article processing charge (APC). There is no shortage of such journals.

An emerging trend known as “gift and paper-mill authoring” involves listing individuals as authors who have made no substantial contributions, thus artificially inflating collaboration credentials across institutions and countries.

These practices are commonly employed to boost publication counts, leading to an apparent increase in citation impact and, consequently, research quality despite most publications lacking credibility.

A core principle of scientometrics is that citation impact reflects popularity rather than actual research quality. A popular meme humorously features the theoretical physicist and cosmologist Stephen Hawking claiming that he enjoyed greater recognition despite having a lower citation-based h-index than scientists with significantly higher indices. Similarly, many high-quality journals may have a low citation impact. Further, citation impact varies across disciplines and does not consistently serve as a reliable indicator of true research quality.
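The h-index referenced in the meme has a simple definition: the largest h such that h of a researcher’s papers have at least h citations each. A minimal sketch (with hypothetical citation counts) shows how a researcher with a few enormously influential papers can score below one with many moderately cited ones:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical profiles: a few landmark papers vs. many modest ones.
influential = [2500, 1800, 900, 400, 120]  # h = 5
prolific = [15] * 40                       # h = 15
print(h_index(influential), h_index(prolific))
```

The index rewards breadth of citation over depth of influence, which is exactly the mismatch the meme lampoons.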

Moreover, citation impact is susceptible to manipulation. Researchers may inflate their citation counts through self-citations, where they reference their previous work, or by engaging in quid pro quo citations, where they agree to cite each other’s work. These practices artificially enhance their citation metrics, rendering citation impact an imperfect, and sometimes misleading, metric for assessing the authentic influence or quality of research.
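To illustrate the scale of the distortion, the sketch below (with entirely hypothetical numbers) recomputes a researcher’s citation total after stripping out self-citations; reciprocal quid pro quo citations could be filtered analogously if citing-author data were available:

```python
# Hypothetical per-paper data: (total citations, of which self-citations)
papers = [(40, 22), (35, 18), (30, 9), (12, 2), (8, 6)]

raw = sum(total for total, _ in papers)
independent = sum(total - self_c for total, self_c in papers)

print(f"raw citation count:       {raw}")          # 125
print(f"excluding self-citations: {independent}")  # 68
```

In this toy profile, nearly half of the apparent citation impact evaporates once self-citations are excluded, which is why metrics based on raw counts are so easy to inflate.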

Ranking challenges

In view of this, THE’s assertion that governments and universities trust its data, and that it serves as a vital resource for students making informed choices about where to study, seems questionable. Despite decades of experience, its data may not accurately represent the realities of performance among Indian universities. This potential misrepresentation could mislead stakeholders, particularly students and young people seeking guidance on their educational and career paths.

THE’s credibility and the objectivity of its assessments are now in question. It must critically evaluate its entire ranking process. It may need to redefine its data integrity standards and establish more robust standard operating procedures, with built-in data verification or an additional layer of verification and validation, alongside enhanced transparency. The primary goal of its rankings should be to provide reliable inferences that help stakeholders make informed decisions; this objective must not be compromised, lest users be misled about institutions.

Rajeev Kumar is a former computer science professor at IIT Kharagpur, IIT Kanpur, BITS Pilani, and JNU New Delhi. His research focuses on scientometrics.

Published – November 03, 2024 05:30 am IST
