
Ways to design India’s AI safety architecture


Oct 16, 2024 08:29 PM IST

The emerging discourse around AI safety is reshaping the social goals that should steer AI innovation, de-prioritising the alignment of AI development with human rights and accountability

Following the United States, United Kingdom (UK), and Japan, India plans to establish an Artificial Intelligence (AI) Safety Institute by the end of the year. The established institutes focus on evaluating and ensuring the safety of the most advanced AI models, popularly known as frontier models, and on preparing for the prospect of new AI agents with general intelligence capabilities. Does India need an AI safety institute at all, and if so, how should it be modelled?

India has much to contribute to the global conversation on AI safety. (Photo by Kirill KUDRYAVTSEV / AFP) (AFP)

India has much to contribute to the global conversation on AI safety. While the West is debating the potential harms of frontier models, they are already being used in critical social sectors in India. For example, several pilots are underway to help frontline health workers access medical information and support teachers and students with new learning tools. India is thus uniquely positioned to share insights on the real-world impacts of these models.

However, India’s AI safety institute need not blindly follow the same mandate as its counterparts elsewhere. For example, the UK AI Safety Institute’s core focus is testing and evaluating frontier models; the trouble with this is that these models are not static. The test you run today may produce completely different results just a few months later. An essential requirement of any evaluation is that it be reproducible, but as these models evolve, is such replicability even possible?

Moreover, the criteria against which we evaluate these models are unclear — what are the end goals for assessment? Goals such as ensuring safety or preventing harm are neither tangible nor measurable. And who should have the power to decide whether something is safe in a morally pluralistic world? We should be wary of creating new gatekeepers without a robust process to ensure they represent a wide range of social identities and contexts and are willing to be held to the highest accountability standards.

This is not to say that model evaluation and establishing standards for safety are not required. Instead, we must enter this space with a clear view of its challenges and limitations.

India’s AI safety institute could focus on four key goals in its early years. First, it should monitor the post-deployment impact of AI models. Given how widely these models are expected to be used, across diverse use cases and social contexts, this could help build a critical body of empirical evidence about societal impacts, including unintended ones. Such continuous monitoring and evaluation are particularly important for generative AI models because their behaviour depends on how users interact with them.

Second, as India is in the early stages of building its own language models, it has a unique opportunity to learn from the mistakes of existing model providers. Whether from Google, Facebook or other Big Tech companies, existing models have been built through the non-consensual use of personal and copyrighted data. Many of the data sets used to train these models also contain illegal content, such as pornographic images of young children. Is there a way to build these models without these data harms? What kind of licensing arrangements are required to ensure fair use? This is the challenge India has an opportunity to address — the safety institute could help establish global standards for data collection, curation, and documentation.

Third, the institute should build critical AI literacy among key stakeholders. While certain sections of government are well-versed in the technology, for many others, AI is still new, and their understanding of its opportunities and risks is limited. Similarly, end-users need to be educated on the limitations and risks of these technologies so that they can exercise caution and avoid overreliance. Without these capacities, other measures to ensure safety and reliability will not realise their promise.

Finally, we must recognise that the discussion on AI safety and advanced capabilities distracts from some of the current uses and harms of AI systems. AI products and services that use prediction- and classification-based algorithms are widely deployed in warfare, law and order, recruitment, welfare allocation, and numerous other areas of public life. The focus on frontier models must not shift attention from the governance of these existing systems, which are already contributing to an erosion of rights, a loss of agency and autonomy, and new forms of monitoring and surveillance. This must be a core agenda of India’s new safety institute.

The emerging discourse around AI safety is reshaping the social goals that should steer AI innovation, de-prioritising the alignment of AI development with human rights and accountability. Safety is essential, but it is not a high enough standard by which to judge AI and the companies building it. Restoring a rights- and accountability-based agenda to AI governance is particularly important for countries like India.

Urvashi Aneja is director, Digital Futures Lab. The views expressed are personal.
