Dec 17, 2024 10:08 AM IST
Apart from the IndiaAI Mission, MeitY is also funding two research projects to detect AI-generated fake videos, audios, and images.
The Ministry of Electronics and Information Technology (MeitY) has invited proposals from academia and industry under the IndiaAI Mission to develop tools and frameworks to detect deepfakes in real time, watermark and label AI-generated content, develop ethical AI frameworks, and red team AI models, among other things.
An amount of ₹20.46 crore, less than 1% of the ₹10,371.92 crore ($1.25 billion) approved by the cabinet for the IndiaAI Mission, has been earmarked for the safe and trusted AI pillar. Eight projects addressing bias mitigation in AI, algorithm auditing, and other issues were selected in October to ensure “responsible development, deployment, and adoption of AI technologies”.
Under the new expression of interest, MeitY is seeking proposals for five kinds of projects.
First, the government is looking for tools that can detect deepfakes in real time and that can be integrated into web browsers and social media platforms. Such integration, as per the proposal document, “could enable automated cross-modal content verification, providing real-time deepfake detection, enhancing security and ensuring integrity of digital information ecosystem”.
Second, MeitY is looking for tools to detect AI-generated content and embed it with traceable markers. “They should enable robust content authenticity, authentication, and provenance tracking,” the proposal said. Such tools could also prevent the generation of harmful and illegal content to ensure compliance with laws and ethical standards. These tools should also include testing capabilities so that the performance and effectiveness of their authentication and labelling mechanisms can be continuously evaluated.
Third, the government is looking for proposals to develop an ethical AI framework so that AI systems “respect fundamental human values, uphold fairness, transparency, and accountability, and avoid perpetuating biases or discrimination”, and organisations using AI can minimise potential harm.
Fourth, MeitY is looking for AI risk assessment and management tools that analyse threats and vulnerabilities specific to AI in public-sector AI use cases. Such tools could potentially classify AI systems or applications by risk level.
Lastly, the government wants tools to assess the ability of AI systems to withstand high-stress scenarios such as natural disasters, cyberattacks, data disruptions, or operational failures.
Apart from academic institutions, start-ups and companies can also apply, provided the start-ups have been in operation for at least two years and at least 51% of their ownership is held by Indian citizens or persons of Indian origin. Such an entity cannot be a subsidiary of a foreign corporation. Companies that want to apply must be registered under the Companies Act and should have been operational for at least five years.
While applicants can apply for more than one project, the “chief investigator” and the “co-chief investigator” cannot be engaged with more than one project at a time.
For projects that get grants from the IndiaAI Mission, the applicants will own the associated intellectual property rights, but the government and government bodies (including its public sector units, autonomous societies and not-for-profit Section 25 companies) will have the right to get a royalty-free licence for deployment for non-commercial purposes. For commercial usage, the terms of licensing can be mutually decided between the grantee and the government.