
First Legally Binding AI Treaty

insightsonindia.com/2024/09/07/first-legally-binding-ai-treaty/

Insights Editor September 7, 2024

Syllabus: Governance: AI Governance

Source: IE

Context: The first legally binding AI treaty, signed by the US, EU, and UK, prioritizes
human rights and democracy, adopting a risk-based approach to AI regulation across
public and private sectors.

The Framework Convention on AI by the Council of Europe aims to mitigate AI threats to human rights, democracy, and the rule of law. Unlike the EU AI Act, it focuses on ensuring that activities across the AI lifecycle align with these principles.

Key Features of the AI Treaty:

1. Risk-Based Approach: Regulates AI systems based on risk levels throughout their lifecycle.
2. Human Rights Focus: Ensures AI systems uphold human rights, democracy, and
the rule of law.
3. Public and Private Sector Applicability: Covers AI use in both public and private
sectors globally.
4. Accountability: Signatories are responsible for harmful and discriminatory AI
outcomes.
5. Legal Recourse: Victims of AI-related rights violations are guaranteed legal
protection.
6. Democratic Safeguards: AI systems must not undermine democratic institutions or
judicial independence.
7. Global Reach: Involves commitments from nations beyond Europe for a united
global AI standard.
8. No Punitive Sanctions: Relies on monitoring for compliance rather than fines or
penalties.

Significance of the AI Treaty:

Human Rights: Ensures AI systems respect human dignity and privacy, addressing risks like discrimination and bias.

Democracy: Protects democratic institutions by regulating AI’s role in political discourse and decision-making.

Rule of Law: Ensures legal safeguards against AI misuse, promoting accountability and equality before the law.

Global Governance: Establishes a global framework for responsible AI use, harmonizing international AI regulations.

Accountability: Holds AI developers and operators responsible for harmful outcomes, ensuring legal recourse.

Key Components of the EU Framework for AI Regulation:

Safeguards in Legislation:
Empowerment of consumers: individuals can lodge complaints against perceived AI violations.
Clear boundaries are set on AI usage by law enforcement agencies.
Strong restrictions on facial recognition technology and AI manipulation of human behaviour are specified.
Tough penalties for companies found breaking the rules.
Limitations on government use of real-time biometric surveillance in public areas (allowed only in cases of serious threats like terrorist attacks).

Categorization of AI Applications: AI applications are classified into four categories based on their level of risk and invasiveness.
1. Banned Applications: Mass-scale facial recognition and behavioural-control AI applications are largely banned, with exemptions for law enforcement.
2. High-Risk Applications: Allowed with certification and transparency about backend techniques, such as AI tools for self-driving cars.
3. Medium-Risk Applications: Deployable, like generative AI chatbots, subject to explicit disclosure to users that they are interacting with AI, transparency obligations, and detailed technical documentation.
4. Low-Risk Applications: Permitted with transparency obligations.

Other Regulatory Achievements:
EU’s General Data Protection Regulation (GDPR): Enforced since May 2018, focusing on privacy and explicit consent for data processing.
Digital Services Act (DSA): Focuses on regulating hate speech, counterfeit goods, etc.
Digital Markets Act (DMA): Identifies “dominant gatekeeper” platforms and addresses non-competitive practices and abuse of dominance.

Concerns related to AI use:

Privacy: Risk of personal and sensitive information being used unethically, such as for targeted advertising or political manipulation.

Responsibility: AI models generating new content may lead to challenges in identifying responsibility, causing ethical dilemmas over the content’s origin.

Automation and Job Displacement: AI’s potential to automate processes raises concerns about job displacement, impacting those skilled in the automated areas.

Bias and Discrimination: AI training on biased data may result in decisions that unfairly disadvantage certain groups, perpetuating societal inequalities and discrimination.

Lack of Transparency: Concerns about accountability for AI actions, questioning whether responsibility lies with creators, deploying companies, or regulating governments.

India’s Stance on AI Regulation:

India aims to develop sovereign AI and AI computing infrastructure, focusing on real-life applications in healthcare, agriculture, governance, language translation, etc., to catalyse economic development.

Currently, India is transitioning from a no-regulation stance on AI to actively formulating regulations based on a risk-based, user-harm approach. India’s AI penetration factor is 3.09, the highest among G20 and OECD countries.

Initiatives:

1. Awareness initiatives like #AIFORALL
2. NITI Aayog’s National Strategy for AI (2018), focused on inclusive and responsible AI
3. NITI Aayog’s ‘Principles for Responsible AI’, outlining seven key principles for responsible AI use
4. MeitY launched IndiaAI in 2023, covering all AI-related research.
5. TRAI proposed a domestic statutory authority for AI regulation in July 2023, emphasizing a risk-based framework.
6. Ethical guidelines for AI in healthcare by the Indian Council of Medical Research (ICMR)
7. SEBI’s circular guiding AI policies in the capital market
8. National Education Policy 2020 recommends integrating AI awareness into school courses.

Initiatives by other countries:

G7: The Group of Seven (G7) has proposed “risk-based” regulation for artificial intelligence (AI) tools, which could be a first step towards creating a template to regulate AI such as OpenAI’s ChatGPT and Google’s Bard.

Italy: Became the first major Western country to ban OpenAI’s ChatGPT due to privacy concerns.

UK: Adopts a ‘light-touch’ approach to encourage innovation in the AI industry.

Japan: Takes an accommodative approach to AI developers.

China: Drafted a 20-point plan to regulate generative AI services, expected to be enforced later this year.

US: Proposed a nonbinding roadmap, the Blueprint for an AI Bill of Rights, outlining five core principles for the responsible use of AI.

Conclusion:

Although the risks of AI are widely known, it remains unclear how the AI regulations would
address these risks and how grievances would be remedied. Previously, Elon Musk,
Steve Wozniak, and over 15,000 others had called for a six-month pause in AI
development, and for shared safety protocols to be implemented by labs and
independent experts.

Insta Links:

What are hallucinating chatbots?

Prelims Links:

Q1. With the present state of development, Artificial Intelligence can effectively do
which of the following? (UPSC 2020)

1. Bring down electricity consumption in industrial units
2. Create meaningful short stories and songs
3. Disease diagnosis
4. Text-to-Speech Conversion
5. Wireless transmission of electrical energy

Select the correct answer using the code given below:

(a) 1, 2, 3 and 5 only
(b) 1, 3 and 4 only
(c) 2, 4 and 5 only
(d) 1, 2, 3, 4 and 5

Ans: B
