ISSN : 2583-8725

Privacy, Data Protection and Human Rights in the AI Circuit

Neha Arora
LL.M. Student (Criminal Law)
CT University

Ms. Cheena Abrol
Assistant Professor
CT University
(Supervisor)

Abstract
Artificial Intelligence (AI) has become one of the most transformative technologies of the 21st century, reshaping governance, the economy, and personal life. However, its ever-increasing power has raised multilayered questions about privacy, data protection, and human rights. This paper investigates how AI is changing the legal environment in India and worldwide, and how automated decision-making and mass data processing pose significant threats to fundamental rights and constitutional protections.

In India, the right to privacy was established as a fundamental right in “Justice K.S. Puttaswamy v. Union of India (2017)”, which laid a constitutional foundation for data privacy and informational autonomy. Nevertheless, obstacles remain in operationalizing these principles under the Digital Personal Data Protection Act, 2023. Despite its potential, the Act contains significant gaps in the regulation of AI-driven systems, algorithmic accountability, and state surveillance. The paper argues that India must transition to a holistic AI governance regime that integrates data protection, human rights safeguards, technological transparency, and due-process protections.

Comparative study of global models, specifically the European Union's GDPR and AI Act, the United States' sectoral approach, and emerging OECD and UNESCO principles, indicates a global shift toward rights-based, human-centered regulation of AI. These frameworks underline fairness, explainability, and proportionality as the pillars of ethical AI. Against this backdrop, the paper concludes that India's constitutional vision of dignity, liberty, and equality should form the basis of its AI regulation. Balancing innovation with the protection of rights requires aligning law, ethics, and technology, ensuring that AI becomes a source of empowerment rather than control.

Introduction
Two forces drive the world today: technology and human rights. Although Artificial Intelligence was once the stuff of science fiction, it is now embedded in nearly everything we do. It assists governments in managing services and businesses in making decisions. Yet this tech boom has also produced new problems concerning the safety of personal data, people's control over their information, and the protection of human rights more broadly.

      AI alters the perception of privacy and information security. Information is called the new oil because it is what powers AI. Every click, scan, or other digital action feeds big datasets from which algorithms learn to predict or make decisions on our behalf. This creates a tension between convenience and the right to privacy. Since AI decisions are made autonomously, questions arise: Who is to be held responsible? How can we monitor what it does, and can it diminish a person's freedom?

      Privacy is a fundamental right that reflects the dignity of a person. In 2017, the Supreme Court of India held that privacy is a fundamental right.[1] But AI threatens it. Facial recognition, automated profiling, and data-driven rules operate covertly, and individuals can hardly understand or dispute decisions made about them. The disparity of knowledge and power between technology makers, companies, and the state on one side and ordinary people on the other raises concerns about surveillance and threats to privacy.

      Laws around the world are attempting to resolve similar problems. Japan, the U.S., and the U.K. regulate differently in seeking to balance safety and innovation. The UN and EU also insist that AI must not violate human rights, and call for a harmonization of ethics and law at the global level.[2]

      India's new “Digital Personal Data Protection Act of 2023” is a step toward a comprehensive data law. However, the law is still new, and critics argue that it grants the state excessive exemptions and lacks the clarity needed to govern the use of AI.[3] Given AI's growing role in decisions about people, such as allocating benefits, surveilling individuals, and making judgments about crime, there is a tangible reason to scrutinize the law closely.[4]

      This paper therefore examines the relationship between AI, data privacy, and human rights from a legal perspective. It assesses AI against India's Constitution, laws, and ethics, and compares India's position with that of other nations. By studying courts, statutes, and emerging rules, we ascertain whether existing protections are sufficient to safeguard people in the age of AI.

      Ultimately, this paper argues that we should establish AI rules that put people at the center. Regulations ought to preserve dignity, liberty, and equality despite the accelerating pace of technology. The key is not merely to regulate technology, but to ensure that AI develops without abandoning the classic principles of justice, equity, and the Constitution.

      A. Legal and Theoretical Foundations
      The concept of privacy and data protection holds significance in this context because every privacy law mandates that data collected must not be disclosed to unauthorized individuals or agencies.

      a) Privacy and Data Protection.

      Privacy has been debated for centuries but became a legal right only in the late 1800s. In 1890, Samuel Warren and Louis Brandeis authored “The Right to Privacy” in the Harvard Law Review, describing it as the right to be let alone.[1] The idea expanded over time beyond the defense of physical space. It now entails regulating what personal information people share, how it is handled, and for what purposes it is used.

      Today, the right to privacy is implemented through data protection, which strives to ensure that personal data is not abused, obtained without authorization, or excessively processed. Privacy, grounded in dignity and personal freedom, is a vital human right; data protection provides the rules, processes, and institutions that put it into practice. Together, these concepts constitute the notion of informational self-determination: the capacity to control one's online identity and personal information in a data-driven world.

      Artificial intelligence consumes massive amounts of personal information to make predictions. In most cases, individuals are not even aware that their data is being used and processed. The era of big data, with data gathered everywhere, analyzed through algorithms, and applied to predict behavior, challenges traditional concepts of consent, purpose limitation, and data sharing. As a result, legal regulation struggles to keep pace with technological advances that enable machines to infer sensitive facts from data that appears quite innocent.

      b) The Human Right of Privacy and Data Protection.

      Human rights instruments have long safeguarded privacy and data protection. Article 12 of the Universal Declaration of Human Rights[1] provides that no one shall be subjected to arbitrary interference with their privacy[2]. Article 17 of the International Covenant on Civil and Political Rights requires states to protect individuals against unlawful breaches of privacy.[3] The European Convention on Human Rights contains a similar guarantee of respect for private life and correspondence, subject to lawful exceptions.[4]

      These foundational principles have shaped data protection legislation in most regions and nations. The Charter of Fundamental Rights of the European Union[5] treats personal data protection as a distinct right, elevating what was previously an implicit value into a definite, enforceable entitlement[6]. This shift demonstrates that in the digital era, control over personal data has become central to dignity, freedom of expression, and equality.

      The emergence of AI complicates this rights discourse. Automated decisions, profiling, and biometric surveillance may directly affect human rights, including equality, fair trial, freedom of speech, and assembly. For instance, AI-based predictive policing may produce unfair outcomes, and facial recognition may infringe upon anonymity and free movement. Safeguarding privacy in the era of AI therefore concerns not only the regulation of data; it is a broader endeavor to preserve human dignity, fairness, and justice.

      c) AI Ethical and Legal Standards abroad

      Because AI can transform society and introduce risks, numerous international bodies have attempted to establish norms. The OECD AI Principles promote innovative and trustworthy AI that upholds human rights and democratic values, and guide AI actors and policy-makers[1]. In November 2021, the 193 Member States adopting the “UNESCO Recommendation on the Ethics of AI”[2] declared that AI actors must make every reasonable effort to minimize, prevent, and remedy discriminatory or biased applications and outcomes across the lifecycle of an AI system, so as to ensure the fairness of such systems.

      According to the UN Human Rights Council, existing human-rights law applies to AI in its entirety.[3] In 2021, the UN High Commissioner for Human Rights called for a moratorium on AI technologies that do not conform to human-rights standards, particularly those involving mass surveillance or discrimination[4]. The Council of Europe's proposed “Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (2024)”, a highly persuasive statement of the applicability of existing human-rights obligations to AI, sets a minimum baseline in international law for rights-respecting governmental use of AI.[5]

      Taken together, these frameworks demonstrate that technology cannot be left without rules. AI regulation should safeguard data, but more importantly, it should safeguard the wider civil liberties that such technology may affect, so that it is used to benefit individuals.

      B. The Indian Legal Landscape

        a) Constitutional View: Privacy as a Constitutional Right.

        The Constitution of India has evolved significantly over the years, and privacy and data protection are now regarded as aspects of the freedom and dignity of the individual. The Constitution makes no mention of the word privacy, but the courts have read the concept into Article 21, which guarantees to all people the right to life and personal liberty.

        In K.S. Puttaswamy v. Union of India,[1] the Supreme Court held that privacy is a natural right that existed prior to the Constitution and that the security of personal information is a major concern in the digital world. The Court further noted that Article 21 is the source of the right to be forgotten and of control over personal information, aligning India with international regimes such as the EU's GDPR. In a concurring opinion, the Court also cautioned the state and private companies against gathering personal information without regulation, observing that data is the new oil and that citizens should have control over consent to the use of their data.

        b) Legislative Framework.

        The Information Technology Act, 2000 paid very little attention to digital rights protection; its provisions primarily concerned how companies could utilise data and were insufficient for today's AI and big data. The Digital Personal Data Protection Act, 2023 now aims to address these contemporary problems.

        The DPDP Act of 2023 is different: it establishes an entire data-governance system. Key points are:

        • Information may be utilized only with the explicit permission of the individual or valid governmental activities.
        • Individuals are entitled to access, correct, and contest their data, although the Act provides neither a robust right to data portability nor a right to an explanation of automated decisions.
        • Data handling companies should state purpose for using the data and only the necessary data should be kept.
        • The rules can be enforced by a Data Protection Board of India on the violators.

        However, Section 17 allows the government to exempt state agencies from many of the Act's provisions on grounds of sovereignty or security, which may undermine the privacy rights recognized in Puttaswamy. The Act also says little about AI risks such as bias or automated profiling.

        c) Artificial Intelligence in Governance and Public Sector: Rising Problems.

        The government of India intends to apply AI in its initiatives, including Digital India and the National Strategy for Artificial Intelligence (NITI Aayog, 2018), to inform welfare and growth decisions. Nonetheless, the application of AI also provokes constitutional and ethical concerns.[1]

        Facial-recognition systems are employed by police in cities such as Delhi and Hyderabad, and many argue that they chip away at privacy and enable mass surveillance without legislative backing.[2] Litigation by the Internet Freedom Foundation raises the same question, challenging whether such deployment satisfies the Puttaswamy proportionality test.

        AI also runs welfare schemes, for example through automatic verification of recipients or prediction of needs. Where these algorithms err, individuals may lose benefits with no way to protest, violating the guarantees of equality, fairness, and due process under Articles 14 and 21 of the Constitution.[3]

        The courts have begun to take note of these issues. In Manohar Lal Sharma v. Union of India (Pegasus Case, 2021),[4] the Supreme Court held that unlawful digital surveillance infringes privacy and freedom. It ordered a special committee to investigate the government's use of spyware and held that national security cannot automatically override the right to privacy. The same reasoning applies to AI surveillance that relies on biometric and personal information on a massive scale.[5]

        d) Regulatory and Policy Development.

        On the policy front, the government claims an interest in responsible AI. NITI Aayog's 2021 paper on Responsible AI lists safety, accountability, and inclusivity among its guiding principles.[1] Through its 2022 Guidelines on Digital Lending, the RBI likewise seeks to govern AI-driven financial tools to ensure fairness and the protection of consumer rights.

        In spite of these measures, no single regulator oversees AI despite its mass adoption. Rather than relying on one legal body, current law depends on companies' willingness to comply and on sector-by-sector supervision. The current debate in India is how to continue innovating while still protecting basic rights.

        C. Global Comparative Perspective

        a) The European Union

        The EU provides the most comprehensive regulations safeguarding people's data and controlling AI. Under the GDPR, applicable since 2018, privacy is a fundamental right: processing generally requires consent, the use of data must be fair, and companies are held accountable. It grants individuals rights over how their data is used, transferred, and deleted (the right to be forgotten), as well as protection against decisions made solely by automated means. The EU's Artificial Intelligence Act takes a risk-based approach: it prohibits unacceptable practices such as social scoring, and requires explanations and human oversight for high-risk, sensitive uses. Court cases such as Google Spain (2014)[1] and Schrems II (2020)[2] demonstrate that the EU is a strong advocate of protecting data and the rights and freedoms of individuals.

        b) The United States

        In the U.S., privacy laws differ by industry. HIPAA protects health data, COPPA protects children's data, and state laws like California's CCPA are specific but not as strong as the EU's GDPR. The Blueprint for an AI Bill of Rights (2022)[1] lists principles such as algorithmic fairness, data privacy, and human alternatives where needed, but it is not enforceable. The Federal Trade Commission is the main body ensuring AI is used fairly;[2] it can fine companies that deploy unfair or biased AI.

        c) United Kingdom and Commonwealth Nations

        Following its exit from the EU, the UK retained a GDPR-style law in the Data Protection Act, 2018, while also seeking room for fresh approaches via the Data Reform Bill[1]. The UK's Information Commissioner's Office provides clear guidance on AI, covering fairness, human accountability, and ensuring that decisions are explainable. Other Commonwealth nations, like Canada[2] with its PIPEDA and proposed AIDA legislation and Australia with its Privacy Act 1988, are developing ethics-based AI principles that require explicit explanations and human control.

        d) Norms of international and global character.

        Many bodies agree that AI should be made and used in a way that puts people first, promotes fairness, ensures transparency, and protects rights. The OECD (2019)[1] published its Principles on AI, UNESCO (2021) its AI Ethics Recommendation, and the Council of Europe (2024) its draft AI Convention.[2] Such agreements show that digital technology should be treated as part of a modern constitutional order, judged not only on how well it works but on how well it upholds fairness and dignity.

        D. AI, Privacy, and Human Rights: The Essentials.

        Artificial intelligence can transform many things, yet it also challenges essential human rights, accountability, and the rule of law. Because AI can foresee people's actions, make decisions without human involvement, and shape government action, it raises serious concerns regarding fairness, autonomy, and justice. The following sections describe the key human rights concerns that arise when AI is used in the social domain, in both government and business.

        a) Algorithmic Bias and Discrimination.

        AI tends to reinforce and multiply the biases present in the data it is trained on. By discriminating against people in jobs, loans, or policing, such systems infringe the right to equal treatment enshrined in the Indian Constitution. Examples include facial-recognition tools that err along lines of gender or caste, undermining fairness. In the U.S., the Loomis case of 2016[1] exposed the dangers of secret sentencing algorithms and became a subject of global debate on fairness.[2] India, still developing its data rules, faces similar risks as AI spreads into everyday decisions. As the Supreme Court made clear in Puttaswamy (2017)[3], technology must remain within the Constitution and must be just and non-discriminatory.

        b) Surveillance, Profiling, and Predictive Governance.

        AI can also scan vast volumes of data, allowing a government to monitor and cluster individuals in ways never before possible. The deployment of facial-recognition technology by police in India is frequently adopted without legal authority and intrudes on the privacy and freedom guaranteed by Article 21. In 2021, the Supreme Court indicated that spying on people violates the right to privacy even where the state claims security grounds[1]. In 2021, the European Court of Human Rights likewise held that unlimited surveillance of individuals is impermissible and must be justified by sufficient and proportionate reasons. India does not yet have laws to prevent such government misuse of AI in policing, heightening the risk of state overreach.[2]

        c) Consent, Autonomy, and Data Ownership.

        Individuals have little control over how AI observes them and draws inferences about them. The new Digital Personal Data Protection Act[1] provides that consent should be free, unambiguous, and not concealed; in reality, however, individuals lack power over how AI in welfare, fintech, or e-government analyzes their information.[2] In Puttaswamy, Justice Kaul stated that individuals should be able to choose how their personal data is used.[3] To achieve that, AI must be developed so that it preserves privacy, allows individuals to opt out, and provides a straightforward description of how data is used.

        d) Transparency, Accountability, and the Black Box Problem.

        AI often remains a mystery, since even its creators cannot fully explain the rules inside a machine-learning model. This makes it difficult for people, or even courts, to assign responsibility. Under Maneka Gandhi v. Union of India (1978),[1] state action must be fair and reasonable; an action that cannot be explained fails that standard. The GDPR[2] provides that individuals may request human intervention in decisions made solely by automated means, a right India does not currently offer. Without it, individuals cannot question unfair outcomes concerning loans or state benefits.[3] India's privacy laws should be amended to include a right to an explanation, ensuring justice for individuals.

        e) Wide Implications

        AI is not only about privacy. It may interfere with freedom of speech, the right of assembly, and equality. Predictive surveillance and automated filters may silence individuals and deter them from voicing dissent. According to the UN, AI surveillance can undermine democracy[1], and UNESCO states that AI must encourage diversity and inclusion rather than entrench social hierarchies[2]. In India, where technology governs welfare, law, and media, AI should adhere to the principle that it must not harm dignity or freedom. The most suitable protection against such risks is a rights-based approach to AI, founded on fairness and transparency.

        E. Intersection of AI Ethics and Legal Norms

        The integration of the ethical and legal dimensions of AI is among the greatest challenges of this century. Ethical standards demand that we treat one another fairly, responsibly, and with dignity, yet nations give these demands legal force unevenly.

        In essence, there are four values behind AI ethics:

        • Autonomy
        • Justice and Fairness
        • Transparency and Accountability
        • Non-Maleficence

        NITI Aayog's strategy Responsible AI for All (2021)[1] encourages AI that adheres to the values stated above. But since no law enforces these principles, they remain aspirations rather than enforceable requirements. To turn ethics into law, we need specific rules: mandatory transparency for AI, mandatory assessment of its effects, and a right for people to receive an explanation. We urgently need to implement this across the spheres of life where AI has become a necessity.

        The examples of the alignment of AI rules with human rights and the law, provided by the EU AI Act (2024)[2], OECD AI Principles (2019),[3] and AI Ethics Recommendation by UNESCO (2021)[4], are found all around the world. They demonstrate that artificial intelligence that is ethical should be enforced in law, which can be controlled by the democratic process and judicial review.

        In India, the constitutional emphasis on ethics, exemplified in Puttaswamy and Navtej Singh Johar v. Union of India (2018)[5], establishes the basis for incorporating human dignity and fairness into AI law. The future of AI law may therefore lie in merging rights-based principles with enforceable obligations, so that advancement does not deprive people of their fundamental rights.[6]

        Conclusion

        Today, the intersection of artificial intelligence, privacy, and human rights is one of the most significant problems of the modern legal order. As AI systems spread through governance, commerce, and everyday life, the relationship between the individual and the State increasingly depends on AI-mediated interactions. This transformation carries serious risks: loss of autonomy, generalized surveillance, and opaque automated decisions displacing human judgment.

        With its anchoring in dignity, liberty, and equality, the Indian constitutional framework provides a robust foundation for addressing these challenges. The Supreme Court's recognition of privacy as a fundamental right in Justice K.S. Puttaswamy (2017) represents a landmark reaffirmation that technological progress is possible only within constitutional parameters. However, the journey from constitutional idealism to effective protection remains incomplete. The Digital Personal Data Protection Act, 2023, while a significant breakthrough, is not thorough enough to deal with issues such as algorithmic bias, automated decision-making, and state surveillance.

        Moreover, throughout the world there is an increasing consensus that technological governance should be led by human rights, as illustrated by the emergence of comprehensive regimes such as the EU’s GDPR and AI Act. International instruments — from the OECD AI Principles (2019) to UNESCO’s AI Ethics Recommendation (2021) — underscore that AI must be lawful, ethical, and human-centered. India, as a digital democracy and a rising technological power, has both the opportunity and the responsibility to lead this rights-based evolution in the Global South.

        Suggestions:

        1. AI-Specific Regulation: India should make a full AI law that sorts AI systems by risk level, requires human oversight for high-risk uses, and bans uses that violate human rights. The EU AI Act (2024) can be used as a guide.
        2. Include a “Right to Explanation”: People who are affected by decisions made by AI should have the right to know how the algorithm works and to challenge its results. This will make sure that the process is fair and open.
        3. Independent Oversight Authority: Create a dedicated AI and Data Protection Commission to ensure compliance, conduct algorithmic audits, and hold both the government and businesses accountable.
        4. Human Rights Impact Assessments: All government AI uses, especially for surveillance, welfare, and policing, should have to go through assessments that look at how necessary and proportional they are.
        5. Ethical Design Mandate: Make ethical compliance a legal requirement instead of just a goal. This will help protect privacy and fairness.
        6. Global Cooperation: India should take part in international efforts like the OECD AI Principles and UNESCO’s AI Ethics Framework to help create a global, human-centered governance model.

        To move forward, we need to make sure that constitutional morality is part of AI governance. This will make sure that technology improves, not hurts, the human experience. To make systems that are fair, open, and accountable, we need changes to the law, more watchfulness from the courts, and more ethical design. AI must assist humanity, rather than supplant human judgment or undermine the intrinsic dignity fundamental to human rights.


        In the end, the measure of a fair society in the age of AI will not be how smart its algorithms are, but how strong its commitment is to the individual — to privacy, freedom, and the unbreakable promise of human dignity.

        Bibliography

        Books

        1. Bygrave, Lee A., Data Protection Law: Approaching Its Rationale, Logic and Limits (Oxford University Press, 2002).
        2. Solove, Daniel J., Understanding Privacy (Harvard University Press, 2008).
        3. Westin, Alan F., Privacy and Freedom (Atheneum, 1967).
        4. Baxi, Upendra, The Future of Human Rights (Oxford University Press, 3rd ed., 2012).

        Articles and Journals

        1. Warren, Samuel D. and Louis D. Brandeis, “The Right to Privacy,” (1890) 4 Harvard Law Review 193.
        2. Arun, Chinmayi, “AI Surveillance and the Indian Constitution: A Proportionality Perspective,” (2022) 5 Indian Journal of Law and Technology 33.
        3. Raj, Shreya and Abhishek Malhotra, “Algorithmic Accountability and Due Process in India,” (2021) 10 Indian Journal of Law and Technology 55.
        4. Baxi, S., “Artificial Intelligence, Privacy and Human Rights: Legal and Ethical Dilemmas,” (2022) 64 Journal of the Indian Law Institute 221.
        5. Arun, Chinmayi, “Democratic Values and AI Regulation in India,” (2023) 65 Journal of Indian Law Institute 117.
        6. Ranjit Singh and Shweta Bhatt, “AI Surveillance and Privacy in India: Constitutional and Ethical Concerns,” (2022) 64 Journal of the Indian Law Institute 145.

        Legislations, Reports, and Policy Documents

        1. The Constitution of India (as amended up to 2024).
        2. Digital Personal Data Protection Act, 2023 (No. 22 of 2023).
        3. Information Technology Act, 2000 (Act No. 21 of 2000).
        4. European Union, General Data Protection Regulation (Regulation (EU) 2016/679).

        Case Laws

        1. Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1.
        2. K.S. Puttaswamy (Aadhaar-II) v. Union of India, (2019) 1 SCC 1.
        3. Manohar Lal Sharma v. Union of India (Pegasus Case), (2021) 10 SCC 1.
        4. Anuradha Bhasin v. Union of India, (2020) 3 SCC 637.
        5. Maneka Gandhi v. Union of India, (1978) 1 SCC 248.
        6. M.P. Sharma v. Satish Chandra, AIR 1954 SC 300.
        7. Kharak Singh v. State of Uttar Pradesh, AIR 1963 SC 1295.
        8. Internet Freedom Foundation v. Union of India, W.P. (C) No. 390/2021, Delhi High Court.

        [1] NITI Aayog, Responsible AI for All: Discussion Paper (2021).

        [2] European Commission, EU Artificial Intelligence Act (2024).

        [3] OECD, Principles on AI (2019).

        [4] UNESCO, Recommendation on the Ethics of AI (2021).

        [5] Navtej Singh Johar v. Union of India, (2018) 10 SCC 1.

        [6] Binns, Reuben, “Fairness in Machine Learning: Lessons from Political Philosophy,” (2018) 81 Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency 149.


        [1] UN Human Rights Council, The Right to Privacy in the Digital Age: Report of the UN High Commissioner for Human Rights (A/HRC/48/31, 2021).

        [2] UNESCO, Recommendation on the Ethics of Artificial Intelligence (2021).


        [1] Maneka Gandhi v. Union of India, (1978) 1 SCC 248.

        [2] Regulation (EU) 2016/679, General Data Protection Regulation (GDPR), Art. 22.

        [3] Sandra Wachter, Brent Mittelstadt and Chris Russell, “Counterfactual Explanations Without Opening the Black Box,” (2018) 31 Harvard Journal of Law & Technology 841.


        [1] Digital Personal Data Protection Act, 2023 (No. 22 of 2023).

        [2] Daniel J. Solove, Understanding Privacy (Harvard University Press, 2008).

        [3] Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1 (per Kaul, J., concurring).


        [1] Manohar Lal Sharma v. Union of India, (2021) 10 SCC 1 (Pegasus Case).

        [2] Chinmayi Arun, “AI Surveillance and the Indian Constitution: A Proportionality Perspective,” (2022) 5 Indian Journal of Law and Technology 33.


        [1] State v. Loomis, 881 N.W.2d 749 (Wis. 2016).

        [2] Reuben Binns, “Fairness in Machine Learning: Lessons from Political Philosophy,” (2018) 81 Proceedings of Machine Learning Research 149.

        [3] Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1.


        [1] OECD, Artificial Intelligence Policy Observatory (AI Principles) (2019).

        [2] Council of Europe, Draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (2024).


        [1] UK Information Commissioner’s Office, Guidance on AI and Data Protection (2022).

        [2] Government of Canada, Artificial Intelligence and Data Act (AIDA), Bill C-27 (2022).


        [1] United States, Blueprint for an AI Bill of Rights (White House OSTP, 2022).

        [2] Federal Trade Commission (FTC), Business Guidance on AI and Algorithms (2021).


        [1] Google Spain SL v. AEPD and Mario Costeja González, Case C-131/12, ECLI:EU:C:2014:317.

        [2] Data Protection Commissioner v. Facebook Ireland Ltd and Maximillian Schrems (Schrems II), Case C-311/18, ECLI:EU:C:2020:559.


        [1] NITI Aayog, Responsible AI for All: Strategy and Approach (2021).


        [1] NITI Aayog, National Strategy for Artificial Intelligence #AIForAll (Government of India, 2018).

        [2] The Indian Express, “Facial Recognition in Policing: The Legal and Privacy Questions,” The Indian Express, 12 May 2022.

        [3] The Constitution of India, Articles 14 and 21.

        [4] Manohar Lal Sharma v. Union of India, (2021) 10 SCC 1 (Pegasus Case).

        [5] Ranjit Singh and Shweta Bhatt, “AI Surveillance and Privacy in India: Constitutional and Ethical Concerns,” (2022) 64 Journal of the Indian Law Institute 145.


        [1] Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1.


        [1] OECD, Principles on Artificial Intelligence (2019).

        [2] UNESCO, Recommendation on the Ethics of Artificial Intelligence (2021).

        [3] Alan F. Westin, Privacy and Freedom (Atheneum, 1967).

        [4] UN High Commissioner for Human Rights, Report of the United Nations High Commissioner for Human Rights on the Right to Privacy in the Digital Age (13 September 2021) UN Doc A/HRC/48/31, paras 59-60. 

        [5] Lee A. Bygrave, “The Meaning of Privacy in the Information Age,” (2020) 10 Computer Law & Security Review 471.


        [1] U.N.G.A. Res. 217A (III), Art. 12, U.N. Doc. A/810 (1948). 

        [2] Universal Declaration of Human Rights, 1948, Art. 12.

        [3] International Covenant on Civil and Political Rights, 1966, Art. 17.

        [4] European Convention on Human Rights, 1950, Art. 8.

        [5] Charter of Fundamental Rights of the European Union 2012, OJ C 326, 26.10.2012, p. 391.

        [6] Ibid.


        [1] Samuel D. Warren and Louis D. Brandeis, “The Right to Privacy,” (1890) 4 Harvard Law Review 193.


        [1] Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1.

        [2] Daniel J. Solove, Understanding Privacy (Harvard University Press, 2008).

        [3] NITI Aayog, National Strategy for Artificial Intelligence #AIForAll (2018).

        [4] European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM(2021) 206 final.
