Parth Dixit
Research Scholar, Madhav Vidhi Mahavidyalaya,
Jiwaji University, Gwalior,
paarthdxt@gmail.com
Neeti Nitin Pandey
Principal,
Madhav Vidhi Mahavidyalaya, Gwalior
Abstract
This study examines the legal implications of AI-generated contracts, focusing on their validity, enforceability, and the allocation of liability for errors under existing legal frameworks. It identifies significant obstacles to applying conventional contract law to agreements made by AI, including unclear allocation of responsibility, algorithmic opacity, and doubts about genuine consent. A comparative survey of several jurisdictions reveals a spectrum of approaches, from explicit legal recognition to strict regulation. The findings indicate that AI-generated contracts can be enforceable in some circumstances, but that enforcement remains difficult in the absence of clear rules and because responsibility is dispersed among many actors.
Keywords: Algorithmic Transparency, Legal Validity, Contract Enforcement, AI-Generated Contracts, Contract Law, Artificial Intelligence, Smart Contracts
Table of Cases
| Case | Citation |
| --- | --- |
| Moffatt v Air Canada | 2024 BCCRT 149 |
| Mata v Avianca Inc. | Case No. 1:22-cv-01461-PKC-VMS (S.D.N.Y.) |
| Zavala v Avianca Inc. | Related to Mata v Avianca |
| Lauren Rochon-Eidsvig v JGB Collateral LLC | Case No. 5-24-00123-CV |
| Patricia Bevins v Colgate-Palmolive Co. | Case No. 2:25-cv-00576 |
| Thomas Nield bankruptcy case | Chapter 13 bankruptcy proceeding |
| Richard Bednar sanctions case | Utah Court of Appeals |
| Morgan & Morgan sanctions case | District of Wyoming |
| Karnataka High Court AI Contracts ruling | Not specified |
| Shreya Singhal v Union of India | (2015) 5 SCC 1 |
| Uber Technologies Inc. v Heller | 2020 SCC 16 |
Table of Abbreviations

| Sr. No. | Abbreviation | Full Form |
| --- | --- | --- |
| 1. | & | And |
| 2. | AIR | All India Reporter |
| 3. | Anr | Another |
| 4. | Art. | Article |
| 5. | Ch. | Chapter |
| 6. | Ltd. | Limited |
| 7. | Ors. | Others |
| 8. | p. | Page number |
| 9. | S. | Section |
| 10. | SC | Supreme Court |
| 11. | SCC | Supreme Court Cases |
| 12. | UOI | Union of India |
| 13. | v. | versus |
CHAPTER I
INTRODUCTION AND BACKGROUND
1.1 Introduction to AI in Contract Law
Artificial intelligence (AI) has transformed many areas of law, and it is now beginning to reshape contract law. Because AI is used more and more in business and legal practice, the basic rules of contract law need to be re-examined and, where necessary, adapted without delay. AI is already changing how contracts are drafted, negotiated, performed, and enforced. In legal practice, AI includes software that can read contracts, monitor compliance, and flag potential issues: work that lawyers once had to do by hand. Contract law is particularly well suited to AI because so much of it is structured. Most contracts follow standard formats, address recurring issues, and use standard terminology, whereas fields such as criminal and family law turn far more on the subjective circumstances of the parties. AI systems excel at analysing data of exactly this kind. A wide variety of AI-powered technologies are already used for contract administration, including blockchain-based smart contracts, machine learning, and natural language processing (NLP).[1] These systems can rapidly and accurately analyse large volumes of contract data, identify key clauses, spot risks, compare terms, and produce summaries. Some AI models can go further, suggesting substantive changes to a contract by drawing on historical data, accepted industry standards, and compliance requirements.
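Contract-review tools of the kind described above can be illustrated with a deliberately simple sketch. The snippet below is a toy, rule-based clause flagger written for illustration only; real contract-analysis systems rely on trained NLP models rather than keyword patterns, and all names here (`flag_clauses`, `RISK_PATTERNS`) are invented for this example.

```python
import re

# Hypothetical risk categories and trigger patterns (illustration only).
RISK_PATTERNS = {
    "indemnity": re.compile(r"\bindemnif(y|ies|ication)\b", re.IGNORECASE),
    "auto_renewal": re.compile(r"\bautomatically renew(s|ed)?\b", re.IGNORECASE),
    "unlimited_liability": re.compile(r"\bunlimited liability\b", re.IGNORECASE),
}

def flag_clauses(contract_text: str) -> dict[str, list[str]]:
    """Return each risk category together with the sentences that triggered it."""
    # Naive sentence split on '.' or ';' followed by whitespace.
    sentences = [s.strip() for s in re.split(r"(?<=[.;])\s+", contract_text) if s.strip()]
    hits: dict[str, list[str]] = {}
    for label, pattern in RISK_PATTERNS.items():
        matched = [s for s in sentences if pattern.search(s)]
        if matched:
            hits[label] = matched
    return hits
```

A reviewer would feed the contract text in and receive the flagged sentences grouped by risk category; the point of the sketch is only to show why such tools are fast but shallow compared with human legal judgement.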
1.2 Evolution and Types of AI-Generated Contracts
AI in contract law has evolved from tools that automate document assembly to autonomous systems that form and perform contracts on their own. From the beginning of civilisation, through the Middle Ages and into the early modern era, contract law has relied on offer, acceptance, consideration, and intention to establish legal relationships. AI integration is now changing how those contracts operate in practice. The first AI applications in contract law used rule-based systems and pre-built templates to speed up drafting; they worked well but still required human operators and legal review. Smart contracts have changed AI-driven contracting more fundamentally by embedding terms and conditions directly into blockchain technology. These digital agreements execute automatically when specified conditions are met, removing intermediaries and many sources of dispute. Smart contracts come in several forms. Smart Legal Contracts are legally binding, readily verifiable because they are stored on a blockchain, and follow the "if this happens, then this will happen" model.[1] Decentralised autonomous organisations (DAOs) are entities whose members vote through smart contracts, so governance decisions are taken collectively rather than imposed by a central authority. Application Logic Contracts (ALCs) let devices transact with one another, which makes them useful for contracting across many devices and in the Internet of Things (IoT). Today's AI systems use natural language processing and machine learning to draft contracts, review existing ones, and suggest revisions based on risk and compliance. Such tools can generate contracts, check them for legal validity, and analyse how particular language has fared in past agreements.
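The "if this happens, then this will happen" model can be sketched in ordinary code. The example below is a hypothetical, highly simplified escrow written in Python for illustration only: a real smart contract would run on a blockchain (for instance in Solidity) and rely on an oracle to attest that the condition occurred, and the names `EscrowContract` and `confirm_delivery` are assumptions for this sketch, not a real API.

```python
from dataclasses import dataclass, field

@dataclass
class EscrowContract:
    """Toy self-executing agreement: payment is released only on delivery."""
    buyer: str
    seller: str
    amount: int
    delivered: bool = False
    released: bool = False
    log: list = field(default_factory=list)

    def confirm_delivery(self, oracle_signature: str) -> None:
        # In a real smart contract, an external oracle attests the condition.
        self.delivered = True
        self.log.append(f"delivery confirmed by {oracle_signature}")
        self._execute()

    def _execute(self) -> None:
        # Self-executing term: no intermediary decides whether to pay.
        if self.delivered and not self.released:
            self.released = True
            self.log.append(f"released {self.amount} to {self.seller}")
```

The design point the sketch illustrates is exactly the one the chapter raises: once the condition fires, performance is automatic, which removes intermediaries but also removes the flexibility a human party would have to withhold or renegotiate performance.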
1.3 Traditional Contract Law Principles vs. New Challenges
Artificial intelligence is testing traditional legal concepts in contract law in unprecedented ways. The ability of AI systems to form and perform contracts calls into question long-held assumptions about how parties act, what motivates them, and how they bargain.
Two parties form a legally binding agreement when one makes an offer, the other accepts it, and each provides sufficient consideration. These rules rest on the premise that contracting parties exercise free will: they choose and act for themselves.[1] AI systems strain each of these basic elements. Because AI lacks subjective judgement and does not understand the human context in which contracts are made, there are serious doubts about its ability to satisfy the requirement of a genuine intention to create legal relations. Traditional contract doctrine maps poorly onto AI decision-making because AI has no human consciousness: it can process data and run algorithms, but it cannot exhibit the intentionality that contract law has always required of parties entering legal agreements.
When AI systems negotiate or perform contracts without human involvement, it becomes difficult to establish informed consent. Because AI decision-making processes are opaque, it is hard to show that each party understood and agreed to the terms of the agreement; this is the "black box" dilemma. Traditional contractual capacity assumes legal agents, and AI systems complicate that framework by acting as intermediaries or quasi-independent agents without legal personhood. The precision of AI execution may also limit the flexibility and contextual interpretation that characterise human contract performance: in commercial dealings, smart contracts may be unable to accommodate partial performance or changed circumstances. To support AI-driven contract processes while preserving the protective features of conventional contract law, courts and lawmakers must address the apportionment of responsibility, transparency requirements, and the creation of new doctrinal methods.
1.4 Aims
In order to suggest comprehensive legal adaptations and regulatory improvements for AI-mediated contractual relationships, this research examines the legal ramifications of AI-generated contracts in modern contract law, evaluates their validity, enforcement issues, and liability frameworks, and takes lessons from comparative jurisdictional approaches.
1.5 Objectives
The following objectives must be met to reach this paper’s goal:
1. Examine enforcement and liability issues in AI-generated contracts, determining duty distribution among developers, deploying entities, and end users within legal frameworks.
2. Assess regulatory frameworks and ethical considerations for fairness, transparency, and consumer protection in AI-mediated contracts across nations.
3. Develop comprehensive rules for AI-generated contract recognition and enforcement by examining comparable jurisdictional approaches and learning from international legal systems.
1.6 Scope and Limitations
This study explores the validity, enforcement, and liability of AI-generated contracts under modern contract law. It examines developing regulatory approaches in several jurisdictions and applies classic contractual principles to AI-mediated agreements. Case law, academic literature, and regulatory frameworks inform the study. It focuses primarily on common law jurisdictions and does not address quantum computing or advanced blockchain applications. Rapidly shifting legal precedents and nascent regulatory frameworks limit the available case law, and access to proprietary commercial AI contract systems is restricted.
1.7 Review of Literature
- Katie Koo, “Navigating Legal Liability in AI-Driven Contracts,”[1] UNSW Law Journal Student Series, Vol. 4 (2023). Examines how AI’s lack of agency challenges contractual validity and enforcement, and compares strict liability, negligence, and hybrid models for allocating responsibility among developers, deployers, and users.
- IJCRT Research Team, “Legal Implications of AI-Generated Contracts,”[2] Int’l Journal of Creative Research Thoughts, Vol. 13, Issue 4 (2025). Analyzes traditional contract elements—offer, acceptance, consideration, intent—in AI-generated agreements and highlights enforcement difficulties from algorithmic opacity, recommending transparency and human oversight.
- Cambridge Handbook of the Law, Ethics and Policy of AI, “AI and Consumer Protection,”[1] Chapter 10 (2025). Surveys global regulatory approaches—EU AI Act to sectoral frameworks—identifying gaps in consumer protection laws for AI contracts and advocating specialized rules for fairness, accountability, and transparency.
[1] Cambridge Handbook of the Law, Ethics and Policy of AI, AI and Consumer Protection, ch. 10 (2025).
[1] Katie Koo, Navigating Legal Liability in AI-Driven Contracts, 4 UNSW L.J. Student Series 45 (2023).
[2] IJCRT Research Team, Legal Implications of AI-Generated Contracts, 13 Int’l J. Creative Rsch. Thoughts 1021 (2025).
[1] Ryan Calo, Artificial Intelligence and the Future of Contract Law, 105 Minn. L. Rev. 1239, 1255–67 (2021) (discussing AI’s impact on contract formation, consent, and enforcement challenges, including the “black box” problem and doctrinal adaptations).
[1] Mireille Hildebrandt, Smart Contracts, Smart Legal Contracts, and Decentralized Autonomous Organizations: A Legal and Technical Overview, 9 J. L. & COM. 103, 110–15 (2025).
[1] Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic, Artificial Intelligence and Contract Law: Transformative Impacts and Legal Challenges (2025), https://www.cippic.ca/artificial-intelligence-contract-law.
1.8 Research Questions
- What are the primary enforcement and liability challenges in AI-generated contracts, and how should responsibility be allocated among AI developers, deploying entities, and end users?
- What regulatory frameworks and ethical considerations are necessary to ensure fairness, transparency, and consumer protection in AI-mediated contractual relationships?
- What lessons can be drawn from comparative jurisdictional approaches to develop comprehensive legal frameworks for AI-generated contract recognition and enforcement?
1.9 Research Methodology
This study adopts a doctrinal research approach utilizing library-based methodology and secondary sources. Data was collected from books, journals, research papers, articles, and online databases without conducting primary surveys or case studies. The researcher extensively referred to reliable legal databases to understand AI contract law and formulate research objectives forming the study’s framework. The project employed standard legal citation formats.
CHAPTER 2
VALIDITY AND LEGAL RECOGNITION OF AI-GENERATED CONTRACTS
2.1 Traditional Contract Law Principles and AI Integration
The main test of whether AI-generated contracts are valid is whether they satisfy the same requirements that have long governed contracts. A valid contract requires an offer and acceptance evidencing mutual assent, sufficient consideration from both sides, a genuine intention to create legal relations, and the capacity of each party to contract. These basic rules apply equally whether contracts are made by people or by AI systems.[1] For AI-generated contracts to be legally binding and enforceable in court, they must meet the same standards.
The principle that substance matters more than form in contract formation supports the validity of AI-generated contracts.[2] Commentators argue that agreements drafted or negotiated through AI can be upheld in court if it can be shown that both parties intended to be bound. This requires a fresh account of AI's role in the process. AI systems have no legal personality, so they can neither sign contracts themselves nor genuinely intend to form legal relations. Even where AI drafts a contract, the person or entity that deploys it remains responsible for its contents. This view acknowledges the significant role AI plays in modern contract formation while remaining faithful to conventional ideas of contractual agency.
2.2 Challenges to AI Contract Validity
Considerable questions remain over the degree to which these agreements are legally binding under the existing legal system, precisely because artificial intelligence was used to draft them.[3] The traditional "meeting of the minds" is jeopardised when AI constructs or modifies language based on data patterns rather than human intent. Because it is difficult to understand how an AI system reaches its conclusions, its decision-making is often described as a "black box", and this opacity can undermine a contract's legitimacy: it may prevent the parties from understanding, or genuinely agreeing to, the contract's terms. Human oversight is therefore necessary to ensure that contracts accurately reflect the intentions of all parties and remain legally enforceable, since AI is not a legal person and can produce language that is vague or contradictory.
2.3 Jurisdictional Approaches and Best Practices
Courts and legal systems around the world treat AI-made contracts very differently. US courts impose no strict formalities on how a contract is written, but they do insist on the essential elements: under American law, a contract requires offer, acceptance, consideration, and intention. This approach accommodates new technology without departing from established doctrine. Indian courts have been explicit that AI contracts can be upheld. In a widely reported decision, the Karnataka High Court[4] held that AI-generated contracts are valid under the Indian Contract Act, 1872, provided they satisfy all statutory requirements.
The court reasoned that using technology to assist in forming a contract does not diminish its legality so long as all the legal requirements are met. Turkey, by contrast, does not extend legal standing to AI systems: only legally recognised persons can execute contracts generated by AI. Across jurisdictions, the approaches that work share two features: the process is transparent, and humans supervise it. To ensure informed consent, algorithms should be intelligible, thorough records should be kept of how AI is used and how decisions are made, and any AI-proposed terms should receive human approval before they take effect. Compliance also means following applicable laws and industry standards, training people to use AI contract systems, defining when human involvement is mandatory, and disclosing what AI systems can and cannot do. Together, these approaches show how the relationship between AI and contract law has evolved and where the law currently stands.
[1] Omri Ben-Shahar & Ariel Porat, Artificial Intelligence and Contract Law: Principles and Pitfalls, 98 Ind. L.J. 1235, 1248–55 (2023) (discussing traditional contract principles applied to AI-generated contracts and the role of human agency).
[2] Supra note 1.
[3] Ryan Calo, Contracting in the Age of Artificial Intelligence: Challenges to Mutual Assent and Transparency, 42 Harv. J.L. & Tech. 613, 620–28 (2023).
[4] Karnataka High Court, Karnataka High Court Declares That AI-Generated Contracts Are Legally Binding, LawGratis (Mar. 2, 2025), http://www.lawgratis.com/blog-detail/karnataka-high-court-declares-that-ai-generated-contracts-are-legally-binding.
CHAPTER 3
ENFORCEMENT AND LIABILITY ISSUES
3.1 Enforcement Challenges in AI-Generated Contracts
Enforcing AI-generated contracts is difficult because such contracts strain existing legal doctrine: AI systems can make decisions autonomously, and responsibility for those decisions may rest with several parties at once. When AI systems can create, modify, and perform contracts in many different ways, it is hard to say who is responsible for ensuring performance. Interpretation is a central problem. AI systems can produce contracts that appear clear but in fact contain vague or contradictory language whose defects surface only at signing or in a dispute. AI-generated text lacks the subjective intent behind contracts written by people, so the usual methods of interpretation break down. Courts may struggle to determine what AI-generated clauses really mean when the system cannot explain why it chose particular words. Problems of attribution and accountability make enforcement harder still.[1] When AI systems negotiate terms, amend agreements, or trigger performance obligations, it is difficult to identify who bears responsibility.
Whether AI acts as an independent tool, as an agent of the deploying party, or as an instrument of collaboration between the contracting parties affects both liability and enforcement. Moffatt v. Air Canada[2] shows that courts are willing to hold businesses responsible for AI-generated content, although that decision concerned a consumer dispute rather than a complex commercial agreement between sophisticated parties. AI-made contracts also complicate dispute resolution. Traditional arbitration and mediation depend on decision-makers being able to articulate and reconcile their positions; AI cannot meaningfully participate in those processes, leaving humans to defend or explain outputs they may not fully understand. And because AI can form and perform contracts faster than traditional dispute-resolution mechanisms can operate, deals may be executed before problems are resolved.
3.2 Liability Models and Frameworks
Determining who is responsible for errors, missed deadlines, or harmful outcomes in AI-generated contracts requires a careful comparison of several theoretical frameworks, each with different consequences for participants in the AI contracting ecosystem. Lawyers and legal scholars have proposed a range of AI-related liability models, some grounded in negligence and others in strict-liability rules for autonomous systems. Under strict liability, deploying entities answer for any harm even absent negligence or fault: as with strict liability for dangerous activities or defective goods, the company that uses or profits from an AI contracting system bears responsibility for the harm it causes. Advocates argue that strict accountability gives victims of AI errors a definite remedy and sharpens the selection and oversight of AI systems. As autonomous technologies capable of significant harm become more widespread, the European Parliament has proposed strict liability for operators of "high-risk" AI systems.
Negligence-based liability frameworks require proof of carelessness on the part of the deploying party in selecting, implementing, or monitoring AI systems. Although this approach fits established tort principles, it struggles to define standards of care for AI technology. Courts must assess whether parties conducted adequate due diligence when choosing AI, provided sufficient user training, established proper monitoring, and reasonably addressed known AI flaws or limitations.
Where AI systems are treated as defective products, developers may be held responsible for design, manufacturing, or warning defects. This approach can assign responsibility clearly, particularly for AI systems that are unsafe or badly designed. Applying product liability to AI is difficult, however, because it is unclear whether software-based systems are "products" at all, and what counts as a "defect" in a learning system that changes over time.
Under hybrid liability models, responsibility for AI contractual obligations is shared among the parties according to their roles, control, and benefits. These frameworks recognise that errors in AI-generated contracts can stem from human error, training data, the user's implementation, or the developer's algorithms.[3] Hybrid models encourage the responsible development and use of AI while ensuring that each party bears a fair share of responsibility for adverse outcomes.
3.3 Stakeholder Responsibility and Risk Management
There are many stakeholders in AI-generated contracts, each with a different degree of control, expertise, and benefit from the AI system. Deciding who is responsible for what requires weighing the roles and capabilities of AI developers, deploying organisations, end users, and any affected third parties, and responsible conduct should be encouraged at every stage of the AI lifecycle.[4]

AI developers must ensure that their systems are well documented, carry warnings about their limitations, are suitable for contracting, and have been thoroughly tested. Developer liability covers matters such as design flaws, inadequate training, missing safety measures, misrepresented system capabilities, and failure to disclose risks. Developers are not responsible for every misuse or misimplementation of their technology, especially where deploying parties ignore warnings or use AI systems in unintended ways.

Deploying entities are responsible for implementing, monitoring, and managing AI contracting systems. This includes human oversight, safety and quality assurance, user training, adequate pre-deployment due diligence, and remedying problems. In Moffatt v. Air Canada, the tribunal held that deploying parties can be responsible for AI-generated content supplied to clients or business partners, even where the AI acts autonomously.[5]

To reduce liability and make AI work better, every stakeholder needs a risk-management plan. Elements include comprehensive AI insurance coverage, robust audit trails and documentation systems, clear rules for when a person can step in and take over, regular system testing and monitoring, and contractual protections such as indemnity and limitation-of-liability clauses.
Contract terms that attempt to shield parties from harm caused by AI may be unenforceable on grounds of public policy, unconscionability principles, and consumer protection law. AI technology and the rules governing it require us to rethink how risk is managed and blame assigned. As AI systems become more complex and autonomous, concepts such as "agency", "control", and "responsibility" may need to be reconsidered so that people remain accountable and contracts continue to encourage good AI innovation.
[1] Mark Fenwick, Liability and Accountability in AI-Driven Contracting, 45 U. Pa. J. Int’l L. 817, 825–38 (2024) (discussing attribution challenges and enforcement issues in AI contract contexts).
[2] Moffatt v. Air Canada, 2024 BCCRT 149 (Can.).
[3] Lilian Edwards, Legal Liability Models for AI: Negligence, Strict Liability, and Hybrid Approaches, 38 Harv. J.L. & Tech. 457, 460–75 (2024).
[4] Gillian Tett, The Roles and Responsibilities of AI Developers and Deployers in Contract Management, 28 Harv. J.L. & Tech. 289, 295–310 (2025).
[5] Moffatt v. Air Canada, 2024 BCCRT 149 (Can.).
CHAPTER 4
REGULATORY, ETHICAL, AND FUTURE PERSPECTIVES
4.1 Regulatory Frameworks and Consumer Protection
Governments are rapidly updating the rules on AI-generated contracts, seeking a balance between protecting consumers, encouraging innovation, and making the law clear. The European Union now leads in comprehensive AI regulation. Its landmark AI Act divides AI systems into four risk tiers: minimal risk, limited risk (subject to transparency measures), high risk (subject to strict compliance measures), and unacceptable risk (banned outright). Providers of high-risk AI systems that fail to meet strict standards of fairness, transparency, and explainability face fines of up to €30 million or 6% of global turnover.[1] The US, by contrast, relies on sectoral and self-regulatory instruments such as the proposed Algorithmic Accountability Act and FTC guidance. The FTC enforces AI rules through consumer protection law, prohibiting unfair or deceptive practices and requiring AI impact assessments in critical fields such as finance and healthcare. The UK focuses on transparency, explainability, fairness, accountability, governance, and contestability and redress, through a cross-sector, outcome-oriented approach: rather than creating new regulators specifically for AI, it enables existing regulators to apply these principles within their own areas using proportionate, context-based techniques.
Indian regulators are building AI governance on data protection law and sectoral recommendations. The planned Digital India Act is expected to include algorithmic impact assessments and liability requirements, while NITI Aayog's Responsible AI Guidelines emphasise openness, bias mitigation, and AI ethics. India's "whole of government" strategy involves sectoral ministries, with MeitY setting baseline requirements and the Prime Minister's Office coordinating between agencies. Consumer protection requires close attention to existing legal instruments which, although not designed for AI, offer some protection in AI contract settings.[2] The Unfair Commercial Practices Directive, the Consumer Rights Directive, and the Unfair Contract Terms Directive all protect people from AI that deceives, manipulates, or treats them unfairly. But these rules are broad and their outcomes uncertain, making them hard to apply to AI. Some critics of the revised Consumer Rights Directive argue that it goes too far in restraining unfair pricing, because it expressly flags prices influenced by automated decisions.
4.2 Ethical Considerations: Fairness, Accountability, and Transparency
Fairness, accountability, and transparency (FAT) are the core principles behind the development and use of artificial intelligence in contract law. AI contracting systems should be bound by the rule that algorithms must not disadvantage people on grounds the law already protects, such as race, gender, or income level. Bias can arise when algorithms favour specific outcomes, when training datasets are unrepresentative, or when developers make unjust choices. Companies can make AI systems more equitable by actively debiasing them, using a variety of data sources, and verifying the fairness of their algorithms. Because AI systems can draft and perform contracts on their own, procedures for holding people accountable are essential: there should be clear lines of accountability at every stage of the AI lifecycle, from data collection and model building to deployment and monitoring. Responsible use of AI requires equitable development guidelines and the appointment of compliance officers or AI ethics committees.[3] Assigning responsibility is difficult when many parties are involved, including developers, clients, suppliers, and regulators, not least because many AI systems are "black boxes". A fundamental ethical requirement is that AI-created contracts be clear and intelligible. Explainable AI (XAI) systems must be able to justify their choices in order to earn trust and support informed consent; this calls for post-hoc explanations of individual predictions and model-interpretability techniques that link decisions to data points. The UNESCO Recommendation on the Ethics of AI says that explainability and transparency should be proportionate to the context, taking account of safety, security, and privacy concerns.
To protect privacy throughout the AI lifecycle, strong data protection frameworks are needed that respect national sovereignty and international law and give a variety of stakeholders a say in AI governance. To prevent harm to human rights and the environment, AI systems must be auditable, traceable, and supervised, with impact-assessment and due-diligence mechanisms. Human oversight of AI systems remains essential: AI should not displace human responsibility and duty in contracting.
4.3 Future Perspectives and Legal Adaptation Recommendations
To protect consumers, close regulatory gaps, and encourage innovation, AI-generated contracts will require careful legal adaptation. The worldwide trend, pioneered by the EU AI Act, is to regulate AI systems according to the risks they pose to people and society. That approach has influenced rules around the world, mandating strict compliance for high-risk applications while leaving flexibility for lower-risk AI deployments. Open issues remain, including the compliance burden on businesses, especially small AI startups, and the consistent definition of risk categories across jurisdictions.[4]
In the future, AI governance will need international cooperation and standardisation. International organisations, trade agreements, and regional regulatory bodies must cooperate to address the fact that AI technologies do not respect borders.
Suggested measures include consistent AI fairness standards that mandate bias evaluations, international AI liability rules to ensure accountability, and transparency requirements such as explainable-AI laws. A global AI regulatory body under the UN could oversee global AI governance and compliance. Legal adaptation will require specialised frameworks that integrate contract law with AI-specific needs: strict audit-trail and documentation requirements, openness and explainability requirements for AI systems relevant to contracts, and clear processes for allocating responsibility among AI developers, deploying entities, and end users.[5] Businesses can reduce their risk through AI insurance, regular testing and monitoring of their systems, and procedures that let personnel take over quickly in an emergency. Education programmes are also important, because most people know little about contracts or artificial intelligence systems: bridging the digital divide, making data and AI literacy widely accessible, and teaching lawyers about the ethical issues AI raises would all help. Media and information literacy training is one way for people to learn about their rights and the limits of AI systems.
Conclusion
This study of AI contracts shows how contract law has grown more sophisticated to keep pace with new technology while still protecting fundamental rights and legal certainty. It concludes that AI-generated contracts may be legally binding if they satisfy the standard requirements of offer, acceptance, consideration, and mutual intent; however, the technical characteristics of AI raise new problems that require careful legal reform. Courts today apply modern contract doctrine and focus more on what the terms mean than on how they were generated, a shift prompted by the highly structured terms that artificial intelligence produces. A persistent puzzle is that AI can communicate in language most people understand, yet it has no legal personality. Approaches also differ across jurisdictions: in India, for example, AI-based contracts are treated as valid under the Indian Contract Act, whereas other jurisdictions emphasise the need for stricter human oversight and consent. For artificial intelligence contracting to work in the future, policymakers, engineers, and lawyers will need to collaborate on rules that protect individual rights while encouraging innovation. Because AI is constantly improving, the law must be able to adapt to new issues without abandoning the basic principles that keep parties safe and sustain trust in contractual relationships.
Bibliography
- Ben-Shahar, Omri & Ariel Porat, Artificial Intelligence and Contract Law: Principles and Pitfalls, 98 Ind. L.J. 1235 (2023).
- Calo, Ryan, Artificial Intelligence and the Future of Contract Law, 105 Minn. L. Rev. 1239 (2021).
- Edwards, Lilian, Legal Liability Models for AI: Negligence, Strict Liability, and Hybrid Approaches, 38 Harv. J.L. & Tech. 457 (2024).
- European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM (2021) 206 final (Apr. 21, 2021).
- Fenwick, Mark, Liability and Accountability in AI-Driven Contracting, 45 U. Pa. J. Int’l L. 817 (2024).
- Hildebrandt, Mireille, Smart Contracts, Smart Legal Contracts, and Decentralized Autonomous Organizations: A Legal and Technical Overview, 9 J.L. & Com. 103 (2025).
- Koo, Katie, Navigating Legal Liability in AI-Driven Contracts, 4 UNSW L.J. Student Series 45 (2023).
- Moffatt v. Air Canada, 2024 BCCRT 149 (Can.).
- Memarian, B., Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence Systems, 4 Ethics & Info. Tech. 123 (2023).
- NITI Aayog, Responsible AI for All: National Strategy on Artificial Intelligence and Policy Guidelines (2024), https://niti.gov.in/sites/default/files/2024-Responsible-AI-Policy-Guidelines.pdf.
- Tett, Gillian, The Roles and Responsibilities of AI Developers and Deployers in Contract Management, 28 Harv. J.L. & Tech. 289 (2025).
- UNESCO, Recommendation on the Ethics of Artificial Intelligence (2024).
[1] European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM(2021) 206 final (Apr. 21, 2021).
[2] NITI Aayog, Responsible AI for All: National Strategy on Artificial Intelligence and Policy Guidelines (2024), https://niti.gov.in/sites/default/files/2024-Responsible-AI-Policy-Guidelines.pdf.
[3] B. Memarian, Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence Systems, 4 Ethics & Info. Tech. 123, 128–35 (2023).
[4] European Commission, Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM(2021) 206 final (Apr. 21, 2021) (setting forth the risk-based regulatory framework for AI systems).
[5] Kenneth R. Fleischmann & Sarah L. Banks, Toward Global AI Governance: Legal, Ethical, and Regulatory Considerations, 52 Geo. Wash. Int’l L. Rev. 411, 430–45 (2024) (discussing international cooperation, harmonization, and the role of education in AI law).