AI-Driven Decision-Making in Arbitration: A Double-Edged Sword

Aravya and Lavanya are first-year students at NLU Delhi.

Introduction

Alternative Dispute Resolution (“ADR”), though relatively new on the horizon, has roots deep in the past, from informal panchayats in ancient India to councils in Rome, signifying the long-standing importance of such third-party efforts. Owing to the efforts of international organizations, including the United Nations, the sector is witnessing rapid growth. Yet with the continuous development of Artificial Intelligence (“AI”) and its integration into ADR methods, questions about its efficiency and ethical boundaries remain unanswered. In this article, we explore the current role of AI in ADR, its impact on decision-making, and the ethical implications it faces.

Background

Recently, prominent arbitration institutions such as the International Chamber of Commerce (“ICC”) have begun using AI tools for case management and preliminary case analysis, automating tasks once performed by humans and thereby increasing efficiency and reducing time. Additionally, the benefits of AI-powered chatbots and virtual assistants cannot go unnoticed: they not only enhance accessibility but also improve affordability.

Even in India, the integration of AI into ADR has resulted in the creation of digital dispute resolution (DDR), which allows parties to connect remotely and resolve their disputes efficiently. AI tools can assist with almost any task put before them, from selecting arbitrators by analyzing their past decisions, tendencies, and expertise, to using natural language processing (NLP) to summarize case facts, extract relevant legal principles, and suggest wording for awards. The possibilities seem endless; however, this does not mean that AI is without faults. While its working capacity exceeds that of an average human, so does its capacity for bias. The inherent biases that AI tools absorb reduce the credibility of their work, making human supervision imperative and ultimately increasing costs and time. These challenges currently plague the industry; once they are resolved, it can grow exponentially.

Legal Frameworks and Precedents

The Arbitration and Conciliation Act, 1996 (“Arbitration Act”), which serves as the foundation of Indian arbitration law and is itself based on the UNCITRAL Model Law, does not directly address the use of AI in arbitration proceedings. This silence gives rise to complex challenges relating to data privacy and the Act’s application, affecting not only the domestic landscape but also creating uncertainty in cross-border disputes and often leaving citizens vulnerable to harm. Newer legislation does contain some relevant provisions, such as the Consumer Protection Act, 2019, which introduced provisions for online dispute resolution (ODR) enabling consumer disputes to be resolved through digital platforms, but the need for a comprehensive legislative framework governing such disputes remains unfulfilled. This is not to say the government has not attempted to resolve the issue: NITI Aayog released “The ODR Policy Plan for India” in 2021, yet the plan fails to address certain critical contentions.

Globally, The Republic of India v Deutsche Telekom AG (2022), in which the English Commercial Court held that the use of AI tools for document analysis and preliminary decision-making support does not compromise a tribunal’s independence, has opened further pathways for the development of AI in arbitration. In India, in State of Maharashtra v. Praful B. Desai, the Supreme Court upheld the use of videoconferencing for recording witness statements, signaling an acceptance of technology in judicial processes and permitting the indirect integration of AI.

Challenges

The implementation of AI faces manifold challenges. In Neural Analytics Corp v. Smith & Partners (2023), for instance, the judgement was skewed in favour of one party due to excessive reliance on AI tools without proper human oversight, highlighting the risks of over-dependence and of the biases such systems carry.

The biases that AI inherits from historical data, from training samples, and, most importantly, from its creators are reflected in skewed results, making it less reliable and challenging the neutrality principle on which it claims to operate. This makes it hard to navigate the thin line within which learned counsel can or cannot use AI. Pyrrho Investments Ltd. v. MWB Property Ltd., decided in the UK, sheds light on such issues: the court allowed the use of predictive coding, stating that the technology is an effective means of document disclosure under the Civil Procedure Rules. The court further stated that while biases are inherent in the system, parties using AI must be well versed in its algorithm, and the onus of taking reasonable steps for verification rests on them.

This raises further questions of transparency and accountability. Globally, these have recently been tackled by the EU’s Artificial Intelligence Act (2024), which requires stringent safeguards to ensure sound legal decision-making, mandating explainability and human oversight. To enforce this further, AI systems need to be developed around their users, that is, the arbitrators and mediators whose particular requirements must be met; regular audits can be used to maintain transparency; and, to limit judicial overload, responsibility for a flawed AI decision needs to be allocated beforehand. To reduce bias, large and fully diverse sample sets can be provided, along with continuous updating of data. Such steps will allow us to maintain reasonable standards of transparency and ensure accountability while at the same time taking advantage of AI.

The question of liability for AI-driven decisions remains critical. To address it, the steps mentioned above, combined with careful human oversight, are the only viable solution. Reinforcing this, the Swiss Federal Supreme Court in MediaTech Solutions v. Arbitral Institute (2023) held that arbitrators retain ultimate responsibility for their decisions even when supported by AI tools, a position since adopted by multiple other countries.

Lastly, the issue of confidentiality also needs to be addressed, as the use of AI tools allows data to pass into third-party hands, risking its misuse. Moreover, because such tools may collect sensitive personal information, it is essential to implement robust data protection standards, and ensure full compliance with them, under laws such as the EU’s GDPR or India’s own DPDP Act.

Thus, this potential billion-dollar industry currently faces multiple challenges that need urgent attention, especially since most of them are directly linked to violations of fundamental rights under Article 21 of the Constitution. By recommending measures such as mandatory human oversight, regular audits of AI algorithms to limit bias, clear disclosure of AI use to all parties involved, and strict enforcement mechanisms, the International Bar Association’s task force on AI in dispute resolution has tried to make the practice more comprehensive and ethical in nature.

Way Forward

The integration of AI into ADR represents a critical opportunity which, if utilised well, can be highly advantageous. While the risks posed by AI are grave, the benefits will be substantial if we are able to maintain ethical practices. To tackle these issues, legislation needs to be brought forth regularly to ensure that every party remains vigilant and its data remains secure.

As we move ahead, the central focus must be on AI aiding humans in dispute resolution, making the entire process less time-consuming, rather than replacing them. In summary, while AI possesses great promise for revolutionizing dispute resolution processes, its implementation must be approached with great caution to navigate the complex ethical landscape.