Ups And Downs Of Artificial Intelligence And Algorithms In The American Criminal Justice System: Forewarning Lessons For The Indian Criminal Justice System

By Mohammad Taha Amir, LL.B., LL.M. (BHU, India)

Topic And An Overview

1. AI and algorithms are often referred to as “weapons of math destruction.” Many systems are also credibly described as “a sophisticated form of racial profiling.” These views are widespread in many current discussions of AI and algorithms.

2. This assignment provides an important first look at the potential use and regulation of AI and algorithms in the Indian criminal justice system/proceedings. The assignment identifies important legal, policy and practical issues and choices that Indian policymakers and justice stakeholders should consider before these technologies are widely adopted in this huge and diverse country.

3. The specific subject of this assignment is algorithmic pretrial risk assessments. These are AI or algorithmic tools that aid criminal courts in pretrial custody or bail decision-making.

4. The use of these tools has expanded rapidly across the United States, to the point where these systems are probably the most widely implemented algorithmic tool to aid decision-making in criminal proceedings in the world. This expansion has been the catalyst for an unprecedented and rapid evaluation of how algorithmic tools in the criminal justice system are designed, developed and deployed.

5. The American experience with pretrial algorithmic risk assessments has not gone smoothly. In the space of a few short years, there has been an extraordinary backlash against the use of these tools, including by many of the same organizations and stakeholders who enthusiastically supported their development in the first place.

A Few Lessons That Need To Be Remembered

There are several important lessons identified in this assignment.

• AI, algorithms and automated decision-making are a significant new frontier in human rights, due process and access to justice. AI, algorithms and automated decision-making are expanding rapidly in justice systems across the world. This expansion raises new and crucial questions about equality, access to justice and due process in legal decision-making affecting fundamental human rights.

• Simple solutions and complex problems. AI and algorithms offer many benefits, including the potential to provide consistent, “evidence-based” and efficient predictions. Unfortunately, experience demonstrates the risk of adopting unproven and under-evaluated technologies too quickly to address long-standing, complex and structural problems in the justice system, such as systemic racism, casteism and gender bias.

• AI and algorithms often embed, and obscure, important legal and policy choices. AI and algorithms are not “objective” or neutral simply because they are based on data. Seemingly technical decisions often embed far-reaching policy or legal choices without public discussion or accountability. These choices can have far-reaching consequences for individual liberty and fairness in legal decision-making. The distinction between “code choices” and “policy choices” is sometimes difficult to appreciate.

• There are data issues and choices at every stage of AI and algorithmic decision-making. Data issues and choices are endemic to every aspect of AI or algorithms, and they can be both consequential and controversial.

For example, many AI and algorithmic systems are criticized because they use historically racist, discriminatory or biased data. Other important data issues include statistical “metrics of fairness” and the accuracy, reliability and validity of data sets.

Simply stated, data issues and choices are foundational to the success and legitimacy of any AI or algorithmic tool used by government, courts or tribunals.

• AI and algorithmic systems make predictions; they do not set policy or make legal rules or decisions. AI, algorithms and risk assessments are statistical tools that help make predictions. Courts, legislatures and policymakers decide how to turn those predictions into “action directives” or legal decisions. Whether algorithms worsen or lessen bias, or are coercive or supportive, depends on how human decision-makers decide these tools are used.

• Legal protections regarding disclosure, accountability, equality and due process for AI and algorithmic systems are often inadequate. In the criminal justice system, AI and algorithmic tools must be held to high legal standards. Unfortunately, many of the legal issues raised by this technology are unexplored, unregulated and poorly understood. Current models of legal regulation and accountability have not kept pace with technology. Emerging models of “technological due process” suggest a constructive way forward.

• The use of AI and algorithms in criminal proceedings raises important access to justice issues. The use of AI and algorithms in criminal proceedings means that criminal defendants potentially face even higher hurdles in presenting a full answer and defence to the charges against them. These additional hurdles may compound existing barriers to access to justice and lead to greater over-representation of low-income and racialized communities, Scheduled Castes, Scheduled Tribes and women in the criminal justice system.

• The criticisms of AI and algorithms are legitimate, but there are also opportunities and emerging best practices. There are many significant and legitimate criticisms of algorithmic tools. At the same time, much has been learned about how to design, develop, implement and evaluate these systems. Many legal organizations, technologists and academics have begun to develop best practices and/or legal regimes necessary to improve these tools.

• There must be broad participation in the design, development and deployment of these systems. Unequal access to information and participation in AI and algorithmic decision-making can significantly worsen existing biases and inequality. Broad participation must include law professors, judges, lawyers, technologists, policymakers, lawmakers and, crucially, the communities who are likely to be most affected by this technology.

• Comprehensive law reform is needed. The systemic legal issues raised by this technology cannot be addressed through individual litigation, best practices or piecemeal legislation. Comprehensive law reform is required. There are many potential legislative or regulatory responses, but the choices and options between them are complex and consequential. The Indian legal system must proactively address important issues and options prior to the widespread implementation of these systems.

• Incremental reforms, adopted deliberately. India needs to be thoughtful, deliberate and incremental when adopting technologies in the justice system that potentially have such an extraordinary impact on individual rights and justice system fairness and transparency.

Artificial Intelligence And Algorithms In The Criminal Justice System

AI and algorithms are being used by judiciaries, governments and related agencies to make or aid decision-making in a wide range of government applications across the US, UK and Europe. AI and algorithms are being used to determine government benefits, write tribunal decisions, conduct risk assessments in child welfare and domestic violence matters, decide immigration status and assist government investigations and regulation in many sectors. The area of government activity where these systems have been used most extensively, however, is criminal justice, including:

• Bail and sentencing algorithms that predict recidivism;

• Predictive policing algorithms that predict who is likely to commit a crime or where crime is likely to occur;

• Photo and video algorithms, including facial recognition;

• DNA profiling and evidence algorithms, including predictive genomics;

• “Scoring victims” algorithms that predict the likelihood of being a victim of a crime; and,

• Correctional algorithms that predict the likelihood of re-offending within an institution.

To date, few of these applications appear to be used in India in either the civil/administrative or criminal justice systems, although government interest in these systems is growing by the day in the absence of any governing law or policy. That is my fundamental concern.

Transposed to the Indian context, the applications in use internationally would affect significant government entitlements, crucial human rights and important access to justice issues, including “poverty law”, child welfare, criminal law and refugee/immigration issues. They would also affect some of India’s most important government services and the jurisdiction and workload of the Supreme Court, High Courts, administrative tribunals, ministries, agencies and municipalities.

Algorithms And Bail Reform In The United States Of America

Risk assessments are statistical models used to predict the probability of a particular future outcome. In the pretrial context, risk assessment tools are used to predict how likely it is that an accused will miss an upcoming court date or commit a crime before trial. The growth of algorithmic pretrial risk assessments in the US was driven largely by the American bail reform movement, the purpose of which is to “end [the] wealth-based [bail] system and move pretrial justice systems to a risk-based model.” Algorithmic pretrial risk assessments quickly emerged as the “favoured reform” to advance these initiatives. According to the Center for Court Innovation, a New York-based non-profit research organization, “[t]he appeal of pretrial risk assessment—especially in large, overburdened court systems—is of a fast and objective evaluation, harnessing the power of data to aid decision-making.” Significantly, pretrial risk assessments were “strongly” endorsed as a “necessary component of a fair pretrial release system” by a broad coalition of American public defenders and civil rights organizations. With this kind of support, it is not surprising that the growth of pretrial risk assessments in the US has been “breathtaking” and “hard to overestimate”.
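To make the statistical nature of these tools concrete, the sketch below shows how a pretrial risk model of this general kind can be built. It is a minimal illustration using scikit-learn, with entirely hypothetical features and simulated data; it is not the method of any actual tool such as the PSA or COMPAS, whose factor lists and models differ.

```python
# Minimal sketch of a pretrial risk model: a logistic regression that
# estimates the probability of "pretrial failure" (a missed court date or
# a new arrest before trial). All features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical records: [age, prior_arrests, prior_failures_to_appear]
X = rng.integers(low=[18, 0, 0], high=[70, 10, 5], size=(1000, 3))
# Hypothetical outcome: 1 = failed pretrial; made to correlate with priors
y = (rng.random(1000) < 0.15 + 0.04 * X[:, 1]).astype(int)

model = LogisticRegression().fit(X, y)

# The tool's output for a new defendant is a probability, not a decision.
defendant = np.array([[25, 3, 1]])
p = model.predict_proba(defendant)[0, 1]
print(f"Predicted probability of pretrial failure: {p:.2f}")
```

The essential point is visible in the last line: the tool’s output is a probability estimate derived from historical records, nothing more.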

In 2017 alone, as many as 14 states made provisions to adopt or investigate the use of pretrial risk assessment tools. In California, 49 of 58 counties use algorithmic risk assessment tools. Notwithstanding this rapid expansion, there has been a remarkable reversal in the legal and political support for these tools. This reassessment has been driven by several factors, including the experience of jurisdictions that implemented pretrial risk assessments, new research asserting that pretrial risk assessments actually perpetuate racial bias, and a reconsideration of the utility of risk assessments relative to other bail reform strategies. Many of the original supporters of these systems now argue that algorithmic risk assessments should be opposed “entirely.”

Just recently, more than 100 civil rights, social justice, and digital rights groups issued “A Shared Statement of Civil Rights Concerns” declaring that risk assessment instruments should not be used in pretrial proceedings, or at least that their use should be severely circumscribed.

Forewarning Lessons For The Indian Criminal Justice System

Lesson No.1- Bias In, Bias Out

The most trenchant and troubling criticism of pretrial risk assessments – and many other forms of AI and algorithms in criminal justice – is that they are racist.

In its most reductive form, this argument is straightforward: because the training data or “inputs” used by risk assessment algorithms – arrests, convictions, incarceration sentences, education, employment – are themselves the result of racially disparate practices, the results or scores of pretrial risk assessments are inevitably biased. For these reasons, organizations such as Human Rights Watch believe algorithmic risk assessments to be “a sophisticated form of racial profiling.”

For many in the US, the “bias in, bias out” argument is conclusive proof that algorithmic risk assessments and similar tools should never be used in criminal justice proceedings. Racial data discrimination in AI and algorithmic systems has been studied and analyzed extensively in the American criminal justice system.
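The “bias in, bias out” mechanism can be made concrete with a toy simulation. All numbers below are invented for illustration: two groups behave identically, but one is policed more heavily, so a model trained on its arrest records scores it as riskier.

```python
# Toy illustration of "bias in, bias out" (all numbers are invented).
# Two groups offend at the same underlying rate, but group B is policed
# more heavily, so its offences are recorded more often. A model trained
# on those records then scores group B as higher risk.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
offended = rng.random(n) < 0.30        # identical true behaviour

# Recorded data reflects policing intensity, not just behaviour:
# group B's offences are recorded at twice the rate of group A's.
detection = np.where(group == 1, 0.8, 0.4)
prior_arrests = (offended & (rng.random(n) < detection)).astype(int)
# The outcome label is itself a *recorded* rearrest, with the same skew.
rearrested = (offended & (rng.random(n) < detection)).astype(int)

model = LogisticRegression().fit(prior_arrests.reshape(-1, 1), rearrested)
scores = model.predict_proba(prior_arrests.reshape(-1, 1))[:, 1]

# Despite identical behaviour, group B receives higher average scores,
# because the data encodes policing patterns rather than offending.
print(f"Mean risk score, group A: {scores[group == 0].mean():.3f}")
print(f"Mean risk score, group B: {scores[group == 1].mean():.3f}")
```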

I believe that governments, with the help of law professors, judges, lawyers, human rights activists and law students, should proactively and comprehensively address these issues before developing or implementing any AI or algorithmic tools in the Indian criminal justice system/proceedings.

Lesson No.2- The “Metrics of Fairness”

Risk assessment controversies in the US have demonstrated how different measures of statistical fairness are crucial in determining whether an algorithm should be considered discriminatory or race-neutral. More importantly, these controversies have also demonstrated that the burden for an algorithm’s statistical errors may not be shared equally: a statistical measure that over-classifies racialized accused as risky may effectively replicate (or worsen) existing patterns of racial disparity.
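The underlying arithmetic, which was at the centre of the American debate over the COMPAS tool, can be shown with a few hypothetical confusion-matrix counts. The numbers below are invented: the tool satisfies one fairness metric (equal predictive value across groups) while plainly violating another (equal false positive rates).

```python
# Hypothetical confusion-matrix counts for two groups assessed by the
# same tool; the numbers are invented purely to show the arithmetic.
# TP = labelled high risk and rearrested; FP = labelled high risk but not.
groups = {
    "Group A": {"TP": 100, "FP": 100, "FN": 100, "TN": 700},
    "Group B": {"TP": 300, "FP": 300, "FN": 100, "TN": 300},
}

for name, c in groups.items():
    ppv = c["TP"] / (c["TP"] + c["FP"])           # calibration / predictive parity
    fpr = c["FP"] / (c["FP"] + c["TN"])           # false positive rate
    base = (c["TP"] + c["FN"]) / sum(c.values())  # underlying rearrest rate
    print(f"{name}: base rate {base:.2f}, PPV {ppv:.2f}, FPR {fpr:.2f}")

# Group A: base rate 0.20, PPV 0.50, FPR 0.12
# Group B: base rate 0.40, PPV 0.50, FPR 0.50
# By one metric (equal PPV) the tool treats the groups identically; by
# another (FPR), members of Group B who would NOT be rearrested are
# labelled high risk four times as often.
```

When base rates differ between groups, it is in general mathematically impossible to equalize both metrics at once; deciding which metric governs is therefore a policy choice, not a statistical one.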

Lesson No.3- Data Transparency

One of the most public and significant issues that has arisen in the United States regarding pretrial risk assessments, and many other types of AI or algorithmic tools, is the lack of transparency about data and how these tools work. These criticisms are often part of a larger “black box” critique of AI and algorithms.

Any introduction of algorithmic risk assessments or tools in the Indian justice system will inevitably raise questions about data transparency and accountability, and these debates will closely mirror the American debates on data transparency and risk assessments. As a result, there is an urgent need to consider these issues from the Indian perspective. This effort must be multidisciplinary and involve multiple stakeholders, especially from the communities who are most likely to be affected by these technologies.

Lesson No.4- India, Data Accuracy, Reliability And Validity

Experience in the US demonstrates that algorithmic data issues and choices are both consequential and controversial. In the US, issues such as the reasonableness or appropriateness of a dataset, whether or not a dataset is sufficiently accurate or reliable, and the characteristics selected by developers as most relevant can have important practical and legal consequences.

American debates also reveal that questions about data accuracy, reliability and validity are not technical questions best left to developers or statisticians.

Indian institutions considering or developing algorithmic tools must be mindful of the choices, consequences, best practices and requirements inherent in data practices before implementing risk assessments or any other AI or algorithmic tools in the criminal justice system/proceedings. For example, I believe that before deploying this technology in the criminal justice system/proceedings, or even in related or incidental fields, governments should work towards transparent data standards applicable to AI and algorithmic systems, in collaboration with appropriate stakeholders.

Lesson No.5- India, Data Literacy: Risk Scores And Automation Bias

AI and algorithms typically give an individual a low, medium or high score for some kind of activity (including but not limited to recidivism, criminality, welfare fraud, eligibility for services, the likelihood of child abuse, the likelihood of defaulting on a loan, etc.). At first glance, scoring appears to provide simple, easily understood and usable summaries of complex statistical predictions. It is important to understand, however, that the determination of what constitutes a low, medium or high score is an explicit policy choice, not a statistical or technical outcome.

Moreover, the labelling of risk has important consequences: A high-risk score in criminal justice is obviously stigmatizing. In contrast to popular perceptions, a person with a high pretrial risk assessment score may actually be more likely to be successful than not. Similarly, many algorithmic tools make predictions about whether something negative (such as rearrest, committing a crime, welfare fraud, etc.) is likely to occur. By emphasizing the prospect of failure — rather than the more likely prospect of success — AI and algorithmic tools can effectively stigmatize individuals, particularly low income, racialized or vulnerable communities.
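Both points can be seen in a few lines of code. The cut-off values below are hypothetical; nothing in the statistics dictates them, and under typical cut-offs even a “high risk” defendant is more likely than not to succeed.

```python
# Hypothetical cut-offs for converting a predicted probability of pretrial
# failure into a label. Nothing statistical dictates these values; moving
# a cut-off relabels people without any change in the underlying model.
LOW_CUTOFF = 0.15
MEDIUM_CUTOFF = 0.30   # anything above this is labelled "high"

def label(p_failure: float) -> str:
    if p_failure < LOW_CUTOFF:
        return "low"
    if p_failure < MEDIUM_CUTOFF:
        return "medium"
    return "high"

p = 0.35  # a "high risk" defendant under these hypothetical cut-offs
print(label(p))                                     # -> high
print(f"Predicted chance of success: {1 - p:.0%}")  # -> 65%
# The label emphasizes failure, yet the same number predicts that this
# "high risk" defendant succeeds about two times out of three.
```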

An emphasis on the prospect of failure can also erode the presumption of innocence. The qualifications and subtleties about risk scores can be easily overlooked in individual cases, especially in busy courts. A high-risk score can become a convenient, critical and quick measure of a defendant’s suitability for release.

Absent procedural protections and a proper understanding of the limits of data scoring, there may be significant risk of prejudice to a defendant’s right to a fair hearing and to present arguments on their own behalf. As in other areas, failure to distribute this data literacy equally risks further entrenching existing biases and inequality.

Lesson No.6- The Distinction Between Predictions, Law And Policy

The American experience has demonstrated the important distinction between an AI or algorithmic prediction and the “decision-making framework” that renders that prediction into a recommended course of action.

In other words, algorithms are used to measure risk, while decision-making frameworks are used to manage risk. In the pretrial context, the developers of decision-making frameworks (sometimes called a “release matrix”) must consider some of the following issues (a hypothetical sketch of such a matrix follows the list):

• Does the release matrix conform with constitutional law, relevant statutes, judicial decisions and practice guidelines?

• What conditions or recommendations are suggested for high, medium or low-risk scores?

• What risk score justifies pretrial release or pretrial detention?

• How should the release matrix account for local services?

These are complicated, contested and consequential questions of law, criminology, social policy and social services. These questions cannot and should not be answered by an algorithm’s developers or a small, closed group of decision-makers.
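Mechanically, a release matrix is just a lookup table from a risk label (and perhaps a charge type) to a recommended action. The hypothetical sketch below illustrates how much policy a seemingly technical table contains: every cell is a legal and policy choice, and none of it follows from the algorithm’s statistics.

```python
# Hypothetical release matrix: a lookup from a risk label (and charge
# type) to a recommended action. Every cell is a policy choice that the
# algorithm's developers should not be making unilaterally.
RELEASE_MATRIX = {
    ("low", "non-violent"):    "release on own recognizance",
    ("low", "violent"):        "release with reporting conditions",
    ("medium", "non-violent"): "release with supervision",
    ("medium", "violent"):     "release with supervision and monitoring",
    ("high", "non-violent"):   "refer to judge with a supervision plan",
    ("high", "violent"):       "refer to judge; detention hearing",
}

def recommend(risk_label: str, charge_type: str) -> str:
    """Translate a statistical prediction into an action directive."""
    return RELEASE_MATRIX[(risk_label, charge_type)]

print(recommend("medium", "non-violent"))  # -> release with supervision
```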

Unfortunately, the American experience demonstrates that the choices embedded in a decision framework or release matrix can lack transparency or appropriate public participation and scrutiny.

Lesson No.7- Best Practices In Risk Assessments

The introduction and widespread implementation of pretrial risk assessments in the US has spurred an extraordinary outpouring of research, policy development, community organizing, reassessment and reflection. A key outcome of this period has been the emergence of a wide range of best practices and reform proposals.

For example, many organizations, notwithstanding their opposition to pretrial risk assessments in general, have proposed detailed protocols or “minimum requirements” for the development, deployment and governance of these systems. Importantly, best practices development has not been limited to the legal community. The AI technical community has also developed comprehensive best practices. A consistent theme in these proposals is the need to incorporate the principles of equality, due process, the presumption of liberty, community participation, transparency and accountability into all aspects of pretrial risk assessment.

Lesson No.8- Public Participation

The American experience demonstrates the need for broad participation in the design, development, deployment and governance of AI and algorithmic systems in the justice system. A particularly high-profile, recent example of the controversies and issues surrounding public participation concerns the New York City AI Task Force.

This Task Force was set up in 2018 by New York City to provide recommendations on a broad range of topics related to the use of AI and algorithms by New York City agencies.

The Task Force report includes a comprehensive list of recommendations but did not reach a consensus on many issues. Shortly after the Task Force’s report was published, a large group of community advocates and NGOs published a “shadow report” which included blistering criticisms of the Task Force’s recommendations and public process. The New York City example is just one of many American controversies demonstrating the need for broad participation in the development and oversight of criminal AI and algorithms.

Unequal access to information and participation in decision-making about data and technology can significantly worsen existing biases and inequality.

Debates about public participation, AI and algorithms in the American criminal justice system echo debates in India regarding police carding and gender, religious, racial and caste profiling. As a result, participation issues will very likely come to the forefront in India if, or when, AI and algorithmic tools are more widely introduced here.

Accordingly, I support broad participation in the design, development, deployment and governance of AI and algorithmic systems in the Indian justice system, under a just law and policy. This participation must include governments, law professors, judges, lawyers, police, law students, technologists, policymakers, lawmakers and, crucially, the people who are likely to be most affected by this technology.

Lesson No.9- Algorithmic Accountability

The American and international “digital rights”, legal and technology communities have been focused on overlapping questions regarding the transparency, accountability and legal protections for AI and algorithmic systems for several years.

Some of the leading concepts are the following:

Technological Due Process

Many of the emerging American proposals to ensure AI and algorithmic accountability are based on “technological due process” principles and priorities.

This concept, based on a seminal 2008 article by Professor Danielle Keats Citron, suggests that AI and algorithms require deeper analysis of due process and regulatory issues than traditional legal models may suggest. It is grounded in a belief that

“the accountability mechanisms and legal standards that govern decision processes have not kept pace with technology. The tools currently available to policymakers, legislators, and courts were developed primarily to oversee human decision-makers…our current frameworks are not well-adapted for situations in which a potentially incorrect, unjustified, or unfair outcome emerges from a computer. The key elements of technological due process are transparency, accuracy, accountability, participation, and fairness.”

Algorithmic Transparency

Algorithmic transparency is intended to remedy, or mitigate, concerns about the opacity of algorithmic systems and decision-making. Advocates identify various methods/strategies for achieving algorithmic transparency, including 1) disclosure and 2) auditing, impact assessments, evaluation and testing.

Disclosure includes both the existence of a system and a broad range of tools and processes used by the system. Disclosure issues include questions regarding the definition of AI, algorithms or automated decisions; the timing of a disclosure requirement; whether the disclosure requirement applies to new systems, existing systems or both; who has the responsibility to disclose; and the extent of disclosure. Many of the best practices identified in the US would generally require any or all of the following information to be disclosed:

• Training data;

• Source code;

• Complete description of design and testing policies and criteria;

• List of factors that tools use and how they are weighted;

• Thresholds and data used to determine scoring labels;

• Outcome data used to validate tools;

• Definitions of what the instrument forecasts and for what time period; and,

• Evaluation and validation criteria and results.

US advocates strongly urge governments not to deploy proprietary tools that may rely on trade secret claims to prevent disclosure and transparency. In the US, there has also been a strong emphasis on ensuring that risk assessments and other AI or algorithmic systems are subject to extensive auditing, evaluation and testing.
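One way to operationalize such disclosure requirements is a machine-readable “model card” or disclosure manifest published alongside the tool. The sketch below is purely illustrative, loosely organized around the items in the list above; no jurisdiction currently mandates this exact format, and every field value is hypothetical.

```python
# Hypothetical disclosure manifest for a pretrial risk tool, organized
# around the items listed above. A real regime would fix the schema by
# statute or regulation; this structure is purely illustrative.
import json

DISCLOSURE_MANIFEST = {
    "tool_name": "Example Pretrial Risk Tool",   # hypothetical
    "training_data": "2015-2019 pretrial records; sources published",
    "source_code": "public repository, version-tagged releases",
    "design_and_testing": "full description of design and testing criteria",
    "factors_and_weights": {                     # illustrative values only
        "prior_failures_to_appear": 2.0,
        "prior_convictions": 1.5,
        "age_at_first_arrest": -0.5,
    },
    "score_thresholds": {"low": "< 0.15", "medium": "0.15-0.30", "high": "> 0.30"},
    "forecast_definition": "failure to appear or rearrest within 90 days",
    "validation": "annual independent audit with published outcome data",
}

print(json.dumps(DISCLOSURE_MANIFEST, indent=2))
```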

Bias And Equality

Some scholars are pessimistic that American constitutional law offers a useful framework for assessing bias and equality issues in criminal algorithms, including one who suggests “[i]f there is a lesson here, indeed, it is about the woeful inadequacy of our constitutional equality norms for the contemporary world.”

In the face of these challenges – or perhaps because of them – many American stakeholders, scholars and advocates offer a wide range of policy-based or regulatory initiatives to ensure AI and algorithms do not discriminate, including:

• Improved transparency, testing, auditing and evaluation of algorithmic systems;

• Improved collection, accuracy, reliability and disclosure of algorithmic data;

• Statutory or regulatory prohibitions on the use of certain factors in algorithmic decision-making, including race or potential proxy factors such as education, employment and geography;

• Statutory or regulatory provisions stating that no racial group should bear the undue burden of errors made by an algorithmic instrument;

• Refocusing algorithmic tools towards eliminating racial disparities;

• Ensuring greater community participation in design, development, implementation and oversight of AI and algorithms, particularly from racialized communities; and,

• Mandatory education on the racial effects of algorithmic risk assessments.

Due Process, Evidence, Remedies And The Right To Counsel

As with 14th Amendment Equal Protection issues, American constitutional law has been inconclusive on due process issues. Many American scholars and advocates offer a wide range of policy-based or regulatory initiatives to ensure procedural safeguards if and when algorithmic tools are used in American criminal proceedings. Many of these proposals are directed at ensuring the oversight role of courts while placing explicit restrictions on how and when the tools are used. Other proposals are directed at ensuring an effective right to challenge the operation or use of a tool in individual cases.

These proposals include:

• Explicit prohibitions on algorithms recommending detention;

• Explicit requirements that tools be applied in a manner consistent with the presumption of innocence and the right to an individualized hearing;

• Explicit directions as to how tools may be used by decision-makers;

• Explicit recognition of the right to inspect and cross-examine risk assessment tools and recommendations in individual cases, including the right to introduce evidence that contradicts algorithmic recommendations;

• Explicit rules governing how scoring is to be developed and framed;

• Expedited and broad appellate review of decisions based, in part, on algorithmic risk assessments;

• Modification of rules of practice or evidence to support procedural safeguards;

• Mandatory training for all justice system professionals; and,

• Ensuring defence counsel have the time, training and resources to challenge risk assessment recommendations.

Lesson No.10- The Limits Of Litigation

Litigation obviously has an important role in regulating AI and algorithms in the criminal justice system. Many issues will always be best addressed in open court with the benefit of an evidential record and high-quality and well-resourced counsel. Litigation, while obviously necessary to address specific cases, is insufficient to address the systemic statistical, technical, policy and legal issues that have been identified in this report.

As a result, I believe the most effective response to these issues must ultimately be grounded in some kind of systemic regulation or statutory framework, in addition to litigation, best practices, algorithmic audits, evaluations, etc.

Comprehensive Law Reform

Questions regarding disclosure, accountability, equality and due process will surface quickly, repeatedly and urgently in India if and when these systems are used in the Indian criminal justice system/proceedings. It is clear that the systemic legal issues raised by this technology cannot be addressed through individual litigation, best practices or piecemeal legislation.

Comprehensive law reform is required.

Fortunately, there are many good examples and precedents to help guide and inform Indian discussions, including an impressive body of American analysis, academic research, operational experience, community evaluation, best practices and lessons learned. At this point, it is not clear if or how the Government of India’s initiatives will apply to the use of AI and algorithms in the criminal justice system. Notwithstanding these efforts, I believe that governance of these systems can best be achieved through what is sometimes called a “smart mix” or “mixed model” of AI and algorithmic regulation. This model is premised on the belief that no single statute, rule or practice is likely to be sufficient to govern AI and algorithmic systems.

Accordingly, a comprehensive regime to ensure algorithmic accountability in the Indian criminal justice system/proceedings should likely include:

• National standards, regulations or policy governing the development, disclosure and use of AI and algorithmic systems used by the Union Government, the Supreme Court, tribunals, etc.;

• State standards, regulations or policy governing the development, disclosure and use of AI and algorithmic data and systems used by State Governments, High Courts, the lower judiciary, etc.;

• Amendments to the Indian Evidence Act;

• Criminal justice-specific statutory or regulatory provisions prescribing the parameters of use for AI and algorithmic tools;

• Criminal justice-specific disclosure and due process regulations, legislation or practice directions;

• Union and State standards or regulations guaranteeing public participation in the design, development, implementation and oversight of these systems;

• Training and education for criminal justice system participants; and,

• Ethical design standards.

I believe comprehensive regulation is justified on access to justice principles as well.

It is inconceivable that a criminal defendant (particularly one represented by legal aid or self-represented) will be able to mount an effective challenge to the complex statistical, technical and legal issues raised by algorithmic risk assessments.

In these circumstances, the absence of comprehensive regulation may actually compound the over-representation of low-income and racialized communities, Scheduled Castes, Scheduled Tribes and minorities, and deepen the gender biases already present in the Indian criminal justice system.

Conclusion: Revisiting Risk Assessments

My analysis suggests that jurists, law professors, judges, lawyers, policymakers and all related stakeholders in this country need to address at least four threshold questions:

1. Should there be a moratorium on algorithmic risk assessments or similar tools in the Indian criminal justice system?

2. What is the potential for algorithmic risk assessments in the Indian criminal justice system?

3. Is there a future where algorithmic risk assessments are used as part of a comprehensive strategy to advance equity, access to justice and systemic efficiency?

4. What is the path forward?

I believe that widely deploying algorithmic risk assessments in the Indian criminal justice system/proceedings at this time would be a mistake.

India should, however, be mindful of the many proposals and developments in the United States that rethink and refocus algorithmic risk assessments in significant ways.

For example, it may be possible to use algorithmic risk assessments or similar tools to more effectively identify criminogenic needs, identify biased decision-making, identify community support or support evidence-based recommendations about bail conditions. These strategies, when combined with appropriate reforms to the procedural and legal regimes governing algorithmic risk assessments, may have the potential to contribute to a more efficient, effective and fair Indian criminal justice system.

Where, then, should we go from here? Is there potential for algorithmic risk assessments in the Indian criminal justice system? Can these tools be used as part of a comprehensive strategy to advance equity, access to justice and systemic efficiency? If so, what is the path forward?

In response to these questions, I offer a modest recommendation:

This assignment has identified a series of issues and options that should be addressed prior to the widespread implementation of AI or algorithmic systems in the Indian criminal justice system. In these circumstances, perhaps the first step is for policymakers and stakeholders to agree to collectively address these issues and to agree on an appropriate process for doing so.

I believe that successful law reform depends on broad and accessible consultations with individuals, communities and organizations across India.

(A research article on the current issue of Artificial Intelligence, its use in the Indian criminal justice system, and a few lessons from the American criminal justice system.)
