
New York Office
40 Rector Street, 5th Floor
New York, NY 10006-1738
T 212.965.2200
F 212.226.7592

Washington, D.C. Office
700 14th Street, NW, Suite 600
Washington, D.C. 20005
T 202.682.1300
F 202.682.1312

www.naacpldf.org

Testimony of Janai Nelson


Associate Director-Counsel
NAACP Legal Defense and Educational Fund, Inc.

NYC Automated Decision Systems Task Force


April 30, 2019 Public Forum


May 6, 2019
Good evening, Chairperson Thamkittikasem and Members of the Task Force:

On behalf of the NAACP Legal Defense and Educational Fund (LDF), we thank
the NYC Automated Decision Systems Task Force (the Task Force) for holding this
crucial public forum to address the core components of Local Law 49 concerning
fairness and accountability.

LDF is the nation’s first and foremost civil and human rights law organization.
Since its founding nearly eighty years ago, LDF has worked at the national, state,
and local levels to pursue racial justice and eliminate structural barriers for African-
Americans in the areas of criminal justice, economic justice, education, and political
participation.1 As part of that work, LDF has engaged on issues concerning the use of data and technology to perpetuate racial discrimination. LDF has also forged
longstanding partnerships with local advocates, activists, and attorneys to challenge
and reform unlawful and discriminatory policing in New York City, including serving
as co-counsel in Davis v. City of New York, a case that challenged the New York
Police Department’s (NYPD) policy and practice of unlawfully stopping and arresting
New York City Housing Authority (NYCHA) residents and their visitors.2 While Local
Law 49 affects decision-making in a wide variety of contexts, we focus, today, on the
discrete, durable, and disproportionate racial impact it threatens to impose in the
area of policing and law enforcement.

Indeed, LDF is deeply concerned about law enforcement’s, including the NYPD’s, increasing reliance on machine-learning algorithms, biased data, and Automated Decision Systems (ADSs) that rely on both. The NYPD’s deployment and implementation of ADSs threaten to exacerbate racial inequity in New York City.
Given this concern, the Task Force’s recommendations to Mayor de Blasio and City
Council Speaker Corey Johnson must ensure all ADSs are fair, transparent,
rigorously evaluated, and, critically, do not undermine the City’s commitment to
public safety practices that are constitutional and non-discriminatory. Accordingly,
the Legal Defense Fund makes the following nine preliminary recommendations to
this Task Force:

1 https://www.naacpldf.org/about-us/.
2 https://www.naacpldf.org/case-issue/davis-v-city-new-york/.
1. The ADS Task Force Should Move Swiftly to Issue Its
Recommendations and Request Additional Time, If Needed

As an initial matter, we stress the Task Force’s duty to thoroughly research and understand the many ways ADSs affect the City’s most vulnerable residents—
including low-income communities and communities of color. The Task Force is also
charged with ensuring the City’s residents are well-educated on ADSs, understand
how ADSs will affect their everyday life, and importantly, have ways to meaningfully
voice their concerns and feedback. Finally, the Task Force must issue critical
recommendations that incorporate community feedback, prevent bias and
discrimination in ADS use, and establish procedures for ADS accountability and
transparency. Equally important, these recommendations must also include
procedures to ensure that impacted communities can hold any City agency
accountable when ADSs are not fair and equitable.3

Commissioned in May 2018, the Task Force has only until November 16, 2019
to fulfill its mandate pursuant to Local Law 49.4 To the extent that it is foreseeable
that the Task Force will be unable to complete its charge within this timeframe,
including and especially soliciting robust public engagement, it should seek to extend
its deadline. By the same token, however, the longer the Task Force takes to issue its
recommendations, the longer ADSs will operate in New York without necessary
guidance on fairness, transparency, and accountability. Accordingly, we strongly
urge the Task Force to move swiftly and diligently in issuing its recommendations
without compromising the necessary information gathering and careful study that
this undertaking requires, and the City’s residents deserve.

2. The City Must Adopt a Uniform Definition of Automated Decision Systems

Under the current definition in Local Law 49, an ADS is a “computerized implementation of algorithms, including those derived from machine learning or
other data processing or artificial intelligence techniques, which are used to make or
assist in making decisions.” This definition is too limiting and may fail to capture the
full range of systems that agencies are considering or have already implemented.

3 See https://www1.nyc.gov/site/adstaskforce/about/about-ads.page.
4 “No later than 18 months after such task force is established, it shall electronically submit to the mayor and the speaker of the council a report . . . .” Local Law 49 of 2018.

We therefore recommend adopting the same ADS definition that advocates and
experts recommended more than eight months ago, in an August 17, 2018, letter to
the Task Force. The group captured a full range of potential ADSs by defining an
ADS as:

An automated decision system is any software, system, or process that aims to aid or replace human decision making. Automated decision
systems can include analyzing complex datasets to generate scores,
predictions, classifications, or some recommended action(s), which are
used by agencies to make decisions that impact human welfare.5

This more expansive definition helps to ensure that all automated decisions
that affect New Yorkers will be subject to the appropriate scrutiny and the public will
be protected.

3. The City Must Clarify that All Agencies Using an ADS are
Within the Task Force’s Purview and Subject to its
Recommendations

Neither accountability, fairness, nor transparency can be achieved if some ADSs are excluded from the Task Force’s purview. Indeed, given the far-reaching
consequences of technological advances in the hands of the NYPD, coupled with the
department’s well-documented history of discriminatory and unconstitutional
policing and enforcement practices, any decision to exclude the NYPD from the Task
Force’s purview or recommendations would be antithetical to Local Law 49’s intent
and purpose.

The City must therefore clarify that no agency’s ADSs are excluded from its
review. Failing that, it must create an independent review process before any system
can be excluded that includes an opportunity for the public to challenge the exclusion
of any ADS from the Task Force’s purview and recommendations.

5 See August 17, 2018 letter to ADS Task Force Chairs Newman and Saunders, at http://assets.ctfassets.net/8wprhhvnpfc0/1T0KpNv3U0EKAcQKseIsqA/52fee9a932837948e3698a658d6a8d50/NYC_ADS_Task_Force_Recs_Letter.pdf.

4. The City Must Commit to Full Transparency and Disclose
Information About the NYPD’s Automated Decision Systems
and How They Operate

The NYPD has already implemented or considered implementing the following ADSs: Automated License Plate Readers,6 Facial Surveillance,7 Predictive Policing,8
and Social Media Monitoring,9 without meaningful oversight or community
engagement.10 This list, however, is most likely underinclusive because the NYPD
continues to consistently conceal the internal development and use of ADSs from the
public. By concealing its use of ADSs, the NYPD prevents the public from adequately
studying the impact of these systems and shields itself from accountability.

Equally alarming, the NYPD plans to continue embedding these systems in its law enforcement and decision-making processes at a disturbingly aggressive pace. For example, just this month, on April 3, 2019, the NYPD’s Deputy Chief of Policy and Programs, Thomas Taffe, explained that the Department has hired more than 100 civilian analysts since 2017 to use ADS software to analyze the NYPD’s crime data.11 For these reasons, at a minimum, the Task Force must recommend that the NYPD publicly identify, categorize, and share a list of all ADSs that it has implemented, plans to implement, or is developing. Once created, this list of ADSs should be updated in real time.

6 Information about the NYPD’s license plate reader is available at https://www.nydailynews.com/new-york/nyc-crime/ny-metro-license-plate-reader-grab-20190119-story.html. For more information on automated license plate readers generally, see “Automated License Plate Readers (ALPRs),” Electronic Frontier Foundation, https://www.eff.org/pages/automated-license-plate-readers-alpr.
7 George Joseph and Kenneth Lipp, IBM Used NYPD Surveillance Footage To Develop Technology That Lets Police Search By Skin Color, THE INTERCEPT, Sept. 6, 2018, https://theintercept.com/2018/09/06/nypd-surveillance-camera-skin-tone-search/.
8 Rachel Levinson-Waldman and Erica Posey, Court: Public Deserves to Know How NYPD Uses Predictive Policing Software, The Brennan Center, Jan. 26, 2018, https://www.brennancenter.org/blog/court-rejects-nypd-attempts-shield-predictive-policing-disclosure.
9 America’s cops take an interest in social media, THE ECONOMIST, Feb. 21, 2019, https://www.economist.com/united-states/2019/02/21/americas-cops-take-an-interest-in-social-media; Annie McDonough, Privacy advocates score win against NYPD over surveillance technology, CITY & STATE NEW YORK, Jan. 22, 2019, https://cityandstateny.com/articles/policy/technology/privacy-advocates-score-win-against-nypd-over-surveillance-technology; Clay Dillow, NYPD Creates Facebook-Police Task Force to Mine Social Media for Clues, POPULAR SCIENCE, Aug. 10, 2011, https://www.popsci.com/technology/article/2011-08/nypds-facebook-police-mine-social-media-clues-about-crime.
10 See “Automated Decision Systems: Examples of Government Use Cases,” at 1-5, https://ainowinstitute.org/nycadschart.pdf.
11 Zolan Kanno-Youngs, NYPD Number-Crunchers Fight Crime with Spreadsheets, THE WALL STREET JOURNAL, July 23, 2018, https://www.wsj.com/articles/nypd-number-crunchers-fight-crime-with-spreadsheets-1532381806.

5. The City Must Ban the Use of Data Derived from Discriminatory
and Biased Enforcement Policies and Practices in Automated
Decision Systems

Because algorithms learn and transform through exposure to data,12 an algorithm is only as good as the data selected to inform it; that is, an algorithm will replicate any biases within its training data—a problem called “training bias.”13 Bias in, bias out. This training bias can lead to discrimination in at least two ways: (1) reproducing the biases in the data, and (2) drawing inferences from biases in the data.14 In the policing context, this means that data derived from and reflecting the NYPD’s discriminatory, illegal, and unconstitutional enforcement practices infects any algorithm and ADS that is trained with that data. The resulting algorithm or ADS will then carry out and perpetuate that same discrimination—making all of the ADS’s decisions flawed.
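To make this mechanism concrete, consider the following minimal sketch—a hypothetical simulation constructed solely for illustration, not a model of any actual NYPD system or dataset. A simple classifier is trained on enforcement records that over-sample one neighborhood, and it learns to score that neighborhood as higher risk even though the underlying behavior is identical everywhere:

# A deliberately simplified simulation of "bias in, bias out."
# Every number and feature here is hypothetical, chosen only to
# illustrate training bias; this models no real system or dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two hypothetical neighborhoods, A (0) and B (1), with identical
# true--but unobserved--rates of the behavior being predicted.
neighborhood = rng.integers(0, 2, n)
true_behavior = rng.random(n) < 0.05          # same 5% rate everywhere

# Historical enforcement concentrated in neighborhood B, so the
# *recorded* labels over-represent B: incidents in A mostly go
# unrecorded, while incidents in B are usually captured.
observed = np.where(neighborhood == 1,
                    rng.random(n) < 0.9,      # 90% of B incidents recorded
                    rng.random(n) < 0.2)      # 20% of A incidents recorded
label = true_behavior & observed

# Train on the biased records, with neighborhood as the only feature.
model = LogisticRegression().fit(neighborhood.reshape(-1, 1), label)

risk_a, risk_b = model.predict_proba([[0], [1]])[:, 1]
print(f"predicted risk, neighborhood A: {risk_a:.3f}")
print(f"predicted risk, neighborhood B: {risk_b:.3f}")
# Although the true rates are identical, B scores several times
# higher: the skewed records, not the behavior, drive the output.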

For decades, the NYPD engaged in widespread racial profiling against Black
and Latinx residents. Between 2004 and 2012, the NYPD conducted an astounding 4.4 million stops of City residents as they simply went about their daily lives. A staggering 88% of these stops resulted in no further action — meaning the vast majority
of those stopped were not engaged in unlawful conduct. In about 83% of cases, the
person stopped was Black or Latinx, even though the two groups combined accounted
for just over half the population.15 When these discriminatory practices were
challenged in Floyd v. City of New York, a federal court found the NYPD liable for a
pattern and practice that violated the Fourth Amendment rights of New Yorkers to
be free from unreasonable searches and seizures. The court also found that the
NYPD’s practices were racially discriminatory in violation of the Equal Protection
Clause of the Fourteenth Amendment.16 That was just one case.
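For concreteness, the disparity those figures imply can be computed directly; the arithmetic below uses the statistics cited above, with the combined population share approximated as 53% for “just over half”:

stops = 4_400_000
no_action_rate = 0.88            # stops resulting in no further action
black_latinx_stop_share = 0.83   # share of those stopped
black_latinx_pop_share = 0.53    # approximation of "just over half"

print(f"stops with no further action: {stops * no_action_rate:,.0f}")
print(f"overrepresentation factor: "
      f"{black_latinx_stop_share / black_latinx_pop_share:.1f}x")
# Roughly 3.9 million fruitless stops, with Black and Latinx residents
# stopped at about 1.6 times their share of the City's population.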

12 Solon Barocas and Andrew D. Selbst, Big Data’s Disparate Impact, 104 Calif. L. Rev. 671, 680-81 (2016).
13 Id. at 680-87.
14 Id. at 681.
15 Racial Discrimination in Stop-and-Frisk, THE NEW YORK TIMES, Aug. 12, 2013, https://www.nytimes.com/2013/08/13/opinion/racial-discrimination-in-stop-and-frisk.html.
16 See The New York Civil Liberties Union report listing, “Stop-And-Frisk Data,” https://www.nyclu.org/en/stop-and-frisk-data; see also “Stop-And-Frisk In The De Blasio Era (2019),” Mar. 14, 2019, https://www.nyclu.org/en/publications/stop-and-frisk-de-blasio-era-2019.

Similarly, in Davis v. City of New York, the NYPD unlawfully stopped and
arrested people of color who lived in or visited NYCHA apartments, without
reasonable suspicion or probable cause. The NYPD justified its racially
discriminatory arrests by alleging the residents and their visitors were “criminally
trespassing.”17 Currently, the Department’s aggressive, military-style gang
“takedowns” primarily target public housing residents, the overwhelming majority of
whom are people of color.18 Prior to executing these sweeping gang takedowns, the
NYPD conducts criminal investigations relying, in part, on a secret database that
erroneously designates thousands of New Yorkers as members of gangs or local street
“crews,” often without informing the individual or offering any due process
protections.19 Officers executing gang policing strategies rely on vague—and
troubling—terms and generalizations to justify their frequently erroneous
designation of individuals as gang members.20

As a result of these and many other discriminatory practices,21 the NYPD’s datasets are infected with deeply rooted biases and racial disparities.22 Consequently, any predictions or output from an ADS that relies on such data, in any capacity, will reproduce and reinforce these biases and disparities.23 We are skeptical that such “dirty” data can ever be cleansed to separate the “good” from the “bad,” the tainted from the untainted.24 Therefore, we ask this Task Force to recommend that no ADS incorporate or use any data derived from discriminatory and biased law enforcement practices.

17 See “Stop and Frisk: The Human Impact,” Center for Constitutional Rights (July 2012), https://ccrjustice.org/sites/default/files/attach/2015/08/the-human-impact-report.pdf.
18 NAACP LDF’s Written Testimony on The NYPD’s Gang Takedown Efforts, June 13, 2018, https://web.archive.org/web/20181009111929/http://www.naacpldf.org/files/case_issue/City%20Council%20Testimony%20combined%206.13.18.pdf.
19 Id.
20 The NYPD provided its IDS Gang Entry Sheet and the criteria by which gang members are certified in response to Professor Babe Howell’s Freedom of Information Law request, filed on September 2, 2011. In addition to these criteria, the NYPD may certify someone as a gang member if an individual admits membership during a debrief or if, through the course of an investigation, an individual is reasonably believed to belong to a gang and is identified as such by two independent sources, which could include other New York City agencies. K. Babe Howell, Gang Policing: The Post Stop-and-Frisk Justification for Profile-Based Policing, 5 UNIV. DENVER CRIM. L. REV. 1, 16 (2015).
21 See, for example, Benjamin Mueller, Using Data to Make Sense of a Racial Disparity in NYC Marijuana Arrests, THE NEW YORK TIMES, May 13, 2018, https://www.nytimes.com/2018/05/13/insider/data-marijuana-arrests-racial-disparity.html; Stephen Rex Brown, Advocates say NYPD’s unconstitutional ‘stop and frisk’ persist as federal monitor notes numerous stops go unreported, NEW YORK DAILY NEWS, Jan. 11, 2019, https://www.nydailynews.com/new-york/ny-metro-stop-frisk-monitor-report-20190111-story.html.
22 Rashida Richardson, Jason Schultz, and Kate Crawford, Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice, NYU Law (forthcoming 2019).
23 “[C]onsider the potential harm done when police departments like these use their crime data to feed the algorithms and models used to predict behavior. . . . The data provides a distorted picture of the neighborhoods where crime is happening that, in turn, drives more police to those neighborhoods. Police then come into contact with more people from those communities, and by virtue of more contact, make more arrests. Those arrests — regardless of their validity or constitutionality — are interpreted as indicative of criminal activity in a neighborhood, leading to a greater police presence. The result . . . is ‘a pernicious feedback loop,’ where ‘the policing itself spawns new data, which justifies more policing.’” https://www.aclu.org/issues/privacy-technology/surveillance-technologies/ai-and-criminal-justice-devil-data.
24 See Rashida Richardson, Jason Schultz, and Kate Crawford, Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice, NYU Law (forthcoming 2019).
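The “pernicious feedback loop” quoted in note 23 can likewise be made concrete with a toy simulation—ours, with invented numbers, modeling no actual deployment. Two districts have identical true incident rates, but patrols begin skewed toward one; because incidents are recorded only where officers are deployed, the recorded data continually appears to justify the initial skew:

# A toy simulation of the feedback loop described in note 23.
# All quantities are hypothetical and chosen only for illustration.
import numpy as np

rng = np.random.default_rng(2)
true_rate = np.array([0.05, 0.05])   # identical true rates in districts 0 and 1
patrols = np.array([0.2, 0.8])       # initial allocation skewed toward district 1

for step in range(10):
    # Incidents are *recorded* roughly in proportion to patrol presence.
    recorded = true_rate * patrols * rng.uniform(0.9, 1.1, 2)
    # The next allocation is driven by the recorded data alone.
    patrols = recorded / recorded.sum()
    print(f"step {step}: patrol shares = {np.round(patrols, 3)}")
# The initial skew persists: the recorded "crime" reflects where
# officers were sent, not any difference in underlying behavior,
# yet it is read as justifying the same skewed deployment.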

6. The City Must Adopt Processes for Determining if an ADS has a Disproportionate Impact on an Individual or Population

Any data derived from discriminatory, illegal, or unconstitutional policing or enforcement practices that informs or is incorporated into an ADS, despite the previous recommendation, should be presumed to produce a disproportionate impact. This presumption places an affirmative burden on the NYPD—rather than on an individual or community group—to (a) demonstrate that the biased data has been removed and (b) allow an independent third party to conduct, at a minimum, a racial equity impact assessment.25 In addition, for all data used in any ADS—including data alleged not to derive from discriminatory, illegal, or unconstitutional practices, and data derived from sources other than the NYPD—the following steps must occur to analyze the impact of the ADS on individuals or community groups:

a. An equity impact assessment;26
b. A surveillance impact report;27
c. A pre-acquisition or development procedure to ensure non-agency experts and representatives from directly affected communities are consulted during the development of an ADS; and
d. A public record of external participation, maintained by the agency.

25 Laura M. Moy, How Police Technology Aggravates Racial Inequity: A Taxonomy of Problems and a Path Forward (forthcoming 2019), https://poseidon01.ssrn.com/delivery.php?ID=411020126082118113092123112073104118097052073058068005122069124101008115122026112110052050019007045026115086087113113113018075050052035005035004124110066028122065005053001036099092125009113120067067121012026027119123070118002086003109101104071070000099&EXT=pdf.
26 Id.
27 http://www2.oaklandnet.com/oakca1/groups/cityadministrator/documents/standard/oak070617.pdf.

These commitments and fail-safes, along with the previous recommendations,
are a strong starting point for the City and Task Force to fight against the
perpetuation of racial bias through data and technology.

7. Remedy and Account for Proxy Factors that Also Produce Discriminatory Results

In addition to “dirty data” informed by racial discrimination and bias, algorithms can learn biased behavior through proxy factors—factors that may appear neutral but reflect societal and structural biases.28 For example, an algorithm may purposefully exclude all references to race and ethnicity. However, if the algorithm still considers factors that, due to societal constructs, correlate with race—such as low-income neighborhoods or employment history—the algorithm’s outputs may nonetheless be racially skewed.29 To guard against racial discrimination and bias by proxy, the Task Force must also develop recommendations that require agencies, experts, and community members to address the societal and systemic factors that contribute to discriminatory ADSs and to determine ways to mitigate the influence of proxy factors in ADSs.
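A minimal hypothetical sketch of this proxy effect follows; the features, rates, and correlation below are invented for illustration only. The model never sees the protected attribute, yet because a correlated “neighborhood” feature stands in for it, the scores remain skewed across groups:

# A hypothetical sketch of discrimination by proxy: the model is
# "race-blind," but a correlated feature carries the signal anyway.
# All rates and correlations here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

group = rng.integers(0, 2, n)     # protected attribute, never given to the model
# Residential segregation: group strongly predicts neighborhood.
neighborhood = np.where(rng.random(n) < 0.85, group, 1 - group)
# Historical labels skewed against group 1 (asserted here, as in the
# training-bias sketch above).
label = rng.random(n) < np.where(group == 1, 0.09, 0.02)

# The "neutral" model is trained only on the neighborhood proxy.
X = neighborhood.reshape(-1, 1)
scores = LogisticRegression().fit(X, label).predict_proba(X)[:, 1]

print(f"mean score, group 0: {scores[group == 0].mean():.3f}")
print(f"mean score, group 1: {scores[group == 1].mean():.3f}")
# The gap persists even though race was excluded: dropping a column
# does not remove its influence when another input stands in for it.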

28 See Virginia Eubanks’s statement that even when removing “dirty data,” ADSs often reflect the very discriminatory behavior we sought to avoid because “[w]hat we’re doing is using the idea of eliminating individual irrational bias to allow this vast structural bias to sneak in the back door of the system,” https://qz.com/1427159/algorithms-cant-fix-societal-problems-and-often-amplify-them/.
29 See, for example, Northpointe’s dispute of ProPublica’s finding that its risk assessment tool falsely labeled Black defendants as likely to reoffend and white defendants as unlikely to reoffend, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing; see also Virginia Eubanks, A Child Abuse Prediction Model Fails Poor Families, WIRED, Jan. 15, 2018 (noting that, though the Allegheny County Family Screening Tool intentionally took steps to avoid racial disparities in its child welfare system, the system was nevertheless biased and produced racially discriminatory results because the developers ignored a major source of societal bias in the child welfare system overall—that people report Black and biracial families to child welfare offices 350% more often than white families, creating an influx of Black family child welfare cases at the outset), https://www.wired.com/story/excerpt-from-automating-inequality/.

8. Establish Procedures for Addressing Harms When ADSs
Disproportionately Impact Individuals and Community
Members

Continuing to rely on ADSs without any pre-implementation processes, such as the recommendations suggested here, risks subjecting entire communities to continued discriminatory and unconstitutional enforcement and policing practices.
ADSs could be used to justify disparate treatment of communities of color in terms of
how “suspicion” is defined, who is chosen as “targets” for increased enforcement and
surveillance, and where these machine-learning tools are deployed—all raising
significant constitutional concerns under the First, Fourth, and Fourteenth
Amendments to the U.S. Constitution. The use of ADSs threatens to distort
reasonable suspicion, expectation of privacy, and freedom of speech doctrines. These
potential constitutional harms further underscore why the Task Force must make
bold and expansive recommendations to create procedures and safeguards to protect
the public from potential constitutional and other violations.

9. The City Must Create Accountability Structures that Empower All Community Members to Participate in Pre- and Post-Acquisition Decisions About Automated Decision Systems

The City is experimenting on its residents by relying on ADSs to make predictions and decisions without fully understanding how these systems will affect
community members. Worse, the City has not required complete ADS transparency
or meaningful community engagement—meaning the very communities that will be
affected by ADSs are left out of the equation. To date, the City has not provided
sufficient mechanisms for non-agency experts and community members to be
educated about, and then thoroughly evaluate, all ADSs prior to implementation. The
City must reaffirm its commitment to accountability and transparency by creating
structures that center community members—not machines—in the decision-making
process and provide meaningful opportunities to give feedback and input about ADSs.

Conclusion

In closing, the NYPD’s use of ADSs has already produced an unprecedented expansion of police surveillance. While this expansion implicates all residents’ privacy rights, as I have noted, the burdens and harms are not evenly shared. Communities of color, particularly Black and Latinx residents, will continue to be disproportionately subjected to profiling, policing, and punishment to the extent that ADSs replicate the biases of the current criminal legal system and law enforcement practices.

The rapid, unchecked deployment of ADSs without effective mechanisms for public input, independent oversight, or the elimination of racial discrimination and bias is unacceptable and untenable. Data and technology should not be weaponized by New York City against its residents. This Task Force should therefore make recommendations that hold agencies accountable for ensuring that all ADSs—including ADSs currently in use and any future ADSs—are transparent, fair, and free from racial discrimination and bias.

Thank you,

Janai Nelson
Associate Director-Counsel

