
Consent, Automated Systems, and Discrimination (FTC Comments)

My response to the FTC’s ANPR on Commercial Surveillance and Data Security

Federal Trade Commission. Protecting America's Consumers. On the left, the FTC logo.

Submitted to regulations.gov a couple of hours before last night's deadline.  An earlier version of points 2-4 appeared in the extended remix of my public comments in September.

Thank you for your attention to the pressing issues of commercial surveillance and data security.  As the author of the Nexus of Privacy Newsletter, I write about commercial surveillance and other connections between technology, policy, and justice.  My career includes founding a successful software engineering startup, serving as General Manager of Competitive Strategy at Microsoft, and co-chairing the ACM Computers, Freedom, and Privacy Conference. In fall 2021, I was a member of the Washington state Automated Decision-making Systems Workgroup, and I have testified about privacy legislation in over a dozen Washington state legislature hearings.

My comments focus primarily on consent, discrimination, algorithmic error, and automated systems, although they also touch on questions related to data minimization and other aspects of privacy.  In summary:

  1. Consent is a vital complement to data minimization and to outright prohibitions on some commercial surveillance activities.  Opt-out is not meaningful affirmative consent, and an opt-in approach to regulation will enhance innovation. (Questions 26, 73-81)
  2. Algorithmic error and discrimination are pervasive across multiple sectors – and the harms fall disproportionately on the most vulnerable people. (Questions 53, 57, 65, 66, 67)
  3. The FTC should build on the recommendations of Algorithmic Justice League’s Who Audits the Auditors, the White House OSTP’s Blueprint for an AI Bill of Rights, and the California Privacy Protection Agency’s AI equity work. (Questions 41-46, 56, 67)
  4. The FTC should develop its regulations working with the people most likely to be harmed by commercial surveillance – and prioritize their needs. (Questions 29, 39, 43)
A person saying "Technologies reflect the biases of the makers and implicit rules of society" with a photo of an incarcerated Black person in the background
Malkia Cyril, at Personal Democracy Forum 2017, originally tweeted by @anxiaostudio

1. Consent is a vital complement to data minimization and to outright prohibitions on some commercial surveillance activities; opt-out is not meaningful affirmative consent, and an opt-in approach to regulation will enhance innovation (Questions 26, 73-81)

Privacy scholarship and practical experience have clearly shown the limitations of purely consent-based approaches.  Businesses that profit from commercial surveillance will use misleading tactics to get people to consent, bombard people with requests to induce “consent fatigue”, or bribe or coerce consent.  So commercial surveillance activities with known discriminatory and human rights impacts, like face surveillance, social scoring, and crime prediction, should be prohibited.  The “unacceptable risk” category of the EU’s AI Act is a good starting point, although, as the civil society amendments available on EDRi’s site highlight, it needs to be expanded – for example, it doesn’t currently prohibit emotion recognition.

However, there are many commercial surveillance activities that are not likely to be prohibited by these regulations.  Location tracking is one good example. While this data can certainly be abused, it can also power very useful services; different people will make different tradeoffs as to whether and when they want to be tracked.  First-party targeted advertising is another.  People who trust a company may well wish to see more-relevant ads, both for their own use and to help the company’s business.  On the other hand, people who do not have a trust relationship with the company (or distrust some of the service providers the company uses) may not want to have their data used to target ads.

In situations like this, consent is crucial – and by consent, I mean affirmative, informed consent, also known as “opt in”.

“Opt out” approaches, by contrast, assume consent.  As a Washington state resident said in testimony in a 2021 state legislative hearing, an "opt-out" approach lets anybody come into your house and rummage around in your drawers – without being invited in – until you tell them to go away.

“Opt out” also leads to major biases, for example against disabled people. Even if regulation requires opt-out pages and privacy policies to be accessible, there’s no reason to believe that they will be in practice; after all, courts have found that non-accessible websites violate the ADA, but an astonishing 98.6% of all web home pages have accessibility errors – an average of over 50 errors per page.

People who have limited reading or technology skills and people who are not native English speakers are also heavily impacted by opt-out.  And while advanced opt-out approaches such as the Global Privacy Control help techies who have their own devices and don’t need assistive technologies, they do not fully address the problem.
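To make that mechanism concrete, here is a minimal TypeScript sketch of what honoring a Global Privacy Control signal could look like in the browser. It assumes the GPC proposal's navigator.globalPrivacyControl property (the specification also defines a Sec-GPC: 1 request header for servers); the ad-loading helpers are purely illustrative names, not a real library.

```typescript
// Hypothetical sketch: respect the Global Privacy Control signal in the browser.
// The ad-loading helpers below are illustrative stand-ins, not a real API.

function loadPersonalizedAds(): void {
  console.log("Loading ads targeted with cross-context behavioral data");
}

function loadContextualAds(): void {
  console.log("Loading contextual ads only, with no cross-context tracking");
}

// navigator.globalPrivacyControl is not yet in the default DOM typings,
// so read it defensively through a narrow structural type.
const gpcSignal: boolean =
  (navigator as unknown as { globalPrivacyControl?: boolean })
    .globalPrivacyControl === true;

if (gpcSignal) {
  // Treat the signal as an opt-out of sale and sharing: skip behavioral targeting.
  loadContextualAds();
} else {
  // No GPC signal; whether tracking is allowed still depends on consent rules.
  loadPersonalizedAds();
}
```

Even this sketch shows the limitation noted above: the burden is still on the person to know about the signal and to have a browser and setup that can send it.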

Commercial surveillance providers often object to opt-in saying it will be too hard to use, but the extremely positive response to Apple’s App Tracking Transparency clearly shows that’s a lie. The reality is that most tech companies haven’t devoted any effort at all to making opt-in easy.  After all, current privacy regulation is almost exclusively opt-out, so their business interests are better served by putting their effort into making opt-out as hard as possible.

So opt-in is also an excellent example of the opportunities for new regulations to enhance innovation – and enhance the development of products that protect our privacy.
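To illustrate how little machinery a basic opt-in gate actually requires, here is a hypothetical TypeScript sketch. The names requestConsent and enableTracking are invented for this example rather than taken from any real product or API; the key property is simply that no tracking code runs unless the person affirmatively agrees.

```typescript
// Hypothetical opt-in gate: tracking is off by default and only turns on
// after explicit, affirmative consent. Names here are illustrative only.

type ConsentDecision = "granted" | "denied";

function requestConsent(message: string): ConsentDecision {
  // A real product would use a clear, non-deceptive consent dialog;
  // window.confirm() stands in for that UI in this sketch.
  return window.confirm(message) ? "granted" : "denied";
}

function enableTracking(): void {
  // Placeholder for loading analytics or ad-targeting code.
  console.log("Tracking enabled with affirmative, informed consent");
}

function startApp(): void {
  // Default state: nothing is collected.
  const decision = requestConsent(
    "May we use your activity on this site to show you more relevant ads?"
  );

  if (decision === "granted") {
    enableTracking();
  }
  // If consent is denied or the dialog is dismissed, the app simply runs
  // without tracking, because nothing was collected in the first place.
}

startApp();
```

The design choice that matters is the default: because the data-collection path is only reachable after a clear yes, consent fatigue and inaccessible opt-out pages do far less damage.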

2. Algorithmic error and discrimination are pervasive across multiple sectors – and the harms fall disproportionately on the most vulnerable people (Questions 53, 57, 65, 66, and 67)

Technology can be liberating … but it can also reinforce and magnify existing power imbalances and patterns of discrimination.  As documented by researchers like Dr. Safiya Noble in Algorithms of Oppression and Dr. Joy Buolamwini, Timnit Gebru, and Inioluwa Deborah Raji in the Gender Shades project, today’s algorithmic systems are error-prone and the datasets they’re trained on have significant biases.

These errors – and the discrimination these systems cause – are prevalent across all sectors. For example, Color of Change's Black Tech Agenda talks about “the harm that algorithmic bias has done to Black communities regarding equitable access to housing, health care, employment, education, credit, and insurance.”  Similarly, facial recognition errors led to the wrongful arrests of innocent Black people like Nijeer Parks, Robert Williams, and Michael Oliver; and algorithms that detect hate speech online are biased against Black people.  Of course, it is not only Black communities that are harmed by this – health care allocation algorithms discriminate against disabled people, automated risk assessments discriminate against Hispanic people, fraud-detection systems harm unemployed people, … the list goes on.

And as this list indicates, the harms are usually much greater for the historically underserved communities the FTC has committed to protecting.

Regulation is clearly needed, and it needs to be designed to protect the people who are most at risk.  However, the traditional list of “protected classes” is not sufficient. Some algorithmic discrimination is intersectional – for example, Gender Shades and follow-on work highlight that facial recognition is even more inaccurate for Black women.  And other forms of algorithmic discrimination can disproportionately harm groups, like unemployed people or renters, that are not considered “protected classes”.

3. The FTC should build on the recommendations of Algorithmic Justice League’s Who Audits the Auditors, the White House OSTP’s Blueprint for an AI Bill of Rights, and the California Privacy Protection Agency’s AI equity work (Questions 41-46, 56, 67)

A grid of six colored boxes, each with one of the policy recommendations listed below

There are a lot of challenges in regulating algorithms and automated decision systems.  Fortunately, there is a lot of excellent work out there for the FTC to build on.

The Algorithmic Justice League’s founder Joy Buolamwini, research collaborator Inioluwa Deborah Raji, and Director of Research & Design Sasha Costanza-Chock (known for their work on Design Justice) recently collaborated on Who Audits the Auditors: Recommendations from a field scan of the algorithmic auditing ecosystem, the first comprehensive field scan of the artificial intelligence audit ecosystem.

The policy recommendations in Who Audits the Auditors highlight key considerations in several specific areas where FTC rulemaking could have a major impact.

  1. Require the owners and operators of AI systems to engage in independent algorithmic audits against clearly defined standards
  2. Notify individuals when they are subject to algorithmic decision-making systems
  3. Mandate disclosure of key components of audit findings for peer review
  4. Consider real-world harm in the audit process, including through standardized harm incident reporting and response mechanisms
  5. Directly involve the stakeholders most likely to be harmed by AI systems in the algorithmic audit process
  6. Formalize evaluation and, potentially, accreditation of algorithmic auditors.

The Blueprint for an AI Bill of Rights announced in September by the White House Office of Science and Technology Policy (OSTP) is also very valuable.  The detailed recommendations in the Algorithmic Discrimination Protections, Data Privacy, and Notice and Explanation sections relate directly to many of the questions in the ANPR.

And the details of the rulemaking matter a lot.  Too often, well-intended regulation has weaknesses that commercial surveillance companies, with their hundreds of lawyers, can easily exploit.  Looking at proposals through an algorithmic justice lens can highlight where they fall short.

For example, the Who Audits the Auditors recommendations highlight ways that the proposed American Data Privacy and Protection Act (ADPPA) consumer privacy bill falls short of effective regulation:

  1. ADPPA doesn't require independent auditing, instead allowing companies like Facebook to do their own algorithmic impact assessments.  And government contractors acting as service providers for ICE and law enforcement don't even have to do algorithmic impact assessments!  As Color of Change's Black Tech Agenda notes, "By forcing companies to undergo independent audits, tech companies can address discrimination in their decision-making and repair the harm that algorithmic bias has done to Black communities."
  2. ADPPA doesn’t require affirmative consent to being profiled – or even offer the opportunity to opt out.
  3. ADPPA doesn’t mandate any public disclosure of its algorithmic impact assessments – not even summaries or key components.
  4. ADPPA doesn't have any requirement to consider real-world harms – or even to measure impact.
  5. ADPPA doesn't have any requirement at all to involve external stakeholders in the assessment process – let alone directly involving the stakeholders most likely to be harmed by AI systems.
  6. ADPPA allows anybody at a company to do an algorithmic impact assessment – and it's not even clear whether it gives the FTC rulemaking authority for potential evaluation and accreditation of assessors or auditors.

Taking all of this valuable work into account starting early in the process will lead to more effective regulations.

4. The FTC should develop its regulations working with the people most likely to be harmed by commercial surveillance – and prioritize their needs (Questions 29, 39, 43)

As Afsaneh Rigot points out in Design From the Margins, a design process that centers the most impacted and marginalized users from ideation to production results in outcomes that are highly beneficial for all users and companies.  Rigot focuses on product design, and AJL’s recommendations make a similar point about involving the stakeholders most likely to be harmed by AI systems in the auditing process.  The same logic applies to regulation: for example, A. Prince Albert III’s Hiding OUT: A Case for Queer Experiences Informing Data Privacy Laws suggests using queer experiences as an analytical tool to test whether a proposed privacy regulation protects people’s privacy, and Stress-testing privacy legislation with a queer lens illustrates the insights from this approach.

So as the FTC develops these regulations, it’s critical to involve the people who will be most impacted – and to make sure their needs are prioritized.  Along with the traditional protected classes, it's also vital to look at how proposed regulations affect pregnant people, rape and incest survivors, immigrants, unhoused people, and others who are being harmed today by commercial surveillance.  Regulations that protect them will wind up protecting everybody.