User strike!
Using CCPA to waste a company's computational resources


The wind-up

The California Consumer Privacy Act, or CCPA, goes into effect [2020-01-01 Wed].

One of its stipulations is that users can request the "deletion of personal information," including "inferences drawn from this information."

Many business models are built around drawing inferences from such information. Many of those business models rely on machine learning to do so. In general, training machine learning models is computationally expensive.

The delivery

Imagine a company that…

  1. Is subject to the CCPA.
  2. Complies with the CCPA (i.e., actually follows its regulations).
  3. Relies on training machine learning models on user data.
  4. Needs to re-train its models every time a user requests deletion.

If the users know about the company's schedule for honoring deletion requests, an adversarial group of users could request their data deleted at strategic times so as to waste the company's computational resources—effectively, a sort of "user strike."
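
To make the "strategic times" point concrete, here is a minimal toy simulation (in Python) of how request timing changes the cost. Everything in it is an assumption for illustration: the fixed re-training cost, the daily processing schedule, and the policy of re-training once per non-empty batch of pending deletions. It describes no real company's pipeline.

# Toy model of a "user strike" against a company that re-trains on deletions.
# Nothing here reflects a real pipeline; all numbers are made up.

RETRAIN_COST_GPU_HOURS = 500   # assumed cost of one full re-training run
PROCESSING_INTERVAL_DAYS = 1   # assumed: pending deletions are processed daily


def retraining_cost(request_days, horizon_days=60):
    """Total re-training cost when deletion requests arrive on the given days.

    Assumes the company batches all requests pending at each processing time
    and re-trains once per non-empty batch.
    """
    pending = sorted(request_days)
    runs = 0
    for day in range(0, horizon_days, PROCESSING_INTERVAL_DAYS):
        batch = [d for d in pending if d <= day]
        if batch:
            runs += 1                      # one run covers the whole batch
            pending = [d for d in pending if d > day]
    return runs * RETRAIN_COST_GPU_HOURS


n_users = 30

# Everyone requests deletion on the same day: one batch, one re-training run.
print(retraining_cost([0] * n_users))         # -> 500 GPU-hours

# Requests staggered so each lands in its own processing window: n runs.
print(retraining_cost(list(range(n_users))))  # -> 15000 GPU-hours

Under these assumptions, thirty users who request deletion on the same day trigger one re-training run, while the same thirty users spacing their requests across processing windows trigger thirty.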

How likely is this, in reality?

Richmond says [2019-12-27 Fri],

from what i've seen in updated privacy policies, most companies have some non-automated process for it

where you have to email for a deletion request

so it's somewhat blackboxed to me as to how different organizations will comply/act on it

like it's possible the strike just overwhelms the legal response team (which would be interesting), but deletion is done in such a way that it doesn't have the same blunt force on the technical system as it does on the legal team

[…]

due to the blackboxing, it's just not clear to me that the potential user resistance will have its intended effects

In some sense, this is the most interesting part. All kinds of user resistance strategies (group or individual) might have unexpected downstream effects. For example, in some cases, inclusion in the model might be better than exclusion.

A design fiction could describe this user strike and show how the blackboxed compliance mechanism creates a weird downstream effect (e.g., it introduces more bias into the system because only privileged white people opted out).

oh yeah. it's a common thing around diversity and inclusion initiatives, that sometimes inclusion in models is what you actually want. (and presents a nice case of pitting privacy and fairness against each other)

related, vera's CSCW paper on old school unions: while resisting employers' data collection and surveillance of their work, they did engage in selective, tactical forms of self-surveillance and data collection in order to rebut claims, etc

Notes

About CCPA narrowly

i also haven't done a lot of reading into CCPA yet, but i suspect "inferences" only refers to individual-level inferences, and not group level ones

"User strikes"

In general, I'd be interested to hear about other possible "user strikes"—scenarios in which users can exploit the nature of free services and favorable regulation to inflict economic pain on service providers.

Fake news might be one possible (existing) example. Fake news articles have, presumably, cost platforms like Facebook a lot of money, since the platforms are forced to keep pace by detecting and moderating them. However, while the user strike I describe above could punish anti-privacy business models, I'm not convinced that fake news achieves any higher purpose.

Author: ffff

Created: 2019-12-27 Fri 20:33
