One of the most interesting concepts Zuboff introduces is that of the uncontract. It’s best understood through examples.
Car insurance companies are already running trials of programs that monitor your driving behavior and shut down the engine if the software decides you are not driving safely. The only remaining obstacle is getting drivers to agree to sign up, and that will be easy.
Now, consider the following possibilities. We keep hearing the words “safe” and “unsafe.” People need to feel safe, and they can be made to feel extremely unsafe by anything, including words. How hard would it be to install cameras in a classroom and institute some sort of automated disciplinary response whenever a professor or a student uses an unsafe word?
How hard would it be to have your phone listen in on what you say to your family members and trigger some automatic response (like sending police to your house) if your language, tone of voice, or the sounds you make are judged “unsafe”? How fast would you learn to perform for the benefit of the smart machines in everything you do? I’m guessing extremely fast.
The result will be to replace the social contract, in which people negotiate relationships and meanings with each other, with the uncontract, in which relationships between humans are governed by an algorithm.
This will all be done in the name of safety, and the groundwork is being laid right now. Have you noticed how often we hear about things or spaces being unsafe? This is a recent development, and it’s not accidental. This is how we prepare to hand over our agency to smart machines. We are all working ourselves up to make this process easier, because it’s too much of a bother to resist.