Ethical Implications of System Accountability
Security for the user or security for the company? • When software developers build large systems that hold users accountable, they empower the companies that own the systems while potentially endangering the systems’ users. The alternative is building systems with deniability, which turn out to be more akin to systems from the pre-digital world.
Hello, software developer. I would like to talk about how we think about the systems we build. More precisely, I think we tend to focus on the concrete details of our systems, sometimes missing the bigger picture.
When we build systems, we want them to be secure, right? How do we talk and think about security, then? Are we missing something?
A lot of the time, I think we are too entrenched in the current way of doing things, the status quo. As new developers, in particular, we are taught and socialized to build systems in a certain way. But maybe there is no reason to uphold this particular status quo.
I will try to look into that, specifically focusing on building secure systems that hold users accountable for their actions. This is how one should build secure systems today, we are told. Is it so? Whose security are we actually considering when we build such systems?
Secure systems hold users accountable #
The CIA triad is a model for system security. For a system to be secure, the triad postulates, it must ensure the confidentiality, integrity and availability of its data. The model may be as old as Julius Caesar’s Gallic Wars, although the acronym definitely isn’t.
At the IT University of Copenhagen I recently took a course in security which introduced the CIA triad with an added component: accountability. For a system to be secure, it was argued, it must be possible for a company or system to hold users accountable for their actions, so that unauthorized or illegal use, as well as attempts to breach the system, can be traced back to the violating user.
Some argue that accountability is already an intrinsic part of integrity: if no user can be held accountable for an action, the integrity of the data cannot be ensured. Others argue for different extensions of the triad than adding just accountability.
CIA+A is one model for defining information security, but definitely not the only one. Accountability, in particular, is relevant from the perspective of corporations: how can we, as a company, best protect our data, keeping our interests in mind?
CIA+A is not a model for information security from the perspective of the user. In fact, building systems with accountability has significant ethical implications for users that we developers rarely consider properly. Let’s do just that for a moment.
The problem, at its core, is that the power balance in most modern, widely used software systems is skewed in favor of businesses. Companies know more about their customers than ever before, and because this data is stored, it can be used.
The data exists. It may be abused. #
Let’s take banks as an example.
In the ’50s, banks knew your balance, they knew when you withdrew or deposited money, and they potentially knew something about your loans.
Today, banks have a continuous record of exactly what you purchase and from where, because you use your credit card in shops, and even small transactions now go through your bank account rather than via cash.
This makes it easier to solve crimes involving illegal transactions, but it also means that the bank knows of any legal-but-embarrassing transactions you may have made. This is information that could be used against you.
In fact, even evidence of transactions that are currently legal and not embarrassing could potentially be used against you in the future.
Humans are vulnerabilities: You can set up as many defences as you want. As long as you have actual real-life people with access to the data (which is the case if the data exists), you have a vulnerability.
No amount of security is enough to protect against this. One example is the recent Danish scandal in which data from Nets, the Danish payment handling company, was leaked to a tabloid. The tabloid was given information about the whereabouts of celebrities, acquired from their confidential credit card transactions.
Nets handled their data properly, both in terms of accountability and from a legal standpoint.
They kept a trail of transactions in case of a later need for audits: they were holding their users accountable. This led them to keep huge amounts of data on all credit card users in Denmark. The data existed, and so it could be abused.
From a legal standpoint, only employees and contractors who had been cleared to work with sensitive data were allowed near it. Even the leaker, the contractor who illegally shared data with the tabloid, had this clearance. When people are involved and have access to data, they can be swayed to share it.
Nothing to hide #
I often end up talking to people about the importance of thinking about data and who gets to keep it. Especially when I talk to people outside of the tech world, people who may not have a good idea of just how much data is recorded and how it is stored, the response is often surprise.
How is there even an issue here?
After all, isn’t it good that someone is watching the dangerous elements of society? And, by the way, I’m not doing anything I’m ashamed of, so they can watch me all they want.
Let’s look at only the last part of that argument.[n:1] It is a primarily emotional argument: it doesn’t feel wrong to me that anybody is watching. And that argument makes perfect sense as far as it goes. The problem is not necessarily a problem for you personally, but a problem for society in general.
In this case, we are looking at banks and transaction handlers keeping their users accountable, which leads to a trail of transactions. The banks and transaction handlers are the good guys, the enforcers of order. In this case, the information was used to track celebrities, but it could conceivably be used for much more sinister purposes.
Evidence of legal-but-embarrassing transactions can be used to ruin careers. A politician from an opposition party may, for example, have their career ruined or have their public reputation or political campaign significantly damaged by a leak of embarrassing details. (Think sex toys, porn habits, flowers that didn’t make it to their spouse, or even just frowned-upon literature.)
A young person who does not even consider their future may make mistakes or commit minor transgressions that can be used against them later, when their life turns around and they decide to fight for a better world and get involved in politics.
One can even argue that illegal transactions should not be completely monitored: sometimes a breach of the law is necessary in order to improve the law. The law is not, in and of itself, final.
Take, for example, gay rights: for a long time, homosexuality was punishable. This changed because of gay rights activists, who were in breach of the law by their very existence.
Marijuana was recently legalised in some US states, but would not have been if not for its existing, illegal use.
You, personally, would lose out on progress in the society you live in.
Today, the risk of being punished for illegally downloading movies in Denmark is very small. Imagine if that changes in the future, and this becomes more frowned upon than it is today. This information could be leaked selectively about persons whom powerful, connected or simply malicious people dislike, ruining reputations.
A 13-year-old kid who does not really understand the rules later becomes the head of a broadcasting station, but has their life shattered when their teenage transgressions are leaked.
The data exists. So it may be abused.
The current government might not use the tools at their disposal (e.g. pushing bank employees to leak damning information on opposition candidates), but a government in the future might.
It is impossible to say what our government will look like in twenty, thirty or forty years.
But we can stop the potential for harm by not collecting as much data.
Deniability #
The alternative to building systems with accountability is to build systems with deniability. Deniability is achieved when some event in a system can be guaranteed to have happened, and the involved parties are sure that it has happened correctly, but it cannot afterwards be traced to any single user.
This is not the most straightforward concept, and I think it is best illustrated with an example.
Signal is a messaging protocol which is end-to-end encrypted and has built-in deniability. (The protocol is used in the Signal app as well as in WhatsApp.) This means that communications sent from one user to another can only be read by the intended recipient, and that the recipient could never prove, after receiving the message, that the original sender sent it.
If Alice sends Bob something private, intended for his eyes only, Bob can never prove that Alice sent it to him, because the way the message is signed means that he might as well have created it himself. This is deniability.
It turns out that this feature protects users better than the accountability of systems based on public-key signatures. Signing a message irrevocably proves that its author sent it, and even if the message is shared with parties it was never intended for, it can be traced back to the originating user.
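To make the contrast concrete, here is a minimal Python sketch of the idea. It uses a plain HMAC as a stand-in for Signal’s far more sophisticated per-message authentication; the key and messages are invented for illustration.

```python
# A minimal sketch of why MAC-based authentication is deniable while
# digital signatures are not. Standard library only; key and messages
# are illustrative, not part of any real protocol.
import hashlib
import hmac

# Alice and Bob share a symmetric key (in Signal, per-message keys are
# derived via a key-agreement ratchet; this fixed key is a placeholder).
shared_key = b"key-agreed-by-alice-and-bob"

def authenticate(key: bytes, message: bytes) -> bytes:
    """Compute a MAC tag that any holder of `key` could have produced."""
    return hmac.new(key, message, hashlib.sha256).digest()

message = b"For your eyes only"
tag = authenticate(shared_key, message)

# Bob verifies the tag and is convinced Alice sent the message,
# because he knows he did not create it himself...
assert hmac.compare_digest(tag, authenticate(shared_key, message))

# ...but he cannot convince anyone else: Bob holds the same key, so he
# could have produced an identical tag for any message of his choosing.
forged_tag = authenticate(shared_key, b"I never said this")
# To a third party, `tag` and `forged_tag` are equally (in)criminating.
```

A signature made with Alice’s private key, by contrast, is something only Alice could have produced, so a leaked signed message remains evidence against her forever.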
No matter how safely you construct your system, legal access to data may be turned into illegal or unwanted distribution of data, with the information strongly tied to a user.
Any system that builds in accountability violates its users’ trust (that their data is safe) and places unnecessary trust in other parties (employees, contractors).
Banks without accountability #
Some form of accountability is necessary in banking. We don’t need to know what users are doing, but we must know that transactions are valid. For example, if I can keep transferring money out of my account without the money actually disappearing, I am effectively copying money.
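As a toy illustration (my own sketch, not how any real bank’s ledger works), the invariant a bank must be able to enforce looks like this:

```python
# A toy ledger illustrating the one invariant banking cannot give up:
# transfers move money, they never create it. Names and balances are
# made up for the example.
ledger = {"alice": 100, "bob": 50}

def transfer(ledger: dict, src: str, dst: str, amount: int) -> None:
    """Move `amount` from src to dst, refusing overdrafts."""
    if amount <= 0 or ledger[src] < amount:
        raise ValueError("invalid transaction")
    ledger[src] -= amount
    ledger[dst] += amount

total_before = sum(ledger.values())
transfer(ledger, "alice", "bob", 30)
assert sum(ledger.values()) == total_before  # money is conserved, not copied
```

Note that enforcing this invariant requires checking that each transaction is valid at the moment it happens, not keeping a permanent, user-linked record of what was bought, where, and by whom.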
Banks also offer the ability to undo transactions: a transaction can be called back. This is a user-friendly feature that did not even exist with physical cash (other than by trusting the other party to uphold an agreement to take the goods back in return for the money).
Can we imagine a bank constructed in such a way that it cannot trace transactions made by customers? And if we can, what will it cost us as customers (in terms of functionality, security, etc.)?
The most promising alternative to traditional banking is blockchain technology.
Bitcoin provides much of the functionality that a bank does, but with a distributed, rather than centralized, authority. Bitcoin allows transactions of virtual currency in a manner that is already as easy as (if not easier than) transferring money through an online banking interface.
Bitcoin does not allow transactions to be undone. In that regard it is more like using cash.
Users of Bitcoin have better transaction privacy than users of banks, because the currency is pseudonymous: money is paid to wallets identified by seemingly random strings. Unfortunately, while no name is tied to the wallets themselves, all transactions are publicly visible, and it is possible to trace the flow of money through the market.
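To illustrate just how traceable this is, here is a sketch with invented addresses and a deliberately simplified transaction model (real Bitcoin uses unspent transaction outputs, but the consequence for traceability is the same):

```python
# A simplified, hypothetical view of a public blockchain as a list of
# (sender, receiver, amount) records. All addresses are invented.
transactions = [
    ("addr_1a2b", "addr_9f8e", 0.5),
    ("addr_9f8e", "addr_4c3d", 0.2),
    ("addr_9f8e", "addr_7e6f", 0.3),
]

def downstream(start: set, txs: list) -> set:
    """Return every address reachable by following money out of `start`."""
    reached = set(start)
    changed = True
    while changed:
        changed = False
        for src, dst, _amount in txs:
            if src in reached and dst not in reached:
                reached.add(dst)
                changed = True
    return reached

# Once "addr_1a2b" is tied to a real identity, every wallet its coins
# flow into can be linked back to that identity.
print(downstream({"addr_1a2b"}, transactions))
# Prints all four addresses (in some order): the entire flow is public.
```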
In Denmark, police are using the openly traceable nature of Bitcoin to investigate suspected dealings in illicit substances, and Bitcoin transactions have recently been used as evidence in Danish court.
If the police can trace transactions, so can other people. Many bad actors, I would suspect, have more tech resources and skills at their disposal than the Danish police. Once a user’s identity has been tied to a Bitcoin wallet, it will be hard to use any Bitcoin without the transaction being tied to them.[n:2]
There are innovations being made in the blockchain world, however. Zcash adds better anonymity features to the existing feature set of Bitcoin, resulting in a currency that works essentially like cash, except it is digital. With Zcash, transactions are no longer traceable.[n:3]
With Zcash, the tradeoff from a user perspective becomes simply this: do we want to be able to undo transactions, or do we want our transactions not to be stored by a central authority?
It is possible to build systems that use cryptography as a replacement for trust in large corporations. Central authority in the digital world is not required.
Ethical Implications #
As software developers we make decisions every day that impact a lot of people. We build the invisible infrastructure of the world.
If you are a developer at a corporation which keeps large amounts of data on their customers, you are helping to build systems that put these very same customers at risk. No system is ever completely safe. The threat landscape is forever changing. The only way to make sure your customers’ data doesn’t end up in the wrong hands is to never collect it.
In this post I have outlined the arguments for why such corporations endanger users, and for why systems that value user security would look very different from those that value corporate security.
I have also given examples of alternative approaches: kinds of systems that don’t require accountability, because no employee has legal access to a user’s data.
You have a choice to make: what will you help build? This post may sway you one way or another. I hope, at least, that it has made you aware of the implications of your work.
I would love to hear your thoughts on the subject. Have I missed an obvious point? Am I just plain wrong? Or do you agree with me? Let me know.
If you would like to keep up with what I write, you can sign up for my newsletter.
Notes #
- That it is good that the dangerous elements of society are being watched is also a dubious point: due to data saturation, it is not necessarily true that more surveillance leads to improved security. It’s great security theater, though.
- There are Bitcoin equivalents of money laundering (tumbling) which can be used to transfer money from one (compromised) wallet into another (uncompromised) one with a better chance of escaping traceability. These services cost money and, from an ethical standpoint, I do not think it should be necessary to use them in order to ensure one’s privacy.
- Zcash is constructed around a blockchain on which all participants can verify that transactions have happened legally. It is based on zkSNARKs (zero-knowledge Succinct Non-interactive ARguments of Knowledge), which can best be explained as black magic derived from very advanced mathematics. It is possible to make a transaction and show everybody that it happened legally without revealing the amount of money sent, who it was sent from, or who it was sent to. ✨ Magic! ✨