Do You Want Your Tech Profitable, Ethical, Useful or Reliable?
Back in 2007, one of my computer science professors at the University of Potsdam asked the class: “What is good software?” “Reliable!”, “Interesting to build!”, “Financially successful!” The answers were quite diverse.
Back then, I wondered: "Which of us is right?" All these answers seemed to have their merit. But it was easy to imagine situations where they would be in conflict. Which one would prevail? In my class, we moved on to other topics without digging deeper.
More than a decade later, with many years of experience as a professional software developer, I have learned that these different answers are at the heart of some of the most enduring conflicts in how technology is built and used.
I also learned how these conflicts are consistently decided, and why. And sometimes, that scares me.
But let's return to my uni for a moment. In 2009, I was writing my master’s thesis on software architecture evaluation. There I learned that one of the core concerns of software architects is designing software to meet required quality standards. A core question in software architecture theory is therefore deceptively simple: What is "software quality"? And like the students in my class, engineering researchers and professionals have a wide range of answers to this question.
What is "good" technology? It depends on whom you ask
Software engineering researchers Barbara Kitchenham and Shari Lawrence Pfleeger summarized five different understandings of software quality:
Transcendental view: The "I know it when I see it" approach. Quality can be recognized, but not defined. This is hard to measure or operate on.
User view: Defines quality as "fitness for purpose". Typically measured by usability-centered criteria such as task success and user satisfaction.
Manufacturing view: Quality is understood as conformance to specification. Production and service metrics such as "99.99% availability" or "less than 10 defects per module" are typical of this view.
Product view: Quality corresponds to inherent attributes of the software, such as reliability or maintainability. Technical criteria such as "internal cohesion" and "module coupling" are measures typical of this view.
Value-based view: Quality is defined by what the user (or the sponsor/client, who might not be the same) is willing to pay for. Measured by financial metrics such as profit or revenue.
It turns out these views of quality can be mapped to the different stakeholder groups of an engineering project by how relevant they are to each group. (I'll leave out the transcendental view because it is so hard to operate on!) Let's have a look at how these views apply to a well-known technology product such as Facebook.
The user view, unsurprisingly, is the most relevant to the users of technology and their representatives within the company, such as product owners or user researchers. Facebook's users might define the product's quality as "great entertainment" and "easy to stay in touch with friends and family".
The manufacturing view and the product view are the most relevant to the engineers involved in designing, producing and operating technology. Facebook's engineers might be proud of their work if the software they build is written as "clean code" and "covered by a fully automated test suite".
The value-based view is the most relevant to the purchasers of technology and their in-house representatives in sales and marketing. The senior leadership team will also be mostly concerned with this view. They will be happy about "advertising revenue growth" and "positive cash flow".
Those who have followed Facebook's press coverage over the last few years might now be wondering about other stakeholders concerned with Facebook's quality. What about society at large and its representatives in politics and law?
Back in 1996, when Kitchenham and Pfleeger wrote their paper, the social impact of software wasn't widely discussed yet. To reflect this development, let's add a political view of software quality to the list. That view defines quality as meeting local, national and international legal, environmental or ethical standards.
Building "good" technology involves trade-offs between stakeholders
In an ideal world, of course, all of these different perspectives would align. In reality, of course, they often don't.
When engineers follow their vision of software quality by cleaning up working source code, they use up development time. Their users might think this time would be better spent adding new functionality to the software.
Unless the users are willing to pay for these new features, the sales team will not be thrilled about either of these two uses of expensive development time.
And, as the heated discussions about Facebook's news feed filters, advertising model and other aspects of its software show, even a reliable, highly profitable software platform widely adopted by users can raise serious legal and ethical concerns.
So, who wins when there is a conflict and trade-offs are needed?
These trade-offs get decided by who pays for and builds the software
Someone has to decide whose view of quality gets priority. In other words, someone decides which stakeholder's needs and goals are most important.
In the most general case, that someone will be whoever pays the engineers building the technology. That is just a consequence of normal employment or freelancing contracts.
In most cases, that is the funders or shareholders of a tech company. In for-profit companies, the value-based view will necessarily be the driving perspective on what is "good" technology: unless the company can sell its technology profitably, it will not sustain itself.
In some cases, it might be a government agency, a nonprofit organization or even the engineers funding themselves independently. These rarer cases are interesting because the resulting technology might not have to be optimized for profitability. Other views of quality might dominate, such as the user view, the engineering views or the political view.
In many cases, because engineers are scarce, they are also given some say in what they are building, and how, as an incentive.
As a result, most technology is built with a value-based view of quality first, and the manufacturing and product views of quality second.
In other words, much of the technical infrastructure we live in is not optimized primarily for our needs as users or members of society. It is optimized primarily for the needs of the shareholders, funders and engineers of the companies that built it.
That is what scares me. Technology, both physical and digital, is too central a part of our environment for us — the people living with it — not to have a say in how it is shaped.
So what can we do to influence the tech we live with?
First, we have to accept that our technology is not necessarily built with our needs at the forefront.
One avenue, then, might be to “creatively readjust” our technology after it has been produced so that it suits our needs better. This was the original meaning of the term “hacking” and one of the early motivations for developing Free/Open Source Software. Of course, this approach is limited to technically skilled people and to technology that has not been designed to be completely inaccessible to outside modification.
A more widely accessible way to influence technology development is to make the user and political views of quality part of the value-based view. In less jargon-heavy terms: two levers we have are paying for our technology and pressing for legal regulation.
Users who pay for the technology they are using have much more influence on its development, simply because “satisfying their needs so they will keep buying” becomes a business imperative. If we don’t pay for our tech, someone else will, and they will become the main external stakeholder to satisfy. "You're not the customer, you're the product" is often quoted in the context of free services paid for by advertisers.
Going beyond that, we can support alternative funding models for technology, such as tax-funded or donation-based ones. These may not always be feasible but can provide a worthwhile counterpoint to profit-oriented models.
And, besides the individualistic approach of “voting with our money”, we can look into collective influence via tech regulation. Social quality concerns that become legal requirements are much more likely to be reflected in tech products than purely ethical concerns.
Tech regulation is much more commonplace than its sometimes controversial reputation suggests. It covers a wide spectrum, from consumer safety laws such as banning dangerous chemicals in products to restrictions on controversial medical technology such as cloning.
Few would argue that it is entirely unnecessary. As the ongoing discussion around fine-tuning recent legislation like the European data privacy law GDPR shows, legislation is rarely perfect on the first try. But it does trigger discussions about what we as a society want our technological infrastructure to look like and how we can create counterweights to a purely economic, value-based approach to tech quality.
These discussions are necessary. A deeper understanding of how economic forces shape the quality of something so essential to our lives is a great starting point for recognizing when these forces become overpowering. Technology needs to be produced in economically sustainable ways, so the value-based model of tech quality will be with us for at least as long as capitalism is. But any force needs to be balanced in order not to become destructive.
Thanks to Kat R., Lisa H. and Katherine J. for your feedback on this post!