In an interview with The Australian Financial Review, as reported by Inc.com, Tim Cook said: “Technology doesn’t want to be good. It doesn’t want to be bad, it’s neutral. And so it’s in the hands of the inventor and the user as to whether it’s used for good, or not used for good…The risk of not doing that means that technology loses touch with the user. And in that kind of case, privacy can become collateral damage. Conspiracy theories or hate speech begins to drown everything else out. Technology will only work if it has people’s trust.”
Tim Cook was, inter alia, reacting to the recent debacle over Facebook’s questionable approach to privacy. He certainly got the last part right … “technology will only work if it has people’s trust” … but the rest needs some analysis.
“Technology doesn’t want to be good. It doesn’t want to be bad, it’s neutral”
Cook isn’t the first to make this claim. It is often argued that technology is neutral, a view which holds that the effect of technology depends solely on how we use such tools.
Technology, like science, is a human endeavour guided by values and by conceptions of what is good or desirable to achieve. Technology is therefore made to serve a specific purpose and to achieve specific aims for a specific audience. Thus, when we examine technology we must review two dimensions: (i) the tangible invention with its intended goals, and (ii) the achieved uses in the intended market. The view that technology is neutral has plausibility only insofar as it relates to observed uses abstracted from intended goals. This approach seems awkward because, ab initio, technology has intended functions (an air-conditioner’s purpose is to cool a room, and its use as a hammer is neither apt nor desirable) which are connected to the realisation of its expected goals. Thus, form cannot be divorced from function.
The WannaCry cryptoworm was designed to infect computers and demand ransom payments. Its observed use followed its intended purpose: it infected some 200,000 computers across 150 countries, causing an estimated $4bn in economic losses. Assertions of neutrality may often be veiled attempts to escape responsibility for intended consequences, and should therefore be debated thoroughly.

“It’s in the hands of the inventor (and the user) as to whether it’s used for good, or not used for good.”
During the design phase of a technology, its social consequences are malleable; once it is in use, they are largely set in stone. Consequently, technologists must pay attention to ethical issues in the early stages of the product’s engineering lifecycle.
In software architecture, we utilise many design frameworks which emphasise maintainability, cost control or efficiency. My proposal is to inject value-sensitive design principles into these architectural paradigms and to treat such values as litmus tests prior to delivery. These value-sensitive propositions will drive engineers to design technology for inclusivity and, ultimately, for human well-being. A good example of such principles may be found in the 2017 Asilomar AI Principles, which propose specific value-sensitive principles to be adopted in AI deployments. More recently, the European Commission, through its High-Level Expert Group on AI (AI HLEG), has published guidelines on trustworthy AI. A sketch of how such litmus tests might sit in a delivery pipeline follows below.
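To make the idea concrete, here is a minimal sketch of value-sensitive litmus tests expressed as an automated release gate. Everything here is a hypothetical illustration rather than an established framework: ReleaseCandidate, the individual checks and the 90-day retention window are all assumptions, and a real pipeline would back these checks with audit tooling rather than static flags.

```python
# A minimal sketch: value-sensitive "litmus tests" wired into a release
# gate. All names and thresholds are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class ReleaseCandidate:
    name: str
    collects_personal_data: bool
    has_consent_flow: bool
    accessibility_audited: bool
    data_retention_days: int


def check_privacy(rc: ReleaseCandidate) -> bool:
    # Personal data may only be collected behind an explicit consent flow.
    return (not rc.collects_personal_data) or rc.has_consent_flow


def check_inclusivity(rc: ReleaseCandidate) -> bool:
    # Inclusive design: an accessibility audit must have taken place.
    return rc.accessibility_audited


def check_data_minimisation(rc: ReleaseCandidate) -> bool:
    # Retain data no longer than a declared policy window (90 days here,
    # an assumed figure for illustration).
    return rc.data_retention_days <= 90


LITMUS_TESTS = [check_privacy, check_inclusivity, check_data_minimisation]


def gate(rc: ReleaseCandidate) -> bool:
    """Return True only if every value-sensitive litmus test passes."""
    failures = [t.__name__ for t in LITMUS_TESTS if not t(rc)]
    for failed in failures:
        print(f"{rc.name}: failed {failed}")
    return not failures


if __name__ == "__main__":
    candidate = ReleaseCandidate(
        name="recommender-v2",
        collects_personal_data=True,
        has_consent_flow=False,
        accessibility_audited=True,
        data_retention_days=30,
    )
    assert not gate(candidate)  # blocked: fails the privacy litmus test
```

The design point is simply that the value checks run in the same place as the engineering checks: a candidate that fails a privacy or inclusivity test is blocked from delivery exactly as one that fails a performance test would be.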
Can technology, therefore, promote values? Can the Internet transmit moral values in an age where individualism has created a digital cacophony of disparate, self-centred voices? Individualism is not an effect of technology. Social technologies have perhaps been an aggravating factor, leading to a higher level of self-interest, but technology has merely made visible the deeply changing society we live in. Individualism as a moral stance implies that a person acts in his or her own interest without concern for societal requirements.
Individualism gained speed when we collectively moved away from belief systems and religions, when we allowed a xenophobic fear and distrust of strangers to set in, and when we lost trust in governments and structures which have, for years, served us. (More on this here). This social distancing has led to fewer local, interpersonal relationships, and consequently we have been drawn to the cult of ‘me’. As Derakhshan put it (Wired, 19/10/17): “…instead of a quest for knowledge we see a zest for instant approval from an audience, for which we are constantly (but unconsciously) performing”.
For technology to promote values, we should focus our efforts on the design phase and also ensure that at every stage of computing (from acquiring data to re-transmitting it) we implement and safeguard recognised values. These must include privacy principles relating to fairness, transparency and even social justice. A key value to be promoted in this regard (and perhaps safeguarded through policy) is informational self-determination: the capacity of individuals to determine the disclosure and use of their own data. Similarly, the Internet can transmit values of social justice by ensuring that a larger number of citizens are given access to government so as to further enrich it. The utilisation of this public resource does not erode the state; it legitimises it further by providing access to the distributed knowledge that the state represents (more on this here).
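As one illustration of informational self-determination enforced at every stage of computing, the sketch below gates each pipeline stage on a consent check. The ConsentRegistry, the stage names and run_stage are hypothetical constructs for this example, not a standard API; a production system would persist consent in auditable storage and handle revocation asynchronously.

```python
# A minimal sketch: each pipeline stage runs only if the data subject has
# consented to that stage. All names here are hypothetical.
from enum import Enum, auto


class Stage(Enum):
    ACQUIRE = auto()
    PROCESS = auto()
    RETRANSMIT = auto()


class ConsentRegistry:
    """Records, per data subject, which pipeline stages they permit."""

    def __init__(self):
        self._grants: dict[str, set[Stage]] = {}

    def grant(self, subject_id: str, *stages: Stage) -> None:
        self._grants.setdefault(subject_id, set()).update(stages)

    def revoke(self, subject_id: str, stage: Stage) -> None:
        # Self-determination cuts both ways: consent can be withdrawn.
        self._grants.get(subject_id, set()).discard(stage)

    def permits(self, subject_id: str, stage: Stage) -> bool:
        return stage in self._grants.get(subject_id, set())


def run_stage(registry, subject_id, stage, record, action):
    # The value check precedes the engineering function at every stage.
    if not registry.permits(subject_id, stage):
        raise PermissionError(f"No consent from {subject_id} for {stage.name}")
    return action(record)


# Usage: consent covers acquisition and processing, but not re-transmission.
registry = ConsentRegistry()
registry.grant("user-42", Stage.ACQUIRE, Stage.PROCESS)

record = run_stage(registry, "user-42", Stage.ACQUIRE, {"clicks": 7}, dict)
try:
    run_stage(registry, "user-42", Stage.RETRANSMIT, record, dict)
except PermissionError as err:
    print(err)  # the pipeline refuses to forward data without consent
```

The point of the sketch is architectural: consent is not a one-off checkbox at sign-up but a condition evaluated wherever the data flows, so the individual's determination travels with the data itself.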
Ultimately, although technology is built on abstract engineering functions, it is a profoundly human concern, and we have the responsibility to shape it positively to create a better world.