Trust is a vital aspect of every friendship, every family, every society. When you and another person trust each other, you’ve worked out that your interests are suitably aligned. You both believe the other will behave in ways that ‘look out’ for the two of you, that serve you both well.
Trust supports our interactions as social animals. We’ve evolved to look for clues that tell us how trustworthy another might be, and to explore ways to test and build that trust without really thinking about it.
Roger Mayer defines trust as:
the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party. [An Integrative Model of Organizational Trust]
The reference to vulnerability conveys that there is something important at stake that could be lost should the trustee let down the trustor. Extending trust is therefore a risk, and the trustor decides whether to take it by assessing the trustee’s trustworthiness. When people trust one another, they have determined that their respective interests are encapsulated by the other; they’re aligned. The situation can be generalised as:
A trusts B to do X, optionally in context Y.
To say I trust you in some way is to say nothing more than that I know or believe certain things about you – generally things about your incentives or other reasons to live up to my trust, to be trustworthy to me. [Trust and Trustworthiness, Russell Hardin]
Hardin proposes that, rather than attempt to qualify or quantify a matter of trust, it’s far easier to account directly for trustworthiness, which then begets trust.
How trustworthy is our everyday technology? Answering that question entails a longer and more challenging chain of trust. To say …
A trusts T (the technology) to do X in context Y
requires that …
A trusts V (the technology vendor) in the context of both X and Y to develop, produce, and sometimes maintain and operate T according to V’s stated objectives and operating principles.
Trustworthiness is then trickier to ascertain:
Ascertaining (un)trustworthiness requires a detailed technical examination beyond the means of many users of digital products and services. Absent knowledge of a reason to withhold it, and perhaps subject to prevailing norms, they offer their trust irrespective of their ability to monitor or control that other party.
Nudge is a mot du jour following the success of a book of that title discussing ways in which people might be influenced “to choose what is best for them” [Thaler and Sunstein].
Unsurprisingly, much of the advice applies equally to marketers seeking to influence people to choose what is best for them. Metaphorically speaking, a nudge might be said to be perceivable by the individual on the receiving end, yet such influence is not always perceivable and may be engineered deliberately not to be. Perhaps then it’s more accurate to talk of ‘being programmed’.
“Very swiftly we lose control of many aspects in our life. The idea and trust that humans are very well capable of acting responsibly is slowly evaporating.” [Networks of Control, Christl and Spiekermann]
In other words, if we can no longer trust ourselves because we’re unable to trust our technology, then by extension we become less trustworthy to others, with corrosive consequences for the fabric of our societies.
This is not a mere matter of public education, despite excellent efforts in that respect, e.g. from the Wall Street Journal [What They Know, wsj.com, 2010] and public radio [Privacy Paradox, from the Note to Self podcast, WNYC (New York Public Radio), 2017]. Just ask yourself: can you determine exactly when and how you’re being programmed, when and how your trust is being dishonoured? Just as importantly, and remarkably presciently for his time, Wiener (1950) observed our nature to:
... accept the superior dexterity of the machine-made decisions without too much inquiry as to the motives and principles behind these.
He warned that allowing a machine to make decisions for us – to “decide our conduct” – does not end well unless we do so having previously comprehended its calculus.
If we are then to trust technology and have that trust respected irrespective of our individual audit facilities, if our collective vulnerability isn’t to be suckered, we need to effect systemic change. Given that the mechanism exploiting our vulnerability is the data flowing unseen in the digital realm – data relating to our relationships, our proclivities, our movements, our transactions, our beliefs – grappling with the concept of privacy in the digital age is a good place to start.