Everyday things & persuasive technology


Two precedents from the interface between design, business and psychology are especially relevant here.
First, Donald Norman’s influential The Psychology of Everyday Things, later republished as The Design of Everyday Things [32], formalised and analysed much of the accumulated wisdom surrounding user behaviour and interaction with products–taking ‘human factors’ design beyond ergonomics and anthropometrics and into the field of usability: considering users’ conceptual models and mental processes, with the aim of improving the customer experience (and, with it, making products more competitive in the marketplace).
Norman’s clear explanation of forcing functions–he uses the seatbelt interlock as an example–and his classification of them into interlocks, lock-ins and lock-outs provide a useful framework for understanding many architectures of control. He also sounds the appropriate notes of caution for designers considering the use of forcing functions:

“If a forcing function is really desired, it is usually possible to find one, although at some cost for normal behaviour. It is important to think through the implications of that cost–to decide whether people will deliberately disable the forcing function… It isn’t easy to force unwanted behaviour onto people. And if you are going to use a forcing function, make sure it works right, is reliable, and distinguishes legitimate violations from illegitimate ones.” [32]
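
In software terms, Norman’s three categories behave like guard conditions. The following minimal sketch is one illustrative reading of each–all class and function names are invented for the purpose, not taken from Norman, though the seatbelt and basement-stairwell examples are his:

```python
# A sketch of Norman's three forcing-function types as software guard
# conditions. All names here are illustrative only.

class Car:
    """Interlock: forces operations into their correct sequence --
    the engine will not start until the seatbelt is fastened."""
    def __init__(self):
        self.seatbelt_fastened = False
        self.engine_running = False

    def start_engine(self):
        if not self.seatbelt_fastened:
            raise RuntimeError("Interlock: fasten the seatbelt first")
        self.engine_running = True


def may_quit(has_unsaved_changes: bool) -> bool:
    """Lock-in: keeps an operation active, preventing a premature stop --
    e.g. an editor refusing to quit while changes remain unsaved."""
    return not has_unsaved_changes


class BasementGate:
    """Lock-out: keeps someone out of a dangerous state -- Norman's
    example is a gate on a basement stairwell in a public building."""
    def __init__(self):
        self.latched = True

    def open(self):
        if self.latched:
            raise RuntimeError("Lock-out: unlatch the gate deliberately first")
        return "open"
```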

The other major precedent at this design-business-psychology interface is the work of B J Fogg and his team at Stanford’s Persuasive Technology Laboratory [52], researching ‘captology’–computers as persuasive technology. Whilst much of the work is concerned explicitly with computer-based persuasion (websites, games and interactive software), the extension of software into products, particularly mobile devices, is also a component of the research.
Fogg is explicit about the distinction between persuasion and coercion (and deception); many (indeed most) of the architectures of control outlined in this paper would undoubtedly be classed as coercive technology rather than persuasive technology by his definition. For example, taking two products which have a common possible outcome (reducing the number of hours for which children watch television), Square-Eyes (q.v.) is probably on the coercion side of the boundary, whilst the AlternaTV system mentioned in Fogg’s book Persuasive Technology [53] is on the persuasion side, since it does not actually restrict children, but merely encourages them through, effectively, a competition to see which ‘team’ can watch the least television.
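The boundary is visible even at the level of code: a coercive system enforces the outcome, whilst a persuasive one only measures and motivates. A rough sketch follows, with the quota and scoring logic invented purely for illustration–neither product’s actual implementation is documented here:

```python
# Contrasting a coercive control with a persuasive one, in the spirit
# of the Square-Eyes / AlternaTV comparison. All values are invented.

DAILY_QUOTA_MINUTES = 60  # hypothetical viewing limit


def coercive_tv(minutes_watched_today: int) -> bool:
    """Coercion: the system enforces the outcome. Once the quota is
    used up, further watching is simply impossible."""
    return minutes_watched_today < DAILY_QUOTA_MINUTES


def persuasive_tv(team_minutes: dict[str, int]) -> str:
    """Persuasion: the system never blocks anyone; it only reports a
    league table so that the 'team' watching least television 'wins'."""
    leader = min(team_minutes, key=team_minutes.get)
    return f"Current leader (least TV watched): {leader}"


# The coercive version returns False -- the television stays off.
print(coercive_tv(minutes_watched_today=75))
# The persuasive version returns only information; viewers remain free.
print(persuasive_tv({"Team A": 320, "Team B": 210}))
```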
Nevertheless, many of the points Fogg raises are pertinent to the question of how consumers react to architectures of control. From Persuasive Technology:

“Interactions created for mobile devices should support an intensive, positive relationship between user and product. Otherwise, the relationship is likely to be terminated, as the device becomes ‘a goner.’ If you viewed a mobile device as part of you, you would expect it to serve you; serving someone else would be a type of betrayal–your device sold you out.” [54]

Considering the ethics of the intentions behind persuasive technologies is a central part of captology research. The most favoured examples are those with intended social benefit; commercial benefit is not decried (especially where it also helps the consumer), but subversive uses of persuasive technology for commercial gain are criticised–for example, Hewlett-Packard’s ‘MOPy Fish’ screensaver, which encouraged users to print multiple copies of documents, as an alternative to photocopying, in return for ‘points’ allowing the user to ‘buy’ items to enhance the fish’s habitat [55].
Hewlett-Packard’s ‘MOPy Fish’ screensaver–one of the more subversive ‘persuasive technology’ examples cited by B J Fogg
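The MOPy mechanic is a simple incentive loop: the behaviour the designer wants (printing extra copies) is converted into a virtual currency spendable on cosmetic rewards. A hedged sketch of such a loop–all point values and item prices are invented for illustration, not HP’s actual scheme:

```python
# A sketch of a MOPy-style incentive loop: desired behaviour earns
# points; points buy cosmetic rewards. All values are invented.

POINTS_PER_EXTRA_COPY = 10                     # assumed reward rate
HABITAT_ITEMS = {"plant": 50, "castle": 200}   # assumed point prices


class MopyStyleFish:
    def __init__(self):
        self.points = 0
        self.habitat = []

    def record_print_job(self, copies: int):
        """Each copy beyond the first earns points -- rewarding the
        printing of multiple copies rather than photocopying."""
        self.points += max(copies - 1, 0) * POINTS_PER_EXTRA_COPY

    def buy(self, item: str) -> bool:
        """Spend accumulated points on an item for the fish's habitat."""
        cost = HABITAT_ITEMS[item]
        if self.points >= cost:
            self.points -= cost
            self.habitat.append(item)
            return True
        return False
```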
Fogg notes that “in the future, certain interactive influence tactics are likely to raise ethical concerns, if not public outrage” [56]; as applied to architectures of control in general, this may well be a significant understatement.

