An awareness of architectures of control in products, especially in digital technology, has grown significantly over the past few years. Perhaps unsurprisingly, some of the strongest reactions have emerged from, and been disseminated through, internet communities, especially those at the intersection of technology and policy thinking.
‘Hacker’ culture may be commonly associated only with computers (and generally, by the media, in a negative and incorrect way), but in the correct sense of a culture of technical exploration, experimentation and the innovative testing of rules and boundaries, it is as evident in the young child who uses a stick to retrieve a confiscated football from a high shelf as in Richard Feynman determining how to retrieve secret documents from locked drawers at Los Alamos [63].
The Norwegian teenager working out how to get DVDs to play on his GNU/Linux box [e.g. 64] is not too far removed from the group of engineering students working out how to lift an Austin Seven van onto the roof of Cambridge’s Senate House [65].
There is no malicious intent: whether the attitude is Eric Raymond’s, that “the world is full of fascinating problems waiting to be solved” [66], or even Feynman’s “pleasure of finding things out” [67], much ‘hacking’ is simply the use of ingenuity in an attempt to understand products and systems more fully–indeed, an attempt to grok, in Robert Heinlein’s very useful terminology [68].
This fuller understanding can come through–and make possible–finding ways around embedded architectures of control, with the result of freeing or improving access to information or functions that have been restricted or are clearly not optimised to the user’s advantage.
Another way of phrasing this might be to say that ‘reverse engineering’ (as demonised by so many EULAs) is not easily separable from ‘forward engineering’–almost all engineering projects depend on an understanding of prior art to facilitate a new or improved function. To borrow twice, rather convolutedly, from Isaac Newton: there are many layers of innovators standing on each other’s shoulders, each supported by previous ingenuity and in turn supporting future innovators to see shinier pebbles further along the seashore.
Specifically, many architectures of control in products (and software) are intended to remove what Edward Felten calls the ‘freedom to tinker’ [69]: the Audi A2 bonnet (q.v.) is a high-profile example, but even Apple’s deliberate design of the iPod to make battery replacement by the user a difficult task [e.g. 70] counts here as part of a trend to move product sovereignty away from the user and into the hands of the ‘experts’.
Whilst individual architectures of control–especially those backed by major companies, such as trusted computing and various DRM methods–have received public support from some ‘technical’ commentators, the most vocal reactions from the technical community express deep wariness about the impact that architectures of control may have on innovation and freedom.
For some, such as the Electronic Frontier Foundation, the fight against restrictive or repressive architectures of control is framed within a larger legal and civil rights context–“educat[ing] the press, policymakers and the general public about civil liberties issues related to technology; and act[ing] as a defender of those liberties” [71].
The ‘chilling effects’ [72] on innovation and cultural development caused by challenges to liberties, whether through architectures of control, or regulation, or both, are part of the debate, especially where ‘invisible’ (or perhaps, ‘opaque’) disciplinary architectures can effectively enforce norms as if they were regulation; as Lawrence Lessig says (specifically in relation to the architecture of ‘cyberspace,’ but nevertheless pertinent to disciplinary architectures in general):
“We should worry about this. We should worry about a regime that makes invisible regulation easier; we should worry about a regime that makes it easier to regulate. We should worry about the first because invisibility makes it hard to resist bad regulation; we should worry about the second because we don’t yet… have a sense of the values put at risk by the increasing scope of efficient regulation.” [73]
Equally, there are others for whom the effects of architectures of control on the freedom to innovate predominate in the debate. User-driven innovation (ranging from the development of pultrusion machinery highlighted by Eric von Hippel in the 1980s, to the phenomena of ‘innovation communities’ and ‘democratised innovation’ that he has more recently formalised [58]) is certainly challenged by the rise of architectures of control in products and software–consider, for example, Hal Varian’s comment on mobile phones which detect when a non-recommended brand of battery is fitted, and refuse to operate:
“What about cellphone batteries? There are now hand pumps that allow you to produce enough juice to charge your own batteries. Inventors are experimenting with putting such pumps in your shoes so you can charge your cellphone by merely walking around. This would be great for users, but it is hard to experiment with such technologies if you can use only certain power sources in your cellphone.” [74]
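One way the kind of check Varian describes can work is a challenge-response handshake between the handset and an authentication chip in the battery pack; simpler schemes, such as fixed ID chips, achieve the same gating effect. The sketch below is purely illustrative–the HMAC-based scheme and all the names in it (VENDOR_KEY, Battery, battery_is_approved) are invented for this example, and real vendors’ mechanisms differ and are generally undocumented:

```python
import hashlib
import hmac
import os

# Hypothetical illustration only: the scheme and names are invented for
# this sketch; real handsets use a variety of undocumented mechanisms.

VENDOR_KEY = b"vendor-secret"  # shared secret held by 'approved' battery chips


class Battery:
    def __init__(self, key: bytes):
        self._key = key

    def respond(self, challenge: bytes) -> bytes:
        # An approved battery's chip signs the handset's challenge with the key.
        return hmac.new(self._key, challenge, hashlib.sha256).digest()


def battery_is_approved(battery: Battery) -> bool:
    # The handset issues a random challenge and checks the response against
    # what the vendor key would produce.
    challenge = os.urandom(16)
    expected = hmac.new(VENDOR_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(battery.respond(challenge), expected)


# A third-party battery (or a hand-pump charger feeding a generic cell) may be
# electrically adequate, but cannot answer the challenge, so the handset
# refuses to operate.
if not battery_is_approved(Battery(b"third-party")):
    raise SystemExit("Unrecognised battery: refusing to operate")
```

The significance of such a design is that operation is gated on the vendor’s approval rather than on electrical suitability: the experimental power sources Varian mentions fail not because they cannot power the phone, but because they cannot answer the challenge.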
The success of O’Reilly Media’s MAKE magazine [75]–“technology on your time”–suggests that democratised innovation is perhaps a real field of growth, especially if the irritation caused by some architectures of control is sufficient to drive people to find ingenious ways around them through tinkering. Aimed at independent technical enthusiasts and hobbyists across a range of skill levels, each issue details user modifications to existing products (many of them computer-based), new developments in engineering and technology, and the simple construction of entirely new projects. MAKE had 25,000 subscribers after four months, against O’Reilly’s own estimate of 10,000 after a year [76].
[Image: MAKE magazine, from O’Reilly, embodies a new spirit of democratised innovation]
Indeed, Richard Stallman’s founding of the free software movement–perhaps the archetypal user-driven innovation community–was, in a sense, a reaction to the imposition of a contractual architecture of control (the more restrictive Lisp licensing imposed by Symbolics on MIT’s AI Lab [77]).
It is possible, then, that many in the technical community will relish the challenges set by increased use of architectures of control, and much good work may come from this; however, for the non-technical consumer, the challenges may lead to frustration and exclusion, as will be examined in the next section.
(One might argue that in-built restrictive architectures have actually encouraged innovation–would there have been so many groups dedicated to unlocking the iPod’s secrets if the architecture had been entirely open?–but this seems to be analogous to arguing that war is something to encourage because it forces innovation and resourcefulness: is there not a better way to achieve the same desirable results?)
Overall, much of the technical community’s (cautious) reaction to architectures of control can be summed up by Paul Graham’s comments–suitably annotated and with emphasis added:
“Show any hacker a lock and his first thought is how to pick it. But there is a deeper reason that hackers are alarmed by measures like copyrights and patents [or, in this case, architectures of control]. They see increasingly aggressive measures to protect “intellectual property” [and indeed, economic or politically strategic intentions] as a threat to the intellectual freedom they need to do their job. And they are right… It is by poking about inside current technology that hackers [and engineers, and designers] get ideas for the next generation.” [78]