Indeed, we might anticipate computers and their software scoring highest across the board on these cognitive dimensions, since they are highly malleable and flexible. Unlike a total station, for example, which, although digital, can only operate within a limited set of survey-related parameters, the computer is capable of taking on different roles or guises, frequently simultaneously. For instance, as I write this, my computer is running the word-processor being used to compose this text while also functioning as a radio playing in the background; my email program has just informed me that a new message has arrived; and any number of programs and devices, from anti-virus software to hardware components, are periodically checking online for updates without my intervention. Furthermore, the operating system that actually makes my computer work is invisibly managing a host of different processes: from what happens when I press a key to what appears on the screen; from the low-level file management that ensures I can retrieve what I create to the communication protocols used to access the printer that produces my final copy. Everything we do with our digital devices is underpinned by software driven by innumerable algorithms, which are frequently characterised as invisible black boxes. Striphas (2015), for example, has argued that our reliance on algorithms constitutes what he calls an 'algorithmic culture', while Bogost (2015) goes further and suggests that we live not so much in an algorithmic culture as a 'computational theocracy', with the invisibility of algorithms giving them a transcendental, almost divine character. In the process, algorithms can become mythologised:
'On the one hand, they have been depicted as powerful entities that rule, sort, govern, shape, or otherwise control our lives. On the other hand, their alleged obscurity and inscrutability make it difficult to understand what exactly is at stake.' (Ziewitz 2016, 3).
Ziewitz points to the way that algorithms are seen as powerful and consequential actors, imbued with agency and impact; yet, as black boxes, they are difficult to understand and, recursively, this very opacity tends to be read as another sign of their influence and power (2016, 5-6). These algorithms drive the collection, storage, and manipulation of archaeological data - from the capture of data using digital instruments with knowledge embedded in their hidden algorithms, to the operationalisation of those data within computer systems, to their retrieval and processing. In the process, different human agents take decisions about the design of algorithms, their implementation, and their inputs, and take actions arising from their outputs. Our application of a computer is therefore effectively a collaborative venture, whether we realise it or not, combining human and algorithmic agency in a complex web of interrelationships that remain largely unseen and seemingly impenetrable. Our use of these tools is ultimately mediated by the actions and decisions of those who designed and created them - and who may well have done so without our mode of use specifically in mind. Berry characterises this environment as one in which:
'code/software lies as a mediator between ourselves and our corporeal experiences, disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarised transmission system.' (Berry 2012, 381)
and argues that this requires careful description and critical attention, not least because of the way that software hides its depths behind its interfaces (2012, 382). He introduces the notion of compactants - computational actants (2012, 391) - which operate silently and surreptitiously or, where the human agent is aware of them, offer perceived value and benefits that lead to them disappearing into the background again.
One logical response to this situation is to do nothing - to abrogate responsibility to the ubiquity of the machine in an essentially powerless, subservient relationship to the digital tools we use on a daily basis (Huggett 2004; 2012a). It might reasonably be argued that we do not need to know the model of processing at work, and indeed, using a well-designed black box system can seem effortless and unequivocally valuable. On the other hand, the inadequacy of many algorithmic models relative to the intuitive, action-situated nature of human intention means that the contents of the black boxes should be highly visible (Hamilton et al. 2014). If that is the case, and faced with such complexity, how do we sensibly move forward? What do we, as end-users, need to know when using these tools? What should we understand? What can we reasonably understand? How much do we need to know about the ordering principles and bases of the tools we use (Hamilton et al. 2014)? Even if the algorithms underlying the software we use were accessible - a key aspect of the argument in favour of the archaeological use of open-source software (for instance, Marwick 2016; Ducke 2015) - the knowledge required to understand and disentangle the code is considerable, and trust in the correct implementation of routines and processes by others remains ultimately necessary. Indeed, it is increasingly the case that even programmers themselves do not always understand what their software is actually doing: for instance, Google's DeepMind software extends its own neural network by teaching itself, creating a computationally incomprehensible black box. Demis Hassabis, one of its founders, believes that in future:
'The system could process much larger volumes of data and surface the structural insight to the human expert in a way that is much more efficient - or maybe not possible for the human expert... The system could even suggest a way forward that might point the human expert to a breakthrough.' (Metz 2016).