Big Data Analytics and the Securitisation of Mobility


Btihaj Ajana

This paper is concerned with the ways in which the adoption of big data analytics in border management is increasingly augmenting the function and intensity of borders. There has recently been growing interest in big data science and its potential to enhance the means by which vast quantities of data can be collected and analysed to enable more advanced decision-making processes vis-à-vis borders and immigration management. In Australia, for instance, the Department of Immigration and Citizenship has developed the Border Risk Identification System (BRIS), which relies on big data tools to construct patterns and correlations for improving border management and targeting so-called ‘risky travellers’ (Big Data Strategy, 2013). In Europe, meanwhile, systems such as EUROSUR, operated in conjunction with the border agency Frontex, exemplify the big data surveillance currently used to predict and monitor movements across EU borders. In this paper, I argue that with big data come ‘big borders’, through which the scope of control and monopoly over the freedom of movement can be intensified in ways that are bound to reinforce ‘the advantages of some and the disadvantages of others’ (Bigo) and contribute to the enduring inequality underpinning international circulation. Drawing on specific examples, I explore some of the ethical issues pertaining to the use of big data for border management. These issues revolve around three key elements:
First, I examine the issue of categorisation. I argue that processes of border surveillance and securitisation through big data analytics are inevitably implicated in acts of sorting and classification, which pose one of the most pertinent ethical issues vis-à-vis the politics of borders. These acts enable the systematic ordering and categorisation of the moving population body into pattern types and distinct categories, a process that contributes to labelling some people as risky and others as legitimate travellers, and to demarcating the boundaries between them. Through such processes, there is a danger of augmenting the function of borders as spaces of ‘triage’ whereby some identities are granted the privilege of quick passage while other identities are arrested (literally). Correlatively, I argue that big data techniques and their categorising mechanisms raise the very foundational question of what it means to be ‘human’ nowadays. For, far from being the presumably ‘universalistic’ and all-inclusive category it is often taken to be, humanity has long ‘operated through systematic technologies of inclusion/exclusion’ (Bhandar, 2004) that are informed by a defective conception of what constitutes the human in the first place.
The second point relates to the projective and predictive nature of big data techniques and their approach to the future. Much of big data analytics, and the risk management culture within which it is embedded, is based on acts of projection whereby the future itself increasingly becomes the object of calculative technologies of simulation and speculative algorithmic probabilities. This techno-culture rests on the belief that one can create ‘a grammar of futur antérieur’ by which the future can be read as a form of the past in order to manage risk and prevent unwanted events (Bigo, 2006). Big data promise to offer such a grammar through their visualisation techniques and predictive algorithms, through their correlations and causations. However, I argue, following Kerr and Earle (2013), that big data analytics raises concerns about its power to enable ‘a dangerous new philosophy of preemption’, one that operates by unduly making assumptions and forming views about others without even ‘encountering’ them. In the context of big data’s use in border management and immigration control, this translates into acts of power, performed from the standpoint of governments and corporations, which result in the construction of ‘no-fly lists’ and the prevention of activities that are perceived to generate risk, including the movement of potential asylum seekers and refugees. I also take issue with the way in which big data approaches the ‘future itself’. Increasingly, and as Bigo and Delmas-Marty (2011) rightly argue, the ‘colonisation’ of the future is becoming a major feature in the governance of various fields and spaces, including those of borders and transnational mobility.
While these preemptive attitudes towards the future operate in the name of security, safety, and the fight against terrorism and other social ills, they also lack an awareness that ‘the present situation is also the fault of the will to master the world, to try to control some part of it by “scientifically” discriminating the enemy within, and to believe that technology can do it’ (Bigo, 2006). Such attitudes are also a manifestation of a parochial style of thinking and governing. By favouring a technocratic approach over questioning the very power structures and dynamics that lie at the base of the world’s staggering inequalities, oppressions and socio-political troubles (which often lead to forced migration), these fear-driven governmental attitudes end up tearing issues of borders, immigration and asylum away from their historical and political context, and separating them from ‘human rights and social justice frameworks’ (Wilson & Weber, 2008).
The third point relates to the implications of big data for understandings and practices of identity. In risk management and profiling mechanisms, identity is ‘assumed to be anchored as a source of prediction and prevention’ (Amoore, 2006). With regard to immigration and border management, identity is indeed one of the primary targets of security technologies, whether in terms of the use of biometrics to fix identity to the person’s ‘body’ for the purposes of identification and identity authentication (Ajana, 2013), or in terms of the deployment of big data analytics to construct predictive profiles that establish who might be a ‘risky’ traveller. Very often, the identity produced by big data techniques is seen as disembodied and immaterial, and individuals as reduced to bits and digits dispersed across a multitude of databases and networks, identified by their profiles rather than their subjectivities. The danger of such a perception, I argue, is the precluding of social and ethical considerations when addressing the implications of big data for identity, as well as the reduction of the latter to an asset, a commodity. I therefore emphasise the importance of an embodied approach to big data and identity, in order to contest this presumed separation between data and their physical referent and the ever-increasing abstraction of people. This is crucial, especially when the identities at issue are those of vulnerable groups such as asylum seekers and refugees. I argue that this requires rethinking the entire normative framework through which the relationship between identity, data and body is understood and conceptualised, and challenging the taken-for-granted distinction between ‘embodied identity or physical existence […] and information about (embodied) persons’ (Van der Ploeg, 2003). For identity cannot be dissociated from embodied experience, nor can it be extracted merely from the collection of data and information.

Keywords: big data, borders, identity, mobility, surveillance.