
The first hurdle(s): navigating the institutional OS (part 1)

Over the course of the next two blogs, we will attempt to set out some of the challenges we encountered in ‘setting up the project’. Issues of ethics and research integrity, information security and data protection became a Penrose staircase, requiring careful, circuitous, and imaginative navigation. In this first blog, we attempt to unpack how cultural representations of hackers shaped the research governance process and ultimately led us to develop a somewhat unwieldy solution to imagined attackers. The second blog will explore the other side, looking at how perceptions of state agencies, their interest in us and our data, and their powers of surveillance also shaped the way we designed our research.

Although all criminological research is ethically complex in its own way, our research presented – and was seen by others to present – a number of distinctive challenges. As we will go on to explain here, we think that many of the challenges which initially seem fairly distinctive to this project are actually relevant to lots of other research projects, so we hope these blogs will be useful for anyone involved in research. Cheekily, we also hope that by setting out the ethical challenges we identified and how we have tried to engage with them, we will be able to pick your brains to identify vulnerabilities in our approach and improve our technical and procedural solutions. It is our intention that all of this work will ultimately be made freely available, and we encourage researchers who wish to engage with this area to make use of it when navigating these challenges, and to let us know how they get on.

THE ‘HACKER’ STEREOTYPE AND ACADEMIC SENSIBILITIES TOWARDS RISK

Unsurprisingly, the first set of challenges was thrown up by the dominance of ‘hacker’ as a signifier of the technological criminal. The risks the project conjured in the imaginations of the many who reviewed it are a function of ‘hackers’ being commonly equated with a malicious-tech-savvy-criminal-mastermind. This project rang the alarm bells of every department: it was a harbinger of ‘cyber-attacks’ for information security, the very essence of data protection’s nightmares, and just downright exhausting for the research ethics committee.

Obligatory hacker stock photo – just to be absolutely clear, we are *not* suggesting that the person depicted actually is a malicious-tech-savvy-criminal-mastermind (Photo by Nahel Abdul Hadi on Unsplash)

En garde!: Cyber-attacks and institutional risk-perceptions

The first set of concerns revolved around an increased likelihood that the institution would be attacked, that we as researchers would be attacked, and even that people involved in illegal hacking might use our project as a means through which to attack each other. These attacks might take various forms: attacks on the university network undermining its ability to operate, ransomware attacks on our data or the university more generally, phishing attacks for our credentials, the theft and publication of our data, online harassment or threats, and the doxxing (publishing private information about an individual on the internet) of us or our participants. Attacks of this nature could harm us, the institution, criminology as a discipline, and our participants. Of course, we do not at all dismiss this risk, but rather wish to think critically about its distinctiveness in the context of our project.

First, and as you’ll likely be aware if you’re reading this, universities are already a popular target for illegal cyber-attacks, as a number of recent high-profile events have made abundantly clear. Last year, one of our team, Sarah, received an email from her former university informing her that her alumni information had been exposed in a breach. As the article linked above makes clear, there may be multiple reasons for this exposure, but databases of personal information and intellectual property are likely to be of much greater interest to potential attackers than our research. Projects involving patents or valuable product and process development are a highly desirable target of both malicious state and non-state actors (recently, COVID-19 vaccine research from multiple companies was targeted). This risk is neither unique to our project, nor arguably particularly high when compared with projects of the nature outlined above. Equally, ransomware attacks are likely to continue and increase in frequency irrespective of the projects taking place, and arguably mitigations should be considered by all researchers undertaking research (and work).

However, the cultural construction of ‘hackers’ inevitably framed how the project was understood and assessed from a security, data governance and research ethics perspective, leading it to be scrutinised in a way that other projects, equally at risk of attracting the attention of cyber-attackers, are not. Dismantling these misconceptions is an important step in ensuring research governance processes are attuned to the reality of online risks to our data, institutions and well-being.

Online harassment, researcher safety and selective paternalism

Second, from the perspective of researcher safety, most academics are already routinely targeted by internet-mediated attacks. Phishing and spear-phishing campaigns are an inevitable product of a public-facing role (and freely available email addresses). Ironically, many of these attacks leverage the conditions produced by marketised universities and the pressure contemporary academics find themselves under to be jacks of all trades. For example, your average academic will need to work through tens of emails per day (for some, this will hit three figures), in addition to planning and delivering teaching, marking assignments and conducting research. The means by which all of these tasks are conducted normally involve a large number of bureaucratic processes, including attaching forms, opening attachments, and asking others to process a payment request. Academics also tend to do this for long, intensive periods without breaks or holidays, due to the pressure to do it all. Ultimately, this creates an ideal environment for social engineering attacks, and universities are well aware of this.

Furthermore, scholars have been the targets of harassment and doxxing in various ways for quite some time already, and the internet has simply enabled this further. Various politically and ideologically motivated groups routinely target researchers who undertake research on issues related to gender, race, and ethnicity, and who are typically politically left-leaning (e.g. through doxxing, online harassment, and misinformation). While ethics reviewers often think about how researchers can protect themselves when researching sensitive topics, it is rare that committees or review processes scrutinise how well researchers are protected from harassment, and what resources institutions have in place to support them in the event that it happens. It is only through the provision of robust support that universities and ethics committees can be confident enough to avoid paternalistic tendencies to reject applications on the basis that the research exposes the individuals undertaking it to too great a risk of harm. It will also avoid the alternative approach, where data management teams encourage researchers to create and approve risk assessments in which they must merely accept the risk to themselves as a condition of being able to move forward. Both approaches raise their own ethical problems.

We have had to come up with ways to protect ourselves, our participants and our data. One golden rule is that we will not download any files which are sent to us, nor will we send attachments to any participants. We have uploaded all of our documents in plain text to GitHub, where anyone can access them freely, using whatever privacy technologies they wish to protect their anonymity.
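As an illustration of how this ‘no attachments’ rule can be supported in practice, here is a minimal sketch, in Python, of publishing checksums alongside those plain-text documents, so that participants can verify that what they fetched from GitHub is what we uploaded. The folder and file names are our own illustrative assumptions, not part of the project’s actual tooling.

```python
# A minimal sketch (not the project's published tooling) of generating
# SHA-256 checksums for plain-text participant documents, so that anyone
# who fetches them from GitHub can verify their integrity without any
# attachments changing hands. Paths and names are illustrative.
import hashlib
from pathlib import Path

DOCS_DIR = Path("participant_documents")  # hypothetical local folder

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Print a CHECKSUMS listing that could be committed alongside the docs.
    for doc in sorted(DOCS_DIR.glob("*.txt")):
        print(f"{sha256_of(doc)}  {doc.name}")
```

A participant can then recompute the digest of any downloaded document and compare it against the committed listing, all over whatever anonymising connection they prefer.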

DEVELOPING OUR SOLUTION

We have got a new computer (the source of all the excitement in our first blog post). The aim is that all communication with participants will happen via that computer, which serves a number of privacy and security functions. First, this computer will never be allowed to connect to the University’s IT systems or networks. This airgapping approach insulates our traffic from university IP logging (for reasons discussed in the next blog), and also provides assurances to the security team – a condition of our application’s approval. The PC also runs no university software. Instead, we are using Qubes OS, a fantastic and freely available privacy-oriented operating system that runs containerised Linux/Windows instances as virtual machines. These virtual machines provide us and our participants with an extra layer of security (and very loud fans). To maintain the airgap with our data, our interviews, whether oral or written, will be recorded or dictated using an analogue voice recorder.
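For readers unfamiliar with Qubes OS, the following is a hedged sketch of the kind of compartmentalisation we are describing, written as a small Python wrapper around the qvm-* tools Qubes provides in its administrative domain (dom0). The VM name, template and Whonix routing are illustrative assumptions, not a description of our actual configuration.

```python
# A hedged sketch of Qubes OS compartmentalisation: create a dedicated
# virtual machine for participant communication and route its traffic
# away from institutional networks. Names are illustrative assumptions.
import subprocess

def run(cmd: list[str]) -> None:
    """Run a dom0 command, raising if it fails."""
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Create a dedicated AppVM used only for participant communication.
    run(["qvm-create", "--class", "AppVM",
         "--template", "debian-11",   # assumed template name
         "--label", "red", "comms-vm"])
    # Route its traffic through a Whonix gateway (assuming one is
    # installed), keeping participant communication compartmentalised
    # and away from any university network.
    run(["qvm-prefs", "comms-vm", "netvm", "sys-whonix"])
```

The design point is that a compromise of the communication VM stays contained within it: it can be discarded and rebuilt from its template without touching the rest of the machine.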

Despite having put this rather clunky risk mitigation strategy in place, a pretty obvious attack route remains: us. We will necessarily continue to open hundreds of emails on our work and personal computers each week. Email is the much-disfavoured machinery of academic life, and to suggest any researcher (or institution) could ever rule it out as an attack vector would represent the most basic and fundamental misstep. We have done the training, all of the training (Shane has even written some!). We are already on high alert for phishing and other scams, but ultimately – as all social engineers know all too well – we are only human, which makes us the easiest route into most networks. Similarly, another human-related risk is that IT services fail to update older systems. Of course, all research (and techno-mediated work) involves these risks, and our project is no different in this respect.

Interestingly, phishing and other human-centred security risks were not among the issues raised by the research governance processes, which focused solely on the technical. This common misunderstanding of how attacks against organisations primarily happen (e.g. classically the NHS, or more recently and catastrophically, the Health Service Executive in the Republic of Ireland) is a testament to how the cultural constructions of hacking and cybercrime not only misrepresent hacking as ‘bad’, but misrepresent the means by which computer-dependent crimes are committed as involving substantial technical expertise. This distracts from the reality: overburdened IT services unable to update older systems, and overburdened academics opening attachments or clicking links without thinking. These misrepresentations combine to produce a situation that is fundamentally damaging to our collective cybersecurity.

*A reminder: if you are interested in taking part in our project, please take a look at the documents on our GitHub site, and contact us on our ProtonMail email address: GoingAFK@protonmail.com (please do not use our university email addresses).
