Progress Diary

The first hurdle(s): navigating the institutional OS (part 1)

Over the course of the next two blogs we will attempt to set out some of the challenges we encountered in ‘setting up the project’. Issues of ethics and research integrity, information security and data protection became a Penrose staircase – requiring careful, circuitous, and imaginative navigation. In this first blog, we attempt to unpack how cultural representations of hackers shaped the research governance process, and ultimately led us to develop a somewhat unwieldy solution to imagined attackers. The second blog will explore the other side, looking at how perceptions of state agencies, their interest in us and our data, and their powers of surveillance also shaped the way we designed our research.

Although all criminological research is ethically complex in its own way, our research presented – and was seen by others to present – a number of distinctive challenges. As we will go on to explain here, we think that many of the challenges which initially seem fairly distinctive to this project are actually relevant to lots of other research projects. So we hope these blogs will be useful for anyone involved in research. Cheekily, we also hope that by setting out the ethical challenges we identified and how we have tried to engage with them, we will be able to pick your brains to identify vulnerabilities in our approach and improve our technical and procedural solutions. It is our intention that all of this work will ultimately be made freely available, and we encourage researchers who wish to engage with this area to make use of it when navigating these challenges, and to let us know how they get on.


Unsurprisingly, the first set of challenges was thrown up by the dominance of ‘hacker’ as a signifier of the technological criminal. The risks the project conjured in the imaginations of the many who reviewed it are a function of ‘hackers’ being commonly equated with a malicious-tech-savvy-criminal-mastermind. This project rang the alarm bells of every department; it was a harbinger of ‘cyber-attacks’ for information security, the very essence of data protection’s nightmares, and just downright exhausting for the research ethics committee.

Obligatory hacker stock photo – just to be absolutely clear that we are *not* suggesting that the person depicted actually is a malicious-tech-savvy-criminal-mastermind (Photo by Nahel Abdul Hadi on Unsplash)

En garde!: Cyber-attacks and institutional risk perceptions

The first set of concerns revolved around an increased likelihood that the institution would be attacked, that we as researchers would be attacked, and even that people involved in illegal hacking might use our project as a means through which to attack each other. These attacks might take various forms: attacks on the university network undermining its ability to operate, ransomware attacks on our data or the university more generally, phishing attacks for our credentials, stealing and publishing our data, online harassment or threats, and doxxing us or our participants (publishing private information about an individual on the internet). Attacks of this nature may cause harm to us, the institution, criminology as a discipline, and our participants. Of course, we do not at all dismiss this as a risk, but rather wish to think critically about its distinctiveness in the context of our project.

First, and as you’ll likely be aware if you’re reading this, universities are already a popular target for illegal cyber-attacks. A number of recent high-profile events have made this abundantly clear. Last year, one of our team, Sarah, received an email from her former university, informing her that her alumni information had been exposed in a breach. As the article linked above makes clear, there may be multiple reasons for this exposure, but databases of personal information and intellectual property are likely to be of much greater interest to potential hackers than our research. Projects involving patents or valuable product and process development are a highly desirable target of both malicious state and non-state actors (recently, COVID-19 vaccine research from multiple companies was targeted). This risk is neither unique nor, arguably, particularly high when compared with projects of the nature outlined above. Equally, ransomware attacks are likely to continue and increase in frequency irrespective of the projects taking place, and arguably mitigations should be considered by all researchers undertaking research (and work). However, the cultural construction of ‘hackers’ inevitably framed how the project was understood and assessed from a security, data governance and research ethics perspective, which led it to be scrutinised in a way that other projects, equally at risk of attracting the attention of cyber-attackers, are not. Dismantling these misconceptions is an important step in ensuring research governance processes are attuned to the reality of online risks to our data, institutions and well-being.

Online harassment, researcher safety and selective paternalism

Second, from the perspective of researcher safety, most academics are already routinely targeted by internet-mediated attacks. Phishing and spear-phishing campaigns are an inevitable product of a public-facing role (and freely available email addresses). Ironically, many of these attacks leverage and capitalise on the conditions produced by marketised universities, and the pressure contemporary academics find themselves under to be a jack of all trades. For example, your average academic will need to work through tens of emails per day (for some, this will hit three figures), in addition to planning and delivering teaching, marking assignments and conducting research. The means by which all of these tasks are conducted normally involve a large number of bureaucratic processes, including attaching forms, opening attachments, and asking others to process a payment or payment request. They also tend to do this for long, intensive periods without breaks or holidays, due to the pressure to do it all. Ultimately, this creates an ideal environment for social engineering attacks, and universities are well aware of this.

Furthermore, scholars have been the targets of harassment and doxxing in various ways for quite some time already, and the internet has simply enabled this further. Various politically and ideologically motivated groups routinely target researchers (e.g. through doxxing, online harassment and misinformation) who undertake research on issues related to gender, race and ethnicity, and who are typically politically left-leaning. While ethics reviewers often think about how researchers can protect themselves when working on sensitive topics, it is rare that committees or review processes scrutinise how well researchers are protected from harassment, and what resources institutions have in place to support them in the event that this happens. It is only through the provision of robust support that universities and ethics committees can be confident enough to avoid paternalistic tendencies to reject applications on the basis that the research exposes the individuals undertaking it to too great a risk of harm. Robust support would also avoid the alternative approach, where data management teams encourage researchers to create and approve risk assessments in which they must simply accept the risk to themselves as a condition of being able to move forward. Both approaches raise their own ethical problems.

We have had to come up with ways to protect ourselves, our participants and our data. One golden rule is that we will not download any files sent to us, nor will we send attachments to any participants. We have uploaded all of our documents in plaintext to GitHub, where anyone can access them freely, using whatever privacy technologies they wish to protect their anonymity.


We have got a new computer (the source of all the excitement in our first blog post). The aim is that all communication with participants will happen via that computer. This serves a number of privacy and security functions. First, this computer will never be allowed to connect to the University’s IT systems or networks. This airgapping approach insulates our traffic from university IP logging (for reasons discussed in the next blog), and also provides assurances to the security team, a condition of our application’s approval. This PC is also not running any university software. We are using Qubes OS, a fantastic and freely available privacy-oriented operating system that compartmentalises activities into separate virtual machines (running Linux or Windows). These virtual machines provide us and our participants an extra layer of security (and very loud fans). To maintain the airgap with our data, our interviews, whether oral or written, will be recorded or dictated using an analogue voice recorder.
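For readers unfamiliar with Qubes OS: compartmentalisation works by splitting activities across isolated ‘qubes’ (Xen-based virtual machines), which are created and configured from the administrative domain (dom0). As a rough sketch of the kind of setup we mean – the qube name, template and Tor gateway here are purely illustrative, not our actual configuration:

```shell
# Run in dom0. Create a dedicated qube for participant communications,
# isolated from everything else on the machine.
qvm-create --class AppVM --template fedora-36 --label red participant-comms

# Route its traffic through the Whonix (Tor) gateway instead of the
# default network qube (assumes sys-whonix is installed).
qvm-prefs participant-comms netvm sys-whonix
```

If that qube is ever compromised, the damage is contained to it rather than spreading to the rest of the machine, which is the main appeal of this approach for research of this kind.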

Despite having put this rather clunky risk mitigation strategy in place, a pretty obvious attack route remains: us. We will necessarily continue to open hundreds of emails on our work and personal computers each week. It is the much-disfavoured machinery of academic life, and to suggest any researcher (or institution) could ever rule it out as an attack vector would represent the most basic and fundamental misstep. We have done the training, all of the training (Shane has even written some!). We are already on high alert for phishing and other scams, but ultimately – as all social engineers know all too well – we are only human, which makes us the easiest route into most networks. Similarly, another human-related risk is that IT services fail to update older systems. Of course, all research (and techno-mediated work) involves these risks, and our project is no different in this respect. Interestingly, phishing and other human-centred security risks were not among the issues raised by the research governance processes, which focused solely on the technical. This is a common misunderstanding of how attacks against organisations primarily happen (e.g. classically the NHS, or more recently and catastrophically, the Health Service Executive in the Republic of Ireland). It is a testament to how the cultural constructions of hacking and cybercrime not only misrepresent hacking as ‘bad’, but also misrepresent the means by which computer-dependent crimes are committed as involving substantial technical expertise. This distracts from the reality: overburdened IT services unable to update older systems, and overburdened academics opening attachments or clicking links without thinking. These misrepresentations combine to produce a situation that is fundamentally damaging to our collective cybersecurity.

*A reminder: if you are interested in taking part in our project, please take a look at the documents on our github site, and contact us on our protonmail email address: (please do not use our university email addresses).

Progress Diary

-bash: ssh GoingAFK@researchatlast!

Dr Sarah Anderson and Dr Shane Horgan 


GoingAFK:~ ShaneandSarah$ Open

Last week was a big week for us. After years of planning (literally!), the final preparation for our research project into people’s moves away from illegal “hacking” (more on this term later) is in place: we have a computer! It looked as if it might not happen when our enthusiastic team member, Shane, didn’t check the IT desk opening times, but we made it. Getting a computer shouldn’t have been as much of a problem as it has been, but a global pandemic has made relatively easy tasks complicated, in this case leading to an international shortage of IT equipment. This isn’t the only way that the pandemic has thrown a spanner in the works of this project (more on this later as well).

Selfie of smiling researcher carrying laptop bag
The acquisition of the laptop.

We are starting this blog to chart our progress, the ups and downs (of which there is already some catching up to do), and to start a conversation with you about the issues we are grappling with. These issues range from the conceptual and definitional to the methodological, technical and ethical. One of the aims of our project is to make our approach to negotiating and managing these complexities and questions ‘open source’, with the underlying aim of enabling future researchers to tackle them more easily. This post represents a first step.

The Pretext

Some background… The project started with a conversation between two friends in a pub. We have now decided that pubs might just be our most creative work environment. Shane is interested in all things cybercrime-related and has done research into how different groups and organisations routinely (do and don’t) protect themselves from cyber threats. At that time he was designing a new sociological course on cybercrime. Sarah’s recent work had explored something known as ‘desistance’ from crime. Broadly, this means the process by which people move away from involvement in criminal offending. There is a lot of research in this area, but so far, most of it has been with people involved in offending IRL (drug crime, violence, burglary, etc.).

We got thinking about whether or not existing theories about this process would stand up when applied in a totally different context, for example, illegal forms of hacking. One theory suggests that important ‘turning points’ in someone’s life, such as getting a job or getting married, help explain why people move away from offending – in part because they are too busy doing other things in other places. But people with IT skills might be sat at their computer at work, so potentially still have the opportunity to keep doing what they were doing. Equally, what might be deemed illegal hacking in one context might be perfectly legal and encouraged in another. Another theory focuses on shifts in people’s identity, where someone starts to see themselves as a law-abiding person committed to ‘pro-social’ values. But from what we knew, many people involved in hacking already have values that could be regarded as pro-social (even if they are not always pro-corporate!). This got us thinking about the extent to which moves away from illegal forms of hacking involve submitting to dominant (neoliberal? political? ideological?) values, and of course whether that’s ultimately what ‘desistance’ means more generally.

Bugs and vulnerabilities 

Since then we have been developing this project. But even basic things have proved to be difficult. To start with, we have kept coming back to one pretty crucial question: what are we even talking about? This is because each of the terms in our research question – ‘desistance’, ‘illegal’, ‘hacking’ – is problematic in its own way. Let’s start with the term ‘hacking’.


‘Hacker’ has become synonymous with ‘criminal’ (no thanks to the media and some criminologists). But as many have been at pains to point out, the term hacking covers a wide range of different activities and ‘craft’ (Steinmetz, 2016). Therefore, how we conceptualise and understand ‘hacking’ from the outset of our project has huge implications for the final image of hacking careers that we will eventually be able to decipher. Hacking often (but not always) refers to highly skilled work (paid and unpaid), some of which has historically been pretty critical to the development of the Internet, its security, our privacy, and our way of life more generally. For now, we have added the term ‘illegal’ in front, to show that it is forms of hacking that are (or at least can be) criminalised that we are interested in. But this still presents problems.

To start with, some legislation has been pretty poorly defined, and many of those who engage in practices that are ‘illegal’ are still actively working towards improving cyber-security – for example, independent security researchers exploring and cataloguing malware. In other words, some of those who are technically involved in breaking the law might still be termed ‘the good guys’. At the same time, state-led hacking practices that involve the hoarding of 0-day vulnerabilities operate with pseudo-legality, despite presenting a substantial risk to the collective security of society online. Overall, when subjected to more careful scrutiny, ‘legal’ and ‘illegal’ are tenuous categories in the context of our research, which introduce as many problems as they solve. We tried adding the term ‘malicious’ in front too, but that term is also pretty subjective. Malicious according to whom? People rarely describe their own activities as malicious, and what a company or government views as malicious, another person may view as altruistic (or vice versa).


Another problem was with ‘desistance’. There are lots of debates in the literature about how you determine whether someone has actually desisted from crime, and when someone counts as having really stopped (one pessimistic perspective is that you can only fully evidence desistance when you are dead!). In addition to these debates, this topic presents additional headaches: e.g. the diverse range of practices covered by the term ‘hacking’, and the fact that the legality (or not) of the practices may rest to a large extent on the contexts in which you are engaged in them, on whose behalf, and how these are viewed by (which) government. So you see our problem. One day we are going to write a paper on just this (one day… the road to hell for academics is paved with half-planned, semi-drafted papers).

The next step was planning the project and getting someone to fund us. In this project, we want to explore how hacking careers change over time and how hacking practices and hacker communities fit into people’s lives, across their life course. To explore these issues, we want to securely and ethically collect the life stories of people who have been involved in illegal forms of hacking. We managed to persuade the lovely people at the Carnegie Trust to pay for us to fly all over the world to hacker conferences (DEFCON and CCC) where we could try and build relationships and find people who might generously be willing to share their stories with us.

At the end of January 2020, soon after we were awarded funding, Sarah and Shane met to celebrate, and plan the next steps. We even did a risk assessment, where we jokingly included ‘Global pandemic – no international travel – no conferences – total replan necessary’. You know the rest…

methodology> bash -x [Negotiating Risk and Representations]

Since then we have been busy redesigning the project and navigating the University ethics process. We have come a long way, but we are still trying to find and think of new ways to build those relationships, and are always on the lookout for people, forums, and organisations who might be able to provide a way in (ideas welcome!). Our project documents can all be found on our GitHub page:* 

The ethics process has also presented multiple hurdles, given the sensitivities of the project, the data being collected, and the fact that the criminal stereotype of the ‘hacker’ (rightly or wrongly) now rings alarm bells with lots of different university departments! We also have a half-written paper on this, which we are hoping to present at the Human Factor in Cybercrime conference later this year. It definitely deserves a blog in itself, so we will come back to this in our next entry….

For now though, thanks for your interest in our project. We are just the right mixture of nervous and excited, and will let you know how we get on. Talk soon. 

*If you are interested in taking part in our project, please do not contact us on our university email addresses. To help us protect your anonymity, please contact us on our project’s protonmail account:

GoingAFK:~ ShaneandSarah$  exit 
Saving session…
…copying shared history…
…saving history…truncating history files…
Deleting expired sessions…1 completed.
[Process completed]