The brave new world of research IT
Almost every organization today is grappling with the pressure to innovate at pace with data and AI, while carefully managing operational, security and compliance risks. This is doubly true for research institutions, where competition is fierce, the use of sensitive data is prevalent, and the value of a breakthrough is potentially huge.
The risk of work being stolen or sabotaged is greater than ever due to intensifying geopolitical rivalries and the proliferation of generative artificial intelligence. AI offers malicious actors unprecedented power to probe defenses, impersonate authorities and code malware at speed and scale. The numbers bear this out, with as many as 97% of higher education establishments in the UK experiencing a cyber-attack in the past three years.
As well as empowering malicious actors to obtain data, technology is also democratizing the means to misuse it. The contents of a database can just as easily be analyzed by AI to create a devastating bioweapon as to develop a lifesaving vaccine. And with quantum computers on the horizon, traditional encryption is no deterrent: attackers can harvest encrypted data today and decrypt it in the future.
Just as prevalent as external threats is the risk of data loss caused by well-intentioned, authorized users. The promise of increased productivity may tempt under-pressure researchers to use insecure AI tools and hardware. If data isn’t locked down, or if sanctioned tools aren’t adequate, there’s a very real risk of users uploading sensitive files to public models or to unprotected personal devices running the likes of OpenClaw.
In short, research worldwide is facing a multitude of digital threats, many of which didn’t even exist five years ago. It’s therefore no surprise that regulators and funding bodies are demanding higher security standards across the board. The consequences of inaction are not limited to stalled progress in individual projects; they extend to significant financial penalties and reputational damage that will harm institutional competitiveness.
For example, in Europe, non-compliance with the GDPR and the EU AI Act can lead to fines of up to €35 million, while the NIS2 Directive treats research entities as critical infrastructure and can hold leadership personally liable for failures. In the USA, failing to meet the CMMC framework bars institutions from Department of Defense funding. Similarly, violating NIH and HIPAA policies can result in rejected grants, paused clinical trials, and withheld future funding.