The cybersecurity of medical devices holds a particular fascination. The hit TV series Homeland devoted a 2012 episode to the hacking of the Vice President’s pacemaker – a fiction following factual revelations that Dick Cheney’s heart defibrillator was modified to prevent terrorists using it in an attempt on his life. Several years on, with momentum in implant innovation accelerating, potent new weapons in the cybersecurity arsenal are emerging.


In this article, I’m going to explore some of the approaches currently being used – or considered – to address concerns over the cybersecurity of implanted medical devices. I’ll also unpack the potential application of Rust, a relatively new (in comparison to C and C++) systems programming language. But first let’s set the scene with some background.

Implants are a class of device with conflicting requirements: safety-critical operation in a highly resource-constrained environment, combined with user-friendly and secure access for monitoring and therapy adjustment. Balancing these concerns has sometimes come at the cost of providing little in the way of detection or prevention of unauthorized or malicious access to the implant, or to the information it sends and receives.

Unauthorized access could allow, in the worst case, modifications of operating parameters that expose the implant user to safety risks. In less severe scenarios, the user's safety is not put at risk, but personally identifiable information could be exposed. This would violate the user's privacy and laws regarding the protection of such information, such as HIPAA (the Health Insurance Portability and Accountability Act).

Less obvious threats include Denial-of-Service (DoS) attacks, which seek to prevent legitimate remote communications with the implant. Attackers might also generate such abnormally high levels of communication activity that the implant’s battery is drained faster than usual.

Close proximity defense

Enforcing close-range communication is a well-used security approach. It limits the set of possible attacks by requiring the attacker to be in close physical proximity to the victim. At first glance, this seems like an easy thing to do. However, many short-range wireless protocols have been shown to be vulnerable to range-extension (relay) attacks. One method for mitigating this problem combines short-range radios with biometrics that require continuous proximity to the patient.
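As a concrete (and deliberately simplified) sketch of the ‘continuous proximity’ idea, the Rust snippet below keeps a programming session authorized only while fresh proximity evidence keeps arriving. The type and method names are illustrative, not any real device API.

```rust
use std::time::{Duration, Instant};

/// Sketch of a "continuous proximity" session guard: the link stays
/// authorized only while fresh proximity evidence (e.g. a biometric or
/// near-field heartbeat) keeps arriving. All names are illustrative.
struct ProximitySession {
    last_evidence: Instant,
    timeout: Duration,
}

impl ProximitySession {
    fn new(timeout: Duration) -> Self {
        ProximitySession { last_evidence: Instant::now(), timeout }
    }

    /// Called whenever the short-range channel confirms the peer is close.
    fn record_evidence(&mut self) {
        self.last_evidence = Instant::now();
    }

    /// Commands are honoured only while the evidence is fresh; a relayed
    /// long-range link that cannot produce fresh evidence is locked out.
    fn is_authorized(&self) -> bool {
        self.last_evidence.elapsed() < self.timeout
    }
}

fn main() {
    let mut session = ProximitySession::new(Duration::from_millis(50));
    assert!(session.is_authorized());
    std::thread::sleep(Duration::from_millis(80));
    assert!(!session.is_authorized()); // evidence went stale: drop the link
    session.record_evidence();
    assert!(session.is_authorized());
    println!("session gated on continuous proximity evidence");
}
```

The key design point is that authorization decays by default; the attacker must continuously defeat the proximity check, not just once at session setup.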

Another useful method is cryptography. Correct application of cryptographic tools can dramatically reduce the attack surface. If the device only responds to authenticated commands, the attacker must obtain the authentication token or circumvent the authentication mechanisms. This reduces the problem of securing the entire command set to securing the keys, which is generally more tractable. Unfortunately, many effective cryptosystems, particularly asymmetric ones, require power-intensive computation. This is undesirable for implants.
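To illustrate the authenticate-then-execute pattern described above (not any vendor’s actual protocol), here is a minimal Rust sketch. Note that `toy_mac` is a stand-in keyed checksum chosen for readability only; a real device would use a vetted MAC such as HMAC-SHA-256 or CMAC.

```rust
/// Placeholder keyed digest (FNV-1a over key || message).
/// NOT cryptographically secure -- for structural illustration only.
fn toy_mac(key: &[u8], msg: &[u8]) -> u64 {
    let mut acc: u64 = 0xcbf29ce484222325; // FNV offset basis
    for &b in key.iter().chain(msg) {
        acc ^= b as u64;
        acc = acc.wrapping_mul(0x100000001b3); // FNV prime
    }
    acc
}

#[derive(Debug, PartialEq)]
enum CommandError {
    BadTag,
}

/// Execute a parameter update only if the attached tag verifies under the
/// shared key. Unauthenticated traffic is rejected up front, shrinking the
/// problem of securing the command set to the problem of securing the key.
fn handle_command(key: &[u8], payload: &[u8], tag: u64) -> Result<(), CommandError> {
    if toy_mac(key, payload) != tag {
        return Err(CommandError::BadTag);
    }
    // ... apply the (now authenticated) therapy adjustment here ...
    Ok(())
}

fn main() {
    let key = b"shared-secret";
    let cmd = b"set_rate=72";
    let tag = toy_mac(key, cmd);
    assert!(handle_command(key, cmd, tag).is_ok());
    // A tampered command fails verification and is rejected.
    assert_eq!(handle_command(key, b"set_rate=200", tag), Err(CommandError::BadTag));
    println!("authenticated command accepted, forged command rejected");
}
```

Because the scheme is symmetric, verification costs one keyed hash per message, which is far cheaper in power terms than asymmetric signature checks.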

When it comes to external devices, one interesting proposal is a companion device that prevents unauthorized access to the implant while it is present, but allows access in emergencies. Such a device could use a shared key to communicate with the implant, minimizing the power consumed by encryption.

Now let’s turn to battery constraint mitigation. Battery-intensive operations must not be accessible to attackers in a way that can drain the battery. External or kinetic charging mechanisms can mitigate this. Careful choice of communication protocols and encryption methods is also important to prevent denial-of-service attacks that drain the battery.
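One common way to bound the work an attacker can trigger is to place a rate limiter, such as a token bucket, in front of battery-intensive operations. The sketch below is a generic illustration rather than a real implant firmware design.

```rust
use std::time::Instant;

/// Minimal token-bucket rate limiter (a sketch, not production code).
/// The radio handler consults it before doing battery-intensive work,
/// so a flood of requests cannot force continuous processing.
struct TokenBucket {
    capacity: u32,
    tokens: f64,
    refill_per_sec: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(capacity: u32, refill_per_sec: f64) -> Self {
        TokenBucket { capacity, tokens: capacity as f64, refill_per_sec, last: Instant::now() }
    }

    /// Returns true if the request may proceed, false if it should be dropped.
    fn allow(&mut self) -> bool {
        let now = Instant::now();
        // Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = (self.tokens
            + now.duration_since(self.last).as_secs_f64() * self.refill_per_sec)
            .min(self.capacity as f64);
        self.last = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // Allow a burst of 3 requests, refilling one token per second.
    let mut bucket = TokenBucket::new(3, 1.0);
    let accepted = (0..10).filter(|_| bucket.allow()).count();
    // However many requests arrive back-to-back, the work done is bounded.
    assert!(accepted >= 3 && accepted <= 4);
    println!("accepted {} of 10 back-to-back requests", accepted);
}
```

The refill rate effectively caps the average power an attacker can make the device spend on communications, independent of how aggressively they flood the link.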

These approaches all share a common property: they affect the architecture of the whole system and rely on interface definitions and policies to close off the avenues through which most cyberattacks take place. But there still exists the possibility that, due to the choice of software implementation language in one or more parts of the system, defects may remain that allow cyberattacks to succeed.

Attack entry points

Some classes of software defects, such as out-of-bounds memory access and race conditions, act as entry points for some cybersecurity attacks. These defects can be present in the implant's own software or in an external device involved in the chain of communication with the implant. Implant software is typically written in languages like C and C++, which require a robust software development process and significant amounts of review, analysis, and testing to detect these types of defects.

Enter the Rust programming language. One of its core principles is enforcing safety guarantees so that, outside explicitly marked ‘unsafe’ code, it’s impossible to build a program with the types of defects I’ve discussed here, and it does so without requiring power- and space-intensive features like garbage collection.

Could these features of Rust be used to address cybersecurity threats in a different way? Could choosing Rust as the implementation language provide ‘inherently safe design by architecture features’ as set out in IEC/TR 80002? Certainly, Rust combats many classes of programming errors associated with cyberattacks at compile time rather than at run time. It prevents out-of-bounds memory accesses through a combination of compile-time checks and guaranteed run-time bounds checking, and it avoids the typical problems with null pointers by largely removing them from the language in favour of the explicit Option type. Further, it uses a borrow checker to help prevent:

  • Data races
  • Errant pointers
  • Use of uninitialized data
  • Unintended aliasing
  • Use after free
  • Iterator invalidation
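A couple of these protections are easy to demonstrate. In the hypothetical sketch below, a fallible lookup returns `Option` instead of a null pointer, slice access is bounds-checked, and the commented-out lines show the kind of aliasing the borrow checker rejects outright.

```rust
/// A lookup that can fail returns Option<T> instead of a possibly-null
/// pointer; the compiler forces every caller to handle the None case.
fn nth_reading(readings: &[u16], i: usize) -> Option<u16> {
    // `get` is bounds-checked: an out-of-range index yields None rather
    // than silently reading adjacent memory (a classic attack entry point).
    readings.get(i).copied()
}

fn main() {
    let readings = vec![62, 64, 61];
    match nth_reading(&readings, 3) {
        Some(v) => println!("reading: {}", v),
        None => println!("no such reading"), // handled explicitly, no corruption
    }

    // The borrow checker rejects unintended aliasing at compile time.
    // For example, this would NOT compile if uncommented:
    //
    //     let mut log = vec![1u16];
    //     let first = &log[0];   // shared borrow is live...
    //     log.push(2);           // ...so this mutation is rejected
    //     println!("{}", first);
}
```

In C or C++, the commented-out pattern compiles cleanly and can leave `first` dangling after a reallocation; in Rust it is a hard compile error.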

Potential architectural benefits

Rust may provide an architectural benefit of lower system complexity by achieving safety-critical segregation through its safety guarantees. Traditional approaches to segregating safety-critical software involve putting it on a separate processor or isolating it as separate processes executing within an operating system that uses hardware-based memory protection. In either case, hardware is used to perform the segregation.

By providing guarantees about compile-time detection of programming errors that could allow non-segregated software components to adversely affect each other, it could be argued that the use of Rust provides segregation of software modules without physical processor or memory boundaries. This might deliver some of the same benefits of hardware-based segregation (preventing corruption of data flow, control flow, and the execution environment) without some of the associated costs.
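As a small illustration of language-enforced segregation (with made-up names, not a real device API), the module below keeps its safety-critical state private. In safe Rust, the only way to modify that state is through a range-checked method: the compiler, rather than an MMU or a second processor, enforces the boundary.

```rust
// Sketch: the pacing parameters live inside a module whose internals are
// private. In safe Rust no other component can obtain a stray pointer into
// them, so the module boundary is enforced by the compiler itself.
// (`PacingParams` and its limits are illustrative, not a real device API.)
mod therapy {
    pub struct PacingParams {
        rate_bpm: u16, // private: unreachable from outside this module
    }

    impl PacingParams {
        pub fn new() -> Self {
            PacingParams { rate_bpm: 60 }
        }

        pub fn rate_bpm(&self) -> u16 {
            self.rate_bpm
        }

        /// The only mutation path, with range checking at the boundary.
        pub fn set_rate_bpm(&mut self, rate: u16) -> Result<(), &'static str> {
            if (30..=180).contains(&rate) {
                self.rate_bpm = rate;
                Ok(())
            } else {
                Err("rate outside safe range")
            }
        }
    }
}

fn main() {
    let mut params = therapy::PacingParams::new();
    assert!(params.set_rate_bpm(72).is_ok());
    assert!(params.set_rate_bpm(500).is_err()); // rejected at the boundary
    assert_eq!(params.rate_bpm(), 72);
    // `params.rate_bpm = 500;` would not compile: the field is private.
    println!("pacing rate: {} bpm", params.rate_bpm());
}
```

In C, nothing stops another module from scribbling over this structure through a wild pointer; here, any such access outside `unsafe` code is a compile error.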

Even if higher-order architectural benefits are not realizable, using Rust over ‘traditional’ implementation languages like C or C++ can be advantageous for other reasons. Having better compile-time safety checks means that fewer bugs make it to testing, which leads to faster and higher-quality development. The higher level of abstraction provided by Rust enables more programmer productivity.

Commercial adoption of Rust

Rust has seen continued growth and adoption year-on-year in the software development community. Apart from the large number of projects and companies that use Rust, major industry players such as Microsoft and Amazon have started to investigate incorporating it into their key products and offerings. There is also interest in the Linux community to enable the use of Rust for the development of kernel modules.

But here’s the thing. Rust is not yet ready to be used in safety-critical applications like medical implants. We’ve talked about several safety claims made for Rust, but not about verification of those claims. The RustBelt project has done work toward that end. Even given a formal verification of Rust’s safety claims, there’s still a need for certified tools. Those don’t exist yet, but the Sealed Rust project is working on them.

Once Rust makes headway in these areas, early adopters may still find it burdensome to be the first to adopt a new technology in a regulated space. That’s especially true if it is used to make new arguments about architectural concerns or software safety that regulators will have to digest. Cambridge Consultants is well positioned to mitigate this potential risk through our combined experience of adopting new technology and successful regulatory submissions. We’ve also had plenty of development experience and involvement with the use of Rust on embedded platforms. If you'd like to talk through the topic in more detail, email me or my colleague and co-author Nathan Michaels.


Ryan Chaves
Associate Chief Engineer - Systems & Software Engineering, Medical Technology

Ryan is an Associate Chief Engineer of Systems & Software in Cambridge Consultants’ Medical Technology Division. He has over 18 years of experience in safety-critical software design across multiple regulated industries including medical and automotive.