As we enter 2021, the drive to adopt remote monitoring telemedicine techniques continues to gather pace. With the effects of the pandemic still reverberating around the world, healthcare providers are increasingly turning to a remote model to reduce infection risks. The benefits are clear, but it’s important to consider potential concerns – not least the veracity of data collected by people who are not clinically trained. 

Already in this series of articles my colleagues have explored the value proposition, patient-centric design, security and system architecture of telemedicine systems. Now I want to examine some of the issues around measurement and data collection, including that issue of quality. Let’s start with the basics, and the dedicated devices that are used to gather relevant physiological data from the patient. That data will include vital signs like heart rate and blood pressure, as well as wider contextual information about behaviour or therapy compliance. 

Depending on the monitoring requirements, the need for continuous measurement may demand a wearable device approach – as with continuous heart rate monitoring, for example. Alternatively, periodic measurements that can be performed at specific times by the patient using a more traditional measurement device – taking a blood pressure reading for instance – may be sufficient. 

Telemedicine and remote monitoring are currently used after a clinical visit, tracking recovery from a surgical intervention perhaps, or when managing chronic conditions like diabetes. In the first case, patients would typically be kept in the clinical setting for observation for a short period and then discharged, subject to periodic visits to their doctor to check progress. Remote monitoring allows earlier patient discharge and better follow-up tracking as data can be measured more often than, say, a weekly visit. In the second case, remote monitoring again offers opportunities to gather more data at finer time resolution and in a real-world setting, rather than in the artificial clinical setting.

Patient-controlled equipment

Additionally, COVID-19 is driving healthcare providers towards this model: reducing time spent in hospital and replacing in-person follow-up visits lowers the risk of patients picking up infections beyond their original condition. But this means that measurements are no longer being made by trained clinical staff using well-understood equipment, where a level of sanity checking can be applied to the collected data. Instead, they are made using novel remote monitoring equipment controlled by a patient who may not fully understand what is being measured or why. This is what sparks worries over the quality of the data being collected.

Two vital factors apply to data quality: is it accurate – or at least sufficiently accurate to allow sensible decision-making – and has it been gathered reliably? By that I mean: are measurements happening when they should, and are all measurements of similar accuracy? But data quality is only one factor in the many trade-offs that need to be made in developing a new monitoring device.

To explore them in more detail, let's use an example from real life. As a keen orienteer and runner, I use a GPS watch to track my training. The particular watch I have includes a heart rate monitor function, which is now broken, so I'm considering replacement options. While not strictly a medical device, I think the options open to me highlight the data quality question well. I could stick with my current watch and accept the reduced functionality of not recording heart rate – very low data quality! Or I could invest in a new watch. But do I opt for a chest strap heart rate monitor or a wrist-based monitor?

With the chest strap measurement, I'm confident of heart rate accuracy but to achieve it I have to wear extra equipment. I have to remember to pack it in my kit bag, I have to put it on and position it correctly, and I have to wet the electrodes to get a good measurement. Whereas, with the wrist-based option, it's all part of the one device and I can easily wear it all day, not just when training.

Measurement unreliability

But this option suffers from unreliability in measurement – for example from the motion of the sensor over my wrist while running or sweat introducing measurement artefacts. Also, it is measuring a proxy for heart rate rather than making a direct measurement as with the chest strap. This reduces my confidence in the accuracy of the measurement. Having said that, sometimes it is significantly easier to measure a proxy for the desired data, possibly combining two or more measurements to improve confidence.
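To make that idea of a proxy concrete, here is a minimal sketch of what estimating heart rate from a proxy signal can look like: counting pulse peaks in an optical-style waveform. The signal, sample rate and threshold below are illustrative assumptions of mine, not the algorithm any particular watch uses.

```python
import math

# Illustrative sketch: estimating heart rate from a proxy signal
# (an optical-style pulse waveform) by counting peaks. The signal,
# sample rate and threshold are assumptions for illustration, not the
# algorithm used by any particular device.

def estimate_heart_rate(samples, sample_rate_hz, threshold=0.5, min_gap_s=0.4):
    """Count peaks above a threshold, enforcing a minimum gap between beats,
    then convert the beat count into beats per minute."""
    min_gap = int(min_gap_s * sample_rate_hz)
    peaks = []
    for i in range(1, len(samples) - 1):
        is_peak = samples[i] > threshold and samples[i - 1] < samples[i] >= samples[i + 1]
        if is_peak and (not peaks or i - peaks[-1] >= min_gap):
            peaks.append(i)
    duration_s = len(samples) / sample_rate_hz
    return 60.0 * len(peaks) / duration_s

# A clean synthetic pulse at 60 bpm, sampled at 25 Hz, should come out at ~60.
signal = [math.sin(2 * math.pi * t / 25) for t in range(25 * 10)]
print(round(estimate_heart_rate(signal, 25)))  # ~60
```

With a clean signal this works nicely; with motion artefacts or sweat the peaks blur and the estimate drifts, which is exactly the reliability problem described above.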

So, I have two sensing methods to choose between, both of which should give me heart rate data. I get better data quality from the chest strap as it is more reliable, suffering from fewer measurement issues. The wrist option is lower quality data but is easier to use and doesn't require me to remember a second piece of kit.

While this example focuses on the data quality versus usability trade-off, there are many other competing factors to consider in the development of a remote monitoring device: data quality versus power consumption, for example, or power consumption versus battery size versus how often the device requires charging.

As well as the data quality concerns, there are also considerations of data ownership, data security, data storage, reliable communication, and how data gathered by one device fits into the overall ecosystem. Then, with medical devices, we need to consider the regulatory environment as well. 

All these considerations apply not only to the device design, but also to the overall system. Both the device and the system architecture must address factors such as how data compression and encoding might affect data quality, or the impact of data loss in communication between parts of the system, and how these choices ripple through the rest of the design. There are many ways to address such issues, but they all have to be traded against other aspects of the system; for example, a more robust communication link will likely require more power to increase the signal level or handle retransmission attempts.
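To make the compression trade-off tangible, here is a small sketch of my own: aggressively downsampling a heart rate trace before transmission saves bandwidth, but a short-lived peak between the kept samples can disappear. The data and the scheme are purely illustrative.

```python
# Minimal sketch of a compression-versus-quality trade-off: sending only
# every Nth heart rate sample saves bandwidth, but a brief spike between
# the kept samples can disappear. Data and scheme are illustrative only.

def downsample(samples, keep_every):
    """Keep every Nth sample - a crude form of lossy compression."""
    return samples[::keep_every]

heart_rate = [72, 73, 72, 74, 75, 110, 108, 75, 72, 74, 73, 72]  # brief spike

for keep_every in (1, 2, 4):
    sent = downsample(heart_rate, keep_every)
    print(f"keep every {keep_every}: {len(sent)} samples sent, max seen = {max(sent)}")
# keep every 1: 12 samples sent, max seen = 110  (spike preserved)
# keep every 2: 6 samples sent, max seen = 108
# keep every 4: 3 samples sent, max seen = 75   (spike lost entirely)
```

Whether that lost spike matters depends entirely on what the downstream analysis needs, which is why these decisions have to be made at the system level rather than device by device.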

Turning an idea into a great product

A deep understanding of all these competing aspects of device and system design is needed to achieve a good overall solution. Getting the right combination of these factors is the mark of good engineering in device development and is the way to turn a good idea into a great product. In my experience, it is rarely a straightforward case of developing a better sensor to improve data quality, but more often a matter of making an existing sensing method work well within the constraints of power, communications, usability, and all the other concerns. Alternatively, it might be a case of fusing together two lower quality sensing methods to get a single higher quality data stream, as demonstrated by the breakthrough EnfuseNet AI system for the automotive space.
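I can't speak to how EnfuseNet itself works internally, but the general idea of fusion can be sketched very simply: weight each estimate by how much you trust it. In the snippet below the readings and variances are assumed values, purely for illustration.

```python
# Illustrative sketch only: fusing two lower-quality estimates of the same
# quantity by weighting each with the inverse of its variance. This is the
# textbook approach, not a description of how EnfuseNet works; the numbers
# below are assumed values for the sake of the example.

def fuse(estimate_a, var_a, estimate_b, var_b):
    """Inverse-variance weighted fusion of two independent estimates.
    The fused variance is always smaller than either input variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * estimate_a + w_b * estimate_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Example: a noisy wrist-optical heart rate reading and a second, noisier
# estimate from another sensing method (both hypothetical numbers).
hr, var = fuse(148.0, 25.0, 156.0, 36.0)
print(f"fused estimate: {hr:.1f} bpm, variance {var:.1f}")  # ~151.3 bpm, ~14.8
```

The point is that the fused result is more trustworthy than either input on its own, which is how two mediocre sensing methods can add up to one good data stream.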

Finally, the question of how the measured data is going to be used also influences the various trade-off decisions. Lower quality data may be acceptable, depending on how the data is analysed, hence my earlier comment about the data being ‘sufficiently accurate’. 

Back to my running watch example. I now have heart rate data at every point during a run, but that's a lot of data and it's not really what I'm interested in. What I really want to know is: how much time did I spend in aerobic versus anaerobic training? How hard a session did I actually do, so that I can take the right amount of recovery time? Am I able to run faster while staying in a particular heart rate zone? To answer those questions I might be able to accept lower quality data, because the analysis can cope with it and still give me useful insights. Does it matter that a few data points are off by a few beats per minute or are missing?
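As a sketch of why the analysis can tolerate imperfect data, here is a simple time-in-zone calculation; the zone boundaries and samples are my own illustrative values, and missing samples are simply skipped.

```python
# Sketch of a time-in-zone calculation over per-second heart rate samples.
# Zone boundaries and data are illustrative assumptions; None marks a
# missing sample. A handful of missing or slightly-off points barely
# changes the totals, which is why lower quality data can still be useful.

ZONES = [("easy", 0, 130), ("aerobic", 130, 160), ("anaerobic", 160, 250)]

def time_in_zones(samples, sample_period_s=1):
    """Return seconds spent in each zone, skipping missing samples."""
    totals = {name: 0 for name, _, _ in ZONES}
    for hr in samples:
        if hr is None:          # dropped or unreliable sample: just skip it
            continue
        for name, low, high in ZONES:
            if low <= hr < high:
                totals[name] += sample_period_s
                break
    return totals

run = [125, 128, None, 134, 141, 150, 158, None, 166, 170, 163, 149, 135, 127]
print(time_in_zones(run))
# {'easy': 3, 'aerobic': 6, 'anaerobic': 3} - the two gaps shift totals by only a second or two
```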

Similarly, the clinician doesn't necessarily want, or have time, to review all the data collected. The data needs to be processed and turned into useful insight and actionable information. Development of the data processing algorithm can take many forms, from an expert producing an analytical model that builds on the underlying physics of a measurement to a machine learning approach that trains a model to fit the available data. In both cases, the amount and quality of data available for training and validating the processing algorithm is an important consideration and can have a significant impact on the performance of the overall system. A lack of good quality training data can be a major issue in algorithm development, but there are ways to overcome it.
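As one example among several, a scarce training set can sometimes be stretched by augmenting it with perturbed copies of the real recordings; the sketch below adds a little noise and a small time shift to each example, with parameters chosen purely for illustration.

```python
# Sketch of simple data augmentation for a scarce training set: generate
# perturbed copies of each real recording by adding noise and a small
# circular time shift. One common mitigation among several; the noise level
# and shift range are illustrative assumptions.
import random

def augment(recording, copies=5, noise_sd=1.5, max_shift=3):
    """Yield perturbed copies of a list of sensor samples."""
    for _ in range(copies):
        shift = random.randint(-max_shift, max_shift)
        shifted = recording[shift:] + recording[:shift]
        yield [x + random.gauss(0, noise_sd) for x in shifted]

real_recordings = [[72, 75, 80, 96, 110, 104, 88, 79]]  # hypothetical heart rate trace
training_set = [aug for rec in real_recordings for aug in augment(rec)]
print(len(training_set), "augmented examples from", len(real_recordings), "real recording")
```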

Once the processing algorithm has been determined, it needs to be implemented, leading to further trade-offs – such as where in the system the processing is performed: on the device or centrally in the cloud. These considerations will be examined in the next article in this series. Meanwhile, please drop me an email if you'd like to continue the conversation. It would be great to hear from you.

Author
Neil Rosewell
Associate Director, Global Medical Technology division

Neil is a technical leader in Medical Device development and has been turning new ideas into life-changing products for over 20 years. His particular areas of expertise are in technical leadership of multi-disciplined project teams, systems thinking, formal verification testing, and product development for regulatory compliance, especially IEC 60601.