With the increasing need for remote diagnosis and monitoring, telehealth solutions are on the rise. That expansion brings a growing wealth of data, and with it a trade-off: transmit all of that data at a relatively high cost, or use AI to improve the signal-to-noise ratio and so reduce the volume of data transmitted.
There are benefits to both approaches. On the one hand, an AI-enabled data filtering and compression approach can massively reduce the demands on signal transmission and increase battery life. On the other, complete data transmission keeps all future data mining and algorithm development options open and centralises the burden of data processing, allowing for greater flexibility in the future.
The data processing approach for an application therefore depends very much on the specific requirements of both the technical solution and the strategic vision. In this post – the last in our telemedicine series that has spanned value proposition, interface design, security, system architecture and data quality – I’ll be illustrating the various approaches and highlighting some examples along the way.
Primary care telehealth
The primary goal of a telehealth system is to communicate diagnostic information between a patient and a caregiver. The COVID pandemic has led to the rise of integrated remote primary healthcare services like Babylon Health, PushDoctor and the NHS App. The advantages are manifold, from reducing risk for both patient and caregiver, to providing a more convenient service.
With these products, a zero-infrastructure or common-infrastructure approach means that the services are limited to phone or web-based flow charts and text, audio or video chat. In these platforms, increasing data fidelity permits a more rounded diagnosis. So, with near-ubiquitous access to mobile platforms and 3/4G data or Wi-Fi, there is little benefit in bandwidth-limiting the interactions.
While these primary healthcare telehealth provider platforms are effective for face-to-face consultation and triage, they do not provide a route for transmission of physical or chemical biomarkers (indicators of disease) that can be used as an effective and highly specific diagnostic. To enable this, we use telehealth devices.
Devices are most commonly used when managing conditions – where the investment in a device-based platform is offset against the cost of long-term care within a traditional healthcare system. These can take the form of devices such as wearable ECGs, connected inhalers, continuous glucose monitors (CGMs) and even pacemakers.
Because ineffective management of diabetes and asthma can quickly lead to critical conditions, connected CGM and inhaler platforms have been at the forefront of this – and again different device manufacturers have taken differing approaches to data processing.
On-device (edge) processing
CGM sensors are implanted for a week or two, during which time the body reacts to the implant and sensor performance degrades. Various solutions exist to correct for this, from multi-sensor implants to broadband spectroscopic sensor technologies to low-cost sensors with more frequent recalibration. In all cases, because measurement accuracy is critical to managing the illness, the CGM uses an on-device AI/ML algorithm that interprets the raw data and converts it into metrics of blood glucose and sensor performance.
In this way, time-critical decisions are not dependent on connectivity. For making longer-term management decisions, web-based portals (Abbott’s LibreView, for example) can make a user’s data available to their doctor, who can provide tailored treatment. The broadband data collected from the sensor can be anonymised and sent back to the medical device company and used to improve algorithms and performance for current or next-generation technologies.
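To make the on-device step more concrete, here is a minimal sketch of the kind of processing that might run on the sensor electronics: a drift-corrected calibration converts raw sensor current into a glucose estimate, a simple sensor-health score and an immediate low-glucose flag. The calibration coefficients, drift rate and thresholds are invented for illustration and are not those of any real CGM.

```python
# Minimal sketch of on-device CGM processing (illustrative only).
# The calibration model, drift correction and thresholds below are
# invented for this example, not taken from any real device.

from dataclasses import dataclass

@dataclass
class GlucoseReading:
    glucose_mmol_l: float   # estimated blood glucose
    sensor_health: float    # 0.0 (replace sensor) to 1.0 (nominal)
    low_alert: bool         # flag for time-critical, on-device feedback

def process_sample(raw_current_na: float, hours_since_insertion: float) -> GlucoseReading:
    """Convert a raw sensor current (nA) into a glucose estimate on-device."""
    # Hypothetical linear calibration with a slow sensitivity drift term.
    sensitivity = 1.8 * (1.0 - 0.004 * hours_since_insertion)  # nA per mmol/L
    glucose = raw_current_na / max(sensitivity, 0.1)

    # Crude sensor-health metric: degrade as sensitivity drifts from nominal.
    health = max(0.0, min(1.0, sensitivity / 1.8))

    # Time-critical decision made locally, independent of connectivity.
    return GlucoseReading(glucose, health, low_alert=glucose < 3.9)

# Example: a sample taken five days after insertion.
print(process_sample(raw_current_na=9.0, hours_since_insertion=120))
```

Only the resulting few bytes of metrics need to leave the device routinely; the raw trace can be stored and uploaded opportunistically, as described above.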
Similarly, smart inhalers have taken a low-data approach, initially for compliance, with simple clip-on sensors (e.g. Propeller Health) ensuring that a treatment regimen is being adhered to. However, with inhaled medication, breath synchronisation and speed are critical to drug delivery to the lung and therefore to efficacy. More advanced devices, like the recently released Teva Digihaler, provide feedback to the user via a mobile app indicating peak inspiratory flow, so that technique can be adjusted.
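As a rough illustration of how little data such feedback actually requires, the sketch below reduces a sampled inspiratory flow trace to a handful of app-ready numbers; the sample rate and the 30 L/min ‘good technique’ threshold are assumptions for this example, not figures from any particular inhaler.

```python
# Illustrative reduction of a raw inhalation trace to narrowband metrics.
# Sample rate and the 30 L/min "good technique" threshold are assumptions
# for this sketch, not values from any particular inhaler.

def summarise_inhalation(flow_l_per_min: list[float], sample_rate_hz: float = 100.0) -> dict:
    """Reduce a sampled inspiratory flow trace to a few app-ready metrics."""
    peak_flow = max(flow_l_per_min)
    duration_s = len(flow_l_per_min) / sample_rate_hz
    return {
        "peak_inspiratory_flow_l_min": round(peak_flow, 1),
        "inhalation_duration_s": round(duration_s, 2),
        "technique_ok": peak_flow >= 30.0,   # feedback shown to the user
    }

# A short synthetic trace: ramp up, hold, ramp down.
trace = [0, 5, 15, 28, 34, 36, 35, 30, 20, 8, 0]
print(summarise_inhalation(trace))
```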
Balancing data needs
With these types of connected device, both narrowband metrics and broadband performance data can be collected and shared at different frequencies with the user, caregiver and device manufacturer. On-device metrics help with the approval process, as they minimise the increased risks from platform software and hardware variances that can be incurred by incorporating generic mobile hardware into the data processing architecture. They also reduce device-to-device communication, usually the most power-hungry component of the system, so battery life can be extended beyond the expiry of the drug.
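One way to picture this split is as a simple data-sharing policy: narrowband metrics go to the user in near real time, summaries go to the caregiver periodically, and anonymised broadband data goes back to the manufacturer in occasional batches. The sketch below is a hypothetical configuration, not any vendor’s actual schema.

```python
# Hypothetical data-sharing policy for a connected device: what is sent,
# to whom, and how often. Stream names, cadences and sizes are illustrative.

from dataclasses import dataclass

@dataclass
class DataStream:
    name: str            # e.g. on-device metric vs raw broadband data
    recipient: str       # user, caregiver or manufacturer
    cadence: str         # how often it leaves the device
    approx_size: str     # rough payload size per transmission

POLICY = [
    DataStream("glucose_estimate", "user (app)", "every 5 minutes", "tens of bytes"),
    DataStream("daily_summary", "caregiver (portal)", "daily", "a few kilobytes"),
    DataStream("raw_sensor_trace (anonymised)", "manufacturer", "weekly batch", "megabytes"),
]

for stream in POLICY:
    print(f"{stream.name:35s} -> {stream.recipient:22s} {stream.cadence:18s} {stream.approx_size}")
```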
Centralised data processing
Where on-device pre-processing is not currently possible, or where the compute required to manipulate the data far exceeds that required to transmit a visualisation of it, it can be more efficient to centralise the processing.
One of the earliest approvals for an AI-based deep learning algorithm in healthcare was the ‘4D blood flow MRI imaging system’ from Arterys, used for segmenting and visualising the heart. While not a patient-facing telemedicine application, the platform for data processing is a great example of a centralised compute architecture supporting optimised data processing.
4D MRI data is relatively large, as it contains x, y and z coordinates plus a time component; a 3D visualisation of a single heartbeat cycle can be around a gigabyte. On top of this, the Arterys system superimposes a 3D visualisation of the blood flow vectors within the heart, which is useful for diagnosis of a range of cardiac dysfunctions. Performing this analysis fast enough to permit real-time interrogation by a cardiac surgeon requires high-spec compute.
The bandwidth for viewing this visualisation is no more than that required for viewing a 4K TV programme, so the efficient implementation chosen is to centralise the compute and time-share it invisibly with other users. The advantage of this system is that adding new diagnostics is relatively simple, as they can be developed for a single well-maintained platform, cutting support and development cost.
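The back-of-envelope arithmetic below illustrates why: the figures used (around 1 GB of reconstructed 4D data per heartbeat cycle, a stream rate comparable to 4K video, and an assumed 100 Mbit/s hospital network link) are rough orders of magnitude rather than measured values.

```python
# Back-of-envelope comparison of moving the raw 4D dataset to the viewer
# versus streaming the rendered visualisation from a central platform.
# The 100 Mbit/s link speed is an assumption; the other figures are the
# rough orders of magnitude discussed above.

cycle_bytes = 1e9            # ~1 GB of reconstructed 4D data per heartbeat cycle
link_bit_rate = 100e6        # assumed hospital network link, 100 Mbit/s
stream_bit_rate = 25e6       # ~25 Mbit/s, comparable to a 4K video stream

# Time to download one cycle's raw data before any local rendering can start.
download_s = cycle_bytes * 8 / link_bit_rate

print(f"Raw transfer of one cycle: {download_s:.0f} s before viewing can begin")
print(f"Streamed view: starts immediately at {stream_bit_rate / 1e6:.0f} Mbit/s, "
      f"{stream_bit_rate / link_bit_rate:.0%} of the assumed link")
```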
Because the compute architecture is elastic (i.e. centralised or cloud compute that can be scaled unlike edge compute), the core functions of the system can be extended arbitrarily through the addition of third-party applications from leading researchers. This allows rapid proliferation and application of cutting-edge AI techniques. As an example, there were multiple COVID-19 apps on the Arterys Marketplace, only a few months after the pandemic was declared.
Cloud-edge hybrid approaches
While edge-first systems bring long battery life and real-time metrics, and cloud systems provide effectively unlimited compute and cutting-edge algorithms, there is no reason that a well-planned data processing strategy can’t take the best from both. A popular approach is to provide edge-based feedback to users based on regulatory-approved, interpretable* algorithms running on low-power hardware. In the background, long-term data aggregation from the devices permits dashboards for clinical presentation and trial design (and adaptation). The device manufacturers also benefit, as this data can be used for algorithm development and to support longer-term strategic aims.
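A minimal sketch of that split, with all thresholds and batch sizes invented for illustration, might look like this: an interpretable on-device rule gives the user immediate feedback, while raw samples are queued and uploaded in occasional batches for cloud-side aggregation and algorithm development.

```python
# Illustrative edge/cloud split: an interpretable on-device rule provides
# immediate feedback; raw data is batched for later cloud aggregation.
# Thresholds and batch size are invented for this sketch.

from collections import deque

UPLOAD_BATCH_SIZE = 288          # e.g. one upload per day of 5-minute samples
pending_upload: deque[tuple[float, float]] = deque()

def on_new_sample(timestamp: float, value: float) -> str | None:
    """Run the regulated, interpretable edge rule and queue data for the cloud."""
    pending_upload.append((timestamp, value))

    # Background path: hand a full batch to the (hypothetical) cloud uploader.
    if len(pending_upload) >= UPLOAD_BATCH_SIZE:
        upload_batch(list(pending_upload))
        pending_upload.clear()

    # Edge rule: simple, auditable threshold check for immediate user feedback.
    return "Low reading - follow your care plan" if value < 3.9 else None

def upload_batch(batch: list[tuple[float, float]]) -> None:
    # Placeholder for anonymisation and transmission to the cloud back end.
    print(f"Uploading {len(batch)} samples for aggregation and algorithm development")

# Example: feed in a single low sample.
print(on_new_sample(timestamp=0.0, value=3.5))
```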
The drawback of such a system is that the edge-based algorithm is tied to the capabilities of low-power hardware and slowed by a heavily regulated update cycle – but this is changing rapidly. Next-generation processors incorporate NPUs (neural processing units) on the die, and chiplet-based designs are enabling more tailored low-power solutions for on-device AI. In addition, the proposed improvements to the FDA’s Software as a Medical Device (SaMD) approval process could significantly improve the update cycle, allowing more rapid iteration of AI-based solutions and population-based learning, which could enable more responsible deployment of AI.
Continued evolution
As processing efficiency improves, 5G brings lower-latency direct-to-cloud connections and sensors provide ever more specific data, data processing architectures will continue to evolve. It is likely that we will see more processing occurring on devices at the edge, but also more patient-identifiable data transmitted to applications in the cloud with greater interoperability, increasing the need for security and privacy-preserving dynamic anonymisation. The continued advancement of increasingly intelligent low-power interfaces will permit tailored, data-led treatment to be incorporated into ever more convenient and discreet devices, or supplemented by entirely app-based behavioural therapies.
With the improving information content available from this data, and better approaches to data and application sharing, pandemics could be detected more quickly and treatments developed more easily and made more broadly available, all while serving a greater proportion of the population and putting the patient at the centre of their own care management. This long-term trend towards patient-centric medicine is being enabled by device and pharma companies whose strategy is to make data more readily interpretable, with the aim of serving caregivers and patients alike. Please get in touch if you’d like to find out more about our approach to AI-enabled connected systems design.
*Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell 1, 206–215 (2019). https://doi.org/10.1038/s42256-019-0048-x