You wake at the perfect point in your sleep cycle, your smart mattress having synced with your calendar to give you a gentle shake at the optimal moment.
Your smart pillow has adjusted itself through the night, rising and falling in response to your snuffles and snores to give you the best possible sleep.
In the bathroom, your smart loo seat analyses its morning findings, searching for anything potentially worrisome. As you glance in your smart mirror, it tells you your blood pressure and your current risk of developing type 2 diabetes.
Then your virtual assistant pipes up, announcing that your cardiovascular age has dropped a year since yesterday – finally, months of hard work in the gym have paid off.
This may sound like pure science fiction, but the truth is you can already do most of these things, thanks to a new breed of technological devices that aim not only to prevent disease but eventually to predict its onset. And much of this is driven by artificial intelligence (AI).
As a technology broadcaster and journalist, largely for the BBC, I’ve spent the best part of two decades deeply immersed in these fast-evolving innovations. And I’ve seen AI increasingly identify links between genetics, our lifestyles and disease like never before.
We are moving towards an era of more predictive, preventative medicine, with more personalised treatments set to emerge in the coming years. This will mean a dramatic change in how we approach even the most devastating diseases, including cancer, heart disease, type 2 diabetes, Parkinson’s and dementia.
Many of us, understandably, have mixed feelings about the rise of AI and its use of our personal data – the very fuel it relies on to function.
But in healthcare, where AI can speed up drug development and predict our risk of getting sick, it gives us a massive head start in preventing (rather than simply treating) chronic illness.
Already, it’s being used to monitor moles and skin lesions to forecast whether they could become cancerous.
The NHS has piloted its use in the efficient distribution of hospital beds, and ongoing trials are showing how AI can provide an extra pair of eyes on breast scans, colonoscopies and more, spotting more potentially cancerous growths than humans alone could.
This isn’t about replacing the medics, but augmenting them.
AI has the potential to transform the healthcare we all receive, and to reveal more from the data we choose to gather about ourselves.
Here are some of the most promising breakthroughs that could make us rethink how we assess our health and spot when things are amiss…
MAGIC MIRROR SELFIE TO SPOT HEALTH ISSUES
Many of us are adept at taking selfies, so how about being able to assess our state of health, and even our future risk of disease, from one? That is what a looking glass I saw at the Consumer Electronics Show in Las Vegas in 2024 (where the latest tech is often unveiled) hoped to do.
The Anura ‘MagicMirror’ claimed it could measure ‘30 vital signs and parameters’ – including heart rate, blood pressure, BMI, facial skin age, breathing and mental stress index – just from a 30-second selfie video.

It also assesses your risk of many conditions, including cardiovascular disease, fatty liver disease, stroke and type 2 diabetes – in part by analysing blood flow under the skin, which is imperceptible to the human eye.
AI then compares all this information against benchmark data from 40,000 people, correlating patterns with disease, vital signs or even how old you look.
When it came to skin age, I’ll admit my experience didn’t vouch for its accuracy: one moment it suggested mine was 36 and, two minutes later, it thought 38. To be fair, I was happy with either, but I hope I’m not ageing at that pace.
But even if it needs more data to improve, the concept is a start. Facial imaging is being analysed by AI in other ways, too. One system, called Trueblue, uses smartphone cameras to analyse emotions by tracking tiny muscle movements, gaze patterns and blood flow in the face and neck.
The technology is currently being trialled in the NHS in Nottinghamshire, recruiting 125 women from 12 weeks pregnant to 12 weeks after they give birth, to see if regular use of the app can detect early signs of the anxiety or mood changes that could put them at risk of postnatal mental health problems. Estimated rates of depression in this group run as high as 26 per cent.
STETHOSCOPE TO ANALYSE HEART DATA
More than a million people in the UK are affected by heart failure, and 80 per cent of cases are diagnosed only when patients turn up sick at A&E. However, a new breed of AI-powered stethoscope could help.
Conventional stethoscopes have remained largely unchanged for years, letting doctors hear sounds from the heart, lungs and gut to help them diagnose problems. The AI-powered version comes with added electrodes, meaning it can also check the heart’s electrical activity to look for abnormal rhythms.
Software embedded in the device combines the data collected to interpret the heart’s pumping function, calculating whether the crucial left ventricle (the main pumping chamber) is likely to be impaired.
The idea is that this will mean medics can detect problems earlier and prescribe drugs or recommend lifestyle changes as quickly as possible. One such device – called Eko DUO – is currently being trialled by 100 GP surgeries across the UK as part of a £1.2 million project.
HI-TECH EYE TEST TO SPOT HIDDEN DISEASE
The eyes may be the windows to the soul, but add a little AI and they could also provide a picture of our overall health.
Pearse Keane is a professor of artificial medical intelligence and a consultant ophthalmologist. I saw how he and his team at Moorfields Eye Hospital had collaborated with computer scientists at University College London to develop an AI model that can spot early signs of a range of diseases in scans of our retinas – a painless, 30-second process. (Opticians already manually check for changes that can indicate high blood pressure and diabetes.)
This project is still at the research stage, but by comparing scans with existing NHS hospital data, Professor Keane’s AI model has already been able to identify changes in the eyes that help medics predict Parkinson’s seven years before symptoms appear.
They are now looking at whether they can also pick up other neurodegenerative or cardiovascular diseases. They have even identified associations with schizophrenia.
This field, dubbed ‘oculomics’, could mean a future in which correlations between biomarkers in the eye and disease are used to predict its onset.
‘Doctors have known for more than 100 years that you can pick up some signs of systemic disease from an eye examination, but in the past five to ten years there has been a combination of large datasets, advanced imaging, retinal imaging, including those done on the High Street, and advances in AI, meaning we think we can unpick a lot more than we imagined,’ Professor Keane told me.
AVATAR THAT CAN GO TO CHECK-UPS
What if doctors could soon check your health and chances of becoming ill, without seeing you? Instead, they would use your digital twin, or ‘health avatar’. This unique computer model would virtually recreate your body, with all the data on your genetics, lifestyle, symptoms – plus any data from healthcare trackers – built in.
These 3D models and animations already exist for some specific medical purposes, but the eventual aim is that we might all have one giving the full picture of our bodies and health – to understand more about how medicines would interact with our genetics and microbiome, for example, as well as about our disease risk and its progression.
I’ve seen digital twins employed in smart buildings and smart cities to understand the efficient flow of people, for example, but in healthcare a digital twin of the entire human body could eventually simulate how medication might work.
It could also predict disease onset or progression, or allow doctors to see how medical devices – such as stents to clear blocked arteries or pacemakers to regulate the heartbeat – might perform before surgery is carried out. It is data, and the AI to make sense of it, that will make this possible.
VIDEO GAME THAT WARNS OF EARLY DEMENTIA
AI is even being used to find ways to detect if someone is at risk of dementia long before it sets in. One way is by looking for signs of spatial disorientation.
University College London neuroscientist Professor Hugo Spiers was one of the researchers behind Sea Hero Quest, a video game that sends you on a sea voyage you must navigate – in the process yielding vital clues about your spatial awareness and any cognitive decline.
The premise was that if you want to catch patients before they are overwhelmed by disease, the signs may not be about losing your memory but ‘actually just becoming disoriented’.
I went to film this with him in its early days, before the eventual four million study participants took part – their data helping to build an accurate AI system.
This set a baseline for how we respond in the game at different points in our lives – establishing what is ‘healthy’, what happens as we age, and what the early signals of dementia look like.
TALK TO PHONE APP TO DETECT HEART TROUBLE
What could be easier than talking into your smartphone to see if your heart is healthy? Thanks to AI, that’s now being trialled.
With the app, called HearO, patients are prompted every morning to speak six sentences into their phones – the data is then uploaded to a cloud-based system, where it is analysed by AI.
Specifically, the app tracks voice changes that may indicate fluid build-up in the lungs. This can be a sign the heart isn’t pumping blood around the body normally. Changes to the voice often become obvious two to three weeks before a patient is admitted to hospital with heart failure.
In a study by Ohio State University of hundreds of people with heart failure, the app predicted who would deteriorate and need to be admitted to hospital three weeks before it happened. It was correct 76 per cent of the time.
DRUG TRIALS MADE EASIER AND FASTER
The cost and time it takes for clinical trials to produce drugs that are safe and effective are huge. It can take around ten years from identifying a drug to the point of approval, with the cost to the drug company averaging around £15 million.
British entrepreneur Jim Mellon, who has been involved in biotech for more than 20 years, believes we can improve things not just by using AI in the development process but also feeding into the system ‘all previous trials, [to] work out what worked and what didn’t work’.
One platform, Muse, analyses scientific literature, trial protocols, patient populations and more, to generate bespoke recruitment strategies and marketing materials (in any language) to appeal to the right participants.
But AI is also making even the most complicated trials easier and more efficient.
In 2024, I visited Ochre Bio, an AI drug-discovery company searching for ways to treat late-stage liver disease without the need for liver transplants.
Its Oxford HQ has a huge lab filled with scientists busy with their pipettes and petri dishes. Powerful AI systems analyse the data that they produce.
In the company founder’s words, they are ‘deep phenotyping’ human livers at scale, meaning they analyse how someone’s genes and habits affect their liver function. AI analyses this and other clinical data at speed, allowing the creation of targeted treatments.
Then, across the Atlantic in New York, the company has an ICU of real human livers being ‘kept alive’ outside the body to test the findings of the lab. These are organs that would not have been suitable for transplant, but Ochre Bio has created a set-up to test potential treatments in as true-to-human conditions as possible.
The company believes its ‘most important ingredient is the vast amounts of human data we’re generating, that we believe supersedes anything that’s been done before’.
ADAPTED from Hacking Humanity by Lara Lewington (WH Allen, £16.99), published July 10, 2025. © Lara Lewington 2025.
HOW MODELLING CAN TAKE THE FEAR OUT OF TOUGH SURGERY

In 2021 I stood in a Great Ormond Street operating theatre waiting for surgery to commence on six-month-old Archie, who was about to have his head cut open right before my eyes.
His parents had been through a painstaking decision process – assisted, extraordinarily, by AI’s ability to create the images needed to help guide them. Archie was born with sagittal synostosis, a condition in which one of the seams of a baby’s skull fuses too early.
As the brain grows, the skull can’t grow sideways to accommodate it, so it expands front to back, distorting the head shape.
Left untreated, it can cause physical and psychological problems. The remedy is to have a spring inserted into the skull, but the surgery is not without risk – so opting for it is obviously a difficult call for any parent.
Enter paediatric neurosurgeon Dr Owase Jeelani. He had combined AI and virtual reality to help Archie’s parents decide, and I was by their side.
We all wore VR headsets as Dr Jeelani talked us through his bespoke 3D visualisation of Archie’s predicted appearance.
The medic had collated data on the outcomes of 60 previous surgeries he’d carried out on other babies with the condition, creating models that showed what an operation would likely mean for Archie.
The immersive experience gave Archie’s parents the confidence to make their decision, and the operation was a huge success: a small spring was inserted into Archie’s skull and he made a good recovery.
This sort of modelling could help doctors explain other medical conditions, surgeries and potential options to patients, and it is starting to be used in a range of other operations to treat patients of all ages.