From Amazon’s Alexa to mental health chatbots and apps that track loved ones with dementia, AI-based healthcare apps are a booming industry, and one only predicted to grow more prevalent. Too often when we think of “artificial intelligence” we imagine a single strong AI entity, like a government-run Skynet or something. But the reality is that we already have many smaller, privately owned AIs finding their way into the market through lower-level distribution, much like drugs do in the pharmaceutical industry.
In fact, we’ve probably reached the point of no return with AI in healthcare. Now, I don’t mean this in a Terminator kind of way, where we need to alert our prepper relatives or anything. No reason to panic—I’ve already lived through my Linda Hamilton T2 moment (more on that later) and I’m not going to do it again. I have teenagers and a whole slew of Boomer relatives to keep me in line, to make sure I check my messages and make my appointments, and we do all of this through our smartphones because that’s how it’s done now. If my doctor recommends an app for something, I will probably give it consideration. Why? Because it’s doctor recommended.
As with anything in healthcare, family opinions are also important. The intriguing thing, though, is that there’s a huge generational gap in how people view and use artificial intelligence. According to a recent study on AI acceptance, Boomers generally want safety and security from AI, whereas millennials care more about adaptability; they want AI to understand them better. These differences of opinion could greatly shape the future of AI in healthcare, because eventually the younger, more tech-savvy generation will be the one making decisions for the older generations. Which leads to the question: how much AI intervention is acceptable?
Are you okay with AI-enabled robotic companions to reduce loneliness in eldercare? How do you feel about tracking apps for dementia patients? What about facial recognition to detect pain, or CCTV monitoring in your bedroom to prevent elder abuse? Would you allow your daily medication use to be tracked by an AI assistant, or possibly by digital medications that report back to your physician from inside your body? How do your family members feel about these things? Who in your family will be helping you to make these decisions, and what are their views on healthcare technology? Do you consider yourself to be a tech conformist, or a tech rebel when it comes to your health?
I’ll end with a personal story, because at some point healthcare almost always becomes personal for each of us.
A while back I got kicked out of a Texas hospital emergency room for bad behavior. Well, that’s somewhat of a hyperbole. More accurately, I was transferred under court order to a different facility. I won’t go into all the details here, but for me the tipping point came when they asked me to consult with a specialist via webcam. Now, for those who don’t know me very well, I have a thing about webcams. I keep mine covered and rarely use it. In this particular instance, however, the hospital staff wheeled in a little cart with a laptop on it, expecting me to discuss my health with the supposed human being whose face was on the screen. Hadn’t these people ever heard of deepfakes? Anyway, it didn’t take long for me to take the initiative, simply close the screen, and wheel the cart back out of the room. It all went downhill from there. Not my finest moment, really, but an important one to share, because I know I’m not the first patient to feel uncomfortable discussing medical issues with a device instead of a person.
Yet as a global culture we are increasingly handing over the reins.
Anna L. Davis is an author and editor. Her novel, Open Source, is a sci-fi thriller with cyberpunk elements—featuring human microchipping, brain implants, and twisted hackers. Anna now works as a digital journalist at Edit10for.com, focusing on artificial intelligence news, global surveillance updates, and public health critical infrastructure.