When data disagrees with you.

I just came back from watching the latest film featuring Tom Hanks. It really made me think about the tension between data and human nature.

It’s quite clear that we are collecting more and more data every day. Most of our daily actions are recorded in a database somewhere. The trend is clear, and it will only go up. Health data is just around the corner.

The film is about US Airways Flight 1549, on January 15th, 2009. It tells the story from the accident to the judgement of the pilot. Spoiler alert: the closing line is about how the human factor must be taken into account. None of their flight simulations considered the time a human being needs to make a decision, and therefore none of them matched the reality the pilot faced.

In a world where AI will change everything to come, outliers are my main worry. Not so long ago, I was having a beer with a data scientist who claimed that a computer should be doing a doctor’s work, given the algorithmic nature of our work. The MD in me felt hurt, but I can understand his position. He wants the best for his health, and massive amounts of data can help with that. In fact, I think we should be taking more advantage of AI in medical practice, but that’s another story for another day. The problem, for me, is that there are too many variables that cannot be assessed when treating a patient, and that’s where the human factor comes in.

How do you evaluate (without a human in the loop) whether someone is independent in their daily living activities? By asking the patient to fill out a questionnaire like the Barthel index and applying a threshold? What if he or she has subclinical dementia? Maybe he is telling you obliquely that his situation at home is bad and he would prefer to stay in the hospital…
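To make the objection concrete, the questionnaire-plus-threshold approach can be sketched in a few lines. This is a minimal illustration, not clinical guidance: the cutoff value and the patient scores are hypothetical assumptions, chosen only to show that the rule sees a single number and nothing else.

```python
# Minimal sketch of a threshold rule on a Barthel index score (0-100).
# The cutoff and the scores below are illustrative assumptions, not
# clinical guidance. The point: the rule is blind to context such as
# subclinical dementia or an unsafe situation at home.

BARTHEL_CUTOFF = 60  # hypothetical threshold for "independent"

def is_independent(barthel_score: int) -> bool:
    """Naive rule: label a patient 'independent' from a single score."""
    return barthel_score >= BARTHEL_CUTOFF

# Hypothetical patients: the score alone decides; no human factor involved.
patients = {"A": 85, "B": 55, "C": 60}
labels = {name: is_independent(score) for name, score in patients.items()}
print(labels)
```

Patient C sits exactly at the cutoff and flips category with a single point of measurement noise, while the rule has no way to notice what the patient is telling you between the lines.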

What if, given the sum of a patient’s medical conditions, the output is a treatment that will only prolong their suffering? Was the algorithm pretrained on that too? What if, because of the way you trained your algorithm, you stop treating people who had a chance of surviving? These are everyday ethical questions that end-of-life-care (EOLC) physicians must confront. Not strange outliers.


I think that everyone working in AI should be advocating for human control of outliers and of discrimination, conscious that not every situation can be addressed beforehand. Medical data right now lacks good tagging and modeling. I really want to help doctors become AI-enhanced doctors, but in the end, I want a human making the decision. Not a pretrained neural network.