One-on-one relationships keep things simple. But as healthcare systems expand, patients interact with a growing number of healthcare professionals. Data management then becomes exceedingly complex, creating challenges in maintaining standards and keeping communication flowing across disparate medical teams. In this video, Chin discusses these challenges and how to deliver trustworthy data that enables physicians to make better decisions for their patients.


My name is Lynda Chin. I am, by training, a physician who has spent time in private practice, but I have spent most of my career in academic research focusing on genomics, cancer genomics, and really seeing firsthand how advances in technology and science are dramatically changing the way medicine is practiced, or will be practiced.

If we look back a century, the relationship between the patient and the physician was very different. It was a relationship based on trust.

You had your family doctor for generations, and you trusted them, and they knew you.

As medicine becomes more complex, as we understand biology more and we start to understand the complexity in medicine, the field has evolved to have many, many specializations.

So there’s no longer one single doctor who takes care of our health. It is more likely tens or even hundreds of doctors and nurses who are involved in our care.

I think to manage this complexity in medicine today we need to have our doctors work as one team so that there is still this one-to-one relationship, you know, from a patient perspective, that the system works together to help take care of the person’s health. That means that team of doctors who are most likely not even physically in the same facility need to be able to function as a single team.

That means they have to be able to share information, and they have to be able to trust each other. If they don’t trust each other, then the team breaks down. And that trust is predicated on, I believe, transparency.

Bumper: Transparency as the Foundation for Trust

When it comes to being able to make a decision I need to be able to trust that you have made the right decision.

That means I have to know what that decision is and what the context was when you made it, and all of this requires access to information. And that information isn’t just any kind of data. It has to be data that we can trust. That means how the data is collected matters a whole lot. And I have to be able to interpret the data, which requires a standard. If there’s no standard, I can’t really use the data to build the confidence I need in the data and the result you have generated.

So, I think to create the transparency that translates into confidence and trust, we have to share the data, we have to have standards so that we have confidence the data are collected properly, and we have to have standards so that we can compare one result to the next and know how to interpret the data.

Bumper: Trust and Artificial Intelligence (AI)

Medicine has gotten more complex, the amounts of data and knowledge have exploded, and I would say our human brain hasn’t gotten bigger. So we need help. That’s where AI, like analytics, comes in: not just to help organize the data, but to help interpret some of the data so that it generates actionable information to help an individual make decisions.

I see that AI has great potential to really impact the practice of medicine. Not to replace human decisions, really, but to help interpret data that are not black and white. You know, there’s an art to medicine. A straight rule-based analytic is not going to help the physician, and that’s where AI-like approaches, machine learning, or cognitive analytics come into play, which are much more probabilistic. But that also introduces another factor into trust: how do I trust that the AI system is making the right interpretation?

I don’t have a black and white answer. As the technology advances and matures, we will learn to use artificial intelligence more and more in the most appropriate way, one that can really help physicians deliver better care without taking decision-making out of that human-trust relationship.