How the Health Industry Can Use AI to Improve Treatment Plans
In the first of a three-part series on the role of artificial intelligence (AI) in the healthcare sector, we speak to scientists at Qatar Computing Research Institute (QCRI), part of Hamad Bin Khalifa University.
In what ways is AI used, or could be used, to enhance healthcare in Qatar?
Dr. Mohamad Saad: AI has several applications that are already being used, or could be used, in healthcare in Qatar. Areas where AI can play an important role include electronic health records (EHR), imaging and radiology, wearables, and genomics. With the large amount of EHR data available in Qatar, in a centralized system that encompasses Hamad Medical Corporation (HMC) and Primary Health Care Corporation (PHCC), AI models can be developed to predict disease occurrence, severity, and prognosis. Such models help to improve prevention and treatment plans. For example, AI can help predict cardiovascular events using EHR data, something a medical practitioner cannot do alone given the sheer volume of data. Genomics is another important area where AI can help healthcare in Qatar.
Genomics dissects the genetic code through DNA sequencing (and other types of data) and generates volumes of data that simple models alone cannot analyze. AI can play a major role in understanding unknown biological mechanisms encoded in DNA. Hopefully, with the availability of more than 20,000 whole genome sequences generated by the Qatar Genome Program and Qatar Biobank, new discoveries will be made for the Qatari population and the region at large.
In a collaboration between HMC and QCRI, genetic risk scores developed in European data were tested and validated in the Qatari population for coronary heart disease, as well as for obesity using multi-omics approaches. These results were presented at American Heart Association and American College of Cardiology conferences. A recently published multi-omics study in Qatari data led to important insights into obesity using AI tools (HMC and QCRI; https://www.frontiersin.org/articles/10.3389/fendo.2022.937089/full).
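To make the EHR-based risk prediction mentioned above more concrete, here is a minimal, purely illustrative sketch of such a model. The feature names, the simulated data, and the logistic-regression approach are editorial assumptions for illustration, not the actual HMC/PHCC variables or the models used at QCRI.

    # Illustrative sketch only: a simple risk model trained on simulated EHR-style
    # features. Feature names, data, and coefficients are hypothetical placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 5000
    # Hypothetical EHR features: age, systolic BP, LDL cholesterol, HbA1c, smoking status.
    X = np.column_stack([
        rng.normal(55, 10, n),    # age (years)
        rng.normal(130, 15, n),   # systolic blood pressure (mmHg)
        rng.normal(3.0, 0.8, n),  # LDL cholesterol (mmol/L)
        rng.normal(5.8, 0.9, n),  # HbA1c (%)
        rng.integers(0, 2, n),    # current smoker (0/1)
    ])
    # Simulated outcome: cardiovascular event during follow-up.
    logit = -12 + 0.05 * X[:, 0] + 0.03 * X[:, 1] + 0.5 * X[:, 2] + 0.3 * X[:, 3] + 0.7 * X[:, 4]
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_train, y_train)
    print("Held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

In practice, such models would be trained on real, governed EHR data and validated prospectively before informing prevention or treatment plans.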
Dr. Sabri Boughorbel: AI has the potential to enhance the radiology department by automating radiologists' reporting tasks and providing diagnostic decision support. For example, AI models can quickly scan thousands of X-rays, MRI images, and CT scans to help radiologists make better and more timely decisions about diagnosis and, in turn, treatment.
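As a purely illustrative sketch of what such an image-triage model looks like in code, the example below runs a convolutional network over a placeholder X-ray tensor. The DenseNet-121 backbone, the two-class setup, and the random input are editorial assumptions, not a description of any system deployed in Qatar.

    # Illustrative sketch only: scoring a placeholder chest X-ray with a
    # convolutional network. Random weights and a random tensor stand in for a
    # trained model and a real, preprocessed image.
    import torch
    import torchvision.models as models

    # DenseNet-121 is a backbone commonly used for chest X-ray classification;
    # weights=None keeps the sketch self-contained (no pretrained download).
    model = models.densenet121(weights=None, num_classes=2)  # e.g., normal vs. finding
    model.eval()

    # Placeholder for a preprocessed X-ray: batch of 1, 3 channels, 224x224 pixels.
    xray = torch.randn(1, 3, 224, 224)

    with torch.no_grad():
        probs = torch.softmax(model(xray), dim=1)
    print("P(finding):", probs[0, 1].item())

A real system would be trained and validated on large, labeled radiology archives and would surface its scores to radiologists rather than replace them.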
How helpful was AI, and technology in general, during the worst of the COVID-19 pandemic?
Dr. Sabri Boughorbel: Expectations for AI's contribution during the COVID-19 pandemic were higher than what was ultimately delivered. There was an expectation of AI-designed vaccines and drugs. Drug repurposing using AI produced promising results, but it remains to be seen whether a genuine breakthrough will follow.
Dr. Mohamad Saad: At QCRI, scientists developed an AI model to screen a number of drugs as COVID-19 treatments and identified two, Brilacidin and Ritonavir, as effective for COVID-19 treatment. These two drugs received FDA approval for COVID-19 treatment.
What projects is QCRI involved in that relate to AI and healthcare?
Dr. Mohamad Saad: In collaboration with many healthcare stakeholders (MOPH, HMC, PHCC, etc.), QCRI is participating in several projects involving AI and healthcare. One is assessing the value of wearable data (e.g., from smart watches) for lifestyle interventions in Type 2 diabetes management (e.g., prevention, reversal). This is part of the Qatar Diabetes Prevention Program funded by QNRF. Other projects include developing cardiovascular and Type 2 diabetes risk scores using EHR data. Polygenic risk scores for coronary artery disease in Qataris are also being developed, and preliminary results are promising.
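For readers unfamiliar with polygenic risk scores, the core computation is simple: a weighted sum of an individual's risk-allele dosages, with weights (effect sizes) taken from association studies. The sketch below is purely illustrative, with made-up effect sizes and genotypes rather than any score used in the Qatari studies.

    # Illustrative sketch only: a polygenic risk score (PRS) as a weighted sum of
    # risk-allele dosages. Effect sizes and genotypes below are made up.
    import numpy as np

    # Per-allele effect sizes (e.g., log-odds from an association study) for five
    # hypothetical variants.
    effect_sizes = np.array([0.12, -0.05, 0.08, 0.20, 0.03])

    # Allele dosages (0, 1, or 2 copies of the risk allele) for three individuals.
    dosages = np.array([
        [2, 1, 0, 1, 2],
        [0, 0, 1, 2, 1],
        [1, 2, 2, 0, 0],
    ])

    # PRS per individual: dot product of dosages with effect sizes.
    prs = dosages @ effect_sizes
    print(prs)  # one score per person; higher = higher estimated genetic risk

Real scores aggregate thousands to millions of variants and must be recalibrated when transferred across populations, which is exactly why validating European-derived scores in Qatari data matters.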
Does relying heavily on AI in healthcare have any drawbacks, and in what way?
Dr. Sabri Boughorbel: Like any other technology, heavy reliance on AI brings drawbacks. For example, security vulnerabilities need to be addressed. Other concerns are social disparity, bias, and fairness. If AI tools are developed on datasets with intrinsic racial bias, the models might reflect these issues during deployment. For example, if the models have been trained mostly on data from European populations, their accuracy on non-European datasets might be poor.
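One way to surface this kind of bias is to evaluate a model separately on each population subgroup rather than reporting a single overall accuracy. The sketch below is purely illustrative, using simulated data and a deliberately shifted second group to show how stratified evaluation can expose a performance gap.

    # Illustrative sketch only: a model trained on one (simulated) population is
    # evaluated separately on a second population whose feature-outcome
    # relationship differs, exposing a drop in performance.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)

    def simulate(n, shift):
        # Two features; the second feature's effect on the outcome differs by group.
        X = rng.normal(0, 1, (n, 2))
        logit = 1.5 * X[:, 0] + (1.0 + shift) * X[:, 1]
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
        return X, y

    # Training data drawn entirely from group A.
    X_train, y_train = simulate(5000, shift=0.0)
    model = LogisticRegression().fit(X_train, y_train)

    # Stratified evaluation: held-out data from group A and from a shifted group B.
    for name, shift in [("group A", 0.0), ("group B", -1.5)]:
        X_test, y_test = simulate(2000, shift)
        auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        print(f"{name}: AUC = {auc:.2f}")

Reporting such stratified metrics, and retraining or recalibrating on locally representative data, is a practical safeguard against deploying a biased model.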
Dr. Ghanim Al-Sulaiti: The more complex the AI models are, the more opaque they become, meaning that, on many occasions, we do not understand how the machine derived an output from a specific input. AI experts refer to this as the black box problem, and they call for such models to be transparent and interpretable to ensure fairness and accountability. The degree of transparency differs based on context and application. In applications in which the use of AI models poses minimal risk to humans, such as email spam filters, there is less need for transparency (i.e., detailed articulation of how the system works and on what data it was trained).
However, in situations in which human lives are at stake, such as in healthcare, there is a need for transparency to build trust in the healthcare system, to ensure that patients receive treatment in a fair manner, and to establish accountability for medical decisions based on machine-learning recommendations.
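One family of techniques for opening up such black boxes asks how much each input feature contributes to a model's predictions. The sketch below is purely illustrative, using simulated data and permutation importance from scikit-learn rather than any method used in a Qatari clinical system.

    # Illustrative sketch only: permutation importance as a simple, model-agnostic
    # way to see which features a black-box model actually relies on.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    n = 2000
    feature_names = ["age", "blood_pressure", "cholesterol", "noise"]
    X = rng.normal(0, 1, (n, 4))
    # Only the first three features drive the simulated outcome.
    logit = 0.8 * X[:, 0] + 0.6 * X[:, 1] + 0.4 * X[:, 2]
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # How much does shuffling each feature degrade held-out performance?
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for name, score in zip(feature_names, result.importances_mean):
        print(f"{name}: {score:.3f}")

Feature attributions like these do not make a model fully transparent, but they give clinicians and regulators a starting point for the accountability described above.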
Dr. Mohamad Saad (Scientist, Statistical Genetics and Bioinformatics), Dr. Sabri Boughorbel (Scientist, Machine Learning for Health), and Dr. Ghanim Al-Sulaiti (Scientist, AI Strategy and Policy) all work at Qatar Computing Research Institute, part of Hamad Bin Khalifa University.