Does Explainability Enhance the Effectiveness of AI Models in Public Health? The COVID-19 Context

Neda Ahmadi, Mehrbakhsh Nilashi

Abstract


Generative AI models such as ChatGPT offer versatile applications in healthcare, particularly in the COVID-19 era. While these models show promise for medical decision support, explainability remains imperative: understanding how an AI system arrives at its recommendations is crucial for transparency and trust, especially in critical areas such as COVID-19 management. However, the decision-making processes of these models remain difficult to elucidate, which may hinder their acceptance in medical practice. This paper discusses the need to prioritize explainability mechanisms tailored to AI-powered language models, particularly in the context of COVID-19-related healthcare decisions. By shedding light on AI reasoning, such mechanisms not only enhance transparency and accountability but also foster trust among medical professionals, facilitating informed collaboration between human expertise and AI capabilities.


Keywords


Explainability, ChatGPT, COVID-19, XAI, Healthcare, Machine Learning

