Spotlight #12: Guidance to boost trust, AI to detect stress, Impacts of AI suggestions, A measurable goal for implementing AI
AI Health Hub, 27/11/2023
BSI publishes guidance to boost trust in AI for healthcare
News
To enhance trust in AI products used for medical diagnosis and treatment, the British Standards Institution (BSI) recently released a guidance document titled "Validation framework for the use of AI within healthcare – Specification (BS 30440)". The standard covers a range of healthcare AI products, including regulated medical devices, patient-facing AI technologies, and home monitoring devices. Its development brought together experts from different fields, including clinicians, software engineers, AI specialists, ethicists, and healthcare leaders.
As healthcare AI matures, it is vital to consider how people perceive these products before adopting them. Recognized standards such as this one will therefore help instill confidence in healthcare AI products, to the benefit of patients, healthcare professionals, and society.
AI tool can detect distress in overburdened hospital workers
News
It is time to pay attention to the mental health of healthcare workers. Researchers from New York University Langone Health published a study in the Journal of Medical Internet Research AI demonstrating that AI has the potential to identify psychological distress in hospital workers. The study analyzed the psychotherapy sessions of over 800 healthcare professionals during the initial COVID-19 wave and found that workers who discussed topics such as experiences in hospital units, sleep issues, and mood problems were more likely to receive anxiety and depression diagnoses. Although the increase in risk associated with these topics was modest (3.6%), the model is expected to improve with more data. The findings suggest that natural language processing could become an effective tool for detecting and monitoring symptoms of anxiety and depression, potentially benefiting both healthcare workers and the general population.
You can access the full article here.
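For readers curious about the general approach, below is a minimal, generic sketch of how text from session notes can be turned into a distress-risk score with standard NLP tooling. This is not the NYU Langone model; the example snippets, labels, and pipeline choices are invented purely for illustration.

```python
# A generic text-classification sketch (scikit-learn), NOT the study's model.
# The notes and labels below are fabricated purely to show the mechanics.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "Can't sleep after shifts on the COVID unit, constantly on edge",
    "Mostly talked about scheduling, feeling okay this week",
    "Low mood all week, dreading going back to the ward",
    "Good week overall, spent time with family and rested",
]
labels = [1, 0, 1, 0]  # 1 = later screened positive for anxiety/depression (invented)

# Bag-of-words features (unigrams + bigrams) feeding a logistic regression
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, labels)

# Probability that a new note signals distress (illustrative only)
print(model.predict_proba(["exhausted and anxious before every shift"])[0][1])
```

Real models in this space are trained on much larger, de-identified corpora and validated against clinical screening instruments, but the basic flow of turning language into features and features into risk estimates is the same.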
The impact of AI suggestions on radiologists' decisions: a pilot study of explainability and attitudinal priming interventions in mammography examination
Scientific Article
If you want to know how AI suggestions can influence clinicians' decision-making, check out this paper. It has been shown that medical professionals are prone to follow incorrect suggestions made by algorithms, possibly because they rely on "mental shortcuts" or hold particular attitudes toward the system. Accordingly, the authors designed a quasi-experiment to test whether explainability inputs and attitudes toward AI moderate how AI suggestions affect radiologists' decisions. Although they found no moderating effect for either explainability inputs or attitude toward AI, they did show how radiologists' decisions were influenced by AI suggestions. The main findings of the study are:
When radiologists consulted AI suggestions, they were more likely to make incorrect decisions when the AI's suggestions were incorrect.
Radiologists' pattern of under-diagnosis, correct diagnosis, and over-diagnosis also mirrored that of the AI system (a toy illustration of this kind of agreement tabulation is sketched after this list).
The authors also conducted a path analysis to show how various factors were associated with the different decision types.
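To make the agreement pattern concrete, here is a toy sketch of how AI suggestions could be cross-tabulated against radiologists' final calls. It is not the paper's analysis; the cases below are invented.

```python
# Toy illustration of tabulating radiologist decisions against AI suggestions.
# The cases are fabricated; the paper's actual data and analysis differ.
import pandas as pd

df = pd.DataFrame({
    # Each case coded as under-, correct, or over-diagnosis relative to ground truth
    "ai_suggestion":    ["correct", "correct", "over", "under", "over", "correct"],
    "radiologist_call": ["correct", "correct", "over", "under", "correct", "over"],
})

# Rows = AI suggestion, columns = radiologist's final decision.
# A heavy diagonal corresponds to the finding that radiologists'
# under-/correct/over-diagnosis pattern tracked the AI's.
print(pd.crosstab(df["ai_suggestion"], df["radiologist_call"], normalize="index"))
```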
Implementing AI? Healthcare organizations "must have a specific measurable goal"
News
Tom Hallisey, digital health strategy lead and board member at Columbia Memorial Health, will speak about the health system's AI work at the 2023 Healthcare Information and Management Systems Society (HIMSS) AI in Healthcare Forum, December 14-15 in San Diego. The article's author talked to Hallisey to get a preview of the session and an idea of how to get started with healthcare AI.
Hallisey pointed out that a key question to ask at the start of a healthcare AI journey is what problem you are trying to solve; a specific, measurable goal is needed to show the value of generative AI tools. Moreover, to ensure AI investments are targeted for maximum impact, establishing a committee is beneficial. This group collects and prioritizes ideas, guides resource selection, reviews pilot results, and helps scale successful efforts. Finally, they discussed one tip for ensuring long-term success with a healthcare AI investment: a continuous evaluation framework, because AI tools are dynamic and what works in one population might not work in another. Overall, a healthcare AI journey should start by pinpointing the problem to solve, setting specific, measurable goals, and adopting a measurable approach to selecting tools for successful implementation and integration.
Enjoy reading!
This is my very first post on AI Health Hub, and I am very happy to join the community. I hope you enjoy this week's findings.
Leave a comment below, or let us know what you would like to read about on AI Health Hub.