
Reaction to Joint Statement on the Use of AI Tools in Radiology



Author: John F. Kalafut, PhD

We were excited to see, digest, and react to the Special Report in Radiology: Artificial Intelligence, which presents a practical overview of considerations, guidance, and recommendations for developers, users, procurers, and regulators of AI tools applied in radiology [1]. It is an extremely useful compendium of reference material, real-world experience, and advice for the (very broad) community involved in the lifecycle of AI technologies in medical imaging. As a professional services and solutions firm, we appreciate and agree with nearly all of the recommendations, observations, and considerations throughout the report.

 

Each of the societies, along with their members and institutions, has been a pioneer in developing and bringing both evolutionary and revolutionary digital technologies into the daily practice of medicine since the 1970s. That work was not merely catalyzed by the recent wave of excitement around the last and current generations of data-driven computational methods such as Deep Learning. Radiology is arguably the only sub-specialty in medicine that is, and has been, 100% 'digital' for nearly 20 years. CT, ultrasound, and MRI, for example, are inherently digital; there would be no medical images without advanced mathematical computation, digital signal processing, and networking. Therefore, while many areas of healthcare are only now wrangling with the regulatory, practical, and IT issues of bringing "AI" into the clinic, the collective experience of the radiology community, especially as encoded in reports like this one, should be broadly referenced and built upon.

 

Below are a few observations and insights based on an initial review of the report.

 

  • We acknowledge the challenge and hard work involved in gathering input, creating content, and reaching agreement among many societies across the world. However, the report is unwieldy and attempts to address too broad an audience. It gives general suggestions for some stakeholders, such as software manufacturers and regulators, that take away from the more impactful messages for other stakeholders. It may be better to split this type of statement into smaller parts that are more relevant and actionable for each stakeholder group.


  • We believe the stakeholder groups MOST in need of streamlined guidance, practical assessment tools, and checklists are community healthcare systems, medium-sized radiology groups, independent hospitals, and the vendors striving to serve them with these new technologies. In our experience and practice, this constituency is asking for actionable, realistic guides to assist with the overarching process of selecting, evaluating, deploying, and optimizing AI tools in daily practice. We wonder how actionable the 15 pages of insights here will be for this constituency. Table 4 ("Purchasing Considerations for AI models in radiology") contains excellent points and is a great departure point. The 'devil', however, is in the selection criteria and activities beneath these very high-level points. We hear from healthcare organizations that healthcare AI 'best practices' and guidance published by agencies and AMC-anchored consortia do not scale easily to all levels of practice. Resource-strapped organizations without informaticists and computationally oriented specialists struggle to act on overly general, expert-level guidance.


  • We agree with and applaud the work and engagement the societies have undertaken with regulators to date. We think, however, that the space used here, especially an item like Table 2, would be better saved for a separate report. From personal experience with the pre-market review of a medical AI product, regulators are acutely aware of the items in Table 2 and reflect them both in guidance documents and in the one-on-one interactions manufacturers have with the regulatory bodies. Statements like the one opening the "Key Statement" in Section 5, "Prior to approval [sic], regulators should request information from AI software developers pertaining to the company, clinical use, implementation, product development, demonstration, cost and publications", belie the obligations manufacturers already have in developing regulated software products and the controls mandated by internationally harmonized standards (e.g., ISO 13485, IEC 62304) and national requirements such as US 21 CFR Part 820. Likewise, the statement "Regulators should be particularly attuned to ensuring that solutions have an explicit post-market quality assurance plan" will probably mean little to regulators, because all medical device software used by radiologists (including PACS) is monitored by its manufacturer and must be patched, updated, or upgraded against a plan that is a key part of the manufacturer's Quality Management System. IEC 62304 defines a software development life cycle that spans the life of the software from definition of its requirements to its release, use, and eventual removal from the market [2].


  • There is a valid need to identify and address the specific, nuanced challenges regarding the safety and efficacy of medical imaging AI and the quality and risk methodologies most applicable to them. We think a dialogue with professionals from the software quality assurance and regulatory science communities would result in more robust, experience-driven standards and practices. Interactions and collaborations with organizations such as the IEEE, INCOSE, or the ISQA could be very fruitful in constructively filling the gaps in current quality standards and software lifecycle practices. With few exceptions, the experience gained in practicing medicine does not equate to an understanding of the principles of quality engineering, product risk management, and the regulatory science and frameworks applicable to medical device development.


  • We wholeheartedly agree with the report's point that manufacturers' reporting of the details and attributes of ML model training, testing, and development to regulatory bodies must be standardized. Standardized information and presentation will enable better post-market assessment and understanding by purchasers and users of these software tools, which is a pain point for potential customers and implementers of imaging AI today. (A hypothetical sketch of what such a standardized, machine-readable record could look like appears after this list.)


  • We also welcome the development of such guides, templates, and information so that both standards bodies and regulators may incorporate them into pre-market clearance and approval processes.


  • We find it a little odd that radiomics is singled out (Section 4.C) to typify the opaqueness of imaging AI. Agreed, some high-dimensional features (e.g., 'busyness') and information-theoretic measures may not be obviously correlated with disease processes, but radiomics measures all trace back to pixel and voxel characteristics. Inherent to radiomic processing is the down-selection of quantitative features and a 'trail' of factors that can be interrogated; the short sketch following this list illustrates how such features are computed directly from the voxels of a segmented region. Because radiomic features relate back to actual spatial structures (e.g., a tumor), there is a much better chance of correlating those findings with true molecular and biologic processes [3]. Yes, Vision Transformers and attention-based analysis methods can also be used to detect correlations between imaging features and pathophysiology, but most DL approaches are truly 'black box', and connecting the features of interest encoded inside a DL model to such processes is not readily achievable with currently available 'explainability' methods.
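
To make the radiomics point concrete, here is a minimal, hypothetical sketch (plain NumPy, not a validated pipeline such as pyradiomics) showing that first-order radiomic features are deterministic functions of the voxel intensities inside a segmented region, so every reported value can be traced back to specific pixels and voxels. The feature set, array shapes, and synthetic 'lesion' below are purely illustrative assumptions.

```python
# Illustrative only: first-order radiomic-style features computed directly
# from the voxel intensities inside a region of interest (ROI).
import numpy as np

def first_order_features(volume: np.ndarray, mask: np.ndarray, bins: int = 32) -> dict:
    """Compute a few first-order features from the voxels where mask is True."""
    voxels = volume[mask]                          # the ROI intensities, nothing else
    hist, _ = np.histogram(voxels, bins=bins)
    p = hist / hist.sum()                          # discrete intensity distribution
    p = p[p > 0]
    return {
        "mean": float(voxels.mean()),
        "std": float(voxels.std()),
        "skewness": float(((voxels - voxels.mean()) ** 3).mean()
                          / (voxels.std() ** 3 + 1e-12)),
        "entropy": float(-(p * np.log2(p)).sum()),  # histogram (Shannon) entropy
    }

# Toy example: a synthetic 'lesion' ROI inside a noisy background volume.
rng = np.random.default_rng(0)
volume = rng.normal(100, 10, size=(64, 64, 16))
mask = np.zeros(volume.shape, dtype=bool)
mask[20:40, 20:40, 4:12] = True
volume[mask] += 40                                  # brighter lesion-like region

print(first_order_features(volume, mask))
```

Because each feature is a closed-form function of the segmented voxels, an auditor can recompute it, perturb the segmentation, and see exactly how the value changes, which is the interrogable 'trail' referred to above.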

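As a companion to the standardization point in the list above, the following is a hypothetical sketch of what a machine-readable 'model facts' record from a manufacturer might contain. The field names and example values are our own assumptions, not a published schema or any vendor's actual product data; the point is that a consistent, structured summary of training, testing, and post-market monitoring details would let purchasers compare products and feed monitoring tools directly.

```python
# Hypothetical 'model facts' record; field names and values are illustrative
# assumptions, not a proposed standard or real product data.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelFacts:
    name: str
    version: str
    intended_use: str
    regulatory_status: str                                 # e.g., cleared, CE-marked, research-only
    training_data: dict = field(default_factory=dict)      # sources, sites, date ranges
    test_performance: dict = field(default_factory=dict)   # metrics with confidence intervals
    known_limitations: list = field(default_factory=list)
    post_market_plan: str = ""                              # how drift and failures are monitored

facts = ModelFacts(
    name="chest-ct-nodule-detector",
    version="2.3.1",
    intended_use="Triage of incidental pulmonary nodules on non-contrast chest CT",
    regulatory_status="510(k) cleared (hypothetical)",
    training_data={"sites": 12, "exams": 85000, "scanner_vendors": ["A", "B", "C"]},
    test_performance={"sensitivity": [0.91, 0.88, 0.94], "specificity": [0.87, 0.84, 0.90]},
    known_limitations=["Not validated for pediatric exams", "Slice thickness > 3 mm untested"],
    post_market_plan="Quarterly drift report against a locally curated reference set",
)

print(json.dumps(asdict(facts), indent=2))
```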

References:

 [1]   A. P. Brady et al., “Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement from the ACR, CAR, ESR, RANZCR and RSNA,” Radiol. Artif. Intell., vol. 6, no. 1, p. e230513, Jan. 2024, doi: 10.1148/ryai.230513.


[2]   IEC 62304:2006, Medical device software — Software life cycle processes. ISBN: 978-2-8322-4638-2.


[3]    M. R. Tomaszewski and R. J. Gillies, “The Biological Meaning of Radiomic Features,” Radiology, vol. 298, no. 3, pp. 505–516, Mar. 2021, doi: 10.1148/radiol.2021202553.

 

 

 

Asher Informatics PBC

The solutions we've built at Asher Informatics align well with the recommendations made in the report. Our services, however, are intended to help translate frameworks like this one from journal article into practice at healthcare organizations of all types.

 

Asher Informatics PBC provides independent clinical AI performance and quality assurance services. The firm strives to make clinical AI accessible and beneficial for all health systems and patients, and offers a range of clinical AI solutions designed to meet the specific needs and challenges of each health system. Utilizing cutting-edge technologies and methodologies, it provides evidence-based performance assessment of deployed clinical AI models to improve confidence and measure ROI. Its goal is to ensure that affordable and ethical clinical AI solutions are selected to match the needs and resources of each system.

 
