{"id":2886,"date":"2026-01-17T20:29:49","date_gmt":"2026-01-17T20:29:49","guid":{"rendered":"https:\/\/americanvoiceofhealth.com\/index.php\/2026\/01\/17\/ai-is-speeding-into-healthcare-who-should-regulate-it\/"},"modified":"2026-01-17T20:29:49","modified_gmt":"2026-01-17T20:29:49","slug":"ai-is-speeding-into-healthcare-who-should-regulate-it","status":"publish","type":"post","link":"https:\/\/americanvoiceofhealth.com\/index.php\/2026\/01\/17\/ai-is-speeding-into-healthcare-who-should-regulate-it\/","title":{"rendered":"AI is speeding into healthcare. Who should regulate it?"},"content":{"rendered":"<header class=\"wp-block-harvard-gazette-article-header alignfull article-header is-style-classic has-colored-heading has-media-on-the-left\">\n<figure class=\"wp-block-image\"><figcaption class=\"wp-element-caption\">\n<p class=\"wp-element-caption--caption\"> I. Glenn Cohen.<\/p>\n<p class=\"wp-element-caption--credit\">File photo by Niles Singer\/Harvard Staff Photographer<\/p>\n<\/figcaption><\/figure>\n<div class=\"article-header__content\">\n\t\t\t<a class=\"article-header__category\" href=\"https:\/\/news.harvard.edu\/gazette\/section\/health\/\"><br \/>\n\t\t\tHealth\t\t<\/a><\/p>\n<h1 class=\"article-header__title wp-block-heading \">\n\t\tAI is speeding into healthcare. 
Who should regulate it?\t<\/h1>\n<p class=\"article-header__subheading wp-block-heading\">\n\t\t\tMedical ethicist details need to balance thoughtful limits while avoiding unnecessary hurdles as industry groups issue guidelines\t\t<\/p>\n<div class=\"article-header__meta\">\n<div class=\"wp-block-post-author\">\n<address class=\"wp-block-post-author__content\">\n<p class=\"author wp-block-post-author__name\">\n\t\tAlvin Powell\t<\/p>\n<p class=\"wp-block-post-author__byline\">\n\t\t\tHarvard Staff Writer\t\t<\/p>\n<\/p><\/address>\n<\/p><\/div>\n<p>\t\t<time class=\"article-header__date\" datetime=\"2026-01-12\"><br \/>\n\t\t\tJanuary 12, 2026\t\t<\/time><\/p>\n<p>\t\t<span class=\"article-header__reading-time\"><br \/>\n\t\t\t8 min read\t\t<\/span>\n\t<\/div>\n<\/p><\/div>\n<\/header>\n<div class=\"wp-block-group alignwide has-global-padding is-content-justification-right is-layout-constrained wp-container-core-group-is-layout-f1f2ed93 wp-block-group-is-layout-constrained\">\n<p>AI is moving quickly into healthcare, bringing potential benefits but also possible pitfalls such as bias that drives unequal care and burnout of physicians and other healthcare workers. It remains undecided how it should be regulated in the U.S.<\/p>\n<p>In September, the hospital-accrediting <a href=\"https:\/\/www.jointcommission.org\/en-us\">Joint Commission<\/a> and the <a href=\"https:\/\/www.chai.org\/\">Coalition for Health AI<\/a> issued <a href=\"https:\/\/www.jointcommission.org\/en-us\/knowledge-library\/news\/2025-09-jc-and-chai-release-initial-guidance-to-support-responsible-ai-adoption\">recommendations<\/a> for implementing artificial intelligence in medical care, with the burden for compliance falling largely on individual facilities.<\/p>\n<p><a href=\"https:\/\/petrieflom.law.harvard.edu\/leadership-staff\/iglenn-cohen\/\">I. 
Glenn Cohen<\/a>, faculty director of <a href=\"http:\/\/www.hls.harvard.edu\">Harvard Law School<\/a>\u2019s <a href=\"https:\/\/petrieflom.law.harvard.edu\/\">Petrie-Flom Center for Health Law, Biotechnology, and Bioethics<\/a>, and colleagues suggested in the <a href=\"https:\/\/jamanetwork.com\/journals\/jama\/article-abstract\/2842278\">Journal of the American Medical Association<\/a> that the guidelines are a good start, but changes to ease likely regulatory and financial burdens \u2014 particularly on small hospital systems \u2014 are needed.<\/p>\n<p>In this edited conversation, Cohen, the James A. Attwood and Leslie Williams Professor of Law, discussed the difficulty of balancing thoughtful regulation with avoiding unnecessary roadblocks to game-changing innovation amid rapid adoption.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-narrow-single-line\" \/>\n<p><strong>Is it clear that AI in healthcare needs regulation?<\/strong><\/p>\n<p>Whenever medical AI handles anything with medium to high risk, you want regulation: internal self-regulation or external governmental regulation. It\u2019s mostly been internal thus far, and there are differences in how each hospital system validates, reviews, and monitors healthcare AI.<\/p>\n<p>When done on a hospital-by-hospital basis like this, costs to do this kind of evaluation and monitoring can be significant, which means some hospitals can do this, and some can\u2019t. By contrast, top-down regulation is slower \u2014 maybe too slow for some forms of progress in this space.<\/p>\n<p>There\u2019s also a complicated mix of AI products going into hospitals. Some may assist with things like internal purchasing and review, but many more are clinical or clinically adjacent.<\/p>\n<p>Some medical AI products interface directly with consumers, such as chatbots that people might be using for their mental health. 
For that, we don\u2019t even have internal hospital review, and the need for regulation is much clearer.<\/p>\n<p><strong>With technology moving so fast, is speed important even in regulation?<\/strong><\/p>\n<p>This is an innovation ecosystem that has a lot of startup energy, which is great. But you\u2019re talking about something that can scale extremely quickly, without a lot of internal review.<\/p>\n<p>Whenever you enter what I call a \u201crace dynamic,\u201d there is a risk that ethics is left behind pretty quickly. Whether the race is to be the first to develop something, a race for a startup against money running out, or a national race between countries trying to develop artificial intelligence, the pressures of time and urgency make it easier to overlook ethical issues.<\/p>\n<p>The vast majority of medical AI is never reviewed by a federal regulator \u2014 and probably no state regulator. We want to have standards for healthcare AI and an incentive to adopt standards.<\/p>\n<p>But putting everything through the rigorous FDA process for drugs or even the one for medical devices would in many cases be prohibitively expensive and prohibitively slow for those enamored with the rate of development in Silicon Valley.<\/p>\n<p>On the flip side, if they perform badly, many of these technologies are a much greater risk to the general populace than the average device on the market.<\/p>\n<p>If you take an aspirin or a statin, there are differences in how they work in different people, but to a large extent we can characterize those differences ahead of time. When medical AI is reading an X-ray or doing something in the mental health space, how it\u2019s implemented is key to its performance.<\/p>\n<p>You might get very different results in different hospital systems, based on resources, staffing, training, and the experience and age of people using them, so one has to study implementation very carefully. 
This would create an unusual challenge for an agency like FDA \u2014 which often says it does not regulate the practice of medicine \u2014 because where the approval of an AI system stops and the practice of medicine begins is complicated.<\/p>\n<p><strong>Your study examines a regulatory system suggested by the Joint Commission, a hospital accreditor, and the Coalition for Health AI. Would an accreditor naturally be something that hospitals would \u2014 or would have to \u2014 pay attention to?<\/strong><\/p>\n<p>Exactly. In almost every state, in order to be able to bill Medicare and Medicaid you need to be accredited by the Joint Commission. This is a huge part of almost every hospital&#8217;s business.<\/p>\n<p>There is a robust process to qualify for accreditation, and every so often you are re-evaluated. It\u2019s serious business.<\/p>\n<p>The Joint Commission hasn\u2019t yet said that these AI rules are going to be part of our next accreditation, but these guidelines are a sign that they may be going in that direction.<\/p>\n<blockquote class=\"wp-block-quote alignwide is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cI speak about legal and ethical issues in this space, but I\u2019m an optimist about this. 
I think that, in 10 years, the world will be significantly better off because of medical artificial intelligence.\u201d<\/p>\n<\/blockquote>\n<p><strong>Do you find some of the recommendations wanting?<\/strong><\/p>\n<p>Some are more demanding than I expected, but I actually think they\u2019re pretty good.<\/p>\n<p>Requiring that \u2014 when appropriate \u2014 patients should be notified when AI directly impacts their care and that \u2014 when relevant \u2014 consent to use an AI agent should be obtained, is a strong position to take.<\/p>\n<p>A lot of scholars and other organizations don\u2019t take the position that medical AI should always be disclosed when it directly impacts care, let alone that informed consent should always be sought.<\/p>\n<p>The guidelines also require ongoing quality monitoring and continual testing, validation, and monitoring of AI performance.<\/p>\n<p>Monitoring frequency would scale to risk levels in patient care. These are good things to do, but difficult and expensive. You\u2019ll have to assemble multidisciplinary AI committees and constantly measure for accuracy, errors, adverse events, equity, and bias across populations.<\/p>\n<p>If taken seriously, it will probably be infeasible for many hospital systems in the U.S. They will have to make a threshold decision whether they\u2019re going to be AI adopters.<\/p>\n<p><strong>You point out in your JAMA article that most hospitals in the U.S. are small community hospitals, and that resources are a major issue.<\/strong><\/p>\n<p>I am told by people in major hospital systems that already do this that to properly vet a complex new algorithm and its implementation can cost $300,000 to half a million dollars. That\u2019s simply out of reach for many hospital systems.<\/p>\n<p>There are actually going to be things in the implementation that are specific to each hospital, but there are also going to be things that might be valuable to know that are common for many hospital systems. 
The idea that we\u2019d do the evaluation repeatedly, in multiple places, and not share what\u2019s learned seems like a real waste.<\/p>\n<p>If the answer is, \u201cIf you can\u2019t play in the big leagues, you shouldn\u2019t step up to bat,\u201d that creates a have\/have-not distribution in terms of healthcare access. We already have that as to healthcare generally in this country, but this would further that dynamic at the hospital level.<\/p>\n<p>Your access to AI that helps medical care would be determined by whether you\u2019re in networks of large academic medical centers that proliferate in places like Boston or San Francisco, rather than other parts of the country that don\u2019t have that kind of medical infrastructure.<\/p>\n<p>The goal, ideally, would be more centralization and more sharing of information, but these recommendations put a lot of the onus on individual hospitals.<\/p>\n<p><strong>Doesn\u2019t a system where some hospitals can\u2019t participate negate the potential benefits from this latest generation of AI, which can assist places that are resource-poor by providing expertise that might be missing or hard to find?<\/strong><\/p>\n<p>It would be a shame if you\u2019ve got a great AI that\u2019s helping people and might provide the most benefit in lower-resource settings, and yet those settings are unable to meet the regulatory requirements in order to implement it.<\/p>\n<p>It would also be a sad reality, as an ethical matter, if it turns out that we\u2019re training these models on data from patients across the country, and many of those patients will never get the benefit of these models.<\/p>\n<p><strong>If the answer is that the vetting and monitoring of medical AI should be done by a larger entity, is that the government?<\/strong><\/p>\n<p>The Biden administration\u2019s idea was to have \u201cassurance labs\u201d \u2014 private-sector organizations that in partnership with the government could vet the algorithms under agreed-upon 
standards such that healthcare organizations could rely on them.<\/p>\n<p>The Trump administration agrees on the problem but has signaled that it doesn\u2019t like the approach. It has yet to fully indicate what its vision is.<\/p>\n<p><strong>It sounds like a complex landscape, as well as a fast-moving one.<\/strong><\/p>\n<p>Complex, but also challenging and interesting.<\/p>\n<p>I speak about legal and ethical issues in this space, but I\u2019m an optimist about this. I think that, in 10 years, the world will be significantly better off because of medical artificial intelligence.<\/p>\n<p>The diffusion of those technologies to less-resourced settings is very exciting, but only if we align the incentives appropriately. That doesn\u2019t happen by accident, and it is important that these distributional concerns be part of any attempt to legislate in the area.<\/p>\n<\/div>\n\n","protected":false},"excerpt":{"rendered":"<p>I. Glenn Cohen. File photo by Niles Singer\/Harvard Staff Photographer Health AI is speeding into healthcare. Who should regulate it? 
Medical ethicist details need to balance thoughtful limits while avoiding unnecessary hurdles as industry groups issue guidelines Alvin Powell Harvard Staff Writer January 12, 2026 8 min read AI is moving quickly into healthcare, bringing &#8230;<\/p>\n","protected":false},"author":1,"featured_media":2887,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"loftocean_post_primary_category":0,"loftocean_post_format_gallery":"","loftocean_post_format_gallery_ids":"","loftocean_post_format_gallery_urls":"","loftocean_post_format_video_id":0,"loftocean_post_format_video_url":"","loftocean_post_format_video_type":"","loftocean_post_format_video":"","loftocean_post_format_audio_type":"","loftocean_post_format_audio_url":"","loftocean_post_format_audio_id":0,"loftocean_post_format_audio":"","loftocean-featured-post":"","loftocean-like-count":0,"loftocean-view-count":132,"tinysalt_single_post_intro_label":"","tinysalt_single_post_intro_description":"","tinysalt_hide_post_featured_image":"","tinysalt_post_featured_media_position":"","tinysalt_single_site_header_source":"","tinysalt_single_custom_site_header":"0","tinysalt_single_custom_sticky_site_header":"0","tinysalt_single_custom_sticky_site_header_style":"sticky-scroll-up","tinysalt_single_site_footer_source":"","tinysalt_single_custom_site_footer":"0","footnotes":""},"categories":[37],"tags":[],"class_list":["post-2886","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-staying-healthy"],"_links":{"self":[{"href":"https:\/\/americanvoiceofhealth.com\/index.php\/wp-json\/wp\/v2\/posts\/2886","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/americanvoiceofhealth.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/americanvoiceofhealth.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/americanvoiceofhealth.com\/index.php\/wp-json\/wp\/v2\/users\/1"
}],"replies":[{"embeddable":true,"href":"https:\/\/americanvoiceofhealth.com\/index.php\/wp-json\/wp\/v2\/comments?post=2886"}],"version-history":[{"count":0,"href":"https:\/\/americanvoiceofhealth.com\/index.php\/wp-json\/wp\/v2\/posts\/2886\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/americanvoiceofhealth.com\/index.php\/wp-json\/wp\/v2\/media\/2887"}],"wp:attachment":[{"href":"https:\/\/americanvoiceofhealth.com\/index.php\/wp-json\/wp\/v2\/media?parent=2886"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/americanvoiceofhealth.com\/index.php\/wp-json\/wp\/v2\/categories?post=2886"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/americanvoiceofhealth.com\/index.php\/wp-json\/wp\/v2\/tags?post=2886"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}