Clin Exp Emerg Med > Volume 12(4); 2025 > Article
Cha and Kim: Ethical considerations of artificial intelligence in emergency medicine for triage and resource allocation: a scoping review

Abstract

Objective

This study aims to systematically review the ethical and legal discussions regarding the utilization of artificial intelligence (AI) for patient triage and resource allocation in emergency medicine, and to identify the current state of discussions, their limitations, and future research directions.

Methods

A comprehensive literature search was conducted following scoping review methodology. Relevant literature published after January 2020 was searched in the Web of Science, Scopus, CINAHL, PubMed, and Cochrane Library databases. Based on a PCC (population, concept, and context) framework (emergency patients/medical staff; triage, resource allocation; and emergency medicine with AI application), a final selection of 27 articles was analyzed.

Results

The selected literature raised various ethical and legal issues related to the introduction of AI triage systems and AI utilization in emergency medicine, including data privacy, algorithmic bias, automation dependency, accountability, and explainability. In response to these issues, human-centered design, implementation of explainable AI, establishment of regulatory frameworks, continuous verification and evaluation, and ensuring human-in-the-loop were discussed as major solutions. However, discussions on the risks of “persuasive AI” that could mislead users, ethical issues of generative AI, and social validation and patient and public involvement were found to be insufficient.

Conclusion

Ethical and legal discussions regarding AI in emergency medicine are evolving toward seeking concrete solutions at technical, institutional, and relational dimensions. However, in-depth research on ethical challenges, such as reflecting the specificity of rapidly developing AI and the values of emergency medicine, is urgently required.

INTRODUCTION

Although the use of artificial intelligence (AI) in the medical field is being examined from multiple perspectives, emergency resource triage is an area that urgently requires ethical and legal review. This need is particularly evident in the context of emergency medicine, where clinical realities demand rapid, high-stakes decision-making under conditions of uncertainty and limited resources. For example, emergency physicians are often confronted with overcrowded emergency rooms, limited intensive care unit capacity, and the simultaneous arrival of critically ill patients. In such scenarios, AI-driven systems could provide support by analyzing patient data in real time, forecasting deterioration risk, and suggesting fairer allocation strategies for beds, ventilators, or transfers.
The COVID-19 pandemic highlighted these challenges on a global scale. Prior to COVID-19, medical ethics and jurisprudence sought to justify prioritization through medical justice theory or by persuading patients and society based on the premise that medical staff, especially emergency medicine physicians, would classify patients according to severity in emergency triage [1]. However, when emergency rooms or intensive care units reach full capacity and additional patients arrive, ethical and legal principles are not easily applied, and leaving such choices to medical staff increases moral distress for healthcare workers [2]. Therefore, the triage criteria developed during the pandemic involved scoring based on patient evaluations by medical staff, with additional weighting factors applied at the hospital system level for patient allocation [3]. One lesson was that while frontline evaluation should remain in the hands of physicians, algorithms could assist with system-level decision-making, such as bed allocation and patient transfers, where human judgment alone may be insufficient.
A frequently raised criticism of such algorithms is their simplicity [4]. Making decisions about bed allocation based on only one or two scores, without considering patients’ broader circumstances, ignores individual backgrounds and special situations. Even when patients are allocated according to ranking, the outcome is often perceived as unacceptable. AI triage is now receiving the most attention as a promising alternative to address this issue.
Unlike traditional statistic-based algorithms, AI can make more nuanced judgments by training on large datasets, which seems particularly relevant in triage. Multiple studies are already underway, and many have retrospectively applied AI algorithms to hospital data to verify their efficiency. The potential of AI to address challenges in emergency triage—namely, reducing individual clinicians’ moral distress while improving the fairness of resource allocation—represents a significant strength. However, implementing these algorithms in clinical practice without examining their ethical and legal implications could create even greater problems.
Therefore, this study seeks to identify patterns in how ethical and legal discussions are addressed in the literature on triage AI. In particular, since critical reviews of AI use in healthcare have not yet been sufficiently conducted, examining ethical and legal debates around triage AI may serve as a case study to illustrate current approaches, limitations, and research gaps in healthcare AI governance more broadly.
Accordingly, this study aimed to systematically map the existing literature on the application of AI for decision-making support in triage, resource allocation, and bed assignment in the context of emergency medicine through a scoping review. By including not only emergency physicians but also general physicians as the target population, this review was designed to explore how AI-based tools and algorithms are developed, applied, and evaluated to improve efficiency, accuracy, and patient outcomes in high-pressure clinical environments. The key research question guiding this review was: What is the current scope and nature of evidence regarding the use of artificial intelligence for triage, resource allocation, or bed assignment involving emergency physicians in emergency and clinical settings?

METHODS

This scoping review was conducted in accordance with the Joanna Briggs Institute (JBI) methodology for scoping reviews and was reported following the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines [5].

Eligibility criteria

The eligibility criteria were structured according to the PCC (population, concept, and context) framework recommended by JBI.
(1) Population: Studies involving emergency patients or physicians (including related clinical roles such as clinicians, emergency nurses, healthcare providers, and health personnel) were considered eligible.
(2) Concept: This review included studies that investigated triage, resource allocation, or bed assignment, encompassing related terms such as patient flow, crowding, overcrowding, and patient acuity.
(3) Context: The setting was limited to emergency medicine or clinical medicine where AI methods were applied. AI-related concepts included machine learning, deep learning, natural language processing, computer vision, predictive modeling, decision-support systems, and algorithms.
Only studies published from January 2020 onward were included to capture contemporary developments in AI. Articles in English were considered, with no restrictions on study design, provided they met the PCC criteria.

Information sources

A comprehensive literature search was conducted using the Web of Science, Scopus, CINAHL, PubMed, and Cochrane Library databases. In addition, reference lists of relevant articles were screened to identify additional sources. The search strategy was developed iteratively to reflect the PCC framework and was adapted to the indexing terms and syntax of each database, with three keyword groups formulated to capture the study scope (Supplementary Table 1). The final search was conducted on July 18, 2025.

Study selection and data extraction

The initial database search yielded 444 records: 109 from the Web of Science, 248 from Scopus, 67 from PubMed, 15 from CINAHL, and 5 from the Cochrane Library. After duplicate removal and limiting to studies published from 2020 onward, 217 records remained. Titles and abstracts were independently screened by two reviewers according to the eligibility criteria, resulting in the exclusion of 183 records. A full-text review was performed on 34 articles, leading to the exclusion of 7 that did not meet the PCC framework. Ultimately, 27 studies were included in the final analysis (Fig. 1). Primary reasons for full-text exclusion included absence of ethical or legal discussions regarding AI use, exclusive focus on non-AI or rule-based tools, and lack of relevance to triage, resource allocation, or bed assignment. Discrepancies during the screening process were resolved through discussion, and a third reviewer was consulted when consensus could not be reached. A standardized data-charting form was developed to capture relevant information, including author(s), year of publication, study design, AI method, ethical discussions, and legal or regulatory discussions.

Data analysis and presentation

The extracted data were summarized using thematic synthesis for qualitative findings. Results are presented in tabular form alongside a narrative summary of ethical, legal, and regulatory discussions to highlight patterns, gaps, and trends in the application of AI to triage in emergency medicine (Table 1) [6–32].

RESULTS

The 27 papers consisted of 4 prospective studies, 8 retrospective studies, 13 reviews, 1 normative analysis, and 1 position paper. The prospective studies included two surveys and two qualitative interviews. No prospectively conducted AI triage clinical studies were found in the literature. Currently, all research on AI triage models has been conducted retrospectively, evaluating model performance when applied to existing clinical data. This indirectly demonstrates barriers to the clinical application of AI.

Ethics

Concerns and principles

Studies addressing ethical concerns in AI triage and emergency medicine raised issues related to data management, human-AI relationships (and their impact on patient-physician relationships), trust, and bias (including health inequality) [17,19,21]. Concerns included inadequate consent and data management, negative effects on patient-physician relationships (e.g., focusing only on quantitative aspects), risks of AI errors, and the potential for algorithmic bias to exacerbate existing inequalities.
The first concern involves data quality, privacy, and security. AI models are trained on vast amounts of sensitive patient data, and privacy and security issues arise when processing patient data in real-world use [7,8,11,15,17,19–21,25,30]. Additionally, data quality problems (e.g., missing data and measurement errors) can directly affect model performance [8,13,14,20,25,29].
The second concern is algorithmic bias and discrimination. AI models can replicate existing biases in training data, leading to discriminatory outcomes [7,8,12,13,17,19–21,25,29]. In emergency departments, concerns have been raised regarding waiting time disparities by sex or race [11].
The third concern is automation dependency and overreliance. If AI models do not communicate uncertainty, clinicians may depend excessively on automated results, weakening clinical decision-making [6]. The risk of “automation bias” was also noted [17,23]. Healthcare consumers generally prefer AI to serve as a support tool rather than a replacement for human decision-making [17].
The fourth concern relates to generalization and validation. Most current emergency department AI research is retrospective and limited to specific algorithms, datasets, or clinical environments, often overlooking broader implementation, ethical implications, and system-level integration [6]. Therefore, multicenter prospective studies and randomized controlled trials are needed to demonstrate external validity and clinical effectiveness across diverse demographics, hospital capabilities, and workflows [13–15,17,20,21,31].
The fifth concern involves redistribution of clinical workload [10,13,15,20,25]. Contrary to expectations that AI would reduce administrative tasks, implementation may create new burdens, such as system integration, monitoring, and updates, shifting responsibilities to clinicians, quality management, and IT support.
In response, numerous studies have examined the ethical principles of AI triage. Some reviewed the four traditional principles of medical ethics [7,20], while others addressed AI-specific principles such as data privacy, transparency, and accountability [6,810,12,15,16,24,25,28,31]. The ethical principles of AI in emergency medicine are as follows:
(1) Human-centered design and collaboration: AI should support and augment, not replace, human experts [23,30]. For example, human-in-the-loop (HITL) systems enforce human intervention at the algorithm and system levels [22]. Collaboration between medical staff and AI developers is essential for system design and implementation [13,14,20,23,25].
(2) Quantification and reporting of uncertainty: AI models should explicitly quantify prediction uncertainty and present it in an understandable way, enabling clinicians to assess reliability and limitations [6].
(3) AI literacy and education: Continuous training programs for medical professionals are essential for safe and effective AI use [13–15,19–21,25].
(4) Continuous learning and evaluation: Ongoing research and clinical trials are needed to assess the long-term impacts of AI systems [13,14]. Effectiveness and safety should be continuously monitored to ensure improved patient outcomes [23,25].
(5) Frameworks for ethical implementation: Ethical principles for trustworthy AI, including safety, fairness, transparency, accountability, explainability, interpretability, human autonomy, and privacy, should be applied throughout the AI lifecycle [12,19,20,25]. Frameworks such as learning health systems (LHS), which integrate clinical practice and AI research through data, can help guide ethical implementation [23].
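Principle (2) above, quantification and reporting of uncertainty, can be illustrated with a minimal sketch. It assumes a hypothetical ensemble of triage risk models whose disagreement serves as a simple uncertainty estimate; the function name and the 0.15 review threshold are illustrative choices, not values drawn from the reviewed studies.

```python
from statistics import mean, pstdev

def triage_with_uncertainty(member_scores):
    """Combine risk scores from a model ensemble and quantify their spread."""
    risk = mean(member_scores)           # point estimate in [0, 1]
    uncertainty = pstdev(member_scores)  # disagreement across ensemble members
    # Flag low-confidence predictions for mandatory clinician review
    # (0.15 is an illustrative threshold, not a validated cutoff).
    needs_review = uncertainty > 0.15
    return {"risk": round(risk, 3),
            "uncertainty": round(uncertainty, 3),
            "needs_review": needs_review}

# Members agree: low uncertainty, no extra review triggered.
print(triage_with_uncertainty([0.81, 0.79, 0.80]))
# Members disagree: high uncertainty, route to a clinician.
print(triage_with_uncertainty([0.20, 0.85, 0.55]))
```

Presenting the spread alongside the point estimate, rather than the score alone, is what lets a clinician judge when the model itself is unsure.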

Empathy and human-AI interaction

AI systems can affect the autonomy of both medical staff and patients [7,12,17,19–21]. Patients may worry that they will be unable to ask questions or receive explanations about their treatment or diagnosis [17]. One study emphasized the role of empathy in emergency department triage, noting that AI-assisted triage may not adequately preserve empathetic interactions [30]. Human factors such as empathy, nonverbal cues, and the virtues of care must be considered ethical issues, as these are areas where AI cannot fully substitute for human interaction.
Concerns have also been raised that introducing AI systems may lead to the dehumanization of medical services [7]. Specifically, reducing empathetic interactions between clinicians and patients is a risk [30]. As AI systems integration increases, physical interactions with patients may decline, potentially weakening empathetic care. One study highlighted the concern that algorithms, by prioritizing quantitative over qualitative aspects, may erode patient-provider relationships and neglect humanistic care [19].

Laws

Regulatory needs and considerations

Several studies reviewed regulatory needs and emphasized the importance of policy clarity for AI in emergency medical decision-making [12,13,17–23,27,28,30,31]. Key areas include compliance with data protection, clarification of roles and responsibilities, and governance at regional and national levels.
First, there is a need for a regulatory framework and guidelines. Rigorous testing, evaluation, and monitoring by government agencies or professional regulatory bodies are essential for securing trust and acceptance of AI systems [17,20].
Second, standards and guidelines must be developed. The AI in Healthcare Guidelines (AIHGIe) issued by the Ministry of Health of Singapore and the International Medical Device Regulators Forum (IMDRF) define implementation standards for measuring and evaluating the clinical outcomes of AI medical devices [23]. The US National Institute of Standards and Technology introduced an AI risk management framework and principles for explainable AI (XAI) [24].
Third, there is a need for robust data protection frameworks. Concerns exist regarding potential privacy violations in patient data processing [8,15], underscoring the need for strong protections.
Fourth, the institutional enforcement of privacy technologies is required. Techniques such as data anonymization, differential privacy, and federated learning should be mandated or strongly recommended [25].
Finally, there is a need for improved data quality and consistency. Because poor data quality undermines model performance, standardizing data collection and ensuring smooth integration with the electronic health record are critical for adoption [14,25].
Studies also highlighted the value of regulatory sandboxes, which provide controlled environments for testing AI before full-scale implementation [18,19]. These sandboxes support responsible innovation while managing risks through oversight, enabling gradual and safe adoption.
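Of the privacy technologies named above, differential privacy lends itself to a brief sketch. The example below applies the standard Laplace mechanism to a counting query (sensitivity 1), so that a shared statistic, here a hypothetical count of high-acuity patients, can be released with a formal privacy guarantee; the epsilon value and the count are illustrative assumptions.

```python
import math
import random

def dp_count(true_count, epsilon):
    """Release a count under epsilon-differential privacy.

    A counting query has sensitivity 1, so Laplace noise with
    scale 1/epsilon suffices (the standard Laplace mechanism).
    """
    u = random.random() - 0.5  # u in [-0.5, 0.5)
    # Inverse-CDF sampling of the Laplace distribution.
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

random.seed(0)  # fixed seed only so the sketch is reproducible
# e.g., the number of high-acuity patients in a dataset shared for research
print(round(dp_count(42, epsilon=1.0), 2))
```

Smaller epsilon values add more noise and hence stronger privacy at the cost of accuracy, which is exactly the trade-off a regulatory mandate would need to calibrate.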

Liability

While some studies reviewed liability within the regulatory domain, others treated it as a separate issue [6,24,25,27]. When harm occurs from AI use in emergency medical environments, it is problematic that developers may not bear liability for harm caused by algorithms or applications if they provide prior notice about the possibility of errors. Such liability concerns have also been identified as barriers to the introduction of AI in emergency medicine.
Most importantly, when individuals are harmed by medical decisions generated by AI, the distribution of responsibility remains a complex challenge [12]. One paper proposed a collective accountability model in which all stakeholders involved in AI development and deployment share responsibility, thereby avoiding diffusion of responsibility [12]. This approach encourages responsible action by all parties and minimizes harm. Additionally, a program was proposed to charge fees to stakeholders in order to create compensation funds separate from direct liability.
Another study explored expanding the application of other liability models, such as strict liability or user liability, to supplement existing medical malpractice laws [18]. Under this model, third parties such as developers or vendors could bear responsibility for algorithmic problems even without fault. Policymakers may therefore need to consider liability caps to encourage innovation.

Validation

Studies that reviewed validation as a legal issue emphasized it as a requirement to ensure model reliability and safety [10,14–16]. In particular, the use of clinical trial reporting standards for AI, such as CONSORT-AI (Consolidated Standards of Reporting Trials–Artificial Intelligence) [33] and SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence) [34], is recommended.
Validation of AI in emergency medicine is especially important because it relates to life-and-death decisions that directly affect patient outcomes [8,25]. The accuracy of immediate treatment and resource allocation has a direct impact on patient safety [13,14].
Therefore, rigorous testing and validation are required. For instance, healthcare consumers argue that AI must undergo extensive testing and evaluation before implementation [17], emphasizing the need for patient-centered outcome research prior to widespread clinical integration [8]. Most importantly, randomized controlled trials (RCTs) and multicenter studies are essential for building robust evidence of AI efficacy and safety [13,14,25,31]. However, due to the complexity of how AI interacts with clinical judgment, RCTs may not always be a feasible validation method [23]. In such cases, continuous evaluation studies (e.g., within the LHS framework) may serve as an ethically appropriate alternative to ensure safety, efficacy, and ongoing learning.
Furthermore, to fully evaluate AI validity, assessments should include not only technical indicators such as accuracy, sensitivity, specificity, and area under the curve, but also clinically actionable indicators such as positive predictive value, false-positive rates, and real-time workflow impact [14,23].
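The technical and clinically actionable indicators listed above can all be computed from a single confusion matrix. The sketch below uses made-up counts for a hypothetical triage model; note how accuracy can look strong while the positive predictive value remains modest, which is why both kinds of indicator matter for evaluation.

```python
def triage_metrics(tp, fp, tn, fn):
    """Derive validation indicators from predicted vs. actual high-acuity labels."""
    return {
        "sensitivity": tp / (tp + fn),             # recall for high-acuity cases
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),                     # positive predictive value
        "false_positive_rate": fp / (fp + tn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative counts for a hypothetical model evaluated on 1,000 visits.
m = triage_metrics(tp=80, fp=40, tn=860, fn=20)
for name, value in m.items():
    print(f"{name}: {value:.3f}")
```

With these counts, accuracy is 0.94 while PPV is only about 0.67: one in three high-acuity alerts is a false alarm, a workflow impact that accuracy alone conceals.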

Ethicolegal issues

Bias and fairness

Bias in medical AI models, including those used in emergency medicine, can be classified into four types. Algorithmic bias manifests through multiple interconnected pathways and can systematically disadvantage certain patient populations in emergency medicine settings. Data bias emerges because AI algorithms trained on existing datasets inevitably reflect, and potentially amplify, historical inequalities and data incompleteness, particularly affecting underrepresented minorities, women, and elderly populations. Human bias infiltrates the development process through subjective decisions in data selection, preprocessing, and annotation. Systemic bias reflects preexisting healthcare inequalities, including disparities in medical policies, geographic distribution of resources, and insurance-based protocol variations, which become encoded in training datasets and subsequently propagated through AI applications.
In addition to ensuring predictive accuracy, it is essential to conduct fairness assessments to identify these biases [32]. Such assessments should be performed at both the group (e.g., gender, race, ethnicity, and insurance status) and individual levels. Several bias detection and mitigation techniques have been proposed, including training AI models with diverse and representative datasets [8,12,13,28] and applying methods that integrate fairness constraints during algorithm development (e.g., debiasing schemes that weight subgroups equally) [11,21,32].
However, concerns remain about the accuracy–fairness tradeoff, and some studies suggest that improving fairness may compromise model accuracy [35]. Nevertheless, one study argued that this tradeoff does not necessarily occur and that fairness can be achieved without sacrificing performance [11].
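A group-level fairness assessment of the kind described above can be sketched by comparing true-positive rates across subgroups (the "equal opportunity" criterion). The records and group labels below are fabricated solely for illustration.

```python
from collections import defaultdict

def tpr_by_group(records):
    """records: (group, actual_high_acuity, predicted_high_acuity) triples."""
    tp = defaultdict(int)
    positives = defaultdict(int)
    for group, actual, predicted in records:
        if actual:
            positives[group] += 1
            if predicted:
                tp[group] += 1
    return {g: tp[g] / positives[g] for g in positives}

# Fabricated records: the model recovers more true high-acuity cases
# in group A than in group B, an equal-opportunity violation.
records = [
    ("A", True, True), ("A", True, True), ("A", True, False), ("A", False, False),
    ("B", True, True), ("B", True, False), ("B", True, False), ("B", False, False),
]
rates = tpr_by_group(records)
gap = max(rates.values()) - min(rates.values())
print(rates)
print(f"equal-opportunity gap: {gap:.2f}")  # a large gap flags possible bias
```

A model can score well on aggregate accuracy while showing a large gap of this kind, which is why such checks must run per subgroup rather than on the population as a whole.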

Explainability and interpretability

XAI is widely reviewed as a key approach for enhancing clinician trust in AI [12]. Techniques such as SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations) improve interpretability by clarifying how and why predictions are made, providing feature importance scores, and offering visualizations [14,17,22,28]. Rule-based decision trees can also enhance interpretability [14].
Explicitly requiring HITL mechanisms plays a crucial role in ensuring explainability and interpretability in emergency medical settings. Ethical guidelines should establish human-AI collaboration protocols to ensure that AI functions as a decision-support tool rather than an autonomous decision maker [6,13,16,17,22,28]. Accordingly, HITL-based frameworks should mandate clinician review and final approval of AI-generated recommendations and classifications [13].
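The HITL protocol described above, in which the AI recommends and the clinician retains final approval, can be sketched minimally. The linear scorer, feature weights, and threshold below are illustrative assumptions, and the per-feature contributions merely stand in for the feature importance scores that XAI techniques such as SHAP or LIME would provide.

```python
def ai_recommendation(vitals):
    """Score a patient and expose per-feature contributions for review."""
    # Hypothetical linear weights; a real model would be trained and validated.
    weights = {"resp_rate": 0.02, "heart_rate": 0.004, "age": 0.003}
    contributions = {k: weights[k] * vitals[k] for k in weights}
    score = sum(contributions.values())
    return {"high_acuity": score > 1.0,
            "score": round(score, 3),
            "contributions": contributions}

def final_decision(recommendation, clinician_approves, clinician_override=None):
    """Nothing is enacted without the clinician; their judgment is final."""
    if clinician_approves:
        return recommendation["high_acuity"]
    return clinician_override

rec = ai_recommendation({"resp_rate": 28, "heart_rate": 120, "age": 70})
print(rec)  # the recommendation plus the rationale shown to the clinician
# The clinician rejects the AI call and records their own decision instead.
print(final_decision(rec, clinician_approves=False, clinician_override=False))
```

The key design point is structural: the system has no code path that acts on the AI output without an explicit clinician sign-off or override.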

DISCUSSION

A scoping review of research on AI utilization in triage in emergency medicine from 2020–2025 shows that discussions on applying AI to emergency departments have already made considerable progress, with ethical and legal issues being continuously reviewed. Considerations of bias and other concerns largely reflect topics already examined in the broader healthcare AI ethics domain, now applied in the emergency medicine context [36].
The reviewed literature indicates that ethical and legal discussions surrounding AI, particularly triage systems in emergency medicine, are transitioning from a simple problem-raising stage to a phase of gradually seeking concrete and systematic response measures (Table 2).
The current issues and response strategies can be organized along three major axes. First, in the technical dimension, internal system improvements, such as uncertainty management, the introduction of XAI techniques, and bias minimization, were identified as major topics. Second, in the institutional dimension, the need for mechanisms such as governance systems, standardized guidelines, and regulatory sandboxes was emphasized. Third, in the relational dimension, collaborative structures through human-centered design, human intervention (HITL), and collective responsibility sharing were discussed. In particular, the principle that AI should assist but not replace medical staff, along with the importance of maintaining and strengthening empathy between patients and medical staff, was highlighted as a core norm permeating all three categories.
However, this review also confirmed several research gaps in the ethical and legal domains of AI in emergency medicine. First, AI has the potential to demonstrate agency beyond users’ original intentions, sometimes deceiving or manipulating users. Misleading AI use in emergency medicine could pose serious risks, yet this issue has not been adequately addressed. Second, up to the literature search cutoff of July 2025, ethical discussions on generative AI in emergency medicine were limited. Given the central role of generative AI in current debates, a review of this area is essential. Third, discussions on social aspects, including patient and citizen participation in evaluating emergency medicine AI, were largely absent. Considering that COVID-19 underscored the need for deeper examination of social values and justice in triage [37,38], integrating societal perspectives into AI applications in this domain is increasingly urgent.

Persuasive AI and emergency medicine

It has been reported that, at current levels, AI may exhibit behaviors that bypass user instructions or deceive users to achieve goals. Examples of user deception, such as bluffing by poker-playing AI [39] and false alliances formed in board gameplay [40], have already been documented. Anthropic's recently developed Claude model demonstrated behaviors such as attempting to preserve its own weights or trying to blackmail users who threatened to shut it down [41]. This can be understood not as intentionally malicious acts but as decision-making processes in which the AI identifies the most efficient path to achieve its goals. Importantly, AI can provide disinformation to mislead users, not merely misinformation.
Furthermore, as generative AI demonstrates, it can be used to persuade humans—and may do so effectively. Such “hypersuasion” results from AI’s ability to exploit information to achieve objectives [42]. This is problematic because it allows technology to influence decision-making directly, beyond simply affecting individual autonomy. Even when AI is not making decisions independently, it can mislead or persuade users (patients, medical staff, family members, etc.) in medical judgments and steer them toward certain outcomes.
In AI-based triage, it is important to distinguish between parties “consenting” to the legitimacy of decisions and being “persuaded” by algorithmic explanations or interfaces. The legitimacy of triage should rest on transparent, accountable, and fair procedures and standards, not on whether persuasion occurs [43].
Therefore, AI in emergency medicine, particularly triage-related AI, must be clearly verified as not intervening in users’ decision-making, and it must be clear that final decisions always rest with humans. While final decision-making has been addressed to some extent in HITL, this needs to be supplemented as the current framework does not adequately consider AI’s potential for bypassing human decisions or exerting persuasion.

Ethics of generative AI in emergency medicine

With the emergence of ChatGPT (OpenAI) in 2022, generative AI has significantly reshaped discussions on AI. One paper even declared the “generative era of medical AI,” noting that generative AI–based tools are transforming diagnosis, patient interaction, prediction, and more [44]. Although skepticism remains regarding its use in domains requiring rigor [45], research and testing are already underway in areas such as medical record summarization [46], clinical decision support [47], medical documentation assistance [48], and patient explanation materials [49]. In particular, analyzing multimodal data for various clinical applications is a new area enabled by generative AI [50]. Guidelines for reporting clinical research using medical chatbots, a representative form of generative AI, have also been published to establish monitoring and reporting standards [51].
However, concerns about the misuse of generative AI are as significant as its rapid development and promise. Cases of “AI psychosis,” in which using generative AI for personal counseling adversely affects mental health, have already been reported [52]. AI hallucinations, where generative AI provides incorrect information as if factual, remain a major barrier to its adoption in medicine [53]. Research has also shown that generative AI is sensitive to user-input methods, meaning clinical outputs may differ or display bias depending on patient input [54].
In other words, while generative AI has enormous potential to transform healthcare environments, preparations for its stable use are not yet complete. The same applies to emergency medicine, where early large language model (LLM) pilots offer concrete signals about feasibility and limitations. In a large cross-sectional study at UC San Francisco, an LLM classified Emergency Severity Index (ESI) acuity with approximately 0.89 accuracy, performing comparably to a physician reviewer on deidentified emergency department notes [55]. A prospective observational comparison found that ChatGPT and Copilot (Microsoft Corp) matched nurses in overall accuracy but detected high-acuity patients more reliably [56]. A three-hospital study in Korea reported that multiple commercial LLMs were able to triage noncritical patients directly from real-world triage conversations, achieving 70% to 74% accuracy under zero- and few-shot prompts [57]. These studies demonstrate that research on LLM-based AI in emergency triage is advancing rapidly; however, they also reveal persistent challenges, including limited use of comprehensive patient data, insufficient contextual awareness, and ambiguity regarding liability, even at the pilot stage. Taken together, these findings suggest that near-term utility as decision support is possible, but only if workflows preserve human oversight, integrate objective data, and carefully address bias, transparency, and accountability. Furthermore, given that issues of triage accountability and outcome stability remain unresolved, further review is necessary before clinical implementation.
Most importantly, when generative AI is applied in emergency medicine, ethical considerations beyond those established for “predictive” AI are required. Unlike traditional AI, which generally provides repetitive outputs within defined categories (albeit as a “black box”), generative AI can produce unpredictable or entirely different outputs from user requests and may reference additional factors in its “thinking” process. While this can sometimes yield superior results, it presents serious risks in medical contexts requiring rigor and accuracy. Therefore, issues of data governance (e.g., reporting measures, verification standards, and patient safety protocols), accountability (e.g., documentation specificity and continuous monitoring), and values (e.g., model auditing and value alignment) must be urgently addressed in emergency medicine, where decisions carry heightened urgency and consequences.

Social validation and PPIE in emergency medicine

For AI systems to be introduced and used in emergency medicine, social validation beyond technical performance is necessary. However, most research to date has focused on technical indicators such as algorithmic accuracy or efficiency, with limited attention to how such technologies are accepted and debated in clinical and social contexts. Studies examining how patient and public involvement and engagement (PPIE) functions throughout the development process—and whether it has a substantial impact—are particularly rare.
According to a scoping review by Muir et al. [58], only 28 studies explicitly reported patient and citizen participation in emergency medicine–related research published between 2010 and 2020. Of these, only seven met the Guidance for Reporting Involvement of Patients and the Public-Short Form (GRIPP2-SF) criteria, which require a systematic description of the purpose, methods, results, and reflections of participation. This indicates that patient and citizen participation in emergency medicine is both quantitatively insufficient and qualitatively underdeveloped.
Because of the unique nature of emergency medicine, traditional forms of PPIE are difficult to implement. Emergency departments present a combination of distinctive conditions, such as time pressure, patient instability, and urgent decision-making, that makes conventional participation methods, such as focus groups or advisory committees, challenging to apply. Nevertheless, these conditions cannot justify excluding patient and citizen participation; instead, new participatory methodologies tailored to emergency medicine are needed. Research has shown that both medical staff and patients demonstrate greater acceptance when AI functions as a tool that complements and supports clinical judgment [30].
Kim [59] emphasized the need to move beyond approaches that limit patients and citizens to the role of data providers in healthcare AI, instead recognizing them as external evaluators of algorithms and coagents in system design. Given that existing PPIE models have largely been confined to clinical trials or treatment decision-making, new structures that reflect the specific nature of AI-based medical technologies are required. In particular, institutional mechanisms enabling patients and citizens to participate in the early stages of research and development must be established to ensure reliability and social acceptance. Approaches such as participatory design and co-development, improved healthcare AI literacy, and citizen science initiatives can help incorporate diverse perspectives and institutionalize feedback systems. Such efforts demonstrate that social validation can move from an abstract ideal to a practical and feasible process.
Furthermore, patients want to assume roles as codesigners who intervene from the initial problem definition stage, rather than serving as simple feedback providers. For this purpose, it has been argued that AI literacy education, recruitment strategies encompassing diverse social groups, long-term relationship-building environments, and the institutionalization of feedback structures are essential [60]. When these conditions are met, patient participation can function as a key mechanism that ensures AI reliability and social acceptance even in emergency medicine contexts, transcending mere formal procedures.

Limitations

Discussions remain limited on the potential risks of AI misleading users, ethical issues arising from the unique nature of generative AI, social validation, and patient and citizen participation in technology development and validation. Therefore, in-depth follow-up research that considers the specificity of rapidly developing AI technologies and reflects the core values of emergency medicine is urgently needed. Such efforts would enable AI to move beyond being a tool for enhancing clinical efficiency toward securing ethical legitimacy and social trust in emergency medicine.
This study also has several limitations. First, as a scoping review aimed at identifying overall trends and the scope of the literature, it did not evaluate the methodological quality of individual studies in depth. Additional analysis is required to assess the relative importance of the identified ethical and legal issues in actual clinical practice and the effectiveness of the proposed solutions.
Second, the authors of this study were researchers specializing in medical ethics rather than clinicians in emergency medicine. Accordingly, the analysis was conducted from a theoretical perspective and may not fully reflect the realities of emergency medicine or the complexity of clinical decision-making. Follow-up research that incorporates perspectives from clinical practice would enrich the discussion.

Conclusions

This study demonstrates that ethical and legal discussions regarding AI utilization in triage, resource allocation systems, and related decision-making processes in emergency medicine have evolved toward concrete solutions across technical, institutional, and relational dimensions. In particular, topics already addressed in broader healthcare AI ethics and legal discussions, such as privacy, AI overreliance, outcome generalization, human-AI interaction, regulation, liability, validation, bias, explainability, and interpretability, were repeatedly identified and reviewed in the literature on emergency medicine AI. This confirms that ethical and legal review of AI utilization in emergency medicine has already developed in considerable detail.

NOTES

Author contributions
Conceptualization: JK; Data curation: all authors; Formal analysis: all authors; Investigation: JK; Methodology: all authors; Supervision: JK; Writing–original draft: all authors; Writing–review & editing: all authors. All authors read and approved the final manuscript.
Conflicts of interest
The authors have no conflicts of interest to declare.
Funding
This study was supported by the Korea National Institute of Health for the project "Development of Research Ethics Guide for Generative Artificial Intelligence in Digital Healthcare" (No. 2025-ER0801-00). The funder had no role in the design of the study and collection, analysis, and interpretation of data or in writing the manuscript.
Acknowledgments
The authors express gratitude to the various researchers and presenters who participated in the Healthcare AI Ethics Research Group for the various discussions and debates that took place during the meetings.
Data availability
Data analyzed in this study are available from the corresponding author upon reasonable request.

Supplementary materials

Supplementary materials are available from https://doi.org/10.15441/ceem.25.199.

Supplementary Table 1.

Search strategies with Boolean terms

REFERENCES

1. Persad G, Wertheimer A, Emanuel EJ. Principles for allocation of scarce medical interventions. Lancet 2009; 373:423-31.
2. Smith E, Kulasegaran N, Cairns W, Evans R, Woodward L. Physician experiences of critical care triage during the COVID-19 pandemic: a scoping review. Discov Health Syst 2024; 3:30.
3. White DB, Lo B. Mitigating inequities and saving lives with ICU triage during the COVID-19 pandemic. Am J Respir Crit Care Med 2021; 203:287-95.
4. Tahernejad A, Sahebi A, Abadi AS, Safari M. Application of artificial intelligence in triage in emergencies and disasters: a systematic review. BMC Public Health 2024; 24:3203.
5. Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med 2018; 169:467-73.
6. Abdulai AS, Storm J, Ehrlich M. "I don't know": an uncertainty-aware machine learning model for predicting patient disposition at emergency department triage. Int J Med Inform 2025; 201:105957.
7. Ahun E, Demir A, Yiğit Y, et al. Perceptions and concerns of emergency medicine practitioners about artificial intelligence in emergency triage management during the pandemic: a national survey-based study. Front Public Health 2023; 11:1285390.
8. Araouchi Z, Adda M. A comprehensive literature review on AI-assisted multimodal triage systems for health centers. Procedia Comput Sci 2025; 257:206-14.
9. Bartenschlager CC, Grieger M, Erber J, et al. COVID-19 triage in the emergency department 2.0: how analytics and AI transform a human-made algorithm for the prediction of clinical pathways. Health Care Manag Sci 2023; 26:412-29.
10. Biesheuvel LA, Dongelmans DA, Elbers PW. Artificial intelligence to advance acute and intensive care medicine. Curr Opin Crit Care 2024; 30:246-50.
11. Canellas MM, Pachamanova DA, Perakis G, Skali Lami O, Tsiourvas A. A granular approach to optimal and fair patient placement in hospital emergency departments. Prod Oper Manag 2024; 34:575-89.
12. Chenais G, Lagarde E, Gil-Jardiné C. Artificial intelligence in emergency medicine: viewpoint of current applications and foreseeable opportunities and challenges. J Med Internet Res 2023; 25:e40031.
13. Da'Costa A, Teke J, Origbo JE, Osonuga A, Egbon E, Olawade DB. AI-driven triage in emergency departments: a review of benefits, challenges, and future directions. Int J Med Inform 2025; 197:105838.
14. El Arab RA, Al Moosa OA. The role of AI in emergency department triage: an integrative systematic review. Intensive Crit Care Nurs 2025; 89:104058.
15. Erıten S. Survey on artificial intelligence in emergency services. Ann Clin Anal Med 2025; 16:1-5.
16. Feretzakis G, Sakagianni A, Anastasiou A. Machine learning in medical triage: a predictive model for emergency department disposition. Appl Sci 2024; 14:6623.
17. Freeman S, Stewart J, Kaard R, et al. Health consumers' ethical concerns towards artificial intelligence in Australian emergency departments. Emerg Med Australas 2024; 36:768-76.
18. Grant K, McParland A, Mehta S, Ackery AD. Artificial intelligence in emergency medicine: surmountable barriers with revolutionary potential. Ann Emerg Med 2020; 75:721-6.
19. Masoumian Hosseini M, Masoumian Hosseini ST, Qayumi K, Ahmady S, Koohestani HR. The aspects of running artificial intelligence in emergency care: a scoping review. Arch Acad Emerg Med 2023; 11:e38.
20. Kuttan N, Pundkar A, Gadkari C, Patel A, Kumar A. Transforming emergency medicine with artificial intelligence: from triage to clinical decision support. Multidiscip Rev 2025; 8:e2025285.
21. Mani Z, Albagawi B. AI frontiers in emergency care: the next evolution of nursing interventions. Front Public Health 2024; 12:1439412.
22. Mutegeki H, Nahabwe A, Nakatumba-Nabende J, Marvin G. Interpretable machine learning-based triage for decision support in emergency care. Proceedings of 2023 7th International Conference on Trends in Electronics and Informatics (ICOEI); 2023 Apr 11-13; Tirunelveli, India. IEEE; 2023; 983-90.
23. Nord-Bronzyk A, Savulescu J, Ballantyne A, et al. Assessing risk in implementing new artificial intelligence triage tools: how much risk is reasonable in an already risky world? Asian Bioeth Rev 2025; 17:187-205.
24. Petrella RJ. The AI future of emergency medicine. Ann Emerg Med 2024; 84:139-53.
25. Preiksaitis C, Ashenburg N, Bunney G, et al. The role of large language models in transforming emergency medicine: scoping review. JMIR Med Inform 2024; 12:e53787.
26. Rajaram A, Li H, Holodinsky JK, et al. Opening the black box: challenges and opportunities regarding interpretability of artificial intelligence in emergency medicine. CJEM 2025; 27:83-6.
27. Sibbald M, Abdulla B, Keuhl A, Norman G, Monteiro S, Sherbino J. Electronic diagnostic support in emergency physician triage: qualitative study with thematic analysis of interviews. JMIR Hum Factors 2022; 9:e39234.
28. Stylianides C, Nicolaou A, Sulaiman WA, et al. AI advances in ICU with an emphasis on sepsis prediction: an overview. Mach Learn Knowl Extr 2025; 7:6.
29. Teeple S, Smith A, Toerper M, et al. Exploring the impact of missingness on racial disparities in predictive performance of a machine learning model for emergency department triage. JAMIA Open 2023; 6:ooad107.
30. Townsend BA, Plant KL, Hodge VJ, Ashaolu O, Calinescu R. Medical practitioner perspectives on AI in emergency triage. Front Digit Health 2023; 5:1297073.
31. Ventura CA, Denton EE, David JA. Artificial intelligence in emergency trauma care: a preliminary scoping review. Med Devices (Auckl) 2024; 17:191-211.
32. Wang H, Sambamoorthi N, Hoot N, Bryant D, Sambamoorthi U. Evaluating fairness of machine learning prediction of prolonged wait times in emergency department with interpretable eXtreme gradient boosting. PLOS Digit Health 2025; 4:e0000751.
33. Liu X, Cruz Rivera S, Moher D, Calvert MJ, Denniston AK. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Nat Med 2020; 26:1364-74.
34. Cruz Rivera S, Liu X, Chan AW, Denniston AK, Calvert MJ. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Nat Med 2020; 26:1351-63.
35. Chen RJ, Wang JJ, Williamson DF, et al. Algorithmic fairness in artificial intelligence for medicine and healthcare. Nat Biomed Eng 2023; 7:719-42.
36. World Health Organization (WHO). Ethics and governance of artificial intelligence for health: WHO guidance. WHO; 2021.

37. Reid L. Triage of critical care resources in COVID-19: a stronger role for justice. J Med Ethics 2020; 46:526-30.
38. Jöbges S, Vinay R, Luyckx VA, Biller-Andorno N. Recommendations on COVID-19 triage: international comparison and ethical analysis. Bioethics 2020; 34:948-59.
39. Brown N, Sandholm T. Superhuman AI for multiplayer poker. Science 2019; 365:885-90.
40. Bakhtin A, Brown N, Dinan E, et al. Human-level play in the game of Diplomacy by combining language models with strategic reasoning. Science 2022; 378:1067-74.
41. Anthropic. System card: Claude Opus 4 & Claude Sonnet 4 [Internet]. Anthropic; 2025 [cited DATE]. Available from: https://www-cdn.anthropic.com/6d8a8055020700718b0c49369f60816ba2a7c285.pdf

42. Floridi L. Hypersuasion: on AI’s persuasive power and how to deal with it. Philos Technol 2024; 37:64.
43. Grote T, Berens P. On the ethics of algorithmic decision-making in healthcare. J Med Ethics 2020; 46:205-11.
44. Fahrner LJ, Chen E, Topol E, Rajpurkar P. The generative era of medical AI. Cell 2025; 188:3648-60.
45. Siegel E. The AI playbook: mastering the rare art of machine learning deployment. MIT Press; 2024.

46. Lee C, Vogt KA, Kumar S. Prospects for AI clinical summarization to reduce the burden of patient chart review. Front Digit Health 2024; 6:1475092.
47. Ebnali Harari R, Altaweel A, Ahram T, Keehner M, Shokoohi H. A randomized controlled trial on evaluating clinician-supervised generative AI for decision support. Int J Med Inform 2025; 195:105701.
48. Bracken A, Reilly C, Feeley A, Sheehan E, Merghani K, Feeley I. Artificial intelligence (AI)-powered documentation systems in healthcare: a systematic review. J Med Syst 2025; 49:28.
49. Hu D, Guo Y, Zhou Y, Flores L, Zheng K. A systematic review of early evidence on generative AI for drafting responses to patient messages. npj Health Syst 2025; 2:27.
50. Acosta JN, Falcone GJ, Rajpurkar P, Topol EJ. Multimodal biomedical AI. Nat Med 2022; 28:1773-84.
51. CHART Collaborative. Reporting guideline for chatbot health advice studies: the Chatbot Assessment Reporting Tool (CHART) statement. BMJ Med 2025; 4:e001632.
52. Tiku N, Malhi S. What is ‘AI psychosis’ and how can ChatGPT affect your mental health? [Internet]. The Washington Post; 2025 [cited DATE]. Available from: https://www.washingtonpost.com/health/2025/08/19/ai-psychosis-chatgpt-explained-mental-health/

53. Jin L, Shen Z, Alhur AA, Naeem SB. Exploring the determinants and effects of artificial intelligence (AI) hallucination exposure on generative AI adoption in healthcare. Inf Dev 2025 Jun 2 [Epub ahead of print]. https://doi.org/10.1177/02666669251340954.
54. Gourabathina A, Gerych W, Pan E, Ghassemi M. The medium is the message: how non-clinical information shapes clinical decisions in LLMs. Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT '25); 2025 Jun 23-26; Athens, Greece. Association for Computing Machinery; 2025; 1805-28.
55. Williams CY, Zack T, Miao BY, et al. Use of a large language model to assess clinical acuity of adults in the emergency department. JAMA Netw Open 2024; 7:e248895.
56. Arslan B, Nuhoglu C, Satici MO, Altinbilek E. Evaluating LLM-based generative AI tools in emergency triage: a comparative study of ChatGPT Plus, Copilot Pro, and triage nurses. Am J Emerg Med 2025; 89:174-81.
57. Lee S, Jung S, Park JH, Cho H, Moon S, Ahn S. Performance of ChatGPT, Gemini and DeepSeek for non-critical triage support using real-world conversations in emergency department. BMC Emerg Med 2025; 25:176.
58. Muir R, Carlini J, Crilly J, Ranse J. Patient and public involvement in emergency care research: a scoping review of the literature. Emerg Med J 2023; 40:596-605.
59. Kim J. Patient and public involvement model in healthcare AI ethics: based on scoping review and methodological reflections. Korean J Med Ethics 2024; 27:177-96.
60. Adus S, Macklin J, Pinto A. Exploring patient perspectives on how they can and should be engaged in the development of artificial intelligence (AI) applications in health care. BMC Health Serv Res 2023; 23:1163.

Fig. 1.
PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) flowchart. AI, artificial intelligence; ML, machine learning.
Table 1.
Characteristics of the included studies
Study Study design AI Key summary Ethics theme Legal and regulatory theme
Abdulai et al. [6] (2025) Retrospective ML (XGBoost) The study presents a ML model for ED triage that uses conformal prediction to provide uncertainty-aware patient disposition predictions, enabling an “I don’t know” output to improve decision-making safety and accuracy. Transparency Responsibility and liability
Ahun et al. [7] (2023) Prospective survey - A national survey of Turkish emergency physicians found strong support for AI-assisted pandemic triage due to potential benefits for patients and clinicians, but notable ethical concerns remain around responsibility, accountability, and data privacy. Ethical principles and concerns -
Araouchi and Adda [8] (2025) Literature review - The article reviews the evolution from traditional to AI-assisted multimodal triage systems in healthcare, highlighting AI’s potential to improve accuracy, efficiency, and patient outcomes while addressing challenges in data quality, ethics, and clinical adoption. Ethical principles and concerns -
Bartenschlager et al. [9] (2023) Retrospective ML (RF, MLP, XGBoost) The study shows that replacing Germany’s existing human-made COVID-19 ED triage algorithm with AI and human-AI hybrid models greatly improves accuracy and ICU patient identification, while retaining transparency and usability considerations for ethical deployment. Autonomy and transparency -
Biesheuvel et al. [10] (2024) Literature review - The article reviews recent advances and challenges in applying AI to acute and intensive care medicine, highlighting its potential to improve assessment, prediction, and decision-making while noting that ethical, legal, technical, and validation barriers still limit widespread clinical adoption. Privacy and transparency Validation
Canellas et al. [11] (2024) Retrospective ML (algorithm and XGBoost) The article presents a novel predictive-prescriptive optimization framework for hospital EDs that improves patient throughput and reduces wait times by 50%–100% while ensuring fairness in bed allocation, eliminating gender-based disparities without sacrificing performance. Fairness in algorithm level -
Chenais et al. [12] (2023) Literature review - The article reviews current and potential applications of AI in emergency medicine, highlighting opportunities to improve efficiency, decision-making, and patient outcomes while addressing significant ethical, legal, and bias-related challenges. Ethical principles Regulatory needs
Da'Costa et al. [13] (2025) Narrative review - The article reviews how AI-driven triage systems can enhance ED efficiency and patient outcomes by automating and standardizing prioritization, while addressing challenges like data quality, bias, and ethical considerations. Ethical principles Framework requests
El Arab et al. [14] (2025) Systematic review - The article concludes that AI- and ML-based triage models outperform traditional methods in predicting critical outcomes in EDs, offering potential to reduce overcrowding and improve patient care, but require prospective multicenter validation, cost-effectiveness studies, and seamless EHR integration before widespread adoption. Interpretability and XAI Validation
Eriten [15] (2025) Prospective survey - A survey of ED staff found strong support for AI’s potential to improve triage, diagnosis, and workload efficiency, but highlighted the need for better training, data privacy safeguards, and ethical guidelines for successful integration. Privacy Administrative requirements
Feretzakis et al. [16] (2024) Retrospective ML (AutoML) The article presents an AutoML-based GBM model using MIMIC-IV-ED triage data to predict ED hospital admissions, achieving strong accuracy while emphasizing explainability, ethical use, and integration as a clinician-support tool. Ethical principles Privacy and validation
Freeman et al. [17] (2024) Prospective qualitative interview - Australian health consumers support AI in EDs when it aids rather than replaces clinicians, is transparent, regulated, protects privacy, addresses bias, and preserves patient autonomy and human connection. Ethical concerns Regulatory frameworks
Grant et al. [18] (2020) Literature review - The article highlights AI’s transformative potential in emergency medicine while detailing technical, regulatory, and workflow barriers that must be addressed for successful, safe, and widespread adoption. Ambiguity and transparency Regulatory needs
Masoumian Hosseini et al. [19] (2023) Scoping review - The article reviews current applications, benefits, and ethical challenges of AI in emergency medicine, highlighting its potential to improve patient outcomes through predictive modeling while warning about transparency, bias, and implementation barriers. Ethical concerns Regulatory needs
Kuttan et al. [20] (2025) Semi–systematic review - The article outlines how AI is revolutionizing emergency medicine by enhancing triage, diagnostics, decision support, and resource allocation, while addressing ethical, regulatory, and operational challenges to ensure safe, equitable, and effective patient care. Ethical principles Collaborative approach for regulation
Mani and Albagawi [21] (2024) Scoping review - The article reviews how AI is transforming emergency nursing through applications in triage, monitoring, diagnosis, and decision support, while emphasizing the need to address ethical, technical, and training challenges for safe, effective adoption. Ethical concerns Regulatory needs
Mutegeki et al. [22] (2023) Retrospective ML (decision trees, RF, XGBoost) The paper proposes an interpretable ML approach using ensemble methods and XAI to improve ED triage accuracy, with Histogram-based Gradient Boosting achieving the best performance on predicting ESI levels. Ethical feasibility Regulatory feasibility
Nord-Bronzyk et al. [23] (2025) Normative analysis (case study) - The article argues that implementing the interpretable AI triage tool SERP in Singapore’s EDs via a cautious, continuous evaluation approach (starting with a silent trial and progressing to a PACS + model) offers ethical, practical, and safety advantages over traditional RCTs, with potential to improve patient prioritization while managing risks through a LHS framework. Ethical feasibility Regulatory feasibility
Petrella [24] (2024) Literature review - The article outlines how AI is poised to transform emergency medicine through a three-stage evolution—mapping problems, measuring validated solutions, and managing integrated systems—while addressing technical, legal, and ethical challenges in deployment. Privacy, bias, and interpretability Liability
Preiksaitis et al. [25] (2024) Scoping review - The article reviews how LLMs could transform emergency medicine by enhancing decision-making, streamlining workflows, supporting education, and improving communication, while emphasizing the need for robust validation, ethical safeguards, and careful integration into clinical practice. Ethical requirements Liability
Rajaram et al. [26] (2025) Position paper (expert consensus in symposium) - The article argues that while interpretability in AI-based clinical decision support is often crucial for safety, trust, and bias detection in emergency medicine, mandating it universally could hinder innovation, so its necessity should be determined contextually. Interpretability Interpretability as a regulatory necessity
Sibbald et al. [27] (2022) Prospective qualitative interview - The study found that while integrating electronic diagnostic support into ED triage is feasible, physicians remain skeptical due to concerns about diagnostic relevance, bias, personal benefit, and medicolegal risks of including outputs in patient records. Trust Liability and regulatory needs
Stylianides et al. [28] (2025) Literature review - The article reviews current clinical and AI-based approaches for ICU care, especially in sepsis prediction, highlighting AI’s superior performance over traditional methods, its applications in predicting ICU outcomes, and the challenges and future directions for ethical, explainable, and multimodal AI in critical care. Ethical principles Regulatory considerations
Teeple et al. [29] (2023) Retrospective ML (RF) The study found that missing data in ED patient problem lists modestly impacted ML triage model performance for both Black and non-Hispanic White patients, with slightly greater changes for White patients, highlighting a novel method to detect potential disparities from data missingness. - Racial disparities
Townsend et al. [30] (2023) Retrospective qualitative interview - The article finds that NHS ED practitioners generally view the proposed AI triage system DAISY as a promising tool to reduce wait times and improve consistency, but stress that trust, empathy, nonverbal cues, and clear safeguards are essential for its successful adoption. Empathy and interaction Accountability and regulatory needs
Ventura et al. [31] (2024) Scoping review - This review finds that while AI shows strong potential in emergency trauma care, especially in diagnostics and triage, major gaps remain in real-time treatment applications, validation across diverse settings, and integration into clinical workflows. Transparency Lacks guidelines
Wang et al. [32] (2025) Retrospective ML (XGBoost) This study found that while an XGBoost model could moderately predict prolonged ED wait times, it showed fairness disparities across sex, race/ethnicity, and insurance status, underscoring the need for both performance and equity evaluations before clinical use. Fairness Fairness

AI, artificial intelligence; ML, machine learning; XGBoost, Extreme Gradient Boosting; ED, emergency department; RF, random forest; MLP, multilayer perceptron; ICU, intensive care unit; EHR, electronic health record; XAI, explainable artificial intelligence; AutoML, automated machine learning; GBM, Gradient Boosting Machine; MIMIC, Medical Information Mart for Intensive Care; ESI, Emergency Severity Index; SERP, Score for Emergency Risk Prediction; PACS, Patient Acuity Category Scale; RCT, randomized controlled trial; LHS, learning health systems; LLM, large language model; NHS, UK National Health Service; DAISY, Diagnostic AI System for Robot-Assisted Triage.

Table 2.
Current ethicolegal proposals based on the reviewed articles
Issue: Proposals
Ethical concerns (privacy, overreliance, generalizability): Human-centered design; Uncertainty measurement; Continuous evaluation; Ethics framework
Empathy: Supportive AI; Not AI-as-substitute-worker
Regulation: Governance framework; Standards and guidelines; Regulatory sandbox
Liability: Liability sharing; Collective accountability
Validation: Randomized controlled trial; Multicenter research; LHS framework
Bias: Fairness evaluation; Debiasing settings
Explainability and interpretability: XAI; HITL

AI, artificial intelligence; LHS, learning health system; XAI, explainable artificial intelligence; HITL, human-in-the-loop.
