Our goal is to systematically map the legal concerns identified in relation to health-related AI and the extent to which they are prioritized across several relevant disciplines, namely law, medicine, nursing, pharmacy, other health professions (dentistry, nutrition, etc.), public health, computer science, and engineering. Consistent with the central objectives of a scoping approach, we aim to examine the extent, range, and nature of research activity in these disciplines; summarize and disseminate research results to relevant stakeholders; and identify research gaps in the existing literature. The scoping review will be conducted in accordance with the framework developed by Arksey and O'Malley and expanded by Levac et al., and the protocol report has been adapted to the PRISMA-P checklist. It therefore consists of six steps: (1) identification of the research question(s); (2) identification of relevant studies; (3) selection of studies; (4) charting of the data; (5) collating, summarizing, and reporting the results; and, in this case, (6) consultation of stakeholders. When charting data related to legal concerns, we will also extract any specific text that explicitly indicates the prioritization of concerns and proposed solutions (e.g., new or amended regulations), including new interpretations or extensions of private law responsibilities (e.g., in tort or contract law), ethics reforms (e.g., by AI innovators), and education and training reforms.
To assess the prioritization of legal concerns, we will rely on explicit, self-identified priority claims (e.g., concerns described as "high priority," "most important," or "most urgent"). It is also likely that companies that qualify as controllers under the GDPR will need to conduct a data protection impact assessment for new AI-based technologies to be used in the clinical setting. In general, Article 35(1) of the GDPR requires such an assessment before processing using "new technologies" where the processing "is likely to result in a high risk to the rights and freedoms of natural persons." Article 35(3) of the GDPR expressly specifies cases in which a data protection impact assessment is required in particular, for example in the case of "a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person," or "processing on a large scale of special categories of data" (e.g., genetic data and health data). Recital 91 of the GDPR states that the processing of personal data "should not be considered to be on a large scale if the processing concerns personal data from patients (…) by an individual physician." Article 35(7) of the GDPR lists what the assessment must, at a minimum, contain, such as a description of the envisaged processing operations, an assessment of the risks to the rights and freedoms of data subjects, and the measures envisaged to address those risks. There are also already AI-based healthcare applications on the market in Europe, and more are in development.
For example, Ada is an AI health app that assesses a person's symptoms and provides advice (e.g., suggesting that users see a doctor or seek emergency care). Ada is CE marked (Class I) in Europe, a basic requirement for placing a medical device on the market in Europe, and complies with the EU's General Data Protection Regulation 2016/679 (GDPR). Records are eligible for inclusion if they are publications in English or French that describe a legal concern related to the use of health-related AI. Working definitions of key terms ("legal concerns," "AI," and "health-related") are briefly summarized in Table 2. For the initial searches, we will not impose a language restriction; however, we will only include records published in English or French in the scoping review itself. Are the legal concerns identified explicitly prioritized? In this section, we discuss US and European strategies for AI and how they strive to compete with their biggest competitor, China, thereby situating the discussion within the ethical and legal debate about AI in healthcare and research. We also look at AI trends and discuss some examples of AI products already in use in clinics in the US and Europe. We anticipate that this list of legal concerns will grow as AI in healthcare is further developed and used. Once categorized, we will examine the extent to which different legal concerns appear relevant to different disciplines (law, medicine, nursing, pharmacy, other health professions, public health, computer science, and engineering). For example, we may find that privacy is raised as a concern by those working in IT or technology fields, while liability may be raised more often as a concern by those trained in law.
We will also examine whether we see differences in the legal concerns identified by field, country/region, and whether or not an author of a record is identifiable as an expert in a relevant discipline other than that of the lead author. To do this, we will classify authors using information available in the record about their institutional affiliation (e.g., School of Engineering) or listed credentials. It is of course possible for people trained in one discipline to work in another (e.g., a lawyer-bioethicist could work in a medical school). However, departmental affiliation is a useful marker of the disciplinary perspective authors represented as contributors to the work. Any normative recommendations in the article resulting from the data collected in this scoping review will reflect our multidisciplinary team's views on the conclusions that can be drawn from the descriptive findings. In accordance with PRISMA-ScR, we will not formally assess the methodological quality of the records.

Keywords: artificial intelligence, machine learning, ethical issues, legal issues, social issues

Innovation in medicine offers enormous promise, but appropriate regulation is needed to realize this promise in healthcare and public health while minimizing the associated risks. The advent of health-related artificial intelligence (AI) exemplifies this need for regulation that strikes the required balance between potential benefits and pitfalls. Health-related AI is the subject of significant debate in several fields.
Many suggest that AI will improve health systems, for example by increasing diagnostic accuracy, improving the efficiency of care delivery, or mitigating human bias (e.g., [1, 2]).