bip × Voa Health Partnership Content

bip and Voa Health are joining forces to discuss something that is here to stay: artificial intelligence in medical practice, now with clear rules.


The context

In February 2026, the Federal Council of Medicine (CFM) published Resolution No. 2.454, the first specific regulatory framework for the use of AI in medicine in Brazil. The regulation does not prohibit artificial intelligence. On the contrary: it recognizes AI as a legitimate clinical support tool, but makes it clear that the final word on diagnoses, procedures, and therapeutic decisions always belongs to the physician.

If you haven't read the full text yet, we break down what matters for your daily practice below.


What the Resolution says, in practice

Resolution No. 2.454/2026 establishes rights, duties, risk criteria, and governance rules for professionals and institutions that develop or use AI systems in medical practice. The central points:

The 4 pillars you need to know

01. Responsibility always lies with the physician

The use of AI does not transfer or dilute responsibility. Regardless of the tool used, the physician is accountable for the clinical decisions made.

02. Recording in the medical chart is mandatory

Whenever AI is used as clinical decision support, its use must be documented in the medical chart. Transparency and traceability are requirements of the norm.

03. Communication to the patient may be necessary

When the use of AI is relevant to the procedure, the patient must be informed. The Resolution defines criteria for what constitutes "relevant use".

04. Tools need scientific validation

It's not enough to use just any system. The Resolution requires safety criteria, evidence, and governance for technologies used in clinical practice.

Attention: the adaptation deadline is August 2026, counted from the publication of Resolution No. 2.454/2026 by the CFM.

Understanding each point

What changes in daily practice

Medical responsibility and the role of AI

The Resolution is clear: AI is a support tool, not a decision-making agent. In legal practice, this means that even if AI indicates an incorrect diagnosis, the responsibility for the error lies with the physician who adopted that course of action without due clinical judgment. The norm differentiates between medical error and systemic AI failure, but the burden of proof falls on the professional in case of improper use.
How to record the use of AI in the medical chart

The Resolution requires a record whenever AI is used as clinical decision support. In practice, this means identifying the tool used, the context of use, and how it influenced the procedure. For instance, a brief note such as "Consultation documented with the support of an AI transcription tool; content reviewed and validated by the physician" covers all three elements. For continuous use, such as automatic transcription or diagnostic support in every consultation, the ideal is a standardized entry for each appointment, even if concise.
What is "relevant use" and when to inform the patient

The norm considers "relevant use" any AI application that directly influences clinical conduct: diagnosis, prescription, surgical indication, among others. Administrative or support uses, such as scheduling or transcribing notes, generally do not require formal communication to the patient. The practical rule: if AI helped decide what to do with the patient, they need to know.
Scientific validation: how to evaluate an AI tool

The Resolution requires that the tools used have adequate scientific validation. In practice, this means seeking evidence from clinical studies, technical reports, regulatory certifications, and transparency about the model's training data. General-purpose tools, such as ChatGPT used without a clinical protocol, sit in a higher-risk zone: the norm does not prohibit them, but it increases the responsibility of the physician who uses them without due care.
Governance and risk criteria for institutions

For clinics and hospitals, the Resolution goes beyond the individual physician. Institutions that adopt AI systems need to establish governance policies, define risk criteria for each application, and ensure traceability of AI-assisted decisions. Institutional responsibility is recognized by the norm, but it does not replace the individual responsibility of the professional.

Questions that still remain

  • In legal practice, how is medical error differentiated from systemic AI error?
  • If I use AI continuously, do I need to record it in every consultation?
  • Can I use AI to draft explanations for the patient and just review them? Is this sufficient mediation?
  • Are general-purpose tools considered valid or are they riskier?
  • If AI improves efficiency but increases legal risk, is it worth using?

AI in Medicine Immersion — Voa Health + bip

Learn to use AI in clinical practice safely and in compliance with the CFM Resolution.


Leave your questions in the comments.

The most relevant questions will be included in the specialist panel that bip and Voa Health are preparing.

1 comment

Laura · 19/03/2026

How will the recording in the medical chart be done?
