The Clinician at the Curriculum Table


A couple of years ago a patient sat down in front of me on a virtual menopause visit and told me she had tried three different SSRIs for hot flashes, prescribed by three different primary care clinicians, before anyone mentioned hormone therapy. She was 50, otherwise healthy, with no contraindications. She was also exhausted, demoralized, and convinced she had failed at something.

She had not failed at anything. The clinicians who prescribed those SSRIs had been trained in a window when the Women's Health Initiative data was being widely misinterpreted, and the corrective education that came after, however well-intentioned, never quite reached them in a form they could act on at 3:47 p.m. on a Tuesday with ten charts left to close.

That visit is the kind of moment I keep thinking about when I sit on the other side of the table designing CME curricula for the same primary care clinicians my patient had seen. Because here is something I have learned in ten years of moving between the exam room and the curriculum table: most medical education is built by people who are very good at writing and design, and who have not seen a patient in years, or ever. That gap shows up. It shows up in the questions the content fails to anticipate, the decision points it skips past, and the credibility cues a working clinician picks up on within the first three slides.

I am writing this for the medical affairs leaders, nonprofit education directors, and program managers who are buying education and wondering why outcomes are flat. The topic is probably fine. The instructional design is probably fine. The thing that may be missing is a practicing clinician at the table when the content is being shaped.

What I see that writers and designers do not

This is not about credentials. There are excellent CME writers without clinical licenses whose work I admire. It is about what happens cognitively when someone who has actually carried a panel of patients reads a needs assessment, a draft slide deck, or a case-based activity.

A working clinician reading a draft is doing several things at once. They are checking whether the case is plausible (does anyone actually present like this?). They are checking whether the recommended action fits inside the constraints of a real visit (would I actually have the time to order this workup at the visit where this question came up?). They are checking whether the educational hook lands in the right place (is this addressing the question I would actually have, or the question the guideline writers think I should have?). They are checking whether the language carries the small tells that signal this was written by someone who gets it.

Those checks happen fast and they are mostly invisible. When they go well, the learner trusts the content and engages. When they fail, the learner closes the tab. The completion data does not always reveal this. The knowledge check pass rate can be 90 percent while behavior change is zero, because the learner has gotten the answer right and quietly decided this activity is not going to help them on Tuesday at 3:47 p.m.

Moore's Outcomes Framework, the most widely used model for evaluating CME impact, makes this point explicit. Level 3 (declarative knowledge) is relatively easy to move. Levels 5 and above (performance in practice, patient health, community health) are much harder. The gap between knowledge gained and behavior changed is well documented in the CME literature. What is less often said out loud is that one of the reasons this gap exists is that education is frequently designed without the people who would have to act on it in the room.

What changes when a practicing clinician is at the table

A few specific examples from my own work, with the caveat that I am describing my own contribution to team efforts that involved many other intelligent people.

When I worked on the American Heart Association's CKM Syndrome eLearning, the team had a clear directive: educate clinicians on cardiovascular-kidney-metabolic (CKM) syndrome, including its pathophysiology, staging system, screening protocols, and evidence-based management strategies, as defined in the American Heart Association's 2023 presidential advisory and scientific statement on a CKM model of care.

The clinical content was strong. What I shaped at the assessment design stage was the structure. Each module closes with its own case-based assessment, and the program concludes with a comprehensive case that walks learners through a full patient chart. The reason for that structure was practical. Clinicians do not consolidate a new staging framework by being quizzed on the framework itself. They consolidate it the way they consolidate everything else in practice, by working through patients, one chart at a time. The closing chart-based case puts the learner in the cognitive position they will actually be in when they need this knowledge, looking at a patient with hypertension, prediabetes, and a slightly elevated creatinine, and recognizing that these are not three separate problems. That design choice came from sitting in those visits, not from a textbook.

On a lupus community education project, the target was a fifth-grade reading level. That is hard work, and most of it is done by writers who specialize in plain language. What I brought as a clinician was a different kind of editorial pass. I read the drafts as if I were the patient sitting in front of me at her diagnosis visit. What do they actually need to know in the first 48 hours? What is the question they will be too embarrassed to ask their rheumatologist next month? What is the piece of language that will quietly tell them this disease does not have to define their life? That pass changed the order of the lessons and the framing of the self-advocacy section in ways I do not think a pure writer would have made.

These are not heroic interventions. They are the small contextual reads that come from having been in the room.

"What I appreciate most is how seamlessly she [Dr. Marissa Fontanez, DNP] integrates her expertise with a solutions-oriented approach."
— Sr. Program Lead, Professional Education, American Heart Association

What this means for you

If you are commissioning medical education and you want it to do something more than fulfill an obligation, the question to ask is not only "are the writers good?" or "is the design rigorous?" Those things matter, and they should be the floor, not the ceiling. The next question is: who at this table has cared for the patient population we are educating about, in real practice?

If your answer to that question is "no one," that is something to fix. The fix is not necessarily hiring more clinicians. It can be as light as bringing a clinician advisor in for two structured reads at the right points in the build: one at the needs assessment and outline stage, one on the near-final draft. The cost is small. The downstream effect on behavior change can be substantial.

If your answer is "we have a clinician, but they sign off at the end," I would gently push back on the placement of that review. Late-stage clinician review catches errors of fact. Early-stage clinician involvement catches errors of framing, sequencing, and audience read. The errors of framing are the ones that determine whether your Moore's Level 5 needle moves.

A note on what I am not arguing

I am not arguing that clinicians automatically make better medical educators. I have met clinicians whose CME work wasn’t great. Clinical practice is necessary but not sufficient. The skills of curriculum design, instructional theory, and audience analysis still have to be learned and practiced. ACCME accreditation rigor still has to be honored. Adult learning frameworks still apply. The argument is that those skills, layered on top of recent practice experience, produce a different kind of content than those skills practiced in isolation.

I am also not arguing that pharma medical affairs and nonprofit education teams are doing this wrong. Many are doing it well. I am suggesting that the ones whose programs are not landing might find the missing ingredient is not another iteration of the design, but a different set of hands shaping it.

Three questions to ask before you sign the SOW

  1. Who is at the design table at the architecture stage, not just the review stage, and have they directly cared for the patients in question?

  2. Where in the program does the content meet the clinician at a decision point that mirrors their actual practice, rather than presenting information in the order the source documents present it?

  3. How is your outcomes measurement designed to capture practice change, not just knowledge change? Because if your evaluation only goes to Moore's Level 3, you will not know whether the gap I have been describing exists in your program.

These are the questions I find myself asking on every project I take on. They are the questions that distinguish education that informs from education that changes practice. And they are the questions that get asked more reliably when there is a clinician at the table from the start.

If you are working on an educational program where these questions are worth a conversation, let’s connect.


Marissa Fontanez, DNP, RN, FAIHM

Dr. Marissa is the founder of Atabey Medical Communications. With a decade of direct clinical experience, she designs ACCME-accredited CME and medical education for pharmaceutical, healthcare, and nonprofit clients, bringing a clinician's lens to every program.

https://www.linkedin.com/in/drmarissa-fontanez/