
AI will now draft your residency recommendation letter

Faculty say artificial intelligence tools help them write these important letters faster and better. But questions remain about data privacy and the human touch.

By Patrick Boyle, Senior Staff Writer
Jan. 15, 2026

Lisa Daniels Torrisi, MD, knew that the letters of recommendation she drafted for students who were applying for residency were not ready for prime time. She wrote them the same way she dictated her notes in the ICU: with lots of bullet points and no consideration for the niceties of punctuation and grammar.

"My grammar is atrocious," says Daniels Torrisi, clerkship director for critical care medicine at Emory University School of Medicine, in Atlanta.

Daniels Torrisi routinely passed her drafts to her husband to polish them before submission. He observed: "Your letters are just like your notes."

He suggested that she try using artificial intelligence (AI) to help. Daniels Torrisi uploaded her notes about several students into Microsoft Copilot. The tool turned her bullet points into coherent paragraphs, corrected the grammar, and added punctuation where it belonged. Today she routinely uploads notes for letters of recommendation (LORs) into AI and gets back clean versions, which she edits and submits.

Daniels Torrisi is among the many medical school faculty who are using AI to aid in drafting recommendation letters to support student residency applications. They say that working with AI saves time and helps them produce higher-quality letters than they used to draft on their own.

As with much of AI, however, the practice raises quandaries: in this case, about the privacy of student data and about preserving the human touch in these letters, including the professor's own observations and a review for factual accuracy.

"We don't want to see people uploading someone's Facebook profile [into an AI tool] and saying, 'Go write about this person,'" says Dana Dunleavy, PhD, senior director of admissions and selection research and development at the AAMC.

AI addresses a need: It's not unusual for faculty members to write dozens of residency letters each year (plus letters to support fellowship applications and faculty promotions).

"Faculty come to me saying, 'I don't have enough time'" to produce the carefully crafted recommendations that students deserve, says Michael Kozak, MEd, IT instructional designer at the University of Nebraska Medical Center.

Here is how some faculty are tapping into the technology while navigating the ethical and practical challenges.

How AI changes the process

To understand the value that AI brings to recommendation letters, consider the rote composition process that faculty members typically use to write those letters.

Artificial Intelligence and Academic Medicine

The AAMC has developed an online resource that includes webinars, virtual communities for learning, and guidelines for AI use.

For each applicant, the writer uses a previous letter (or a generic one they've created) as a template. That template can provide the organization of the letter, basic information about the professor and their relationship with the student, a block to summarize the student's class work and clinical experience, and a block of attributes that the professor often focuses on, such as problem-solving, analysis, overcoming obstacles, or building trust with patients. The professor customizes the template to fit what they want to say about each applicant.

"I used to pull up an old letter, change the name of the student and other information about them, and add some observations about that particular student," says Ronald Rodriguez, MD, PhD, professor of urologic science at the University of Texas Health Science Center at San Antonio.

That process is faster than starting with a blank page, but it still takes significant time to copy information about each student from various sources (such as their résumés and personal statements for the residency applications), decide what to focus on, and craft sentences to express the points the professor wants to make.

Even after all that, faculty say the process tends to produce LORs that feel like form letters.

That's why some faculty are putting information into AI tools to create the first drafts. They feed the tool basic information about their own academic position, the accomplishments the student cites on their résumé and personal statement, and their observations of the student.

Rodriguez does this "in sort of stream of consciousness form," in no particular order, because "AI can organize those thoughts more effectively than most of us can." For example:

"I say this person worked with me for six months on a project in which we did this or that, and I can reflect on the kind of person he is. He's kind, compassionate, has a strong work ethic, is very intelligent. He made some insights into this project that a lot of people at his level would never be able to make. He has a level of independence that I don't often see in students. He shows initiative."

The AI tool gives Rodriguez an organized draft in about 30 seconds, then produces several rounds of revisions according to his instructions. He edits the final version.
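For faculty (or the staff who support them) who prefer to script this workflow against an LLM API rather than paste notes into a chat window, the same steps can be expressed in a few lines. The sketch below is a hypothetical illustration, not something described in the article: it assumes the OpenAI Python client, an illustrative model name, and made-up prompt fields standing in for the kind of de-identified notes described above.

# Hypothetical sketch: turning unstructured, de-identified notes into an LOR draft.
# Assumes the OpenAI Python client (pip install openai); any chat-style LLM would work similarly.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_lor(position: str, relationship: str, observations: str) -> str:
    """Ask the model for an organized first draft from stream-of-consciousness notes."""
    prompt = (
        "Draft a residency letter of recommendation from the notes below. "
        "Organize them into coherent paragraphs, fix grammar, and keep a professional tone. "
        "Do not invent accomplishments that are not in the notes.\n\n"
        f"Writer's position: {position}\n"
        f"Relationship to applicant: {relationship}\n"
        f"Observations (unordered): {observations}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The faculty member edits the returned draft and adds identifying details afterward.
draft = draft_lor(
    position="Professor of urologic science",
    relationship="Supervised the applicant on a six-month research project",
    observations="Kind, compassionate, strong work ethic, unusually independent, showed initiative.",
)
print(draft)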

Such use of generative AI (that is, as the author of an LOR draft) is one of two main approaches identified in a recent Academic Medicine article about how medical school faculty can use generative AI to write letters of recommendation for colleague and student applications. The other approach, as taken by Daniels Torrisi, is writing a draft and using AI as an editor.

But does AI improve the quality of the letters? Robert Snedegar, MD, assistant professor at West Virginia University (WVU) School of Medicine, set out to prove that it does not.

How AI affects quality

Snedegar first heard about ChatGPT in January 2024, at a conference of the Society for Teachers of Family Medicine. Speakers talked about using AI to help with various patient care and educational tasks, including drafting LORs. "They were lauding it as this next great thing," Snedegar recalls.

The family medicine specialist was intrigued but skeptical. On the plane ride home, he typed into his phone an idea to study the use of AI to create letters supporting residency applications.

"I was hoping to disprove that AI could produce better letters of recommendation than I would," he says. "I failed miserably."

He and colleagues set up a small study, published in Cureus: Journal of Medical Science. LORs for seven applicants who had been invited for interviews with the WVU Department of Family Medicine Residency were rewritten with AI. Ten WVU faculty members rated all the letters for quality without knowing that AI had helped to compose some of them. Overall, they rated the AI-generated letters as superior.

Snedegar acknowledges that this was a small, subjective sample, with quality not defined. Nevertheless, he believes the responses of faculty say something valuable about AI-assisted letters, even if that value remains amorphous. The AI letters "flowed better. They appeared more professional," he says.

Because widespread use of AI is so new, studies of its impact on LORs are few and small. A 2024 study of orthopedic surgery residency applications to a Canadian hospital compared 13 human-written LORs with alternatives generated by ChatGPT. The three program directors who assessed the letters preferred the human-written versions. A study of LORs for faculty promotions found that academic reviewers could not distinguish a quality difference between human- and AI-generated letters.

On an individual level, faculty who use AI to help produce recommendations attest to an increase in quality.

"It's much more personal," Rodriguez says. "I can write into the AI prompt" about his experiences with a student and get back well-written personal reflections to revise.

The lead student author of the Academic Medicine article - Juliana Coraor Fried, MD, PhD, now a resident at Pennsylvania Hospital - has seen the value of AI in letter writing both as a recipient (in the case of a faculty member who used the technology to support one of Coraor Fried's project proposals) and as a producer (writing letters to support the college applications and research proposals of others).

"AI can help give us the time back to focus on those parts of the letter that really matter, those interpersonal parts, because AI doesn't know the applicant as much as we do," Coraor Fried says.

Considerations

Faculty thinking about using AI to help write LORs don't have institutional guidelines that address that specific task. Current guidelines tend to be more general about AI use; some schools are developing guidance to address using the technology to compose text for various purposes. Meanwhile, here are some of the main issues that faculty grapple with:

Privacy of student information - If a faculty member puts information about a student into most publicly available AI tools, they cannot protect that information from being shared elsewhere and used by AI to learn how to generate a response.

"Anything you put into ChatGPT or Claude or Gemini is the property of those companies," Rodriguez says. "If you put in someone's name, position, date of birth, and what they want to do in life, that information can be distilled and sold to the users of the large language models [LLMs] and the data collectors and the data brokers" to help improve their AI products.

Faculty are guarding against privacy violations in two basic ways. First, they don't give the AI tool identifiable information about the student, such as where they grew up or other schools they attended. Rodriguez, for example, adds that information when he edits drafts produced by AI.

Second, some institutions have purchased AI systems that are hosted within the university or have entered into agreements with AI providers to keep the information entered into the tool in-house. The Academic Medicine article describes those types of AI environments as "'sandboxes' that enable employees to access LLMs in a secure environment."
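As a practical illustration of the first safeguard, identifiable details can be stripped from the notes before they reach any external tool and restored while editing the returned draft. The sketch below is hypothetical and not drawn from the article; the placeholder tokens and sample identifiers are assumptions for illustration only.

# Hypothetical sketch: replace identifiable details with placeholders before prompting an
# external AI tool, then restore them locally while editing the returned draft.
from typing import Dict

def deidentify(notes: str, identifiers: Dict[str, str]) -> str:
    """Swap known identifiers (name, hometown, prior schools) for neutral placeholders."""
    for value, placeholder in identifiers.items():
        notes = notes.replace(value, placeholder)
    return notes

def reidentify(draft: str, identifiers: Dict[str, str]) -> str:
    """Put the real details back into the AI-produced draft during the human edit pass."""
    for value, placeholder in identifiers.items():
        draft = draft.replace(placeholder, value)
    return draft

identifiers = {
    "Jordan Smith": "[APPLICANT]",       # made-up example values
    "Springfield, Ohio": "[HOMETOWN]",
    "State University": "[UNDERGRAD]",
}

notes = "Jordan Smith, who grew up in Springfield, Ohio and attended State University, led a QI project..."
safe_prompt = deidentify(notes, identifiers)   # this is what goes to the external tool
# ...send safe_prompt to the LLM and receive `draft` back...
# final_letter = reidentify(draft, identifiers)  # done locally, before submission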

Keeping it human - "Keeping humans in the loop" is a common refrain in guidance about using AI for medical education and practice. The message is that faculty should review, edit, and sign off on anything created with the help of AI to make sure it's accurate and to put the writing in their own voice.

"I'm not just going to take what it spits out," says Kozak of the University of Nebraska Medical Center. "I'm going to do my due diligence to personalize it, to enhance it."

Whether they give the AI tool information about the particular student or add that on their own when reviewing the AI-generated draft, faculty should make sure that the letter is not just a summary of information they get online, such as from the student's web page and résumé.

"They should put in information about the student that is relevant to what that writer has observed and relevant to the competencies and skills required for success," Dunleavy says.

Transparency - It can be awkward for a faculty member to tell students that they use AI to create anything, including letters of recommendation. "There is somewhat of a stigma associated with using AI to help you do something that everyone assumes you did completely on your own," Rodriguez says.

Daniels Torrisi advises that faculty tell students and the institutions that receive their LORs that they use AI to help compose their text. The Academic Medicine article offers an example that includes, "I use AI to assist in writing all my letters of recommendation and do so because I believe it creates a better reflection of my thoughts on each applicant."

Patrick Boyle, Senior Staff Writer

Patrick Boyle is a senior staff writer for AAMCNews whose areas of focus include medical research, climate change, and artificial intelligence. He can be reached at [email protected].
