From the outside, the University Medical Center Utrecht (UMCU) in the Netherlands looks like a massive Tetris game frozen in glass. Inside, three transparent columns reach from the ground floor to the roof. A staircase the color of red blood cells connects the floors. "What score would you give the architect of this building?" asked keynote speaker Olle (Th.J.) ten Cate, Ph.D., at the annual Medical Education Day at Yale on May 19.

No one answered—they needed more information to give the architect a fair shake. ten Cate, director of UMCU’s Research and Development of Education Center, clicked to his next slide: a classic ranking on a scale of 1 to 10. Still no answer. The Dutch researcher’s wry smile reflected what his audience knew: the numbers were meaningless without criteria. Did the numbers correspond to the building’s functionality, costs, design, or the charisma of the architect?

Through this exercise, ten Cate acknowledged the difficult task education professionals face when assessing the on-the-job skills of medical students and residents. His keynote talk anchored an afternoon of workshops sponsored each year by Yale School of Medicine's Teaching and Learning Center (TLC) for teachers, students, and physicians to enhance educator development and scholarship. The TLC aims to build a community of educators and give them a place to meet and learn more about education.

According to the Association of American Medical Colleges (AAMC), most medical schools use a competency-based form of assessment, which has been the norm for more than a decade. The observer rates a student based on his or her ability to complete a task. This method can fall short in two ways, ten Cate explained: it is based on an interaction between the observer and the student or resident, leaving out the patient, and it ignores the end goal of allowing a medical student to work unsupervised. "All of this can lead to unreliability" in an assessment of a student's skills, ten Cate said.

Several years ago, ten Cate created a new assessment measure: entrustable professional activities (EPAs). This fancy term translates to a simple, important question: Would the observer trust the student or resident to take care of a patient? To illustrate this point, ten Cate projected another image onto the screen. A resident faces her observing physician. The physician justifies his rating decision in a thought bubble: "She works hard. It won't hurt and might help if I mark her as 'superior.'" Such subjective evaluations work for competency tests, which tend to judge abstract skills like "professionalism" that relate more to the personality of the individual than an observable skill. ten Cate suggested a different scenario where the observing physician used an EPA approach. The physician's thought bubble changed to: "She works hard, but it may hurt my patients if I mark her 'superior' and ready for unsupervised practice." If a resident or student has mastered a skill well enough to act without supervision, then an EPA has been met. "One can possess competencies, but not EPAs—they are units of work," ten Cate said.

As another example, a competency-based assessment might ask an observer to rate a student's bedside manner. The EPA equivalent would gauge a student's ability to gather a patient history and perform a physical examination. To do this, EPAs require that supervising physicians and medical professionals spend more time with students, ten Cate said. This assessment method also requires the observer to decide how much—if any—supervision the student still needs for a task. A student's competence is still observed, ten Cate said, but consistent behavior and a willingness to ask for help are also built into the "entrustable" aspect. "Trust takes time and you cannot trust a person if you don't know them," ten Cate said. He suggested that longitudinal clerkships, which put a medical student inside a clinic, are an important aspect of EPAs. (The School of Medicine incorporated 12-week longitudinal clerkships into the new curriculum last year.) EPAs also differ from most assessment tools because they have an expiration date, which means that the skill needs to be re-checked with some frequency, ten Cate said.

EPAs have caught on since ten Cate and his colleagues introduced them a decade ago, with adoption accelerating over the last five years. In 2014, the AAMC published a list of 13 core EPAs all medical students should be able to perform before entering their residency. "We know that medical schools which adopt EPAs have liked the results," ten Cate said. "Now we need to study curricula with and without."

ten Cate focused on assessing skills and knowledge, but the afternoon workshops ran the gamut of topics, from how to use technology to build a collaborative learning space to the Tao of Small Group Facilitation. During a wine and cheese reception at the end of the day, 50 research posters created by Yale faculty, fellows, residents, and students were on display in The Anlyan Center lobby. The posters addressed innovations in education and education research, and a committee of faculty, residents, and students selected five for awards.

For more information on the Teaching and Learning Center, visit tlc.yale.edu.