EvalNow
Transforming Clinical Feedback into Actionable, Equitable, and Psychologically Safe Learning
Team
Srishty Bhavsar, Tori Stroud, Jenn Choi, Farong Ren
Role
UX Researcher and Designer
Tools
Figma, Figjam, Qualtrics
Duration
Clients

Dr. Marci Levine
Clinical Associate Professor, Oral and Maxillofacial Surgery (NYUCD)

Dr. Elizabeth McAlpin
Director of Educational Technology Research (RIT-NYU IT)
Design Solution
A. The EvalNow Faculty Dashboard displays an overview of completed and pending student reflection evaluations, allowing faculty to quickly assess their students each day.
B. When faculty open an assessment, they see basic information about the student and the procedure and can complete a reflection on the overall clinical case. Faculty also have access to simple features, such as voice-recorded notes, that speed up the feedback process.
C. Faculty can score students' skills using quick tags and a 1–5 scale. When reviewing their final notes, faculty can use AI to rewrite the tone of the message to positively reinforce progress and, if needed, suggest action steps. Faculty can also use AI to recommend relevant literature that supports students' continued growth.
Design Challenge & Research Motivation
Feedback arrives too late to influence the next clinical session.
Verbal comments are quickly forgotten.
Emotional discomfort discourages honest conversations.
There is no longitudinal record of growth or repeated issues.
Prototype and User Flow Audit
We concluded that the current EvalNow prototype was limited in its survey-like questions, lacked a student flow, and did not have a mobile-friendly format that would be intuitive for both students and faculty.
Heuristic Audit of the Old EvalNow Application
Foundational Research & Insights
Patterns Across Current Solutions
Comparative Analysis
We examined platforms used in clinical and competency-based education, including MedHub, One45, MyEvaluations, LiftUpp, F3App, MedSimAI, and ShadowHealth, as well as existing NYU systems based on the following criteria:
User flows and clarity of steps
Feedback and rubric structures
Reflection processes
Dashboards and progress views
Longitudinal assessments
Key Findings from Comparisons
High-performing systems centralize student information and feedback history in a single view.
Real-time documentation must be extremely lightweight to fit into clinical environments.
Students benefit from clear progress indicators that reduce uncertainty and support self-regulation.

Example of MedSimAI Feedback User Flow
Evidence-Based User Insights
Faculty and Student Surveys
Using Qualtrics, we surveyed 14 dental faculty members and 2 dental students to identify patterns in satisfaction, clarity, workflow alignment, and emotional responses to the current process.
Students consistently seek clarity about expectations and performance trends.
Faculty report friction when documenting feedback during patient care.
Both groups feel the current system does not effectively support long-term skill development.

Example Question from Faculty Survey
Student Interviews
We conducted two semi-structured interviews with fourth-year dental students from diverse clinical rotations to understand how they prepare for patient encounters, interpret varying faculty expectations, and respond emotionally and cognitively to feedback. The interviews also explored how students currently track progress outside existing systems, and identified moments when feedback supports learning versus when it undermines confidence or clarity.
Students often enter the clinic without knowing which faculty member they will be assigned, creating unpredictable expectations.
Variability in scoring and comments makes it difficult for students to see whether they are improving.
Reflection is often rushed and disconnected from the rest of the workflow.

Affinity Diagram of Faculty Surveys, Student Interviews, and Client Notes
Ideation
User Flow
The user flow was created during the ideation phase to map how faculty might move through EvalNow as part of their real clinical routines. It explores end-to-end interactions, from reviewing students and capturing in-the-moment verbal feedback to reflecting on longitudinal insights and delivering structured, growth-oriented evaluations. The flow surfaced key design considerations around reducing cognitive load, supporting flexible feedback inputs (voice and text), and embedding reflection and AI assistance without disrupting clinical workflows. These considerations ultimately guided early feature prioritization and system architecture decisions.
*While we also ideated a student user flow, we prioritized the faculty flow for our final high-fidelity design because we are still collecting student survey and interview data. That data will inform a dashboard that showcases students' longitudinal success and questions that allow students to metacognitively assess their own performance.*

Conceptual Wireframes
The interface follows a form-like design and supports quantitative input types such as radio buttons, multiple choice, quick tags, and a spectrum selector. At the end, there is a space for faculty to give additional feedback that did not fit the earlier sections.

Low-Fidelity Prototype of Faculty Flow
High-Fidelity Design
These high-fidelity screens illustrate the redesigned faculty assessment flow, from reviewing pending evaluations to submitting structured, growth-oriented feedback. The experience prioritizes speed and clarity through progressive disclosure, quick tags, voice input, and competency sliders, allowing faculty to capture insights efficiently during busy clinical sessions. AI-assisted rewrites and competency comparisons help translate observations into psychologically safe, actionable feedback, while draft states and confirmations support flexibility without disrupting workflow.


Competency Comparison
Our clients requested a clearer understanding of students' self-perception before writing their own evaluations. They emphasized that without seeing the student's reflection first, their comments often lacked alignment, specificity, or meaningful dialogue. As a result, we designed a competency comparison card that allows faculty to review the student's self-assessment on a scale of 1 to 5.

AI Rewrite and Supplemental Recommendations
Our clients were interested in the use of AI within the EvalNow app. Based on our literature review and competitor analysis, we decided that an "AI Growth-Mindset Rewrite" feature and an "Add Extra Studies" card would help faculty give constructive feedback that promotes growth and reduces the emotional strain associated with the existing feedback process. The rewrite summarizes the student's progress and recommends actionable next steps. The "Add Extra Studies" card reinforces those action steps by pointing students to real resources on the technical challenges they are facing.

Voice Feedback
Finally, we added a voice-note feature to speed up the typing process for faculty. Each voice memo is transcribed into text, and faculty can edit the transcription if it is inaccurate.
Impact and Next Steps
Reflection
Overall, this project has been extremely satisfying to work on. I am grateful that our clients trusted our team to reimagine the EvalNow interface and take direction on complex issues such as HIPAA, constructive feedback flows, and efficiency.
Understanding patient privacy through empathy and healthcare research was critical to understanding how features such as voice recording and AI could be included in the app. Because patients' names and information could not be disclosed, we had to be strategic in creating a flow that focuses primarily on the student dentist treating them and on that student's skills.
