EvalNow
Transforming Clinical Feedback into Actionable, Equitable, and Psychologically Safe Learning
How might we transform EvalNow from a static evaluation form into a low-friction, psychologically safe feedback app that supports clinical growth?
Team
Srishty Bhavsar, Tori Stroud, Jenn Choi, Farong Ren
Role
UX Researcher and Designer
Tools
Figma, FigJam, Qualtrics
Duration
12 weeks and ongoing
Clients

Dr. Marci Levine
Clinical Associate Professor, Oral and Maxillofacial Surgery (NYUCD)

Dr. Elizabeth McAlpin
Director of Educational Technology Research (RIT-NYU IT)
Design Challenge & Research Motivation
Clinical feedback in high-stakes learning environments lacks transparency, consistency, and continuity, resulting in fragmented workflows, emotionally charged experiences, and weakly standardized faculty evaluations.
Feedback arrives too late to influence the next clinical session.
Verbal comments are quickly forgotten.
Emotional discomfort discourages honest conversations.
There is no longitudinal record of growth or repeated issues.
Prototype and User Flow Audit
We concluded that the current EvalNow prototype was limited to survey-like questions, lacked a student flow, and was not presented in a mobile-friendly format intuitive for both students and faculty.

Heuristic audit of old EvalNow Application
Foundational Research & Insights
Our research focused on understanding how clinical feedback is delivered and experienced across NYU Dentistry's undergraduate clinics.
1. How do students and faculty currently experience the clinical feedback ecosystem, and what pain points emerge across workflows, tools, and evaluation practices?
2. How do students interpret, respond to, and internalize clinical feedback, and where do gaps exist in supporting self-assessment, reflection, transparency, and progress tracking over time?
3. What constraints shape faculty feedback behaviors, and how might they inform opportunities for a more structured, equitable, and actionable feedback process?
Our literature review emphasizes that effective clinical feedback must be both timely and embedded within existing workflows, allowing instructors to capture brief, in-the-moment insights without disrupting care or increasing cognitive load. The literature also highlights the importance of bidirectional feedback, where students and faculty reflect on the same clinical encounter, compare perspectives, and engage in structured dialogue, transforming feedback from a one-way evaluation into an ongoing learning conversation.
Patterns Across Current Solutions
The current market for medical feedback systems is fragmented and largely reliant on generic evaluation tools that struggle to balance ease of use, real-time clinical integration, and meaningful longitudinal insight, leaving significant opportunity for solutions tailored to the unique demands of healthcare education.
Comparative Analysis
We examined platforms used in clinical and competency-based education, including MedHub, One45, MyEvaluations, LiftUpp, F3App, MedSimAI, and ShadowHealth, as well as existing NYU systems based on the following criteria:
User flows and clarity of steps
Feedback and rubric structures
Reflection processes
Dashboards and progress views
Longitudinal assessments
Key Findings from Comparisons
High-performing systems centralize student information and feedback history in a single view.
Real-time documentation must be extremely lightweight to fit into clinical environments.
Students benefit from clear progress indicators that reduce uncertainty and support self-regulation.

Example of MedSimAI Feedback User Flow
Evidence-Based User Insights
"Students do not fully recognize the value of constructive feedback so more often I am working with student feelings and emotions rather than constructive teaching." - Dental Faculty Member (Survey)
Faculty and Student Surveys
Using Qualtrics, we surveyed 14 dental faculty members and 2 dental students to identify patterns in satisfaction, clarity, workflow alignment, and emotional responses to the current process.
Students consistently seek clarity about expectations and performance trends.
Faculty report friction when documenting feedback during patient care.
Both groups feel the current system does not effectively support long-term skill development.

Example Question from Faculty Survey
Student Interviews
We conducted two semi-structured interviews with fourth-year dental students across diverse clinical rotations to understand how they prepare for patient encounters, interpret varying faculty expectations, and emotionally and cognitively respond to feedback. The interviews also explored how students currently track progress outside existing systems and identified moments when feedback supports learning versus when it undermines confidence or clarity.
Students often enter the clinic without knowing which faculty member they will be assigned, creating unpredictable expectations.
Variability in scoring and comments makes it difficult for students to see whether they are improving.
Reflection is often rushed and disconnected from the rest of the workflow.

Affinity Diagram of Faculty Surveys, Student Interviews, and Client Notes
Concept Strategy & Direction
User Flow
The user flow was created during the ideation phase to map how faculty might move through EvalNow as part of their real clinical routines. It explores end-to-end interactions, from reviewing students and capturing in-the-moment verbal feedback to reflecting on longitudinal insights and delivering structured, growth-oriented evaluations. The flow helped surface key design considerations around reducing cognitive load, supporting flexible feedback inputs (voice and text), and embedding reflection and AI assistance without disrupting clinical workflows, ultimately guiding early feature prioritization and system architecture decisions.
*While we also ideated a student user flow, we prioritized the faculty flow for our final high-fidelity design; we are still collecting student survey and interview data in order to design a dashboard that showcases students' longitudinal success, along with questions that allow students to metacognitively assess their own performance.*

Faculty Flow by Jenn Choi
Conceptual Wireframes
The redesign effort aimed to integrate high-priority features that allow faculty to complete formative assessments.
The interface follows a form-like design and supports quantitative feedback inputs such as radio buttons, multiple choice, quick tags, and a spectrum selector. At the end, there is a space for faculty to give additional feedback not covered in the earlier sections.

Low Fidelity prototype of Faculty Flow
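As a rough illustration of how these input types could be handed off as a technical specification, the TypeScript sketch below models the form's question and response structure. The type names and fields are assumptions for illustration, not the final schema.

```typescript
// Hypothetical data model for the faculty evaluation form (names are illustrative).
type QuestionType = "radio" | "multipleChoice" | "quickTags" | "spectrum" | "freeText";

interface EvaluationQuestion {
  id: string;
  prompt: string;
  type: QuestionType;
  options?: string[]; // for radio / multipleChoice / quickTags
  scale?: { min: number; max: number; minLabel: string; maxLabel: string }; // for spectrum selector
}

interface EvaluationResponse {
  questionId: string;
  selected?: string[]; // chosen option(s) or quick tags
  scaleValue?: number; // spectrum selector position
  text?: string;       // free-text comment
}

interface FacultyEvaluation {
  studentId: string;
  facultyId: string;
  encounterDate: string; // ISO date of the clinical session
  responses: EvaluationResponse[];
  additionalFeedback?: string; // closing open-ended section
  status: "draft" | "submitted";
}
```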
High-Fidelity Design
These high-fidelity screens illustrate the redesigned faculty assessment flow, from reviewing pending evaluations to submitting structured, growth-oriented feedback. The experience prioritizes speed and clarity through progressive disclosure, quick tags, voice input, and competency sliders, allowing faculty to capture insights efficiently during busy clinical sessions. AI-assisted rewrites and competency comparisons help translate observations into psychologically safe, actionable feedback, while draft states and confirmations support flexibility without disrupting workflow.


Competency Comparison
Our clients requested a clearer understanding of students' self-perception before writing their own evaluations. They emphasized that without seeing the student's reflection first, their comments often lacked alignment, specificity, or meaningful dialogue. As a result, we designed a competency comparison card that allows faculty to review the student's own assessment on a scale of 1 to 5.
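To illustrate how the comparison card could pair the two perspectives, the sketch below models a competency rated by both student and faculty and computes the gap between them; the names and structure are hypothetical.

```typescript
// Illustrative sketch: pairing a student's 1-5 self-rating with the faculty rating
// for each competency so the comparison card can surface alignment gaps.
interface CompetencyRating {
  competency: string;  // e.g. "Treatment planning"
  studentSelf: number; // 1-5, from the student's reflection
  faculty?: number;    // 1-5, filled in by faculty after reviewing the reflection
}

function alignmentGap(rating: CompetencyRating): number | null {
  // Positive gap: faculty rated higher than the student's self-assessment.
  if (rating.faculty === undefined) return null;
  return rating.faculty - rating.studentSelf;
}

const example: CompetencyRating = { competency: "Treatment planning", studentSelf: 3, faculty: 4 };
console.log(alignmentGap(example)); // 1 → faculty sees stronger performance than the student does
```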

AI Rewrite and Supplemental Recommendations
Our clients were interested in the use of AI within the EvalNow app. Through our literature review and competitive analysis, we decided that an "AI Growth-Mindset Rewrite" feature and an "Add extra studies" card would help faculty give constructive feedback that promotes student growth and reduces the emotional charge associated with the existing feedback process. The rewrite summarizes the student's progress and recommends actionable steps for them to take. The "Add extra studies" card reinforces those actionable steps by pointing students to real resources for the technical challenges they are facing.
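As a hedged sketch of how the rewrite feature might be wired up, the example below shows one possible request/response shape for a rewrite service; the endpoint, fields, and response format are placeholders rather than a defined API.

```typescript
// Hypothetical request/response contract for the "AI Growth-Mindset Rewrite" feature.
// The endpoint and field names are assumptions for illustration only.
interface RewriteRequest {
  draftFeedback: string;    // faculty's original comment
  competencyGaps: string[]; // areas flagged during the evaluation
}

interface RewriteResponse {
  rewrittenFeedback: string;      // growth-oriented summary of progress
  actionableSteps: string[];      // concrete next steps for the student
  recommendedResources: string[]; // could feed the "Add extra studies" card
}

async function requestGrowthMindsetRewrite(req: RewriteRequest): Promise<RewriteResponse> {
  const res = await fetch("/api/feedback/rewrite", { // placeholder endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`Rewrite service returned ${res.status}`);
  return (await res.json()) as RewriteResponse;
}
```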

Voice Feedback
Finally, we added a voice-note feedback option to speed up documentation for faculty. The voice memo is transcribed into text and can be edited by faculty if the transcription is inaccurate.
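Below is a minimal sketch of how in-browser capture could work, assuming the prototype uses the browser's Web Speech API; a production build might instead send audio to a server-side transcription service. The transcript lands in an editable field so faculty can correct inaccuracies before submitting.

```typescript
// Minimal sketch: capture a voice note and hand the transcript to an editable field.
function startVoiceFeedback(onTranscript: (text: string) => void): void {
  const Recognition =
    (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
  if (!Recognition) {
    console.warn("Speech recognition is not supported in this browser.");
    return;
  }
  const recognition = new Recognition();
  recognition.lang = "en-US";
  recognition.interimResults = false;

  recognition.onresult = (event: any) => {
    // Use the final transcript as the editable draft of the voice note.
    const transcript = event.results[0][0].transcript;
    onTranscript(transcript);
  };
  recognition.onerror = (event: any) => console.warn("Transcription error:", event.error);
  recognition.start();
}

// Usage: populate an editable textarea with the transcribed voice memo.
// startVoiceFeedback((text) => {
//   (document.getElementById("feedback-text") as HTMLTextAreaElement).value = text;
// });
```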
Impact and Next Steps
Although the final design is still ongoing, our high-fidelity prototypes will allow us to translate the finalized concepts into technical specifications ready for the next stage of funding, testing and development.
Final Prototype
Final Design Revisions


