I created a plan to interview and observe members of the Ops team.
I outlined my questions and created a script around them, then recruited members of the Ops team to meet with me. With each user, I interviewed them about their experiences with the tool, then asked them to review 3-4 files while I observed. I recorded each session so I could review it later and share it with my team.
Once all sessions were complete, I synthesized my findings to present to and discuss with my team.
How could we reduce the time needed for a review?
Why was the learning curve so steep? What made newer reviewers struggle with the tool?
What types of decisions can be automated? Which can't?
What things about the tool really bugged the reviewers?
Organizing updates by “type” obscured the relationships between them, so the narrative of a file was hidden.
No way to track progress.
If a reviewer got distracted or lost their internet connection, they had to start the review over from the beginning.
Not enough relevant data shown.
Users often had to hunt down a member’s current information in order to compare it against the incoming updates, which meant keeping multiple tabs open at once.
The tool, before the redesign
I created a testing plan to validate our prototype and identify what we needed to iterate on. I met with users individually and asked them to review a file in our tool and then the same file in the existing tool (in alternating order) while I observed. I also asked questions to get a sense of how they felt using the new tool, and I recorded the sessions.
Note: We opted to test our prototype by building out the application, rather than using clickable Figma wireframes, because it let us quickly test with real files and compare directly against the existing tool.
Did organizing the updates by member create a clear narrative? Did that narrative make reviewing the file less complex?
Were we missing any functionalities?
Which tool made file reviews faster?
How easy was it to use for newer Ops members?
Were we displaying enough contextual information?
Could users track their progress in this tool as intended?
Was the information more scannable?
Were there any friction points in this UI?
Organizing updates by family created a narrative of the update(s) in a way the existing tool did not.
Participants could quickly scan them and tell us the story of the updates; they were not able to do that in the existing tool.
Because of this, they had a much easier time deciding which updates to accept and which to reject.
We were displaying the right amount of contextual information.
Many participants commented, unprompted, that they loved seeing the side-by-side comparisons of “current” and “new” enrollment data. They no longer had to hunt down that information in the overpopulated little tables of the existing tool, or in other apps when they couldn’t find it there.
Our tool was much faster for reviews that required many thoughtful decisions, but slower for reviews where users simply needed to “skip” most of the updates.
We realized we needed to add a bulk decision-making option to account for this.
This view shows updates for a member: adding dental and vision enrollments.
Note the member’s current enrollments displayed underneath; this lets the user see that the new enrollments might be duplicates.
Be clear and focused about what you want feedback on, especially with leadership.
Early in the process, I had done some design reviews with leadership without a focused purpose, so we got slowed down by minor UI discussions at a time when we should have been aligning on higher-level questions: the problem we were solving, how we wanted the tool to affect our business goals, and what success would look like for the new tool.
In a tool like this, where users need to open every row to make their decisions, there may be a better way to display the information.
We ended up defaulting the tool to “open all rows”, because that cut down on the number of clicks users needed to make. But if I were to do this project over again, I would probably opt for a UI that didn’t use collapsible rows at all.