Context
Noyo is an insurtech platform that offers both API and non-API solutions for enrollment management.
The Noyo operations team needs to manually review files that are sent in (almost daily) by benefits-administration companies.
The operations team reviews the updates to check for potential issues (such as duplicate enrollments or accidental removals) and needs to make decisions quickly and with confidence.

The tool, before the redesign
Discovery & Observation
How could we reduce the time needed for a review?
Why did the tool have such a steep learning curve? Why did newer reviewers struggle with it?
What types of decisions can be automated? Which can't?
Which parts of the tool most frustrated the reviewers?
I created a plan to interview and observe members of the Ops team.
I interviewed each team member about their experiences with the tool, then asked them to review three to four files while I observed.
After all sessions were done, I synthesized my findings to present and discuss with my team.
Problem 1: Missing narrative
Organizing updates by "type" hid the connections between related updates behind tabs, making it difficult for reviewers to quickly and confidently scan for a narrative to base their decisions on.

Problem 2: Hidden or missing context
Sometimes, relevant information about enrollments was missing, so reviewers had to keep multiple apps open just to find that data.
Other times, the data was buried within the table, requiring a lot of horizontal scrolling to read it. And not all of the data in the table was even useful for making a decision.

Problem 3: Progress frequently lost
Decisions were made by toggling a checkbox on or off, and every box loaded checked ("on") by default. If you lost internet or got distracted, you had no way of knowing whether a box was checked because you had decided to keep the item, or because you hadn't reviewed it yet.
Participants told me it was frustrating to start from scratch each time.

Problem 4: Not enough guidance in-app
New reviewers pinged their managers constantly with questions about rules, how to make decisions, and similar issues.
Problem 5: Third-party app
Being built in a third-party app meant that our engineers could not easily maintain or update the tool, customization was limited, and it struggled with large file sizes.
Usability testing
I met with users individually and asked them to review a file in our new tool and the same file in the existing tool (alternating which came first) while I observed.
I also asked them questions to get a sense of how they felt using the new tool.
We opted to test using the actual tool, instead of clickable Figma wireframes, because it allowed us to quickly test with real files and compare the new tool directly against the existing one.
Did organizing the updates by member create a clear narrative? Did that narrative make reviewing the file less complex?
Were we missing any functionalities?
How easy was it to use for newer Ops members?
Which tool made file reviews faster?
Were we displaying enough contextual information?
Could users track their progress in this tool as intended?
Was the information more scannable?
Were there any friction points in this UI?
Problem 1: Missing narrative — Solved
Organizing updates by member created a narrative that connected related items under the same update, which let reviewers easily scan that narrative and make decisions.
I also grouped updates according to a hierarchy: updates that could only be approved within another update (such as adding enrollments for a new member) were nested beneath that parent update.
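To make the grouping concrete, here's a minimal TypeScript sketch of how member-grouped, hierarchical updates could be modeled. The type and field names are hypothetical, not Noyo's actual schema:

```typescript
// Illustrative sketch only — hypothetical types, not Noyo's actual data model.
// Updates are grouped by member, and child updates (enrollments that can only
// exist once the member is added) nest under their parent update.

type UpdateKind =
  | "add_member"
  | "add_enrollment"
  | "change_demographics"
  | "terminate_enrollment";

interface MemberUpdate {
  id: string;
  kind: UpdateKind;
  summary: string;          // e.g. "Add dental enrollment"
  children: MemberUpdate[]; // updates only approvable within this one
}

interface MemberGroup {
  memberName: string;       // the "narrative" anchor for the reviewer
  updates: MemberUpdate[];  // top-level updates for this member
}

// Example: a new member whose enrollments hang off the "add member" update.
const group: MemberGroup = {
  memberName: "Jane Doe",
  updates: [
    {
      id: "u1",
      kind: "add_member",
      summary: "Add Jane Doe",
      children: [
        { id: "u2", kind: "add_enrollment", summary: "Add dental enrollment", children: [] },
        { id: "u3", kind: "add_enrollment", summary: "Add vision enrollment", children: [] },
      ],
    },
  ],
};
```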

Problem 2: Hidden or missing context — Solved
The new app displays the right amount of contextual information for users to make a decision about an item. It also uses the entire width of the screen, as opposed to a third of it, to show that data.
As a result, reviewers no longer need to keep multiple reference tabs open, nor do they need to scroll horizontally to hunt for data.
Participants especially loved the side-by-side view of "current" and "new" enrollment data.
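For readers curious how a view like this could be driven, here's a rough sketch. The field names and helper are invented for illustration, not the tool's real implementation:

```typescript
// Illustrative sketch — hypothetical types. A side-by-side view just needs
// each field paired with its current and proposed values, so the UI can
// render two columns and highlight the fields that differ.

interface FieldComparison {
  field: string;           // e.g. "Plan", "Coverage tier"
  current: string | null;  // null when the member has no existing enrollment
  proposed: string;
}

function changedFields(rows: FieldComparison[]): FieldComparison[] {
  return rows.filter((r) => r.current !== r.proposed);
}

const dental: FieldComparison[] = [
  { field: "Plan", current: "Dental Basic", proposed: "Dental Plus" },
  { field: "Coverage tier", current: "Employee only", proposed: "Employee + family" },
];
// changedFields(dental) returns both rows, so both would be highlighted.
```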


Problem 3: Progress frequently lost — Solved
We added a checkbox that lets the reviewer mark a group of updates as "done," along with a per-group status that changes when the box is checked. Because every group starts out as not-yet-reviewed, an interrupted reviewer can pick up exactly where they left off.
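A minimal sketch of the idea, with hypothetical names. The key is that progress is explicit state, separate from the approve/reject decision, rather than a default-checked box:

```typescript
// Illustrative sketch — hypothetical names. Review progress is explicit
// state, distinct from the decision itself, and every group starts as
// "unreviewed" instead of defaulting to a decision nobody made.

type GroupStatus = "unreviewed" | "in_progress" | "done";

interface GroupProgress {
  groupId: string;
  status: GroupStatus;
}

function markDone(progress: GroupProgress[], groupId: string): GroupProgress[] {
  return progress.map((g) =>
    g.groupId === groupId ? { ...g, status: "done" } : g
  );
}

// Persisting this state means an interrupted reviewer can reload the file
// and see exactly which groups still say "unreviewed".
```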

Problem 4: Not enough guidance in-app — Solved
We added tips and hints throughout the UI to answer common questions, highlight potential problems, and provide guidance on how to make decisions. Some of these tips are driven by carrier-specific rules, which differ from one carrier to the next.
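One plausible way to model this kind of rule-driven guidance. The carrier name, rule, and wording below are made up for illustration:

```typescript
// Illustrative sketch — hypothetical rules and wording. In-app guidance can
// be modeled as carrier-specific rules that attach a hint to an update when
// a condition matches.

interface ReviewHint {
  carrier: string;                          // rules differ between carriers
  appliesTo: (u: { kind: string }) => boolean;
  message: string;
}

const hints: ReviewHint[] = [
  {
    carrier: "Acme Dental",
    appliesTo: (u) => u.kind === "terminate_enrollment",
    message: "Acme Dental requires a termination reason; double-check before approving.",
  },
];

function hintsFor(carrier: string, update: { kind: string }): string[] {
  return hints
    .filter((h) => h.carrier === carrier && h.appliesTo(update))
    .map((h) => h.message);
}
```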



Problem 5: Third-party app — Solved
Because the new tool was built in-house by our engineers, they could maintain and update it directly, customize it freely, and it handled large files without issue.
Finding: Bulk actions were needed
Reviewers were making decisions faster, but they were still slowed down by repetitive clicking. So, after testing, we added the ability to apply decisions in bulk.
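Conceptually, a bulk action just applies one decision to every selected item in a single step instead of one click per item. A rough sketch, with hypothetical types:

```typescript
// Illustrative sketch — hypothetical shape, not the tool's real code.
// A bulk action applies one decision to every selected update at once.

type Decision = "approve" | "reject";

interface ReviewItem {
  id: string;
  decision: Decision | null; // null = not yet decided
}

function bulkApply(
  items: ReviewItem[],
  selectedIds: Set<string>,
  decision: Decision
): ReviewItem[] {
  return items.map((item) =>
    selectedIds.has(item.id) ? { ...item, decision } : item
  );
}
```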
Solution
A custom tool that uses member narratives to guide the user through file reviews faster and with less complexity.
An example of an enrollment update: dental and vision enrollments being created for a member who already has dental and vision coverage.
An example of a demographic update alongside updates to existing enrollments.
Impact
After switching over to our new tool, the operations team anecdotally reported:
Faster turnaround times.
Fewer questions for their team leads.
Less frustration.
Less of a learning curve required for new reviewers.
My takeaways
Be clear and focused about what you want feedback on, especially with leadership.
Early in the process, I had run some design reviews with leadership without a focused purpose, so we got slowed down by minor UI discussions at a time when we should have been discussing higher-level questions: the problem itself, how we wanted the tool to affect our business goals, and what our desired success outcomes for the new tool would be.