Noyo: Issue review tool

Date: 2022
Role: Senior product designer

Background: The issue review tool is an internal tool that lets our operations team audit incoming data files from benefits-administration (ben-admin) customers for potential problems, such as duplicated members, accidentally removed members, or incorrect SSNs. Some of this review is automated, some has yet to be automated, and some by nature cannot be automated and must be done manually.
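
To give a concrete sense of the kind of automated check involved, here is a purely illustrative sketch, not our actual implementation; all names and data shapes are hypothetical. It flags incoming rows that share an SSN so an operations user can review them before the file is ingested.

```ts
// Hypothetical shapes for illustration only.
interface IncomingMember {
  memberId: string;
  ssn: string;
  firstName: string;
  lastName: string;
}

interface ReviewFlag {
  type: "duplicate_member";
  memberIds: string[];
  detail: string;
}

// Flag incoming rows that share an SSN, so a reviewer can look at them
// before the file is ingested.
function flagDuplicateMembers(members: IncomingMember[]): ReviewFlag[] {
  const bySsn = new Map<string, IncomingMember[]>();
  for (const m of members) {
    const group = bySsn.get(m.ssn) ?? [];
    group.push(m);
    bySsn.set(m.ssn, group);
  }
  return Array.from(bySsn.values())
    .filter((group) => group.length > 1)
    .map((group): ReviewFlag => ({
      type: "duplicate_member",
      memberIds: group.map((m) => m.memberId),
      detail: `${group.length} incoming rows share the same SSN`,
    }));
}
```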

Problem: The existing review tool that operations was using was built in Retool, which worked well for a while but became very slow as larger and larger files were ingested. Additionally, there were UX improvements we could not make in Retool that would have made the process easier for our team.

Goal: We needed to build a custom review tool that would speed up the process of issue reviews and be easier to parse than the existing tool.

Process: We spent time with the operations team, interviewed them about their experiences using the existing tool, watched them do reviews, and used the tool ourselves. We made observations about what was working and what wasn’t. We spent some cycles iterating on possible solutions, reviewed each solution with stakeholders and operations, and eventually landed on a solution that tested well and was in scope for the project. Once we launched it, we conducted more usability sessions and gathered more feedback in order to keep iterating on it. We also compared the time it took to review files in the old tool vs. the new tool.

Result: The tool we built was reported to be faster in most review cases, and contained more of the information the operations team needed to make decisions. They had to scroll less to get this information, which they liked, and the information itself was more relevant to the issue they were reviewing. The operations team was able to successfully switch from the old tool to the new tool.

Screenshot of final product

Screenshot of final product

Below, I will go into more detail about the process and iterations, and what we learned from each along the way to the launch version.

Early iterations

From some of the earliest iterations, we knew that we wanted a list of “issues”, similar to the existing tool. A list is easy to scan and click through, and was also consistent with our existing UX patterns.

From this iteration, it was decided that separating by “type” (missing, new, etc.) was a pattern from the existing tool that we did not want to carry over into the new tool, because that separation did not make decisions any easier or faster.

We learned that it would be faster to make decisions on the item itself, but that more than one decision would need to be made for each item, and that the context provided here was not enough to make those decisions without seeing more details.

The next screens show some of the more relevant iterations, in which we explored potential ways to show those details and provide enough actionable items per issue.

We began exploring the detail view of an issue. We created a side-by-side view: the left side showed the new information, and the right side showed the existing information against which to compare it. The right side was essentially a fully functioning profile of the group and/or member, so that the operations user could drill down into any part of the profile that would help make a decision.

On the left side were the details of the new information coming in with the file. Each section was editable in case of wrong-but-easily-fixable data, and also had its own decision to be made. That way, users could select which parts of the new data to carry into our system.
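
To illustrate the idea of per-section decisions, here is a hypothetical sketch of how an issue might be modeled so that each part carries its own decision. This is illustrative only, not the production data model, and all names are assumptions.

```ts
// Hypothetical model: each issue is split into sections, and the reviewer
// records a separate decision per section.
type Decision = "include" | "exclude" | "undecided";

interface IssueSection {
  name: string;                        // e.g. "demographics" or "enrollment"
  incoming: Record<string, unknown>;   // new values arriving with the file
  existing?: Record<string, unknown>;  // current values, for comparison
  decision: Decision;                  // each section gets its own decision
}

interface ReviewIssue {
  issueId: string;
  memberId: string;
  sections: IssueSection[];
}

// Only the sections the reviewer chose to include are carried into the system.
function approvedSections(issue: ReviewIssue): IssueSection[] {
  return issue.sections.filter((s) => s.decision === "include");
}
```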

What we learned from this iteration was that even though seeing existing and new data side by side was very helpful for comparisons, the data was not targeted enough to be useful, and it would have been too many clicks and too overwhelming for the user. It would not have sped up the process. It was helpful to highlight where the system found a potential problem, as that indicated to the user that this was something to pay closer attention to. It was correct, though, to allow different decisions for different parts of the issue.

Users also wished that they would not have to go into a whole focused view for each issue to make a decision, because it meant too many clicks, and in 80% of cases they would not need as much information as we were displaying there.

The other thing we learned was that editing the new data was, for now, out of scope.

So, I made this prototype, which had a similar side-by-side view but did not show as much data all at once, and did not require going into a whole new page. I thought maybe a “quick view” would be the solution to having fewer clicks.

This prototype did not last long, though, because we soon realized that if we ever needed additional modals on top of it, the experience would become unusable. It also still involved too much clicking and too much information that was not needed.

So, we iterated until we arrived at a list view that would have just enough information to make a decision in 80% of cases, with an option to go into a detail view if desired. Below is one screen from that series of explorations.

This list was actionable just by expanding the item. It contained enough contextual information to make a decision in most cases (the information displayed would depend on the type of issue).

We learned from this one that displaying the resulting transaction type was out of scope, and that a detail view was not actually required for the first launch, as long as we could get the contextual information accurate for each type of issue.

We also learned that while it was easier to take actions, it was more difficult for users to construct a narrative and make the right decision when they couldn’t see all the incoming changes for a specific employee or member.

So we took what we learned from this version, and eventually landed on a version that was organized by employee and actionable on the list view without a detail view.

The final iteration for build was a list view that allowed scanning the whole list, expanding any items of interest in order to make a decision, contained enough contextual information to make that decision, and was organized by employee so the user could easily construct a narrative of what was changing for that member.
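
As an illustration of the organizing principle (not the actual implementation), grouping the flat issue list by employee could look something like the sketch below; the helper is generic so it doesn’t assume any particular issue shape.

```ts
// Group a flat list of issues by a key (e.g. the employee/member ID) so that
// every incoming change for one member can be reviewed together.
function groupBy<T>(items: T[], key: (item: T) => string): Map<string, T[]> {
  const groups = new Map<string, T[]>();
  for (const item of items) {
    const k = key(item);
    const group = groups.get(k) ?? [];
    group.push(item);
    groups.set(k, group);
  }
  return groups;
}

// e.g. groupBy(issues, (issue) => issue.memberId)
```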

It calls out when the system detects a potential problem, and allows for actioning on specific parts of each issue.

If you choose not to include specific parts of an issue, those parts are highlighted as removed, so you get visual confirmation that you are not including those pieces. The Retool version did not do this as effectively.

The Retool version also did not save your place as you made progress through a review; this one does.
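
Purely as an illustration of the “save your place” behavior, here is a minimal sketch that assumes simple browser storage and hypothetical names; the real tool’s persistence details may well differ.

```ts
// Hypothetical shape for where a reviewer left off in a file.
interface ReviewProgress {
  fileId: string;
  lastViewedIssueId: string;
  decidedIssueIds: string[];
}

// Persist progress so reopening the file restores the reviewer's place.
function saveProgress(progress: ReviewProgress): void {
  localStorage.setItem(
    `review-progress:${progress.fileId}`,
    JSON.stringify(progress)
  );
}

function loadProgress(fileId: string): ReviewProgress | null {
  const raw = localStorage.getItem(`review-progress:${fileId}`);
  return raw ? (JSON.parse(raw) as ReviewProgress) : null;
}
```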

For all issue types, but especially clearly for demographic changes, we retained the “side by side” view of existing vs. new data. Users no longer had to scroll or leave the app to find the existing data, and the data was highly relevant because it was the same field being compared. This was well received by the team (they loved it), and it also tested well, helping users make fast decisions. It ended up being one of the most positively referenced features when the team was asked about their experience with the new app.
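
As an illustration of the same-field comparison (hypothetical names, not the production code), a field-by-field diff of existing vs. incoming data might look like this:

```ts
// One row of the side-by-side comparison for a single field.
interface FieldDiff {
  field: string;
  existing: unknown;
  incoming: unknown;
  changed: boolean;
}

// Compare existing vs. incoming values field by field, so only the relevant
// fields need to be shown side by side, with changes highlighted.
function diffFields(
  existing: Record<string, unknown>,
  incoming: Record<string, unknown>
): FieldDiff[] {
  const fields = new Set([...Object.keys(existing), ...Object.keys(incoming)]);
  return Array.from(fields).map((field) => ({
    field,
    existing: existing[field],
    incoming: incoming[field],
    changed: existing[field] !== incoming[field],
  }));
}
```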

When the incoming change adds a new person or enrollment, we display the existing people or enrollments for that person. This way, we provide the contextual information the operations user needs to decide whether the change should be approved, or rejected if it would create a duplicate of that member or enrollment.
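
To illustrate that duplicate context conceptually (the shapes and field names here are assumptions, not Noyo’s actual data model), a simple check might compare the incoming enrollment against the member’s existing ones:

```ts
// Hypothetical enrollment shape.
interface Enrollment {
  memberId: string;
  planId: string;
  effectiveDate: string; // ISO date
}

// Surface the member's existing enrollments next to the incoming one, and
// flag the incoming change if it looks like a duplicate.
function wouldDuplicate(incoming: Enrollment, existing: Enrollment[]): boolean {
  return existing.some(
    (e) =>
      e.memberId === incoming.memberId &&
      e.planId === incoming.planId &&
      e.effectiveDate === incoming.effectiveDate
  );
}
```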

A few more changes were made after launching, such as adding more quick-filtering options and changing how we displayed “type” in the table, but I don’t have a screenshot of that.