Expertise

Leadership, Crisis Management, Design Sprint, Moderated Research, Client Management, High-Fidelity Mockups, User Testing

Platforms

Web, Tablet

Deliverables

Testing Script, Quantitative + Qualitative Test Results, High-Fidelity Mockups, Style Guides, Pitch Deck, Rolling Implementation

More?

View the design process and deliverables below.


Unhappy Client + Uninformed Users

The Client: FoodCorp

For purposes of privacy, we will call the client FoodCorp. FoodCorp was a large client during my time at TalentGuard, a human resources SaaS platform. FoodCorp had a large, distributed user base of part-time and seasonal workers who staffed restaurants and service-industry jobs, along with a small percentage of workers centralized at its Buffalo, NY, headquarters.

The Users: Service Workers

FoodCorp's headquarters was staffed with administrative and central operations management. These were the first-round implementation users, with the much larger population of distributed service workers to come in later waves. For our design sprint and subsequent moderated user testing, we focused on that larger array of service workers.

Photo taken on Day 2 of on-site moderated User Testing in Buffalo, NY.

Crisis Analysis and Management

Phase 1: Crisis Management, Client-User Empathy, Planning

Crisis Management

Our small Product Team at TalentGuard had started our last major Sprint Epic of 2019. After our initial problem-scoping and research-generated content, organizational leadership handed us a directive: one of our top clients was unhappy with how the product addressed usability and user orientation. This threatened the current state of our relationship with FoodCorp.

The result: leadership tasked us with implementing a walkthrough experience for novice, first-time users that would be mobile-friendly and scale across our service.

There was no precedent for Walkthroughs in our product, nor would a third-party widget suffice given our tech stack requirements.

Time-Crunch

Our team was given a two-week deadline: one week to pivot from our major epic and address FoodCorp's user needs with a test-ready, high-fidelity design, and one week to develop metrics measuring task completion, informed navigation, and ease of use.

It was an incredibly exciting time, and I believe this two-week challenge is what solidified our Product Team's teamwork and allowed me to demonstrate leadership in problem-solving.

Two weeks to understand the problem, design, mock up, prepare interactive hi-fis, develop testing materials, and moderate tests in person.

Team Collaboration

Mode of Operations

Once the challenge was handed down from leadership, I brought together my Product Manager, my Design Manager, and our Technical Writer. We blocked off the schedule for our War Room, starting Monday, with whiteboard paint on both sides of the walls, and established the following:

1) A decision-making protocol for unanimous, actionable decisions.

2) Working backwards from our Friday deadline for a testable prototype to determine our individual commitments.

3) The principles of what a well-designed walkthrough provides a user: understand how the different functions in our product bring value; help users understand why they are in a specific location in our software; and orient users to how the experience in one location relates to experiences elsewhere in our software.

Design Planning

Establish Alignment and Deliverable Expectations

A messy whiteboard brought about a Research Plan consisting of:

  1. Stakeholder Alignment on Scope/Objective
  2. Our Predispositions/Assumptions about Users/Problem to lead our design hypotheses
  3. Knowledge about the testing Field Site to construct a proposal for the client to redline
  4. Profiles of Participants to recruit from client
  5. Data Collection Plan
  6. Test Script for Evaluative Design (to be created from the Hi-Fi work)

Team Leadership

Phase 2: Scoping Design, Decisions Taken, Concept Alignment

Process Alignment

Starting with Purpose

Our client and our CTO suggested we solve this first-time-user problem with a Walkthrough.

But what is a Walkthrough?
What makes a good Walkthrough?
What makes a Walkthrough scalable?

Photo taken of Purpose Alignment

Scoping

We needed to deliver a walkthrough experience to acclimate novice users to the areas of the product this client used most.

We agreed on three Success Criteria:

walkthroughs for 10 areas of the UI; generalization to all current clients; and functionality to scale to other product areas for maintenance and growth of the UI.

Photo of 10 UI Areas

Aligning

To align my team, we proposed that our Walkthrough deliver on 3 primary values defining what makes a "good" walkthrough:

Why am I here?

Where am I?

How does this experience relate to others I might have (in this software)?

Photo of "Purpose" Ideation activity

Time-Box Ideation

Then, we Ideated on what UI elements and Content might satisfy each of the 3 Walkthrough Primary Values above, for each of the 10 areas of the UI we needed to walk our users through.

We Affinity Diagrammed our individual results to define common threads of thought, then identified the summative Category of each thread.

Finally, we Voted for Main Areas of Focus based upon those Categories.

Now we were ready to Concept on the Form of the Walkthrough.

Photo of colleague voting on Main Area of Focus

Insights to Concepts

From here, we knew why we were doing what we were doing, and where.

Ultimately, the experiences we hypothesized users would have were laid out.

Now, all that was left was to map the info architecture of the content and form of the experience.

Photo of a Voting Results Section

Architecture + Visualization

Phase 3: Mapping Content Architecture, Test Script Writing, Executing Visual Design

Design Execution

Walkthroughs are like Tour Guides: An Analogy

How do we think of Hierarchy?

When a Tour Guide walks you through an experience, an exhibit, an individual painting, a city block: what does it feel like to be situated in your context?

When it is done well, doesn’t the history, reference, and ultimately meaning feel clear, although it was alien just moments ago? This was the analogy for our North Star regarding our Content Hierarchy structure.

Photo of Content Architecture abstract

10 Pages of Content Hierarchy, Mapped

Applying the hierarchy described in the image above resulted in whiteboard marker covering every wall of the design War Room. This material became the skeleton that would support the visual design.

Photos of Content Architecture

Concepting the Look & Feel

The form of the design didn’t come after our information architecture ideation sessions, but alongside them.

What walkthroughs do we like?
What exemplars are out there?
How might this user access this thing?
What would that feel like?
How does it feel to be greeted here?

Photos of Sketch Ideation

How Might We Test This Story?

After we documented our Content Architecture, we had enough information to create a testing script based on the primary goals users would visit each page of the UI to accomplish.

Each of these goals included task-completion information, error tracking, and attitudinal information from 7-point Likert-scale self-reports. I produced these documents and moderated the sessions; my colleagues observed and recorded errors and missed clicks.

We wrote Testing Scripts that explored a variety of variables for evaluation (a sketch of how one task record might be structured follows the list):

  1. Moderated, Onsite
  2. Directive Questions
  3. Open Questions
  4. User Scenarios
  5. Overall Task-Completion
  6. Discoverability
  7. Ease-of-Use
  8. Informative Use
  9. Likelihood-of-Use
  10. Why?
Examples from the Testing Script
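
To make the script concrete, here is a minimal, hypothetical sketch of how a single task record might be structured to capture the variables above: completion, errors, and the 7-point Likert attitudinal measures. All field names and example values are illustrative, not the actual study instrument.

```python
# Hypothetical structure for one scripted task record; field names and
# values are illustrative, not the actual FoodCorp study instrument.
from dataclasses import dataclass

@dataclass
class TaskRecord:
    scenario: str                          # user scenario read to the participant
    directive_question: str                # the concrete task prompt
    completed: bool = False                # overall task completion
    errors: int = 0                        # missed clicks / wrong turns observed
    ease_of_use: int | None = None         # 7-point Likert self-report
    informative_use: int | None = None     # 7-point Likert self-report
    likelihood_of_use: int | None = None   # 7-point Likert self-report
    why: str = ""                          # open "Why?" follow-up, verbatim

record = TaskRecord(
    scenario="You are a new hire checking your first schedule.",
    directive_question="Find where your upcoming shifts are listed.",
    completed=True,
    errors=1,
    ease_of_use=6,
    informative_use=5,
    likelihood_of_use=6,
    why="The help button made it obvious where to start.",
)
```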

High Fidelity Visual Design: A Long Weekend

After 4 Days of preparation, design work, research scripting, and communications with our client, we married our lo-fi sketches and Content Architecture into High Fidelity interactive prototypes. This was weekend work in anticipation of the flight out to our client on Monday.

All in all, we were prepared for testing with:

  1. 3 Prototypes
  2. 10 Testing Flows
  3. 5-7 Flows per Test User
Representation of Visual Design Modal

On-site User Testing

Phase 4: A mixture of User Testing and Brand Ambassadorship

Evaluative User Testing

Client Communications & Data

Demonstrating that a design is usable and persuading stakeholders are two different skills: there is theory (usability), and there is business (persuasion).

In our 4-day testing, one challenge we encountered was that 13 of the vetted users were client stakeholders in the corporate structure. Several were leadership. These were not the users of the system or designs we were testing.

What became clear by Day 3 of testing was that our reputation as a client partner was in focus. After all, this product is what our client's users leveraged to help map out their careers as hourly-wage workers for a corporate staffing company.

The light at the end of the tunnel was the hope and reaction of the 3 "real" users these walkthroughs were meant to empower. Their feedback not only validated the design direction from a quantitative standpoint, but also gave shape to the need for an easy-to-understand tool to help them manage career growth in their hourly-wage industry, an industry conventionally taken for granted as high-turnover with little guidance up the ranks.

Thinking Fast, Changing Lenses of "Intuitiveness" in Design

By Day 3, our team was delivering strong daily reports to the client, laying out positive trends in our Usability Evaluation with transparent reporting and raw data at the end of each daily session.

However, there was a strong opposing voice among the client's leadership stakeholders, a murmur of skepticism, something like:

"How do we know this is intuitive based on your measurements?"

As UX professionals, we know our work is judged against an outsider's understanding of "intuitive." We also know that "intuitive" is loosely defined and can mean many things.

So what do we do?

We code for intuitiveness through an evaluative method called Progressive Task Completion.

Progressive task completion looks something like the following (a scoring sketch follows the list).

  1. If I am prompting the Participant to complete Task 1, then I give them as little supporting information (or as little nudges) as possible as to how they may complete Task 1. If the user succeeds, then they get a full point value for the task. Let's say 1 point.
  2. If the user cannot complete the task with the information I provide, then I give them a hint, a nudge. If the user succeeds, then they get a fraction of a point value for the task. Let's say 0.75 of a point.
  3. If the user cannot complete the task with the nudge, I will give them further nudges until the task is all but obvious to them as to how they can complete it.
  4. This trend of task-prompting continues until the possible point value reaches zero. Why? Through this progressive disclosure of information, we are testing how well the affordances of the design lead a user, based on their mental model, to the completion of a task. The theory is that an "intuitive" interface should do this for the user with minimal human support.
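
Below is a minimal scoring sketch of this method, assuming the point values above (1 point unaided, 0.75 after one nudge, decreasing by 0.25 per additional nudge down to zero); the aggregation into a percentage is my illustration, not the exact analysis used in the study.

```python
# Minimal sketch of Progressive Task Completion scoring. The 0.25
# decrement matches the 1 -> 0.75 example above; the aggregation into
# a percentage of maximum points is an assumed, illustrative choice.

def score_task(nudges_needed: int, decrement: float = 0.25) -> float:
    """Score one task by how much moderator help was needed.

    nudges_needed -- hints given before the participant completed the
    task (0 means unaided success, worth the full point).
    """
    return max(0.0, 1.0 - nudges_needed * decrement)

def intuitiveness_score(nudge_counts: list[int]) -> float:
    """Aggregate per-task scores into a percentage of the maximum."""
    earned = sum(score_task(n) for n in nudge_counts)
    return earned / len(nudge_counts) * 100

# Example: five tasks, one of which needed a single nudge.
print(intuitiveness_score([0, 0, 1, 0, 0]))  # 95.0
```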

We successfully recoded the Testing Script for one final day of on-site testing to determine intuitiveness through this method, and gained sign-off on the method from the skeptical client stakeholder. Finally, we were able to analyze our results and implement accordingly.

Results and Design Implementation

Phase 5: Test Results, Refining the Interactive Hi-Fi Prototype for Implementation

Results

Connecting Design Measurements to Success Metrics (North Star)

Why am I here?

  1. Informative Use (Likert Scale)
  2. Desirability (Likert Scale)
  3. Discoverability (Observation)

Where am I?

  1. Findability (Observation)

How does this experience relate to others I might have (in this software)?

  1. Qualitative questions for thematic analysis

General Ease-of-Use, Task-Completion

  1. Findability (Observation)
  2. User Completion Percentage (Observation)
  3. Average Number of Errors (Observation)
  4. “Intuitiveness”
    (Day 3, Progressive Task Completion)

Reporting: Key Metrics

37/38

Total Successful Task Completions

96%

On Progressive Task Completion
(Coded for "Intuitiveness")

Other Notable Averages Include

  1. 6.2 Ease-of-Use
  2. 5.9 Informative Use
  3. 6.4 Desirability

User-Reported Scales were 7-Point Likert
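
To illustrate (with hypothetical numbers) why the progressive score can sit below the raw completion rate, fractional credit for nudged completions lowers the weighted total even when nearly every task is eventually completed:

```python
# Illustrative arithmetic only; the earned-points total is assumed.
raw_rate = 37 / 38             # ~97.4%: raw successful task completions
earned_points = 36.5           # hypothetical weighted total after nudge penalties
weighted = earned_points / 38  # ~96.1%, in line with the reported 96%
print(f"{raw_rate:.1%} raw vs {weighted:.1%} weighted")
```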

Final Hi-Fi Walkthrough

Below is a looping video representation of a small portion of the final design. For proprietary reasons, the actual design is not presented.

The 10 separate walkthroughs were implemented and passed Quality Assurance 4 weeks after the Usability Evaluation concluded.

Notable core features include (a hedged configuration sketch follows the video caption):

  1. an easy-to-discover Help button in the top-right corner
  2. a first-time-user experience checklist of what you have completed
  3. the ability to dismiss a walkthrough and not see it again
  4. textual walkthroughs for each core page, with additional visuals for at-a-glance review
Video loop of a representation of the core features in the Walkthrough design.
(Best if viewed on Laptop.)
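
As a closing illustration of the scalability criterion, walkthrough content could be registered as data so that new product areas are added without new code. The structure, keys, and copy below are hypothetical, not the shipped implementation.

```python
# Hypothetical data-driven walkthrough registry; keys and copy are
# illustrative. Defining walkthroughs as data (rather than code)
# supports the success criterion of scaling to new product areas.
WALKTHROUGHS = {
    "schedule": {
        "title": "Your Schedule",
        "steps": [
            {"anchor": "#shift-list", "copy": "Upcoming shifts appear here."},
            {"anchor": "#swap-button", "copy": "Request a swap from this menu."},
        ],
        "dismissible": True,  # supports "dismiss and not see again"
    },
    # ...nine more UI areas would be registered the same way...
}

def steps_for(area: str) -> list[dict]:
    """Return the ordered walkthrough steps for a UI area, if any."""
    return WALKTHROUGHS.get(area, {}).get("steps", [])

print(steps_for("schedule")[0]["copy"])  # Upcoming shifts appear here.
```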

Reflection

Understanding why a set of users may feel pain, a lack of guidance, or an inability to understand the way an interface mediates their actions toward their goals is one form of practice for UX professionals. Understanding why a corporate strategy is willing to invest technology in its own users' success is another. Working through this challenge was one of the most difficult of my career, not for its scope (it is contained to the well-worn concept of "walkthroughs"), but for the intra-corporate political strategy that brushes up against what is good for users, what is good for a company, and the ways companies wish to relate to one another.

The Good News

The designs worked! The numbers showed overall success in guiding users toward the primary tasks in the interface, with almost no nudging from the moderator (myself, in this instance). That is reason to be excited. After all, pushing out team alignment, a rigorous design process, and a thoughtful research plan in 4 days is never easy. Then, to hop on a plane and spend a week representing the interests of users, while also acting as a pseudo-ambassador for your company, presents its own set of challenges. I am very proud of our work, and I am glad it was ultimately implemented throughout the product suite.

The Hard News

With the onset of the COVID-19 pandemic, the large client we were working to persuade ultimately had to churn. Even worse, many of the employees they staffed (whom our designs were meant to serve) were subsequently cut. It is hard to design for the good of helping others when caught up in the machinations of business dealings. But that just means a different frame of mind is called into the picture. I've learned the differences between design and research practice, operations, and strategy. Wherever the money lies is typically where we must devote energy toward persuasion, if not ultimately for the good of the user.

I want to thank my teammates.

Sonia

Sherwin