Sallie Wormer

“For me, I am driven by two main philosophies: know more today about the world than I knew yesterday and lessen the suffering of others. You'd be surprised how far that gets you.” ― Neil deGrasse Tyson

a human-centered research and design portfolio

 

Current Portfolio


 
 

Strategy-Informing Mixed Methods

This partnership discovery study mixed behavioral analytics with data from qualitative interviews, surveys, content analyses, and focus groups. Interest in the project snowballed, which let us broaden our reach and accelerate our inclusion efforts while still narrowing to the most important focus areas. Learnings were shared in a two-day planning workshop that helped prioritize the product team’s 2021 roadmap and spurred a new engagement model.

In the timeline shown here, research-led activities are in green, research-consulted activities are in white, and activities that were nudged into action without much research oversight are in grey.

Upwork, Categories/Match team | November 2019 - January 2020

Timeline of overall research plan.

 

behavioral and attitudinal segmentation study

I relied on Data Science friends to identify the key behavioral segments of current talent, and recruited on those segments for an attitudinal survey exploring beliefs, context and goals. The analysis married talent behaviors with the survey data. One secondary deliverable was an infographic of key highlights, shown here.
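Mechanically, the “marrying” step is a join of survey responses to each respondent’s behavioral segment on a shared ID. A minimal sketch with hypothetical segment names and fields (the actual segments and data aren’t shown in this portfolio):

```python
# Hypothetical behavioral segments keyed by talent ID (illustrative only;
# the real segments came from Data Science and are not shown here).
behavioral_segments = {
    "t1": "steady earner",
    "t2": "side-gigger",
    "t3": "steady earner",
}

# Attitudinal survey responses, recruited per segment (made-up fields).
survey = [
    {"talent_id": "t1", "goal": "grow client base"},
    {"talent_id": "t2", "goal": "supplement income"},
]

# Join each survey answer to that respondent's behavioral segment.
merged = [
    {**row, "segment": behavioral_segments[row["talent_id"]]}
    for row in survey
    if row["talent_id"] in behavioral_segments
]
```

The joined records can then be cut by segment to compare beliefs, context, and goals across behavioral groups.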

Results provided a foundational understanding of the various talent types using Upwork, and helped inform new onboarding design explorations.

Upwork, Talent team | March 2021

Infographic from segmentation study.

 

jobs-to-be-done

We backed into this methodology after observing and listening to participants in a collage study choose different platform tasks depending on which ‘mode’ they were in and what their freelancing goals were. The learnings from this study informed a revamp of the logged-in landing page (the original intent) and an eventual IA redesign proposal. By layering in learnings from the talent segmentation study (above), it also helped create a fundamental framework for understanding interactions with both product and services. I later used the JtbD framework at Indigo to better understand organizational needs (multi-user JtbD).

I used the high-level overview shown here as a wayfinding mechanism in the final readout, and systematically evaluated the current experience against user needs and goals.

Upwork, Talent and Match teams | June 2021

Excerpt from readout.

 

research defining workshop

This short video includes a few excerpts from the context-setting portions of an interactive research workshop. The design director’s ask was to pull something together for his team in a few days to help move them from tactical questions to more foundational research. Questions elicited from this workshop formed the basis for three studies.

IndigoAg | November 2021

 

Perceived Relevance in Search

These three collage studies were a vertical-focused follow-up in the Design & Creative category, launched after we discovered that search behavior might change with the type of talent needed. Since we also suspected the current search layout was not meeting the needs of those seeking creative talent, we used the collage format to learn what we were missing and what was important.

The collages sparked the conversation, but we did not take their ‘designs’ literally; instead, I did a cluster analysis of participants’ commentary (example shown here). The final study summarized learnings from all three, and results informed search layout, search filters, and ontology, and kicked off talent-facing portfolio design explorations.

Upwork, Categories team | Summer 2019

Cluster analysis in Miro.

 

UX metrics

Sometimes it's not enough to do a behavioral analysis; sometimes you need to map customer behaviors to their motivations and intent. In search, this meant understanding qualitatively - in a quantitative way - whether user clicks were successful and satisfactory (i.e. pearl growing) or frustrating and fruitless (i.e. thrashing). In 2019, I proposed and we launched two in-product surveys: one for perceived relevance and one for how well the search features themselves were working (UMUX-Lite). We got great data, made improvements based on the results, and wanted to keep collecting for new algorithm releases. But we also realized we should coordinate with teams in other product areas who were excited by our UX metrics!
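For context, the UMUX-Lite score itself reduces to a simple calculation: two Likert items summed and rescaled to 0-100. A minimal sketch, assuming 7-point items (the exact scale used in-product isn’t stated here):

```python
def umux_lite(capability: int, ease: int, points: int = 7) -> float:
    """Convert the two UMUX-Lite items to a 0-100 score.

    capability: "This system's capabilities meet my requirements" (1..points)
    ease:       "This system is easy to use" (1..points)
    """
    for item in (capability, ease):
        if not 1 <= item <= points:
            raise ValueError(f"response {item} is outside the 1..{points} scale")
    # Sum the zero-based item scores, then rescale to 0-100.
    return (capability - 1 + ease - 1) / (2 * (points - 1)) * 100

# Average across a batch of responses, e.g. for one algorithm release
# (made-up response values).
responses = [(6, 7), (5, 5), (7, 6)]
scores = [umux_lite(c, e) for c, e in responses]
mean_score = sum(scores) / len(scores)
```

Tracking this mean per release is what makes it possible to compare attitudinal movement across algorithm changes.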

Shown here is an attempt to identify the various attitudinal UX measures and owners prior to proposing a more cohesive plan (not shown) and guidelines.

Upwork, overall Product and Design | 2019 - 2021

‘Map’ of the explosion of attitudinal UX metrics.

 

lightweight concept testing

I’ve been lucky that most of the designers I’ve worked with appreciate user feedback on design concepts before expensive development work begins. This project was very lightweight in sample size, but we had done more foundational work ahead of time to mitigate risk, and we were able to recruit, conduct sessions, and get results in about two weeks. Not my favorite pace if it’s a regular occurrence, but doable in cases that truly warrant it.

The PM and designer were in all sessions with me. Since I organized the Miro board both by design and by labeled theme, neither felt the need for a more formal deliverable.

Upwork, Categories team | Summer 2019

Analysis and ‘deliverable’, in Miro.

 

end-to-end scenario/task testing

Product had planned on doing only internal “user acceptance testing” for a new multi-sided product launch. I’m grateful the design director supported my push to get the clickable prototype in front of both sets of primary users before development was completed. The timeline was incredibly tight, the time of year made recruiting tricky, and the complexity of the space made the tasks challenging to make realistic, but the learnings headed off more than one critical ‘gotcha’ and will help both fast-follow work and future considerations.

For the first set of E2E testing, I created a rubric for task evaluation for clarity, and employed UMUX-Lite questions to prompt participants to explain their reactions. The team was surprised that the behavioral results (task success) were quite different from their attitudinal responses.
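A rubric like this can be rolled up into a weighted success rate per task, which is what makes the behavioral side directly comparable to the attitudinal scores. A minimal sketch using a hypothetical three-level rubric (the study’s actual rubric and weights aren’t shown):

```python
from collections import Counter

# Hypothetical rubric levels and weights (illustrative only).
RUBRIC = {"success": 1.0, "partial": 0.5, "fail": 0.0}

def task_success_rate(outcomes: list) -> float:
    """Weighted task success rate across participants for one task."""
    if not outcomes:
        raise ValueError("no outcomes recorded")
    return sum(RUBRIC[o] for o in outcomes) / len(outcomes)

# One rubric judgment per participant for a single E2E task (made-up data).
task_outcomes = ["success", "partial", "fail", "success"]
rate = task_success_rate(task_outcomes)  # 0.625
counts = Counter(task_outcomes)          # raw tallies for the readout
```

Laying the per-task rates alongside the UMUX-Lite responses is what surfaced the gap between what participants did and what they said.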

For the second user group, we had multiple challenges with the prototype. As the timeline was so tight, we pivoted from a pure scenario/task test to a concept test. As a result, we could not use the task rubric for analysis and instead did a more traditional concept-test cluster analysis.

IndigoAg, Market+ team | December 2021 - January 2022

E2E analysis as task-lite concept test. User group 2.

 

E2E analysis with task success rubric. User group 1.