Update for 2018 – Our paper has been accepted for publication at the ACM International Conference on Supporting Group Work. The conference takes place in January 2018, where we will present some of the ups and downs of contextualizing users' needs.
This year I’ve worked closely with Jared Bauer on his recommendation system, called “Tria” or “Reflektor”. Initially begun in 2012–13, Jared’s Intel-sponsored system analyzes a corpus in an attempt to offer personalized recommendations. Having entered my final year in the University of Washington’s Master’s program in Human Centered Design and Engineering, I was brought on to help conceptualize potential user interactions and experiences.
We began the summer sketching user flows and hashing out rough concepts.
What were the important actions and pivotal functionality? How should the user interact with the system?
My original interpretation of our concepts led me to design a preliminary mobile prototype. I focused on fleshing out the mobile experience, showing how a user might attempt to associate a feeling with an image Tria had provided.
I felt it was key to keep the experience visual: one that would not only be aesthetically pleasing but would also keep the user engaged. Initially, we felt that the more engaged the user, the more precise and relevant future recommendations might become.
At this early stage, the color palette was kept to a minimum, allowing the user to focus on the images returned by the recommendation system.
At this point, Jared asked me to pivot to the group moodboard design and development. The design direction I was given revolved around Pinterest-style functionality: a tiled layout that offered a playful visual to support the data.
The latest problem we are trying to solve involves allowing the user to sort images into a variety of moodboards.
We put a bit of fit and finish into the moodboard before moving into the testing phase. We added a visual cue for the rating system and tinkered with the title display. I’m still working on ways to make it a bit more playful, but for now it has the information the user needs for our study.
A last-minute tweak to the study was put in place. As part of the interview process, we are attempting to observe differences in how users interact with the system when presented with the system’s interpreted visual image compared with a word-cloud representation.
For the study, we decided to run users through two scenarios: one in which participants are asked to chat about music they might like to hear at a small gathering, and one in which they should imagine they are at a dance party. After they chat, participants run through a few activities that let them select and vote on data extracted from the conversation and identified as pertinent. This allowed us to generate either a word cloud or a moodboard, which we then used to prompt the participants’ discussion through qualitative questioning.
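For a rough sense of the extraction step described above, here is a minimal sketch. It is purely illustrative and not Tria's actual pipeline (Jared's system does the real analysis); the function name and stopword list are my own stand-ins. It ranks candidate terms from a chat transcript by frequency so they could be presented to participants for voting:

```python
from collections import Counter
import re

# Hypothetical stand-in for the extraction step: rank candidate terms
# from a chat transcript by frequency, ignoring common filler words.
STOPWORDS = {"the", "a", "an", "to", "at", "of", "and", "i", "we",
             "it", "is", "some", "for", "that", "me", "maybe"}

def extract_candidates(transcript, top_n=5):
    """Return the top_n most frequent non-stopword terms."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [term for term, _ in counts.most_common(top_n)]

chat = ("I would love some mellow jazz at the gathering. "
        "Mellow jazz, maybe some soul. Soul and jazz work for me.")
print(extract_candidates(chat, top_n=3))  # jazz ranks first
```

The voted-on terms could then feed a word-cloud or moodboard generator, which is the part the study compares.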