Longitudinal New User Study


Customer Goal

I want to learn about Plex and start using it to stream my personal media files to all of my connected devices.

Role

Lead Researcher (solo)

Key Deliverables

  • Study plan (including goals, methods, schedule and tasks)
  • Implementation in remote research tool
  • Bugs/issues filed in GitHub
  • Narrative journey of participants
  • Study report (including video clips)
  • Discussions with stakeholders
  • A “lessons learned” document for future reference when conducting longitudinal studies

Impact

  • Provided executives and stakeholders a realistic view of challenges faced by prospective users
  • Focused the company on providing a better experience to new users (later iterative usability research demonstrated the positive effect)
  • Highlighted fundamental technical challenges that either Plex or the users have to work around

Process

Through prior usability research I conducted on getting started with plex.tv and the Plex Media Server (a client-server solution for personal media streaming), it was clear that less technically inclined users faced a hurdle in understanding the product and getting it set up. I proposed a short longitudinal study over the course of 3 weeks, consisting of 5 sessions with 8 participants, all conducted remotely and asynchronously to best accommodate the resources available, chief among them my time as the one and only researcher. Why a longitudinal study? The company had never looked holistically at the experience of a new user across products, devices and time.

After writing the study plan and reviewing it with stakeholders, I met with the remote testing vendor to communicate and coordinate the research. The sessions covered: a warm-up to practice recording with their mobile phone; the first-run experience and purchasing a paid account; core tasks I identified with stakeholders; a focus on playing back various types of media; and, at the end of the study, repeating the core tasks, answering questions and cancelling the paid account. Between sessions, I worked with the vendor to send an email preparing participants for the next session. Most of the time I was able to view and annotate every video before the next session occurred, in case I needed to adjust the instructions or the tasks themselves (which I recall doing once).

Once I had all the data, and after entering bugs/issues into the tracking database (I had also reported some along the way), I created two deliverables. The first was a written, narrative journey of each individual participant, including some demographic or personal facts, an overall summary of their journey through the sessions, and a more detailed description of each session they participated in. I wanted to make it easier for my audience to keep picturing them as people, not just data. The second was a summary presentation of the goals, methods and results of the study for broader consumption and discussion with stakeholders.

Lessons Learned/Constraints

  • Even with compensation for participation, plus the bonus of a paid subscription to the service at the end if they wanted it, there was attrition: 3 of the 8 participants dropped out. I suspect the remote, asynchronous format makes participation convenient, but it removes the interpersonal connection that can help keep people engaged
  • Segmented feature/product teams responsible for portions of the user experience can not only drift apart, but develop “cracks” a user can fall into when an issue occurs between those segments, or during the transition from one to the next; holistic user research can help identify these issues
  • Longitudinal studies demand a lot of the researcher’s time (as well as the participants’) and generate a lot of data, and making the whole thing remote adds another layer of complexity. In a strange and fun way, the challenges of executing the study somewhat mirrored those of setting up Plex as a new user
  • Any product relying on a user’s home network and knowledge thereof faces additional usability challenges when things don’t go smoothly

Additional Context

Earlier studies on individual components of the first-run, new user experience had surfaced several issues, but the company had never looked at the experience of new users end-to-end. As with many companies, work is segmented between teams, each doing their best to create an awesome, reliable experience for the segment they’re responsible for. Especially in a system with multiple components, such as a website, server, clients, network equipment and the Internet, gaps can open up between those segments, leaving opportunities to provide a tighter overall experience.

To stream, say, a personal photo using Plex, you would: sign up for an account on plex.tv (free for this feature), download and install the Plex Media Server software on a computer connected to your home network, and point the server to the folder where your photo is stored. You’d then use a Plex client app on pretty much any popular device, from mobile to TV to just the Web, to view the photo anywhere you are. For people very comfortable with technology, this isn’t very challenging. For people without that level of comfort, there’s a lot of room for usability issues to occur.