
Recording: 

Slides:


Attendees: Jong Lee, Luigi Marini, Lisa Gatzke, Sandeep Puthanveetil Satheesan, Pengyin Shan, Malika, Jessica Saw, Max Burnette, Jonathan Kim, Bingji Guo, Sara Lambert, Charles Blatti, Chen Wang, Chris Navarro, Doug Friedel, In Kwon Choi, Kate Arneson, Leigh Fu, Matt Berry, Ya-Lan Yang



Comments: Discussion of how to gather information when doing analysis, feedback, and tasks. UIX uses Miro Boards to map difficulty vs. workflow. This will become part of the Best Practices Handbook.

Has anyone had difficulty with usability studies?

Usability testing has a magic number of five participants for the best accuracy; however, if you are working with several different user groups, you will want to test each group. It is often challenging to get people to agree to user testing, but five tests seem to give about 90% accuracy.

There are different methodologies for usability testing. Pengyin suggests that the time and cost involved in usability testing be built into the project timeline.

It is important to remember that the software we develop is not for sale. In most cases, the software will be used by the external team that requested our services.

Jonathan Kim notes that we need to focus on which groups we are trying to test, based on their technical skills.

Lisa mentions that guerrilla testing should be done once a month as practice. A tutorial session could be offered with lunch.

Fangyu discussed having a focus group, or perhaps 1:1 sessions where participants are less likely to be influenced by peers.

It is helpful to have end users test the project at the beginning of the project rather than at the end.

Pengyin discussed the specifics of setting up a usability study in order to streamline data collection and feedback.

Fangyu notes that the criteria are quite important and need to be prioritized.

These studies take time, but we could do micro-testing before we implement the final formatting and language.

We should ask our clients who should be tested, rather than recruiting random testers.

MMLI is such a huge project that it is difficult to find target users. Testing on wireframes can help weed out testers who would not use the tool being developed.



Links mentioned in this Round Table:

...