The goal of this assignment is to test your prototype with people and learn from that experience. To get the kinks out of your application, first test it with a couple of people.
Watch at least two different people use your interactive prototype. These sessions do not have to be planned far ahead of time, but try to recruit people who are reasonably similar to the real people who will be using your application.
Just like when you conducted the Heuristic Evaluations, each team member will have their own role during testing. One person will be the facilitator, and one will be in charge of taking notes/photos/video/etc. Any team member who is not currently in one of those two roles should observe and take notes. This time your user will not be writing down the problems they find for you. It's your job to learn what the people testing your prototype are thinking; the feedback they provide will be invaluable for your next iteration. Your goal is to find ways to improve your interface. Look for breakdowns and pain points in your interface and try to understand what the problems are and how you might fix them.
When possible, modify/update your prototype before running the session with your next participant.
It is important to keep the task consistent between participants: use the same script, follow the same protocol, and answer questions in the same way. This is a fast study, not a formal one. Still, any study requires a bit of planning: prepare an outline of how the user test will be run, written instructions that you will read to the users, and any other materials (e.g. questionnaires, interview questions) that will be used during the session. Together these form your experimental protocol, a step-by-step guide on how to run the experiment. It should be detailed enough that someone else could take the protocol and run your experiment without you being there; it ensures that you carry out the experiment the same way for each participant.
Submit your experimental protocol and signed consent form for each participant. Immediately after each test, do a quick debrief with your team and write down any reactions or thoughts that came up. You will most likely forget them, so it's important to write them down right after the test.
Submit a photo or sketch of each participant testing your prototype. As with the needfinding assignment, these photos and captions should show breakdowns and design opportunities. How you compose your photo will depend on the app/task. If the key thing is that it happens in a particular location (say in a grocery store), make sure to show the context. If the key issue is what's on screen, take an over-the-shoulder shot of the person interacting.
After testing, take some time with your team to reflect on your findings. Go through all the notes and other recordings. Try to be objective; don't write problems off. Discuss as a team and try to identify general patterns in people's behavior. When you find interesting points, talk about them in depth: ask each other questions, recreate the different tests, and analyze the decisions people made and the other paths they could have taken. Let your insights guide redesigns for your next prototype iteration. The end product of your discussion should be a list of changes that you will implement. Make this list detailed and understandable to people outside your team. Of the bugs you have identified, fix those that are either small and easy to fix or too severe to ignore. Make sure you do this before moving on to the next step of this assignment.
Select ONE component of your prototype and come up with a redesign for that part. The redesigned component needs to be significantly different from the original design and likely to satisfy a real user need that was either not satisfied or satisfied differently in the original design. This is NOT meant to be a huge redesign of your prototype. We want you to select something small and manageable in scope (similar to the alternative website designs shown in lecture). Designers often go through many, many different versions of prototypes before their final design, so we want you to get familiar with the process (given our time constraints for the course). Mock up a few screens of your redesign that you will be able to use with a user, and keep it simple. Electronically create and submit a URL to your redesign (build the interactive screens from your paper prototype). Remember to keep the URL to this redesign separate from your original application (you will need both for testing). As you think about how to revise your design, it may help to sketch it on paper before you implement it in code.
Next week, you will use the Web to gather usage data about your application by launching an A/B test online. Refer to the lecture notes on controlled web experiments (e.g. A/B testing, control groups, etc.). Think about what question you are asking and how to design a study that answers it. Try to think of A/B tests that will give you information that this week's user testing didn't (or couldn't). When you launch your A/B test online next week, the alternative redesign you implemented this week will be the 'B' of your A/B test. You will need access to both versions of this component to run your online tests - don't write over your old code!
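To see what "comparing with a chi-squared test" looks like in practice, here is a minimal sketch. The metric (task completion) and all counts below are invented for illustration; substitute whatever your logging actually records.

```python
# Sketch: analyzing hypothetical A/B completion counts with Pearson's
# chi-squared test for a 2x2 contingency table (no continuity correction).
# All numbers below are invented for illustration.

def chi_squared_2x2(table):
    """Return the Pearson chi-squared statistic for a 2x2 table.

    Uses the closed form n*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)).
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Rows: version A (original) and version B (redesign).
# Columns: visitors who completed the task vs. those who did not.
observed = [
    [30, 70],  # A: 30 completed, 70 did not
    [45, 55],  # B: 45 completed, 55 did not
]

chi2 = chi_squared_2x2(observed)
# The critical value for 1 degree of freedom at the 0.05 level is ~3.841.
print(f"chi2 = {chi2:.2f} -> significant at 0.05: {chi2 > 3.841}")
# prints: chi2 = 4.80 -> significant at 0.05: True
```

In practice you would more likely hand your counts to a library routine such as `scipy.stats.chi2_contingency`, which also returns a p-value; the point here is just that the measure you choose must produce counts that fit a contingency table like this one.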
In studio, you will present your ideas informally to the other teams: What were some major findings? What changes did they translate to? What are you going to do moving forward? Then you will work with your studio leader to prepare for A/B testing!
Here are two student examples from last year. Keep in mind that last year, students had to submit paper prototypes of their redesign in addition to the implemented version.
By Mike Davison, Community TA and UX Project Manager
Testing your high-fidelity prototype with users closes the loop. It is vital to ensure your solution meets the needs identified during the first assignment, and that the team has not simply spent months drifting further from the problem.
It also gives you third-party reflection and suggestions for tweaks to the design. Every problem we find and correct here is one we won't have to live with later, after it has been coded and is too costly to change. It's a valuable phase of the process.
Remember: feedback is not criticism, and feedback is not personal. User-centred design works best when pride is set aside and the feedback of others is incorporated into your design thinking!
The following statements are common feedback given on this assignment. We call these 'I like' statements because they are a way to express positive aspects of a submission; you can think of them as elements to aim for.
Experimental Protocol & Documentation
- Protocol, consent forms, or documentation missing or sketchy; or participant names not anonymized in the report.
- Complete but poorly conceived protocol. Consent forms included. Scant documentation.
- Protocol well-designed. Consent forms included. Documentation fully provided.
Photos/Sketches of Participants
- Photo(s)/sketch(es) of fewer than two participants.
- Photo(s)/sketch(es) of two or more participants, but captions missing or vague.
- Photo(s)/sketch(es) of two or more participants; the photos show, and the captions thoroughly describe, clear breakdowns.
List of Changes
- No changes listed, or irrelevant changes.
- Several possible changes derived from the user testing data, although not all of the changes were useful or some important changes were overlooked.
- Several possible changes based on the user testing, all of which were important and directly addressed the problems identified in user testing.
Alternative Redesign
- No redesign or an irrelevant redesign. No URL to an interactive prototype.
- The redesign doesn't address an issue found in testing and seems unlikely to satisfy a real user need. The prototype is incomplete and barely interactive.
- The redesign addresses an issue found in testing, but seems unlikely to satisfy a real user need. The prototype is somewhat interactive, but not ready for user testing.
- The redesign addresses an issue found in testing and satisfies a real user need. The alternative prototype is fully interactive and ready for user testing.
Online Test Plan
- No description submitted, or the description omits quantifiable measures.
- The test will quantitatively measure user behavior, but the task is not motivated by real user needs, or the measure cannot be compared with a chi-squared test (or an alternative that the submission proposes).
- The task is motivated by a clear user need, and the measure can be compared with a chi-squared test (or an alternative that the submission proposes).
Updated Development Plan
- No updates, or only minor changes to the plan.
- Plan is mostly updated, but is lacking some detail or deadlines seem unreasonable.
- Plan is detailed and reflects progress, new tasks, and any changes to previous tasks.
Outside the Box
1 pt. Awarded to up to 5% of submissions.
- Not just a small tweak or bit of polish: the alternative redesign introduces a creative, conceptually different, and likely much improved user experience.