We are going to use Woopra to run the rate comparison you planned last week.
Since you will be statistically analyzing the results, recruit at least 20 users. Put another way, each person on your team should recruit at least 7 people: a couple of dorm-mates and a few family members and you're basically there. You can also recruit over social media like Facebook and Twitter. You can even recruit other people in class! Make sure to launch your test several days before the deadline. How you schedule your analysis around your implementation plan is up to you. Do what's best for your app and your group. Here are some ideas from Optimizely.
Collect the results of your A/B test that compares rates between conditions. Use your A condition as the 'Expected' value and your B condition as the 'Observed' value. Present your data and the results: show the inputs to your Chi-squared analysis, your calculations, the Chi-squared result, and the p value. This tool should help. Remember that a Chi-squared test compares rates, not times. Discuss your findings: can you draw solid conclusions, or are additional tests needed? What changes would you make based on these results?
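As a sketch of the calculation described above, here is what the Chi-squared analysis might look like in plain Python. The counts are hypothetical placeholders; substitute your own A/B data. Note how the A condition supplies the 'Expected' rate (not an assumed 50%), and the B condition supplies the 'Observed' counts:

```python
from math import erfc, sqrt

# Hypothetical example counts -- replace with your own A/B test data.
# A condition: 12 of 50 users converted (rate = 24%) -> 'Expected'
# B condition: 20 of 50 users converted              -> 'Observed'
a_success, a_total = 12, 50
b_success, b_total = 20, 50

expected_rate = a_success / a_total  # 0.24, from the A condition

# Expected counts for the B condition if it behaved like A
expected = [expected_rate * b_total, (1 - expected_rate) * b_total]
observed = [b_success, b_total - b_success]

# Chi-squared statistic: sum of (observed - expected)^2 / expected
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# p-value for 1 degree of freedom (2 categories - 1),
# using the chi-squared survival function: erfc(sqrt(x / 2))
p = erfc(sqrt(chi2 / 2))

print(f"chi-squared = {chi2:.3f}, p = {p:.4f}")
```

A small p value (conventionally below 0.05) suggests the B condition's rate genuinely differs from A's; note the inputs are counts of users, never times.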
Use Woopra to track the pages that users view on your app. (You do not need to use Woopra's funnel feature.)
For all you designers out there, get ready. It's time to make it pretty. Concentrate on completing the changes based on the results of your in-person test. Attention to detail will serve you well. Make sure the app is optimized for the mobile interface. Same drill as the other weeks: keep updating your development plan.
Note: since we may grade your assignment up to a few days after submission, per the honor code, we expect that the prototype URL shows the state of your prototype at the time of submission. You will very likely be updating your prototype after submission, but please do so on another version.
Submit your formatted PDF here.
|Online Test Results
|No conclusions listed.||Either only statistical analysis or only insights are given.||Statistical analysis has errors. Two common errors: 1) using time as input to a Chi-squared test (Chi-squared compares rates); 2) assuming a 50% success rate as your 'Expected', as opposed to using the rate in the A condition as 'Expected'.||Statistical analysis is correct. Conclusions are trivial: they don't engage the content of the design.||Statistical analysis is correct, and includes a clear assessment of the strength of the data. Conclusions are clear and straightforward.|
|List of Potential Revisions
|No revisions listed.||The revisions are obvious or trivial. The major problems are not addressed by the revisions.||Revisions are made without much consideration for the user experience. The revisions are chosen to cater to a lower level of technical difficulty.||Several possible revisions are presented for different portions of the user interface. The revisions are creative and address the major problems. Changes are clearly justified and informed by the data gathered (both in person and online).|
|Fit and Finish
|Prototype is incomplete or still has some bugs.||Prototype is complete, but may lack some details or polish.||Prototype is very polished, and ready to be presented.|
|No goals were met.||Only a few goals or equivalent were met.||Most, but not all, of the goals or equivalent were met.||All goals or equivalent were met. Stretch goals need not be met.|
|Updated Development Plan
|No updates or only minor changes to plan.||Plan is mostly updated, but is lacking some detail or deadlines seem unreasonable.||Plan is detailed and reflects progress, new tasks, and any changes to previous tasks.|
Outside the Box
1 point. Up to 5% of submissions.
|Not only are several possible revisions presented for different portions of the user interface, your revisions are extremely creative and approach the problems in your interface in an effective and innovative way. An incredible amount of thought and consideration has gone into solving the underlying problems.|