Final Evaluation
You can see the videos of the interviews here, here, and here. You can read edited transcripts of the interviews: interview1 transcript, interview2 transcript, and interview3 transcript. (N.b. – I used my questions and Hyojeong’s questions to conduct these three interviews. Lucie and Jisu conducted three interviews using their sets of questions. My evaluations are based primarily on the interviews I conducted, analyzing responses to both my questions and Hyojeong’s.)
My questions were designed for a general evaluation of the app’s purpose and functions.
Questions:
1. In a few sentences, can you describe the main function of our device and app as you understood it?
2. You saw Judy using FISCHE in her daily life. In what situation do you imagine using the app?
3. Can you think of other apps, devices, or services that do something similar to FISCHE? What are they? How is FISCHE different?
4. Are there improvements you think could be made? Or does anything concern you?
My questions were meant to evaluate the clarity of the app and of our presentation of it (video & panel). All respondents seemed to understand exactly what the app was about and when they would use it, suggesting our presentation was successful. Interviewees also said that the app was quite unique and differentiated from existing services because no other service offered AIs intended for conversation training. Many saw it as an alternative to speaking with a native speaker or a tutor.
Interviewees felt that the general look and feel of the app was “comfortable.” I was initially concerned that they meant “comfortable but not interesting,” but it seemed they meant “comfortable and not too rigid.” Considering that the point of our app is to make users feel unthreatened and okay with making mistakes, “comfortable” seems like an appropriate feel to give off.
The single concern that everybody brought up was how big the database would have to be to support an AI as intelligent as our app requires. This concern came up during our user research as well. We did not make specific provisions to address it, because the necessary technical work is beyond the scope of this class and the limits of our team’s ability. However, since everybody was quite concerned about whether an AI could hold fluent, natural conversation, pick up on errors, and use multiple languages, it may be necessary to include some “fixing” options that let users correct the situation if the AI misunderstands or makes a mistake. We could also ask users to log mistakes so that the AI might improve in the future.