This project has significant gaps, some of which are listed on this page. Some of these gaps could easily be addressed, while others would require much more research and user testing. Some great design features are still to be prototyped, planned, or even imagined.
Over the course of only one year, I was not able to find scientific evidence or develop a valid rationale to support all of the Principles. These Principles therefore remain strong beliefs, drawn from my professional experience as a language teacher.
Currently, self-assessment videos are the only form of assessment available to learners. Other forms of feedback could be imagined and developed without affecting the fundamental principles of the LanguageBug approach.
At this moment, LanguageBug does not give learners any opportunity to interact with other learners (beyond the assumption that learners will share their self-assessment videos with people in their networks).
Encouragement prompts are displayed during exercise practices. However, these prompts are not responsive: they are automated, pre-established messages that do not depend on the learner's performance. Saffer (2009) states that
“we need to know that the product ‘heard’ what we told it” (p. 64).
In other words, designs that "listen" to the user are delightful, while the current so-called "dumb" encouragements in LanguageBug are not.
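To make this concrete, a responsive prompt could be selected from the learner's actual performance rather than from a fixed list. The sketch below is purely illustrative: the function name, thresholds, and messages are assumptions, not part of the current LanguageBug implementation.

```python
# Hypothetical sketch: choosing an encouragement prompt that "hears"
# the learner, based on how much of the exercise they completed.
# All names, thresholds, and messages are illustrative assumptions.

def pick_encouragement(completed: int, attempted: int) -> str:
    """Return a prompt that reflects the learner's recent performance."""
    if attempted == 0:
        return "Ready when you are!"
    rate = completed / attempted
    if rate >= 0.8:
        return "Excellent! You completed most of this exercise."
    if rate >= 0.5:
        return "Good progress. Keep going!"
    return "This one is tricky. Try repeating the last sentence."
```

Even a simple rule like this would signal to learners that the product registered what they did, in the sense Saffer describes.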
There is an easy way to optimize the experience of LanguageBug: removing blank spaces ("__") from exercises. Instead, the app could prompt learners to answer a few basic questions and fill in those blanks itself.
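The interaction could work roughly as follows: each blank in a template is paired with a short question, and the learner's answers replace the blanks. This is a minimal sketch under assumed names; the template, questions, and function are hypothetical.

```python
# Hypothetical sketch: the app asks basic questions and fills the
# exercise's blanks ("__") with the learner's own answers, so the
# learner never sees the raw blanks. All names are illustrative.

def personalize(template: str, answers: list[str]) -> str:
    """Replace each "__" in the template with the next answer, in order."""
    for answer in answers:
        template = template.replace("__", answer, 1)
    return template

template = "My name is __ and I live in __."
# In the app, these answers would come from questions such as
# "What is your name?" and "Where do you live?"
sentence = personalize(template, ["Ana", "Montreal"])
```

The resulting sentence ("My name is Ana and I live in Montreal.") is personalized before the learner ever practices it.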
Levels could be a way to structure repetition. In other words, levels could help learners engage with the same exercise practices many times: as learners progress, the goals of each exercise would become increasingly challenging.
Including a dashboard with more quantified data would be desirable. Learners would then have a better sense of what they have accomplished, which would increase their motivation to remain engaged.