“Alex” is an iOS language screener designed to detect speech delays in children ages 24-48 months. The educational app is built as a series of mini-games that test a child’s vocabulary comprehension, speed, and accuracy while keeping them engaged and focused. By screening children’s speech and language abilities at a young age, “Alex” allows parents and physicians to address delays early in their children’s lives.

Traditional language assessments have several shortcomings: the child can lose interest and focus during the assessment, parents may under- or over-report their child's progress, and clinicians' play-based observations are difficult to quantify. Delivering the screener as a game helps reduce the ambiguity of ratings and observations while keeping the child engaged.
Our Team:
Dr. Gillian Hayes, UCI Department of Informatics
Stephanie Reich, Professor, UCI School of Education
Alex Panganiban, Vinson Gotingco, Jenni Sangkavichitr, Christelle Valmores and Nathaniel Valerio, Senior Design Project Team (Jan – Jun 2016)
Alex Panganiban and Yao Du, Research (Sep 2016 – Oct 2017)
Staff of OC Regional Center
My primary roles were UX design and illustration.
Much of our research came from observations at the OC Regional Center. Our team sat in on an appointment between a child and a speech therapist to observe the patient's behavior, which ultimately inspired the flow of the game. The majority of the session consisted of the child identifying the objects the therapist held in front of them, while we observed how the child communicated with the therapist.
We were then given a list of words from the MacArthur-Bates Communicative Development Inventories (CDI) that children are expected to know at a certain age. Because children typically first learn the names of items around the house, we designed the games to replicate real-life scenarios such as playing with toys, coloring, cooking, and taking care of pets.
With a stronger sense of the kinds of tasks the children would be doing and how the sessions would go, we created personas and user flows to better understand our users.
Use-Case scenarios
After gathering our research, we began brainstorming game ideas. I sketched out our concepts, taking care to include as many words as possible from the list the speech therapist gave us.
Preview the high-fidelity prototype here: https://invis.io/EHE544GQK
Below are the home and login screens. The games that children are assigned to play are based on their age: children 24-36 months old are expected to complete 1-step commands, while children 36-48 months old are expected to complete 2-step commands.
Game 1: Coloring Book (for children 24-36 months old)
The child is prompted to tap one of four objects on the screen. Once they select the correct object, that image is colored in and they’re asked to tap a new object. They move on to the next screen once they have successfully identified and colored every object.
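The round structure above can be sketched roughly as follows. This is a minimal illustration, not the app's actual implementation; the function names, the simulated-tap callback, and the attempt counter are assumptions made for the sketch.

```python
import random

def run_coloring_round(prompt_word, get_tap):
    """One round of the Coloring Book game (1-step command).

    prompt_word: the word the child is asked to find (e.g. "apple")
    get_tap:     callable returning the word the child tapped
    Returns the number of attempts before the correct object was tapped.
    """
    attempts = 0
    while True:
        attempts += 1
        if get_tap() == prompt_word:
            return attempts  # correct object is colored in; move on

def run_coloring_game(word_list, choices_per_screen=4):
    """Cycle through the word list until every object is identified."""
    results = {}
    for word in word_list:
        # Each screen shows the target word plus random distractors.
        distractors = [w for w in word_list if w != word]
        screen = random.sample(distractors, choices_per_screen - 1) + [word]
        random.shuffle(screen)
        # In the real game, taps come from the touch screen; here we
        # simulate a child who taps correctly on the first try.
        results[word] = run_coloring_round(word, lambda w=word: w)
    return results
```

Counting attempts per word here mirrors the kind of per-word data the screener would need for later review.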
Game 2: Cooking (for children 36-48 months old)
For the second game, the child goes through multiple steps to complete each task, since children at this age should be able to perform two-step commands. As seen in the second image below, the child is prompted to put the apple (step one) in the sink (step two). By the end of the game, the child is expected to wash and prepare the food, then feed it to the pet cat and dog.
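Checking a two-step command comes down to verifying that both steps happen, in order. A minimal sketch of that check is below; the `(kind, word)` action encoding is an assumption for illustration, not how the app records touch input.

```python
def check_two_step(command, actions):
    """Check whether a sequence of taps completes a 2-step command.

    command: (object_word, location_word), e.g. ("apple", "sink")
    actions: ordered list of (kind, word) taps, where kind is
             "pick" or "place"
    Returns True only if the object was picked up and then placed
    at the right location, in that order.
    """
    obj, loc = command
    picked = False
    for kind, word in actions:
        if kind == "pick" and word == obj:
            picked = True  # step one done
        elif kind == "place" and word == loc:
            return picked  # step two only counts after step one
    return False
```

Order matters: placing before picking fails the command, which distinguishes a 2-step command from two independent 1-step ones.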
Once the child completes the game, an assessment report is created and analyzed by the speech therapist. 
The report would include a breakdown of the child's performance in each game: their understanding of each word, the time it took to recognize it, and the number of attempts needed to reach the correct answer.
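A report built from those per-word measurements could be summarized along these lines. The field names, the aggregation choices, and the three-attempt flagging threshold are all illustrative assumptions, not the actual clinician-facing report.

```python
from dataclasses import dataclass

@dataclass
class WordResult:
    word: str
    attempts: int           # taps before the correct object was chosen
    response_time_s: float  # seconds from prompt to correct tap

def summarize(results):
    """Aggregate per-word results into a simple report dict."""
    n = len(results)
    first_try = [r for r in results if r.attempts == 1]
    return {
        "words_tested": n,
        "first_try_rate": len(first_try) / n,
        "avg_attempts": sum(r.attempts for r in results) / n,
        "avg_response_time_s": sum(r.response_time_s for r in results) / n,
        # Words needing several attempts may warrant clinician follow-up.
        "flagged_words": [r.word for r in results if r.attempts >= 3],
    }
```

Keeping the raw per-word records alongside the summary would let the speech therapist drill into any flagged word rather than relying on the averages alone.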
Learning outcomes:
- Keep a consistent schedule with stakeholders.
- Always get the low-fidelity mockups approved before moving on to high-fidelity.
- Focus on the end goal, not the deadlines.