Although no two people learn in exactly the same way, learning typically involves seeing, hearing and speaking, and touching. For most young children, all three channels are engaged in the process of grasping a new concept.
For example, when a toddler is given a red wooden block, they hear the words “red” and “block,” see the color red, and use their hands to touch and feel the shape of the block.
Uzma Khan, a graduate student in the Department of Computer Science at the University of Toronto, realized the Kinect natural user interface (NUI) could provide similar experiences. She used the Kinect for Windows SDK to create a prototype application that uses speech and gestures to simplify complex learning and make early childhood education more fun and interactive.
The application asks young children to perform an activity, such as identifying the animals that live on a farm. Children complete each activity by pointing at objects on the computer screen and issuing voice commands. To reinforce their choices, the application praises them when they make a correct selection.
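The point-and-speak loop described above can be sketched in simplified form. Everything here is a hypothetical illustration (the function name, the animal sets, and the feedback strings are all stand-ins, not code from Khan's prototype); in the real application, the gesture target and recognized word would come from the Kinect SDK's skeletal tracking and speech recognition rather than plain strings.

```python
# Hypothetical sketch of the farm-animal activity's selection-and-feedback
# logic. Inputs stand in for what Kinect gesture and speech recognition
# would supply.

FARM_ANIMALS = {"cow", "pig", "chicken", "horse"}  # correct answers
DISTRACTORS = {"lion", "penguin", "whale"}         # animals not on a farm


def evaluate_selection(pointed_at: str, spoken_word: str) -> str:
    """Combine a gesture selection with a voice command and return feedback."""
    # Require the spoken word to match the pointed-at object, mirroring
    # how speech can confirm a gesture-based selection.
    if spoken_word != pointed_at:
        return "Try saying the name of the animal you are pointing at."
    if pointed_at in FARM_ANIMALS:
        # Praise reinforces a correct choice, as the article describes.
        return f"Great job! A {pointed_at} lives on a farm."
    return f"Not quite: a {pointed_at} does not live on a farm. Try again!"
```

For instance, pointing at the cow and saying "cow" yields praise, while selecting the lion prompts the child to try again.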
Using the speech and gesture recognition capabilities of Kinect not only enables children to learn by seeing, listening, and speaking; it also lets them actively participate by selecting, copying, moving, and manipulating colors, shapes, objects, patterns, letters, numbers, and much more.
The creation of applications to aid learning for people of all ages is one of the many ways we anticipate Kinect for Windows will be used to enable a future in which computers work more naturally and intelligently to improve our lives.
Business and Strategy Director, Kinect for Windows