The installation is built around a Kinect so that the data displayed on the screen can be explored through full-body interaction. The Kinect is integrated with Unity, which loads the datasets, for example ice cream sales and the number of drownings, and visualizes them on a world map, with each country colored according to its data value. The datasets involve variables that are easy to understand and that invite reasoning about correlation and causation. They were collected from government websites so that real, up-to-date data could be displayed on the globes. The user interface of the installation is designed for all types of users, but since teenagers are our main target group, it is designed above all to keep them engaged. Because real audiences include different types of learners, the design also addresses auditory, visual, and kinesthetic learners so that all of them can benefit from the interactive features. An avatar with a voice introduction plays at the start to prepare users for what the interaction is about; visual learners can see the interactive globes on the display; and kinesthetic learners can learn easily through full-body gestures.
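As a concrete illustration of how such a dataset invites reasoning about correlation versus causation, the sketch below computes the Pearson correlation between two series. The monthly figures are invented for illustration and are not the installation's real data; both series simply rise and fall with warm weather, which is why they correlate strongly without one causing the other.

```python
# Illustrative only: invented monthly figures, not the installation's real data.
# Both series are driven by warm weather, so they correlate strongly even
# though neither causes the other.
ice_cream_sales = [20, 25, 40, 60, 80, 95, 100, 90, 65, 45, 30, 22]
drownings =       [2,  3,  4,  7,  10, 12, 13,  11, 8,  5,  3,  2]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(ice_cream_sales, drownings)  # close to 1: strong correlation
```

A high value of `r` here reflects a shared cause (season), which is exactly the distinction the installation asks visitors to notice.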
How the full-body interaction works:
There are four strategies for providing entry points to the interaction:
1) Instrumenting the floor;
2) Forcing collaboration;
3) Implementing multiple body movements to control the same effect; and,
4) Visualizing the visitors’ silhouette beside the data visualization.
Different gestures will also be implemented with the Kinect so the installation can be operated with the full body. For example, when the user steps back, the globe rotates backward, and when the user steps forward, the globe rotates forward. Similarly, we will include a clap gesture that can also control the globe.
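The gesture logic above can be sketched as follows. This is not the actual Unity/Kinect implementation; it is a minimal Python sketch assuming the sensor reports joint positions as (x, y, z) tuples in meters, with z the distance from the sensor, and the threshold values are assumptions chosen for illustration.

```python
# Hedged sketch of the gesture mapping, not the project's Unity code.
# Assumption: joints are (x, y, z) tuples in meters; z is distance from sensor.

CLAP_THRESHOLD = 0.10   # hands closer than 10 cm counts as a clap (assumed)
STEP_THRESHOLD = 0.15   # torso z-shift beyond 15 cm counts as a step (assumed)

def detect_gesture(prev_torso_z, torso_z, left_hand, right_hand):
    """Return 'clap', 'rotate_back', 'rotate_forward', or None."""
    # Clap: both hands brought close together.
    dx = left_hand[0] - right_hand[0]
    dy = left_hand[1] - right_hand[1]
    dz = left_hand[2] - right_hand[2]
    if (dx * dx + dy * dy + dz * dz) ** 0.5 < CLAP_THRESHOLD:
        return "clap"
    # Step back (z increases): rotate the globe backward.
    if torso_z - prev_torso_z > STEP_THRESHOLD:
        return "rotate_back"
    # Step forward (z decreases): rotate the globe forward.
    if prev_torso_z - torso_z > STEP_THRESHOLD:
        return "rotate_forward"
    return None
```

In the Unity implementation this decision would run once per tracked frame, and the returned gesture would drive the globe's rotation (e.g. via a rotation applied to the globe object each frame).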