Problem
Manipulate computer objects and applications without using the keyboard or mouse.
Our solution
A Kinect-based gesture interface that lets users control Windows applications and a Unity3D scene with body movements, using FAAST and the Kinect for Windows SDK.
Our Approach to Design
Design
We iterated through several sketches and models until we had 2D and 3D representations of the final prototype. Next, we created computer-aided design models for every component of the device. Then we 3D printed all the components, checked tolerances and fit, and reprinted parts where necessary. Finally, we post-processed the parts by sanding and painting them.
Primary research
We interviewed a class instructor, a lab manager, and a student to understand how the current inventory management system works in the University of Washington laboratories. We then created surveys for students and lab staff to understand the manual process for managing laboratory equipment and items. We also sent surveys to potential users, including lab users, warehouse employees, and library administrators, to gather information about desired features and concerns about interacting with robots.
User Testing
First, we ran 1:1 user evaluations to test the hardware/software and the check-in/check-out process. Participants performed a series of tasks under the instruction of one of our team members; we took notes and video recordings, and six participants were involved in each round of the 1:1 evaluation. Then we ran a fly-on-the-wall session to observe the human-robot interaction. We took notes on people's behavior while the mobile robot navigated through the environment, both with and without sound alerts. The robot received a series of navigation goals sent by the operator, and it was up to the navigation stack to do the routing and planning. We observed users for 20 minutes in the GIX laboratory.
Functional Testing
First, we defined metrics for each part of the system. For navigation, we sent a navigation goal to the Fetch and measured the success rate, completion time, final distance from the navigation goal, and the number of collisions. For grasping with the Fetch and the Kinova, we also measured the pick-and-place time and success rate.
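As an illustration, the sketch below shows one way these metrics could be aggregated from trial logs. The TrialResult record and its field names are assumptions made for this example, not part of our actual test harness.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical record of a single navigation trial (names are illustrative only).
record TrialResult(bool Succeeded, double TimeSeconds, double DistanceFromGoalMeters, int Collisions);

static class NavigationMetrics
{
    // Aggregate the metrics we tracked: success rate, mean completion time,
    // mean distance from the navigation goal, and total collisions.
    public static void Report(IReadOnlyList<TrialResult> trials)
    {
        double successRate = trials.Count(t => t.Succeeded) / (double)trials.Count;
        double meanTime = trials.Average(t => t.TimeSeconds);
        double meanDistance = trials.Average(t => t.DistanceFromGoalMeters);
        int totalCollisions = trials.Sum(t => t.Collisions);

        Console.WriteLine($"Success rate: {successRate:P0}");
        Console.WriteLine($"Mean time: {meanTime:F1} s");
        Console.WriteLine($"Mean distance from goal: {meanDistance:F2} m");
        Console.WriteLine($"Collisions: {totalCollisions}");
    }
}
```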
Implementation
First, the Kinect's RGB VGA video camera and depth sensor work together to detect the user's motion. We then used FAAST to read the user's joints. FAAST is middleware that facilitates the integration of full-body control; it includes a custom VRPN server to stream skeletons over a network, allowing applications to read the skeletal joints as trackers using any VRPN client.

The interaction with Windows programs and the manipulation of 2D objects are based on gestures created by the user. Once we obtained the joint-axis data, we needed to create the gestures and associate each one with a specific action for manipulating Windows programs, for example, opening Paint and controlling it with gestures, or changing the RGB vector of a 2D shape to change its color. All of the above was implemented in C# scripts.

For the Kinect-based interaction with the dynamic Unity3D project, we used the Kinect for Windows Software Development Kit (SDK), which enables developers to create gesture-driven applications using the Kinect sensor on computers. The kit provides the Kinect Manager and Gesture Listener; with these tools we can access the Kinect from Unity and obtain information about a set of preloaded gestures.
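To illustrate the gesture-to-action mapping described above, here is a minimal, self-contained C# sketch. The joint structure, the swipe threshold, and the DetectSwipeRight helper are assumptions made for this illustration; our actual scripts read the joints through FAAST over VRPN rather than from hard-coded values.

```csharp
using System;
using System.Diagnostics;

// Hypothetical 3D joint sample (in our project these came from FAAST over VRPN).
struct Joint3 { public float X, Y, Z; }

static class GestureActions
{
    // Assumed threshold: how far the right hand must travel past the shoulder
    // (in meters) before we treat the motion as a "swipe right" gesture.
    const float SwipeThreshold = 0.35f;

    // Example gesture: right hand moves well to the right of the right shoulder.
    public static bool DetectSwipeRight(Joint3 rightHand, Joint3 rightShoulder)
    {
        return rightHand.X - rightShoulder.X > SwipeThreshold;
    }

    // Example action bound to the gesture: launch Paint, as in our demo.
    public static void OpenPaint()
    {
        Process.Start("mspaint.exe");
    }

    public static void Main()
    {
        // Fake joint data standing in for one frame of tracker input.
        var hand = new Joint3 { X = 0.9f, Y = 1.2f, Z = 2.0f };
        var shoulder = new Joint3 { X = 0.4f, Y = 1.4f, Z = 2.0f };

        if (DetectSwipeRight(hand, shoulder))
            OpenPaint();
    }
}
```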
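On the Unity3D side, a hedged sketch of the idea is a MonoBehaviour that recolors a shape when a gesture callback fires. The OnSwipeUpDetected method is a placeholder name; in our project the callback was wired up through the Gesture Listener rather than this hypothetical hook.

```csharp
using UnityEngine;

// Minimal sketch: recolor the attached object when a gesture is reported.
public class GestureColorChanger : MonoBehaviour
{
    private Renderer shapeRenderer;

    void Start()
    {
        shapeRenderer = GetComponent<Renderer>();
    }

    // Placeholder callback; in practice this would be invoked by the
    // gesture listener when the Kinect reports the gesture.
    public void OnSwipeUpDetected()
    {
        // Change the RGB vector of the shape, as in our 2D-object demo.
        shapeRenderer.material.color = new Color(Random.value, Random.value, Random.value);
    }
}
```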