These data are labeled according to the label provided by the user and stored in a dataset. When the training session has been completed, the dataset is processed by the Weka machine learning framework [40], which builds a model establishing the relations between the poses and their associated labels. That is, the learned model establishes the rules that define when a given pose is associated with a certain label. In the exploitation phase, the robot keeps receiving snapshots of the skeleton model at every frame. However, this time, it does not receive the auditory input telling it what the user's pose is. Instead, the robot has to predict it using what it has learned from the user. To this end, it feeds the incoming Kinect data to the learned model, which evaluates it and returns the predicted label corresponding to that pose. To provide feedback, the robot says the pose in which it believes the user is standing. This is accomplished by passing the predicted pose label to the robot's ETTS skill, which is in charge of transforming this label into an utterance that the user can understand.

4.1. Data Acquisition and Preparation

This section goes into more detail on how the data are captured and processed before they are fed to the learning system. First, it describes the robot's vision system and, later, its auditory system.

4.1.1. Processing Visual Data

The Kinect sensor provides raw depth data in the form of a 3D point cloud. This point cloud must be processed before it is fed to the learning system. The preprocessing is carried out by an external library, named OpenNI (NI stands for Natural Interaction). OpenNI provides tools to extract the user's body from the background and to build a kinematic model from it. This kinematic model consists of a skeleton, as shown in Figure 3. OpenNI's algorithms provide the positions and orientations of the skeleton's joints at a frame rate of up to 30 FPS (frames per second). The skeleton model is the one used in our system to feed the pose detection system. In other words, the information that is supplied to the learning framework comes from the output of OpenNI's skeleton extraction algorithms.

Figure 3. OpenNI's kinematic model of the human body: OpenNI algorithms are able to build and track a kinematic model of the human body. (The figure labels the 15 joints: head, neck, torso, and the left/right shoulders, elbows, hands, hips, knees and feet.)

This model contains the data that are going to be used in our learning system. The data of each skeleton instance (S) are composed of 15 joints, represented as:

S = (t, u, J)    (1)

where t is the time-stamp of the data frame, u is the user identification (here, the user identification refers to the user being identified by the OpenNI framework; it is a value between one and four, and it serves only in the case that more than one user is being tracked by OpenNI's skeletonization algorithm) and J represents the joint set of the user's skeletonized model depicted in Figure 3:

J = (j1, j2, . . . , j15)
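To make the structure of Equation (1) concrete, the following minimal sketch models a skeleton instance as a plain data type. It is illustrative only: the class and field names are our own, and the per-joint fields simply mirror the position and orientation that OpenNI reports for each of the 15 joints.

```java
// Illustrative sketch of a skeleton instance S = (t, u, J); all names are hypothetical.
public class SkeletonInstance {

    // A single joint as reported by OpenNI: 3D position plus orientation.
    public static class Joint {
        double x, y, z;                        // position in the sensor's frame
        double[] orientation = new double[9];  // 3x3 rotation matrix, row-major
    }

    long timestamp;                  // t: time-stamp of the data frame
    int userId;                      // u: OpenNI user id, a value between 1 and 4
    Joint[] joints = new Joint[15];  // J = (j1, ..., j15): head, neck, torso, ...
}
```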
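As a rough sketch of the training/exploitation loop described above, the snippet below trains a Weka classifier on a labeled pose dataset and then predicts the label of a skeleton instance. The file name, the attribute layout (one numeric attribute per joint coordinate, with the pose label as the nominal class) and the choice of J48 as the learner are assumptions made for illustration; the text does not commit to a specific learner here.

```java
import weka.classifiers.Classifier;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class PoseLearner {
    public static void main(String[] args) throws Exception {
        // Training phase: load the dataset of labeled skeleton snapshots
        // (hypothetical ARFF file: joint coordinates plus a nominal pose label).
        Instances train = DataSource.read("poses.arff");
        train.setClassIndex(train.numAttributes() - 1); // last attribute = pose label

        // Build a model relating poses to their labels (J48 chosen for illustration).
        Classifier model = new J48();
        model.buildClassifier(train);

        // Exploitation phase: feed an incoming skeleton frame to the learned model
        // and recover the predicted pose label, which would then go to the ETTS skill.
        double idx = model.classifyInstance(train.instance(0)); // stand-in for a live frame
        String predicted = train.classAttribute().value((int) idx);
        System.out.println("Predicted pose: " + predicted);
    }
}
```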