UBC Theses and Dissertations

Real-time predictions from unlabeled high-dimensional sensory data during non-prehensile manipulation. Troniak, Daniel Michael, 2014.

Full Text


Real-time Predictions from Unlabeled High-Dimensional Sensory Data during Non-prehensile Manipulation

by Daniel Michael Troniak

B.CS. Hons. Co-op, The University of Manitoba, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate and Postdoctoral Studies (Computer Science)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

October 2014

© Daniel Michael Troniak 2014

Abstract

Robots can be readily equipped with sensors that span a growing range of modalities and price-points. However, as sensors increase in number and variety, making the best use of the rich multi-modal sensory streams becomes increasingly challenging. In this thesis, we demonstrate the ability to make efficient and accurate task-relevant predictions from unlabeled streams of sensory data for a non-prehensile manipulation task. Specifically, we address the problem of making real-time predictions of the mass, friction coefficient, and compliance of a block during a topple-slide task, using an unlabeled mix of 1650 features composed of pose, velocity, force, torque, and tactile sensor data samples taken during the motion. Our framework employs a partial least squares (PLS) estimator as computed based on training data. Importantly, we show that the PLS predictions can be made significantly more accurate and robust to noise with the use of a feature selection heuristic, the task variance ratio, while using as few as 5% of the original sensory features. This aggressive feature selection further allows for reduced bandwidth when streaming sensory data and reduced computational costs of the predictions. We also demonstrate the ability to make online predictions based on the sensory information received to date. We compare PLS to other regression methods, such as principal components regression.
Our methods are tested on a WAM manipulator equipped with either a spherical probe or a BarrettHand with arrays of tactile sensors.

Preface

The thesis is based on work conducted jointly between the Sensorimotor Systems and IMAGER laboratories with professors Dinesh K. Pai and Michiel van de Panne, and students Daniel Troniak and Chuan Zhu of the University of British Columbia. This section outlines the contributions of each of the above individuals. Students worked under the guidance of the professors.

Software: Troniak wrote the C++ software framework for the control, perception and user interface of the robot, and developed MATLAB scripts in support of the design and usage of the TVR algorithm. Zhu implemented a MATLAB framework for data analysis, processing and figure generation, including the final implementation of the TVR algorithm.

Experiments: Troniak designed and built the environment for the robot, collected all data with the robot, maintained and configured the hardware, and designed and executed the robot motion. Pai supported the purchase and maintenance of the robotic hardware. Zhu analyzed preliminary data collected during various manipulations to help Troniak determine successful motion trajectories. Data analysis, discussion, and final figure generation were accomplished collaboratively.

Algorithms: van de Panne suggested the TVR feature selection metric and proposed the usage of the PLS algorithm. Troniak validated the effectiveness of the TVR algorithm on data collected during manipulations. Zhu performed similar validation on the usage of the PLS algorithm, as well as designed the scheme for real-time predictions using PLS.

Writing: Chapters 3 and 5 of the thesis are adapted from a paper submitted to the 2015 IEEE International Conference on Robotics and Automation (D. Troniak, C. Zhu, D. Pai, M. van de Panne, Real-time Predictions from High-dimensional Unlabeled Sensory Data during Non-prehensile Manipulation, ICRA 2015, submission #1575, 01 October 2014).
Troniak performed and wrote the background literature review and drafted the remainder of the manuscript, which was then improved and prepared for submission by van de Panne.

Figures: Figure 1.2 was generated by Troniak. Part of Figure 1.1 was designed by van de Panne and is present in the paper submitted to ICRA 2015. Figures from Chapter 5 were developed collaboratively and are also present in the paper. Figures borrowed from the literature include appropriate references within the caption.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
1 Introduction
  1.1 Learning to Interact with the Real World
  1.2 Problem Statement
  1.3 Motivations
  1.4 Contributions
  1.5 Thesis Structure
2 Background
  2.1 Robotic Manipulation
    2.1.1 Model-free schemes
    2.1.2 Model-based schemes
    2.1.3 Human-inspired schemes
  2.2 Sensory Information Processing
    2.2.1 Force and tactile sensing for robotic manipulation
    2.2.2 Anomaly detection in streaming data
    2.2.3 Dimensionality reduction
  2.3 Neuroscience & Physiology
    2.3.1 Object manipulation: definitions
    2.3.2 Force and tactile sensing
    2.3.3 High-level processes
3 Predicting Environment Properties from Sensory Inputs
  3.1 Problem Definition
  3.2 Prediction Framework
    3.2.1 Feature selection via the task variance ratio
    3.2.2 Property prediction with partial least squares
4 Control Software
  4.1 System Overview
  4.2 Libbarrett API
  4.3 WAM Control
  4.4 BarrettHand Control
  4.5 Realtime Systems
5 Experiments and Results
  5.1 Setup
    5.1.1 Actuation and sensing
    5.1.2 Kinesthetic teach-and-play
    5.1.3 Software architecture
    5.1.4 Experimental testbed
    5.1.5 The block topple-slide task
  5.2 Procedure
    5.2.1 Learn the task trajectory
    5.2.2 Record sensory dataset
    5.2.3 Feature selection
    5.2.4 Partial least squares modeling
    5.2.5 Online prediction
    5.2.6 Calculating environment properties
  5.3 Results
  5.4 Discussion
6 Conclusions and Future Work
  6.1 Summary
  6.2 Limitations and Future Directions
Bibliography
Appendices
A Surprise-and-adapt Pseudocode

List of Tables

2.1 Common analytical measures that may be optimized or become a part of grasp constraints [10].
4.1 Specifications of the WAM Internal PC/104 [67] used for framework development and robot control in support of experiments.
4.2 Libbarrett data-types when communicating with the robot. The jp type for the WAM is a seven-dimensional vector whereas for the BarrettHand, it is a four-dimensional vector: one entry for each finger and one for the spread.
5.1 Mass, coefficient of Coulomb static friction and compliance property sets, as measured for a variety of blocks and surfaces.
5.2 Effect of 5% Γ feature selection on LOOCV block mass prediction root mean squared error (RMSE).
5.3 Bandwidth/runtime/accuracy tradeoff following different amounts of Γ selection. Optimal tradeoff is achieved when between 5% and 20% of the data is selected using Γ.
5.4 Effect of different levels of additive Gaussian noise N(0, σ²) on the sensory input data for LOOCV mass prediction RMSE (m = 1335 g). Accuracy degrades smoothly as sensor noise increases.

List of Figures

1.1 The topple-slide task.
1.2 Example sensory streams during the topple-slide task in different environments (one colour for each environment).
2.1 Schematic drawing of the pen-twirling task. Drawings reproduced from [18]. Use of the reproduction is by permission of the copyright owner, John Wiley and Sons.
2.2 Force and tactile sensor processing to estimate object pose [11].
2.3 Dichotomy of human grasping, reproduced from [9]. ©1989 IEEE.
2.4 Tying a knot: manipulation task combining precision and power grips. Figure inspired by [49].
2.5 Example slipping and crushing thresholds of everyday objects. The difference between the required Minimum Force and observed Grip Force is known as the safety margin. Data obtained from [56].
2.6 Characteristic of tactile afferents within human fingertip skin. Reproduced from [34]. Permission to reproduce granted by R.S. Johansson.
3.1 Collection of data matrix D = (X, Y): (a) data X are collected across all trials and environments; (b) environmental quantities Y are provided by the human expert.
3.2 Intuition behind the task variance ratio (Γ) algorithm. We wish to select good sensors (at each time-phase) which exhibit low variance when the environment remains constant and high variance when the environment changes. Colours signify data streams collected in distinct environments.
5.1 Effect of varying degrees of PLS dimensionality reduction on mass estimation performance. A 20% reduction is achievable with trivial loss in estimation quality.
5.2 Online estimation of the mass from sensory data using varying degrees of dimensionality reduction.
The estimated mass of the block is shown at various blue points throughout the motion. The dotted red horizontal line denotes the actual mass of the block.
5.3 Online estimation of mass, friction and compliance from sensory data following Γ feature selection and PLS feature extraction. Time-phases 1 through 3, 9, and 15 through 18 are ignored since sensor readings during these time-phases do not provide any information with respect to distinguishing environment properties, i.e. their respective Γ values are below Γmin.
5.4 Calculating numerical representations of environment properties: (a) coefficient of friction, µ, and (b) compliance, c.
5.5 Environment properties used in experiments.
5.6 Visualization of features selected according to the task variance ratio, Γ, of data collected using (a) robot with spherical probe and (b) robot with BarrettHand. Colour is added to signify task-phases. Gaps between task-phases signify an absence of task-relevant information within corresponding time-phases.
5.7 WAM robot with (a) spherical probe and (b) BarrettHand as end-effectors.

Acknowledgements

This thesis would not have been possible if not for my supervisors Professor Dinesh Pai and Professor Michiel van de Panne, whose guidance, patience and trust allowed this project to meet its full potential. They are both phenomenal advisors, and together brought complementary perspectives to this work, enhancing both its depth and breadth.
Individual thanks go to Professor Pai for securing the robotic system, for providing high-level direction and for donating much of his personal time while on official leave, and to Professor van de Panne for providing concrete instruction and for never failing to find ample time in his busy schedule for stimulating discussion.

I am grateful to my second reader and mentor, Professor Jim Little, who provided much needed moral support through the inevitable dark times and who patiently guided me through finalizing my thesis.

Thanks to Chuan Zhu, my "un-a-believa-da-ble" lab-mate, with whom I shared much elation (and disappointment!), for providing encouragement and comedy in the late hours before deadlines.

Special thanks to Professor van de Panne and all members of the MOCCA reading group for listening to my (oft mundane) talks on topics related to my interests. Special thanks to Professor Pai and the SMRT reading group for helping me guide topics in Neuroscience into this thesis. Special thanks to Professor Little and the Robuddies reading group for lending me their undivided attention at my thesis talk.

Last but certainly not least, I would like to thank my family, both in Canada and in Taiwan, for their tireless effort to keep me secure, well-rested (and well-fed!) during the process of writing this thesis.

To Jill and Jade.

Chapter 1: Introduction

We begin this chapter by introducing robotic manipulation using force and tactile sensors, and present the challenges in programming robots to perform manipulation tasks in the physical world. Following this, we state the specific problem we wish to address in this thesis as well as the motivations behind seeking its solution. The chapter concludes with the main contributions and thesis structure.

1.1 Learning to Interact with the Real World

Our world is infinitely complex. By the second law of thermodynamics, any real-world system constantly changes as energy transfers between its composing particles of matter [77].
Thus, if one's goal is to interact with such a dynamic, ever-changing world, one needs to continually infer relevant truths which exist in that world through some form of external sensing. This type of approach to system identification and control is known as model-free [64], in the sense that one functions without needing explicit knowledge of physics.

Living organisms are known to take this approach [16]. A child does not learn to walk given absolute knowledge of the dynamics which govern his interaction with the environment. Rather, he must build an action-consequence model of the walking task through much trial and error.

By learning a model which maps sensory information to relevant properties of an environment, the entity also gains the ability to predict anomalies. Novel events are important as they present opportunities to gain deeper insight into the true underlying principles which define the environment and are not presently described by the current model.

Humans, who are among the most successful of living organisms, have the ability to both model their environment and efficiently perform inference on that model using their available senses. For a trained athlete to succeed in the game of basketball, she must learn to ignore irrelevant sensory data, such as the cheering of the crowd, and focus on more important information, such as the image of the basket on her retina. How can she identify relevant information with respect to her current task? We are interested in exploring how humans might accomplish this by enabling robots to possess such a skill.

1.2 Problem Statement

We wish to address the problem of supporting robotic manipulation by efficiently predicting properties of an environment using unlabeled sensory data.
The ability to predict properties of an environment, such as the mass of an object on a table, is an important prelude to closing the loop and allowing robots to adapt to unexpected change.

One barrier to achieving this goal, however, is that the size of the input space increases with the number of sensor readings available to the system. This makes predictions susceptible to Bellman's infamous curse of dimensionality. We therefore take steps to reduce the number of sensors considered when predicting properties of an environment.

To test our solution, we consider a block topple-slide task (see Figure 1.1) performed by a robot equipped with force, torque and tactile sensors. In this task, the end effector of the robot pushes on the side of a foam-encapsulated block, causing it to topple, and then pushes the block back to its starting configuration against a wall. A prescribed joint-angle trajectory to achieve this manipulation task is provided by a human expert through kinesthetic teaching. The given example trajectory is assumed to be robust in two ways: repeating the trajectory should cause repeated rotations of the block, and the same trajectory should remain successful when applied to blocks with variations in mass, friction, and compliance.

Figure 1.1: The topple-slide task.

1.3 Motivations

As sensing technologies become more affordable, robots will be able to concurrently make a wide range of measurements within the environments in which they operate. As can be seen in Figure 1.2, sensors with the capacity to measure properties of an environment will obtain distinct readings as those environment properties change. Notice how the streams become disjoint during phases of robot-object contact (during toppling and sliding) and recombine during phases of non-contact.
It then becomes possible, given some form of ground truth, for the robot to learn a model which explains how changes in its sensor readings relate to changes in its environment.

This removes the need for experts to model the relationships between sensory readings and environmental phenomena. For example, distinct force measurements may relate to the compliance of an object, which in turn could provide insight into some high-level feature such as its ripeness (if the object were a piece of fruit). We thereby achieve a highly scalable system capable of mapping arbitrary sensor readings (vision, olfactory, inertial, etc.) to arbitrary environment properties, given the availability of relevant ground-truth information on which to train the model-learning system.

Figure 1.2: Example sensory streams during the topple-slide task in different environments (one colour for each environment).

1.4 Contributions

The contributions of this thesis are three-fold: (1) a unifying bridge between literature in robotics, physiology and neuroscience on the topic of exploiting force and tactile sensors for dexterous manipulation; (2) an unsupervised feature selection algorithm based on a new metric called the task variance ratio, which filters important sensor readings and motion-phases within high-dimensional sensory data streams during manipulation tasks; and (3) a supervised learning algorithm with partial least squares (PLS) regression at its core, which builds statistical data-driven models that use important sensory data traces from a robot to predict properties defining its environment.

The feature selection algorithm allows the robot to distinguish between sensors which provide task-critical information, and sensors which provide information that is noisy or irrelevant to the task.
This ability to gauge the usefulness of sensor readings becomes important as the number of sensors in the system increases and real-time processing of all sensors becomes impossible. Disregarding all but the most important sensors reduces algorithmic complexity to constant time, which guarantees support for real-time operation irrespective of the number of sensors physically available to the robot. The developed models are then used to efficiently predict environment properties when novel sensory data traces are recorded by the robot's sensors.

1.5 Thesis Structure

Following this introductory chapter, in Chapter 2 we provide some background on the topic of exploiting force and tactile sensors in support of physical manipulation, as presented in a sample of the literature from the robotics, neuroscience and physiology research communities. In Chapter 3, we more formally define the problem we wish to solve in this thesis and define the algorithms and data structures used in our approach. Chapter 4 serves as an overview of the robot control software developed to support experiments. In Chapter 5, we present details on experiments conducted to test the effectiveness of our approach on a physical robotic system, along with results of said experiments. Finally, conclusions are drawn and future work is discussed in Chapter 6.

Chapter 2: Background

2.1 Robotic Manipulation

In the present study, we consider the general research question: how should robots be programmed to manipulate (e.g. grasp) objects? There exist two common approaches to this problem in the robotics literature. One approach is for a human expert to provide the robot with an analytical model of itself and its environment. Using this model, the robot achieves its manipulation goals in two phases: planning and execution. Planning is often performed in simulation using models of object geometry and robot/environment dynamics. Once the robot has found a feasible plan, it then executes that plan via physical robot actuation.
One disadvantage of this approach is that, due to imperfect precalculated dynamical models and robot calibration, executed manipulations can easily become unstable and fail [13].

Another approach is to remove the requirement of human-supplied models and have the models learned autonomously from data using statistical learning techniques. The key advantage of this type of model-free approach [30, 64] is that a priori object and dynamical models are not required to succeed in the task; the model is learned automatically from data. In this thesis, we define model-free [64] systems as those that function without explicit knowledge of physics or geometry given by human experts.

2.1.1 Model-free schemes

One example of such a learning technique is the Self-Organizing Map (SOM), an architecture of Artificial Neural Networks (ANN) that spatially and uniformly organizes features automatically by input signals [39]. An example where SOMs have been used successfully is in [65]: objects are grasped based on hand posture and tactile experience of previously successful grasps. Experience is represented as a low-dimensional smooth manifold in hand posture space.

A similar system was devised in [54], where a SOM was used to map finger joint angles and tactile readings to object shape and size. The system could identify previously grasped objects as well as categorize new objects as being of a particular shape and size.

The same authors obtained similar results with another algorithm inspired by biological spiking neurons, called a spiking neural network [53]. For this scheme, joint angle input is encoded into a series of spike trains which result in three feature outputs that are then used to recognize and classify grasped objects. In addition, similar objects tended to cluster in output feature space.
The authors' system was able to recognize objects of different shapes as well as objects with the same shape but different sizes.

In [14], the authors present blind grasping: a novel approach to object grasping that does not require visual feedback or a priori 3D object models. Their scheme works from a database of one thousand stable grasps from the Columbia Grasp Database [25] using the model of a BarrettHand [71]. Corresponding tactile feedback during grasps of objects simulated in the GraspIt! [45] simulator is also recorded. They proceed to create feature vectors comprising simulated tactile and robot kinematic data, which they then use to train an SVM to classify grasps as being stable or unstable. In this way, the system was able to learn tactile feedback indicative of a stable grasp.

2.1.2 Model-based schemes

In model-based schemes, a human expert provides the robot with a priori models which, for example, map sensory input signals to specific control policies. This approach grants the advantage of providing the robot access to the researcher's understanding of the task dynamics. The disadvantage of this approach is that the supplied model may contain errors and is limited by the domain expertise of the researcher.

In [20], the authors present sensor-based atomic controllers for a robotic hand/arm system to empty a box containing an undefined number of unknown objects. Manipulation primitives are defined that search, grasp and transport objects from the box to predefined locations. A finite state machine (FSM) is used to transition between motion primitives based on corresponding sensory feedback. The authors also compare a vision-plus-tactile-based version of their system to a purely tactile-based version. They found that while the version which incorporated vision was more efficient at completing the empty-the-box manipulation task, the tactile version was also successful.
Vision was only crucial in determining if the box was empty; in the non-vision-based system, a human moderator was required to tell the robot when it had finished its task. The authors in [20] also present an interesting scheme that compensates for errors in translation of the robotic hand. The hand repositions itself if there is force experienced by only one finger, denoting a single hand/object contact. The controller compensates by moving the hand in the direction of the single contact, which effectively repositions the manipulator above the object.

In [44], the authors take an analytical approach to the non-prehensile toppling task. This approach, while successful, assumes knowledge of the dynamics of the entire system and would therefore not generalize to operation outside of controlled factory environments, where complete models of robot-object interaction dynamics are unavailable. Another analytical approach to a non-prehensile tumbling task, given a priori models of how the system reacts to the robot at each phase of the task, is studied in [58].

In [76], an analytical approach is applied to the non-prehensile task of manipulating an object with rolling contacts across a robotic fingertip using tactile sensor feedback. Their approach relies on accurate a priori kinematic and dynamic models of the robot and its environment.

2.1.3 Human-inspired schemes

In the human hand, there is a wide variety of receptors in the skin and muscles which in turn respond to a wide variety of stimuli. Sensed phenomena include skin stretch, skin curvature, vibration, and muscle force and length. One baffling aspect of the human motor system, however, is that information bandwidths range from just a few Hz to possibly several hundred Hz. In terms of technological performance, this is horrendously slow. Information is also time-varying and nonlinear, and its encoding scheme (known as pulse-frequency) obscures much of the raw inputs from the nerve endings.
These deficiencies are made up for, however, by a high degree of parallelism and redundancy [31].

It is also known that the human motor system executes manipulations as a series of discrete states that transition based on afferent signals [36]. Since this type of model is appropriate for execution on a computer, it has been quite popular to model the robotic grasping task as an FSM, which transitions between states based on tactile or other intrinsic contact input events [41, 61].

The authors in [41] present an approach to tactile-motor coordination of a robotic hand based on a neurological model of the human tactile-motor system. This model is implemented as a series of ANNs whose function and structure reflect discoveries in the human sensory areas specific to object grasping. A scheme based on SOMs was chosen to model these sensory areas, since they must process a high rate of combined tactile and somatosensory input. Use of the SOM controlled the volume of incoming inputs by making small, efficient adjustments to the model each time a new input vector became available.

The authors in [56] developed a human-inspired robotic grasp controller that gently picks up and sets down unknown objects. They employ pressure sensors and accelerometers to mimic SA-I, FA-I and FA-II tactile channels (see Section 2.3.2). An FSM is programmed to transition between six discrete states: (1) Close, (2) Load, (3) Lift and Hold, (4) Replace, (5) Unload, and (6) Open. Transitions are based entirely on tactile event cues. Their controller also dynamically adapts its initial grasp force depending on tactile events such as slipping, and judges when to set down the object in light of detected contact events with the table.

In [61], a new tactile-based object manipulation strategy was proposed, called tactile servoing.
Analogous to vision-based visual servoing, each state in the manipulation task sequence is characterized by tactile images detected via tactile sensor arrays on the robot hand. The authors' conclusion was that tactile sensors are useful in simple, direct and effective control of robots during manipulation tasks. The literature supports the view that tactile data is processed in much the same way that visual data is processed in the human brain [35, 59].

2.2 Sensory Information Processing

2.2.1 Force and tactile sensing for robotic manipulation

Force and tactile sensing can provide information about mechanical properties, such as compliance, coefficient of friction, and mass, which are not perceivable through other means (e.g. vision) [31]. Obtaining object properties via force and tactile sensing for the purposes of succeeding in manipulation tasks has been the subject of numerous studies [17, 27, 42].

The application of force and tactile sensors to many robotics problems affords new solutions that have previously been intractable via traditional, often computer-vision-based methods [40]. In their 2005 review article, Tegin and Wikander [68] stress that, in contrast to the amount of literature on the application of vision-based solutions to robotics problems, literature on exploiting contact information (e.g. tactile) remains relatively rare. One reason may simply be the limited availability of force and tactile sensors in comparison to cameras [31].

While vision is arguably the dominant sense in primates, including humans, there are certain scenarios in which vision fails, such as during object occlusion or when sensory resolution is too low for a given task. In such cases, more detailed and versatile contact information may compensate for these deficiencies.

Figure 2.1: Schematic drawing of the pen-twirling task. Drawings reproduced from [18]. Use of the reproduction is by permission of the copyright owner, John Wiley and Sons.

In [11] the authors review techniques for processing and combining force and tactile information to develop abstract understanding of a given manipulation. An excerpt of these processing techniques is shown graphically in Figure 2.2: as raw sensory data travels from left to right, it is processed and combined to provide increasingly abstract understanding of a manipulation. The authors state that force and tactile sensors have the potential to yield the following information: (1) object contact/no contact; (2) contact configuration (surface, edge, point, etc.) based on pressure patterns; (3) object slip, via vibrations in the grasped object; (4) properties (compliance, texture, friction, etc.) of an object, via haptic exploration; and (5) feedback for control. Given the above information, a robot can more appropriately control the force and moment on an object to accomplish the desired manipulation task.

Figure 2.2: Force and tactile sensor processing to estimate object pose [11].

Example

Consider the following example: how might one accomplish the task of twirling a pen end-over-end between one's fingers, as demonstrated in Figure 2.1? The position and orientation of the object must somehow match imposed forces to maintain stability. Successfully tracking the movement of the pen requires knowledge of many variables, such as the configuration of one's hand, the locations and movements of contacts between the pen and one's fingers, the magnitudes of grasp forces, the contact conditions with respect to friction limits, etc.
How is it that, with enough practice, one can control all of these parameters effortlessly, even in the absence of visual feedback?

A potential answer can be seen in Figure 2.2: we can combine the forward kinematic model of the hand together with the current finger joint angles to determine the positions and orientations of the fingertips. When combined with force/torque measurements at the points of contact, it is possible to obtain local information about object shape, surface normal orientation, etc., which can then be combined to track the geometric pose of the object.

Tactile sensing

For robotic hands with tactile sensor arrays, such as the BarrettHand, curvature and shape information can be obtained by measuring the local curvature at each element of the sensor array [11]. From there, it is possible to extract features, such as corners and edges of the object, by combining local shape information from multiple sensors. This task can be greatly simplified if at least a partial model of the grasped object is available a priori, in which case the object can be statistically matched via surface or data fitting methods [19].

The most common application of tactile information has been to classify and recognize objects from a known set, based on geometric information calculated from raw tactile data. Features such as holes, edges and corners [11] and object surfaces [50] have been extracted from tactile array, force and/or joint sensor information. For example, Siegel [60] devised a way to extract the pose of a known object in a robot's grasp via joint angle and joint torque measurements.

Active sensing

Since force and tactile sensors provide only local object information, recognition and disambiguation often require the hand to actively explore multiple areas of the object surface. These types of strategies are referred to as active sensing. There exist many example applications of active sensing, such as tracing object contours, measuring compliance and determining the lateral extent of object surfaces. In [47], the authors propose an active sensing strategy for edge-finding that explores the surface of an object until contiguous segments of tactile array impressions are found. In [42], tactile sensors are used to discriminate the shape and position of various textured cylindrical objects. In [17], grasp affordances are obtained through exploration of the pose space of manipulable objects.

Dynamic sensing

The ability to detect tactile events with respect to time (e.g. object slip) is important to many manipulation tasks, such as lifting fragile objects. The challenge lies in detecting such events reliably in the presence of sensor noise. Highly sensitive tactile sensors capable of detecting minute events can easily be thrown off by, for example, vibrations from the robot actuators or rapid robot acceleration. Robust dynamic event detection can be achieved by comparing tactile sensor readings at and away from contact regions, or even more robustly via statistical pattern matching methods that detect the signature of particular dynamic events [73].

2.2.2 Anomaly detection in streaming data

Since high-dimensional data streams often exhibit considerable structure, information that does not fit within this structure is most likely an anomaly, or outlier, in comparison to the vast majority of other input data. An anomaly can be defined as an event or pattern that does not conform to some well-defined notion of normal phenomena. Detecting anomalies within data streams is an important topic in both the data mining and machine learning communities [1, 15, 38, 63, 75] and has far-reaching applications in areas such as fault detection, fraud detection, sensor networks and image processing [6]. In [30], a model-free approach is taken to find anomalies in high-dimensional sensory streams.
Data collected from the robot are first passed through a PCA-based feature extractor before models of normal operation are built.

2.2.3 Dimensionality reduction

As the number of sensors available to a system increases, the computational, storage and transmission costs of inferring information from all available sensor readings also increase. Therefore, given a large number of sensor readings, it is important to develop a reduced set of measurements or derived features that model the desired information in a compact fashion. Dimensionality reduction techniques can be classified into two broad categories: (1) feature extraction and (2) feature selection. Most feature extraction techniques take an unsupervised [24, 28, 57, 69] or self-supervised [2, 12, 43, 62] learning approach. The result is a transformed, lower-dimensional set of features that more compactly describes the underlying structure of the data. In contrast, feature selection techniques, such as those based on feature similarity [46] and genetic algorithms [32], achieve dimensionality reduction by considering only a subset of the input dimensions. In [51], the authors present a novel feature selection method that compares the variance of sensor readings to choose to encode either force or position information while recording user-demonstrated trajectories.

Figure 2.3: Dichotomy of human grasping, reproduced from [9]. © 1989 IEEE.

2.3 Neuroscience & Physiology

Studying the manipulation capabilities of humans and animals for the purpose of designing better robotic systems is a challenge. First, it is hard to discover the precise algorithms that our brains employ. Second, the mechanics of the human hand are highly complex, and thus the algorithms our motor system employs may not be appropriate for the relatively simple mechanics of a robot. Nevertheless, studying human manipulation can provide insight into designing more efficient and effective robotic systems.
In this section, we attempt to draw such insight by exploring the human motor system as presented in a sample of the neuroscience and physiology literature.

2.3.1 Object manipulation: definitions

In this section, we present some common vocabulary used by researchers to describe object manipulation tasks as performed by humans.

The power-precision dichotomy

Humans employ a wide variety of manipulation skills depending on the object being manipulated. When opening a jar, for example, a power-style grip is required to loosen the lid. Once the lid is loose and the required torque is lessened, a lighter grip is adopted for speed and precision. This dichotomy of power/precision prehensile (i.e. grasping) activities was proposed by Napier in 1956 [49]. Figure 2.4 provides an example of these two patterns of activity in the manipulation task of tying a knot: power is required to hold the rope in place, while precision is required to tie the knot. Cutkosky and Wright also propose a taxonomy of human grasps in [9], breaking down the power/precision dichotomy even further (see Figure 2.3). Depending on the weight and size of the object, as well as the desired dexterity of the hand, a human adopts a different style of grip.

Figure 2.4: Tying a knot: a manipulation task combining precision and power grips. Figure inspired by [49].

Analytical measures of grasp quality

The authors in [10] present common measures of grasp quality, which may be optimized or become part of the set of constraints with respect to a given manipulation task. An overview of these analytical grasp-quality measures is presented in Table 2.1. The set of ideal grasps of any object then exists within the space of grasps that satisfy all hard constraints and optimize important soft constraints with respect to the given task. For example, Nakamura et al. search for grasps that minimize internal forces (i.e.
grasping effort), subject to constraints on force closure and manipulability [48]. According to physiological findings, humans employ a scheme similar to that proposed by Nakamura et al., in which a certain frictional safety margin is maintained [55].

Human grasps have also been studied in terms of these analytical measures. For example, power grasps can be thought of as having higher compliance, stability and slip resistance than precision grasps. Power grasps also tend to have a connectivity of zero, since the fingers tend not to play a manipulating role. In contrast, precision grasps have high manipulability and connectivity (of at least three and often six) [10].

Metric | Description
Compliance | Inverse stiffness of the object with respect to the hand
Connectivity | Number of DOFs between the grasped object and the hand
Form closure | External forces are unable to unseat the grasped object
Force closure | Object held without slipping (a.k.a. frictional form closure)
Grasp isotropy | Fingers are able to accurately apply force/moment to the object
Internal forces | Kinds of internal grasp forces the hand may apply to the object
Manipulability | Fingers can impart arbitrary motions (i.e. connectivity = 6)
Slip resistance | Amount of force required before the object starts to slip
Stability | Tendency of the grasped object to return to a spatial equilibrium

Table 2.1: Common analytical measures that may be optimized or become part of grasp constraints [10].

Force vs. form closure

A subtle yet important distinction must also be made between force closure and form closure. Form closure refers to grasping without the use of friction, whereas force closure uses friction to keep objects seated in the hand. Objects likely to require form closure include, for example, a wet bar of soap or a Slinky.

Slipping vs. crushing threshold

While adequately large grip forces must be maintained to keep the object within a force closure grasp, excessively large forces are also undesirable, as they impose unnecessary fatigue on the hand and may even crush fragile objects [33], [26]. Thus, grip force is constrained by both the slipping and crushing thresholds of objects (see Figure 2.5 for examples).

The amount of force that subjects apply over and above the slipping threshold is known as the safety margin. The magnitude of the safety margin varies across subjects, and was found to depend on the dexterous manipulation skill of the subject in performing the given task [36].

When manipulating visually fragile objects, human subjects' initial force is lighter and their action slower than when manipulating visually non-fragile objects. Once contact with the object is made, tactile feedback supplies the missing information about the true fragility of the object. Subjects can then properly carry out the planned action [7]. Accurate predictions are crucial, however, due to the relatively slow response rate of corrective actions [34].

Figure 2.5: Example slipping and crushing thresholds of everyday objects. The difference between the required Minimum Force and the observed Grip Force is known as the safety margin. Data obtained from [56].

2.3.2 Force and tactile sensing

The elements of the human sense of touch can be divided into two distinct categories: proprioceptive and tactile. Proprioceptive sensing refers to the perception of limb motion and forces using internal receptors, such as muscle spindles (responding to changes in muscle length), tendon organs (measuring muscle tension), and cutaneous afferents (reacting to skin deformations around the joints) [34]. Proprioceptive receptors within the joints of the
hand are also present, reporting joint angles, forces and torques [31]. Tactile sensing deals with the perception of contact information via receptors beneath the surface of the skin [74].

Actuation of the hand is imparted by muscles in the forearm through transmission of tension by tendons passing through the wrist. It has been shown that, due to transmission dynamics such as friction, backlash, compliance and inertia, accurate control of endpoint position and forces based on proprioceptive signals alone is difficult [37]. Thus, tactile afferents are essential for fine-grained mechanical measurements at contact locations [36].

Tactile afferents have received much attention in the physiology and neuroscience literature; comprehensive summaries may be found in [74] and, more recently, in [34]. There are in total four specialized types of mechanoreceptive nerve endings within the skin of the human hand, each of which can be categorized as having small or large active areas (Type I and Type II, respectively) and as responding or not responding to static stimuli (SA for slowly adapting and FA for fast adapting, respectively). See Figure 2.6 for a description of each of these types. It has been calculated that a total of 17,000 specialized mechanoreceptors exist in the grasping surfaces of the human hand [34]. In addition, there are free nerve endings that are sensitive to thermal and pain stimuli [31].

2.3.3 High-level processes

In addition to low-level tactile and proprioceptive processing, the mammalian central nervous system performs many high-level processes, such as prediction, planning and memory.
These processes support, guide, and organize our more primitive manipulative functions to accomplish more complex manipulation tasks.

Prediction

In [33], the authors conclude that the magnitude of fingertip forces imposed on objects is determined by at least two high-level control processes: (1) anticipatory parameter control (APC) and (2) post-contact control.

Figure 2.6: Characteristics of tactile afferents within human fingertip skin. Reproduced from [34]. Permission to reproduce granted by R.S. Johansson.

The authors model APC as a feedforward controller that uses predictions of critical characteristics of the object (weight, friction, initial condition, etc.) based on the results of previous object manipulation experience. Following contact with the object, sensory information can be extracted to (1) modify motor commands automatically; (2) update sensory memories for APC; (3) inform the central nervous system of the completion of subgoals of a task; and (4) trigger subsequent subgoals. The central nervous system monitors specific, expected events and produces control signals appropriate to each subgoal. In contrast to feedback controllers, this feedforward, sensor-driven control strategy predicts appropriate control output several steps in advance. Slips are avoided, and force across digits is coordinated, by independent control mechanisms based on local sensory information.

Planning

Planning also plays an important role in anticipating future events. In [35], Johansson et al. demonstrate the importance of eye-hand coordination during manipulation tasks. Subjects' gaze was tracked during a block-stacking task. It was found that their gaze played an important role in planning each pick-and-place action.
The authors further propose and demonstrate in [22] the direct matching hypothesis, which predicts that subjects will unconsciously produce eye movements when observing a familiar action, as if they were performing the task themselves.

Supramodal processing

In a study conducted by Bicchi et al. [59], it was found that the V5/MT cortex (the same area of the brain that responds to optical flow) is activated during tactile-flow perception, i.e. when dynamic movement is detected via tactile afferents. This is consistent with other findings that there exists a supramodal, or multi-modal, organization of regions in the brain involved in both tactile-flow and optical-flow processing [21]. In another study by Bicchi et al. [4], it was found that certain experiments could fool the subjects' tactile-flow processing in the brain through tactile illusions, much the same way that optical-flow processing can be fooled, a phenomenon known as the aperture problem.

Action-phase control strategies

Findings by Johansson et al. indicate that, during certain manipulation tasks, the human motor system functions as a sort of state machine that transitions based on sensory predictions and sensory inputs. These states, or action-phases, are defined as sequences of specific sensory events that are each linked to subgoals of a given task [34].

Action-phase goals are evaluated by matching patterns in tactile afferent signals. For example, grasp contact – a required action-phase subgoal for many manipulation tasks – is detected via patterns in SA-I and FA-II afferent inputs. Combinations of certain afferents provide information such as contact timing, location, force intensity and direction. Contact location is defined as the spatial center of all afferents involved in the overall signal. Force intensity is characterized by the number of afferents involved as well as the firing rates of each.
Patterns of activity in combinations of afferents give the direction of the detected contact force.

Dexterity

Once a grasp is attained, adequate force within the friction cone of the object must be imposed to retain force closure. Dexterity is then defined as the ability to adapt the balance of grip and load forces to object surface properties [31]. Dexterous manipulation abilities are attributed mainly to tactile afferents, since a loss of these abilities is experienced during digital anesthesia [36].

Object identification

Tactile afferents during initial contact also provide object surface property information, which is frequently combined with visual cues and/or sensory memories to develop abstract understanding. The reactions of FA-I, SA-I and SA-II afferents to the object surface are used to determine object surface properties. For example, FA-I afferents react more strongly to slippery surfaces [5].

Filtering noise

Robust processing of tactile afferent information is attributed to the brain's innate ability to detect coincidence: a phenomenon in which central neurons receive synchronous input spikes from many distinct tactile afferents [29]. Noise in the environment, i.e. information unrelated to the current focus of attention, can therefore be characterized by input spikes that do not arrive at the brain at the same precise moment in time as input deemed valuable to the current task.

Chapter 3

Predicting Environment Properties from Sensory Inputs

In this chapter, we first formally define the problem of predicting characteristics of an environment using high-dimensional force and tactile sensor data. We then present the prediction framework designed to solve this problem.

3.1 Problem Definition

Figure 3.1: Collection of the data matrix D = (X, Y): (a) data X are collected across all trials and environments; (b) environmental quantities Y are provided by the human expert.

The manipulation task considered is shown in Figure 1.1.
The end effector pushes down on a foam-encapsulated block, causing it to topple, then pushes the block back to its starting position against the wall. A prescribed joint-space trajectory that achieves this manipulation task is provided by a human expert via kinesthetic teaching. Upon replay, the robot tracks the given trajectory using standard computed-torque control. We assume the trajectory is robust in two ways: (1) repeating the trajectory causes repeated rotations of the block, and (2) the same trajectory remains successful in toppling blocks for all mass, friction, and compliance values. The use of a prescribed trajectory also implies an approximate correspondence between the current elapsed time within a motion and the manipulation phase.

The sensory sample collected at each timestep of the prescribed trajectory is defined by a sensory data stream:

s ∈ R^{n_s} : (ρ, ρ̇, τ, p, ω, f, α)    (3.1)

where ρ, ρ̇, τ ∈ R^7 represent, respectively, the angle, velocity and torque measurements of each of the seven joints of the manipulator arm; p ∈ R^7 gives the end effector's pose measurements (3D position and 4D quaternion); ω ∈ R^6 is the task wrench measured via the force-torque sensor mounted on the wrist of the robot; f ∈ R^4 are the torque measurements for the joints of the robot hand; and α ∈ R^{72} are the tactile sensor measurements on each of the three fingers, reshaped into a single vector.

A single execution of the manipulation task leads to the capture of a sensory stream, x ∈ R^n, which consists of the observations of n_s sensor readings, each sampled at n_t points in time and then stacked into a single vector; here, n = n_s × n_t.

The environment properties to be predicted from the sensory stream data are given by y ∈ R^3 : (m, µ, c), where m is the object mass, µ the coefficient of friction between the object and its support surface, and c the material compliance of the object.

In order to learn a predictive model y = f(x), training data is first gathered for n_p different
combinations of the environment properties, i.e., variations of mass, compliance, and friction. For each setting, the task is repeated n_h times. The final dataset is thus defined by the following data pairs:

D = (X, Y) = { (x_{p,h}, y_p) | p ∈ [1 ⋯ n_p], h ∈ [1 ⋯ n_h] },    (3.2)

shown graphically in Figure 3.1. This dataset is used to learn the predictive model that we now describe.

3.2 Prediction Framework

Given a new sensory stream, x_new, which consists of n_t samples of s, we wish to predict the environment properties y. In our setting, the prediction task is characterized by a small number of observations, p = 144, and a large number of features available for the prediction, n = 1650.¹ Furthermore, many of the features are likely to be highly correlated. These characteristics are problematic for many common regression methods. In contrast, PLS is a good alternative for such problems [23]. In particular, it couples the dimension reduction and the regression model, making the dimension reduction dependent on both the input, X, and the output, Y. While PLS is a popular tool in the biological sciences and elsewhere, its use in the context of robotic sensing remains rare.

An important remaining challenge with PLS, however, is that while it provides a principled estimation method, the method itself is not tailored for variable or feature selection [8]. Relatedly, it can be shown that the PLS estimator does not guarantee statistically consistent predictions for problems like ours, with a high feature-count to sample-count ratio, and that noise variables act to attenuate the predictions of the regression parameters [8]. We specifically address this by applying an aggressive feature selection method before applying PLS. In our results, we demonstrate this to be highly effective in helping to achieve improved prediction performance.
We note that other methods have also recently been proposed to address the limitations of PLS, such as imposing sparsity in the dimension reduction step of PLS [8] or Supervised Principal Components [3]. As currently developed, these alternative methods are motivated by, and tested on, gene expression problems.

¹Note that our notation for p and n is the reverse of that used in the statistics literature, where n commonly represents the number of samples, and p represents the number of predictors, i.e., the feature count.

Figure 3.2: Intuition behind the task variance ratio (Γ) algorithm. We wish to select good sensors (at each time-phase) that exhibit low variance when the environment remains constant and high variance when the environment changes. Colours signify data streams collected in distinct environments.

With the above motivation in mind, our two-stage solution consists of (i) selecting the most relevant input features from x, and (ii) using PLS to further learn a compact latent linear subspace that is well suited to predicting environment properties. We now discuss these two stages in further detail.

3.2.1 Feature selection via the task variance ratio

We define the Task Variance Ratio (Γ) vector as

Γ = { Γ_i = Var^enviro_i / Var^trial_i | i ∈ [1 ⋯ n] }    (3.3)

where Var^trial_i models the variance of a given element of X across all trials, and Var^enviro_i models the variance of the same element across all environments. Specifically, for feature i:

Var^trial_i = Σ_{p,h} ( x²_{i,p,h} − µ²_{i,p} ) / ( n_p n_h − 1 )    (3.4)

and

Var^enviro_i = Σ_{p,h} ( x²_{i,p,h} − µ²_i ) / ( n_p n_h − 1 ),    (3.5)

where i, p, h are the indices for features, environment properties, and repeated trial number, respectively;

µ_{i,p} = Σ_h x_{i,p,h} / n_h

and

µ_i = Σ_{p,h} x_{i,p,h} / ( n_p n_h ).

A large value of Γ_i indicates a good feature, as it implies that variation occurs as changes to the environment take effect, while observable noise between repeated trials in the same environment is relatively small.

The feature selection is then implemented using a simple threshold function to produce a reduced input matrix X*:

X* = X_p · diag(Γ > Γ_min),    (3.6)

where

X_p = (1/n_h) Σ_{h=1}^{n_h} x_{p,h},  p ∈ [1 ⋯ n_p],    (3.7)

diag(v) produces a square matrix with the elements of v along the diagonal, and Γ_min is chosen such that the desired number of elements of Γ is selected. The resulting reduced dataset is given by

D* = (X*, Y) = { (x*_p, y_p) | p ∈ [1 ⋯ n_p] }.    (3.8)

In this way, we identify features in X that exhibit small variation across repeated trials when the environment is kept constant, and large variation as the environment changes (see Figure 3.2); we take this as an approximation of the relevance of the sensor reading for predicting environment properties.

3.2.2 Property prediction with partial least squares

The PLS algorithm provides us with an estimated weighting matrix β ∈ R^{c×n_y}, where c is a parameter denoting the number of components to factor. β is calculated iteratively according to the following algorithm: first, define

A_0 = X*^T Y,  M_0 = X*^T X*,  C_0 = I,    (3.9)

then iterate, for all j ∈ [1 ⋯ c]:

q_j = eigv1(A_{j−1}^T A_{j−1})    (q_j → dominant eigenvector)
w_j = A_{j−1} q_j
c_j = w_j^T M_{j−1} w_j
w_j = w_j / √c_j    (store into column j of W)
r_j = M_{j−1} w_j    (store into column j of R)
q_j = A_{j−1}^T w_j    (store into column j of Q)
v_j = η C_{j−1} r_j    (η → normalizing constant)
C_j = C_{j−1} − v_j v_j^T
A_j = C_j A_{j−1}    (3.10)

With R, Q and W assembled, we then compute:

β = W Q^T    (3.11)

Finally, we use β at runtime to predict environment properties:

ŷ = β · x*_new    (3.12)

where x*_new represents the reduced version (i.e.
following Γ feature selection) of a new unlabeled sensory data stream collected during a repetition of the motion in an unknown environment.

Chapter 4

Control Software

In this chapter, we provide an overview of the software deployed to control the motion of the robot and to collect sensor readings in support of experiments.

4.1 System Overview

To support experiments, a sensor processing and robot control framework was written in C++ leveraging the libbarrett API provided by Barrett Technology (MA, USA) (Section 4.2). Our framework was developed exclusively on the internal PC of the WAM robot [72], the details of which are presented in Table 4.1.

Motherboard | Aaeon PFM-540I
Processor | 500 MHz AMD LX-800 x86-compatible
Memory | 256 MB 200-pin DDR-333 SODIMM
Linux distribution | Ubuntu 9.10
Linux kernel | Xenomai 2.5
Ethernet | 10/100 Base-T
CANbus | Peak PCAN-PC/104, 2 ports

Table 4.1: Specifications of the WAM Internal PC/104 [67] used for framework development and robot control in support of experiments.

4.2 Libbarrett API

The API used to communicate with the robot is libbarrett, a C++ library from Barrett Technology Inc. [66]. Our framework was tested with version 1.1.0 of the API. The libbarrett API provides abstract control of the WAM and BarrettHand and allows them to be controlled in tandem. Sample programs that perform simple control and sensor monitoring routines are provided, upon which the controller in the current study is based.

Libbarrett provides three high-level constructs to interface with the robot: (1) the WAM object, (2) the Hand object and (3) the ProductManager object. The WAM and Hand objects provide high-level control of the WAM and the attached BarrettHand, respectively. The ProductManager provides access to the WAM's optional components, such as attached tools (e.g. the BarrettHand) and the Force/Torque sensor.
The control software communicates with the Hand and WAM through high-level function calls to their respective libbarrett objects using a variety of pre-defined data structures. These data structures are presented in Table 4.2.

Name | Type | Unit
Cartesian Position | cp type | Meters
Joint Position | jp type | Radians
Joint Velocity | jv type | Radians/s
Joint Torque | jt type | Newton-meters

Table 4.2: Libbarrett data types used when communicating with the robot. The jp type for the WAM is a seven-dimensional vector, whereas for the BarrettHand it is a four-dimensional vector – one entry for each finger and one for the spread.

4.3 WAM Control

Cartesian-space Control in Cartesian space is a convenient way to prototype motions and is sufficiently repeatable for tasks that require low accuracy. The accuracy of Cartesian positioning of the WAM is advertised as two millimeters. In practice, however, this accuracy depends largely on the joint-angle configuration of the WAM at the beginning of the Cartesian-space move. In our experiments, position error could easily accumulate to as much as one centimeter, depending on the joint-angle position of the robot at the beginning of the Cartesian-space move. These high errors may be due to the imperfect inverse kinematics currently available through the libbarrett API. Cartesian trajectories were therefore not sufficiently repeatable for our experiments, which required a precise and highly repeatable motion (accurate to within 1 mm) to perform our task across various environments. Moreover, the workspace accessible by rotating the wrist is limited by the angular configuration of its attaching joint. This means that certain wrist orientations are not repeatable if the inverse-kinematics solution provides differing joint-space positioning around the wrist. These drawbacks prevent us from commanding the robot in Cartesian-space in our experiments.

Joint-space For accurate and repeatable sets of motions, the robot should be controlled directly in joint-space.
The only issue with joint-space control is that the robot performs its task in Cartesian-space. Joint-angle trajectories that accomplish specific Cartesian-space tasks are difficult, if not impossible, to define manually. This necessitates kinesthetic teaching, in which a human expert trains the robot by manually moving its tool through the task in Cartesian-space while the corresponding joint-space trajectories are recorded by the robot.

Kinesthetic Teach-and-Play The libbarrett API comes equipped with kinesthetic teaching functionality via the teach-and-play module. Teach-and-play records position information at a rate of 500 Hz while a user physically moves the robot through the desired motions. The robot can record its trajectory in either Cartesian-space or joint-space. Again, due to imperfect inverse kinematics, if a highly repeatable motion is required it is advisable to record trajectories in joint-space. Executing highly repeatable Cartesian-space trajectories remains possible if the relative displacement from a known starting joint-angle position is small (i.e. a workspace of approximately 10 cm³ around the starting joint-angle position).

4.4 BarrettHand Control

In this section, we introduce the details of controlling the BH8-280 BarrettHand through the libbarrett API, as used in our experiments.

Velocity Move The simplest form of control of each finger of the Hand is to specify a direction and rate of travel for each of the joints in the Hand. The joints halt gracefully if they reach their limits or become obstructed before reaching them.

Trapezoidal Position Move If the Hand must be configured precisely (e.g. for a pre-grasp posture), the user can specify desired joint angles for each finger and for the spread. The spread of the Hand refers to the single-degree-of-freedom rotation of the first and third fingers about the palm.
Upon sending position commands to the Hand, the proximal finger joint angles or the angle of the spread move toward the goal position via a trapezoidal velocity profile. As with velocity control, the fingers will halt gracefully if the movement of the Hand becomes obstructed.

High Control-rate Position Move  High control-rate position moves provide an advanced alternative to simple trapezoidal moves. Joint angles can be specified to reach a desired pose; however, each of the joints travels at maximum velocity until it reaches the desired pose or becomes obstructed. Care must be taken when sending these commands, as obstructions do not result in graceful halts and could instead cause damage to objects or to the Hand itself. It is necessary to ensure a clear path for each of the fingers and for the spread of the Hand before sending these commands.

4.5 Realtime Systems

Control of the robot in realtime must be done through the libbarrett realtime systems API. Realtime control allows for complex and closed-loop motions, since the output of each realtime system depends upon its inputs at each step of the realtime control loop, which runs at a rate of 500 Hz.
Program 1 provides an example program that defines such a realtime system in the C++ programming language.

    class WamSystem : public System {
    public:    Input<jp_type> input;    // Obtain current joint-angles as input
    public:    Output<jp_type> output;  // Provide updated joint-angles as output
    protected: Value* outputValue;      // Value that output reads
    protected: jp_type jp_offsets;      // Modifications to realtime motion feed

    public:
      WamSystem(const string& sysName) :  // All systems must define a name
          System(sysName), input(this), output(this, &outputValue) {
        init_vec(&jp_offsets, 0);  // Initialize all offsets to 0
      }
      ~WamSystem() { mandatoryCleanUp(); }  // Mandatory destructor

    protected:
      jp_type jp_out;  // Declare local copy of output data

      virtual void operate() {
        const jp_type& jp_in = input.getValue();  // Pull data from the input
        jp_offsets[5] += 0.001;         // Increase 6th joint-angle of WAM slightly
        jp_out = jp_in + jp_offsets;    // Modify WAM joints by relative offsets
        outputValue->setData(&jp_out);  // Push data to subsequent system
      }
    };

Program 1: Example libbarrett realtime system, written in C++, that causes the 6th joint angle of the WAM to increase indefinitely. Namespace references removed for brevity.

Chapter 5

Experiments and Results

In this chapter, we describe the experimental setup, provide details on the experimental procedure, and finally present and discuss the environment-property prediction results obtained by the prediction framework.

5.1 Setup

5.1.1 Actuation and sensing

Experiments are conducted using a 7-DOF Barrett WAM robot arm with an attached 4-DOF Barrett BH-280 Hand, built by Barrett Technology (MA, USA). A 6-axis force-torque sensor is mounted to the wrist of the arm. The robot hand is equipped with tactile arrays on the fingers and palm. Joint torque sensors are embedded in each of the 3 fingers.
Position control of the arm occurs at 500 Hz and all sensors are sampled at 125 Hz, which is reduced to 2.5 Hz during preprocessing (see Section 5.2.2).

5.1.2 Kinesthetic teach-and-play

Example trajectories are demonstrated to the robot via a kinesthetic teach-and-play interface. The system records pose estimates of the arm at a rate of 500 Hz, and the result is saved for future playback.

5.1.3 Software architecture

Our real-time control framework runs on top of the libbarrett real-time systems library [66], and is used during demonstration and autonomous execution. We also use it to record and play back data streams that are time-synchronized with the motion. See Chapter 4 for further details.

5.1.4 Experimental testbed

The block used for the experiment is a rectangular prism made of medium-density polyethylene foam, with length 48.5 cm, width 10.5 cm and height 10.5 cm. Two parallel walls of length 28.5 cm and width 6.5 cm are used to prevent the block from sliding sideways out of the workspace. The distance between the walls is 49 cm. As shown in Figure 1.1, a wall is used to limit the final sliding motion and leave the block in its original location, ready to be toppled again. The walls are lined with paper to decrease the coefficient of friction between the block and the walls for smoother operation.

In our experiments, different environments are defined by the Cartesian product of three sets of environment property values for P = {Pm, Pµ, Pc}, yielding a total of 8 × 6 × 3 = 144 different environments. These values are shown in Table 5.1.

    Pm (g)      425, 650, 875, 1100, 1325, 1550, 1775, 2000
    Pµ          0.441, 0.505, 0.616, 0.768, 0.911, 1.136
    Pc (mm/N)   0.294, 2.484, 0.978

Table 5.1: Mass, coefficient of Coulomb static friction and compliance property sets, as measured for a variety of blocks and surfaces.

5.1.5 The block topple-slide task

Toppling, as defined in [44], consists of two high-level phases: rolling and settling.
In the rolling phase, the robot pushes the block up onto a toppling edge, which is perpendicular to the robot's movement, until the center of mass of the block is directly above the edge. During the settling phase, the block falls under gravity and lands on a new face before coming to rest.

As it is difficult to ensure that the block's center of mass is above the edge following the rolling phase, the prescribed motion is developed so as to have the robot maintain contact with the block throughout the settling phase, to the extent that this is possible.

Once the block has settled, the robot then proceeds to slide the block across the surface of the table until the block has come to a stop back at its initial pose. Figure 1.1 depicts the topple-slide task with a sequence of images.

5.2 Procedure

5.2.1 Learn the task trajectory

A human expert demonstrates the topple-slide trajectory (Figure 1.1) via kinesthetic teaching. The robot is fixed to the table so as not to introduce additional variance into the recorded data due to base motion. The demonstrating user performs the task in about 6 s. The motion is then manually tuned so that the reference trajectory succeeds at the topple-slide task for a variety of combinations of block mass, coefficient of friction, and compliance (see Section 5.1.4).

5.2.2 Record sensory dataset

The prescribed motion is repeated over a series of trials h ∈ [1 · · · nh] for each property set p ∈ P, yielding the complete raw sensory data set D. We use nh = 20 repeated trials. Before training our model, we pre-process the data as follows. The sensory data is resampled to 5 Hz after applying a 200 ms mean box filter.

We whiten each data set to support meaningful comparisons between sensors – shifting the data collected from each sensor to have zero mean and a variance of one – across all trials h ∈ [1 · · · nh] and property sets p ∈ P. In our experiments we use ns = 110 sensors across nt = 18 time samples.
This yields a complete input vector of size n = 1980 for each manipulation trial.

5.2.3 Feature selection

Following the equations in Section 3.2.1, we compute Γ for each element in x. We select Γmin so as to select 0.1 × n features.

Figure 5.1: Effect of varying degrees of PLS dimensionality reduction on mass estimation performance. A 20% reduction is achievable with trivial loss in estimation quality.

5.2.4 Partial least squares modeling

Following PLS, we obtain the β coefficients. We can make the representation more compact by further choosing only the β coefficients of largest magnitude. In practice, we are able to make a further reduction of around 40% without any significant impact on the prediction accuracy, as shown in Figure 5.1. The first row uses all features and no PLS reduction. The second row uses a reduced set of 5% selected features without PLS reduction. The third row uses 5% selected features, followed by 40% PLS reduction.

Figure 5.2: Online estimation of the mass from sensory data using varying degrees of dimensionality reduction. The estimated mass of the block is shown at various blue points throughout the motion. The dotted red horizontal line denotes the actual mass of the block.

5.2.5 Online prediction

We start by parsing the entire motion into a series of key time-phases, t∗. We define each t∗ ∈ T as a time-phase wherein at least K sensors have received a Γ larger than a certain value. In practice, we choose a minimum Γ so that K = 0.1 × ns. By training separate models in this fashion, we are able to make predictions as soon as the robot enters any phase of the motion involving selected features. Figures 5.2 and 5.3 demonstrate the prediction performance on-board the robot as it executes the task.

Figure 5.3: Online estimation of mass, friction and compliance from sensory data following Γ feature selection and PLS feature extraction.
Time-phases 1 through 3, 9, and 15 through 18 are ignored, since sensor readings during these time-phases do not provide any information with respect to distinguishing environment properties, i.e., their respective Γ values are below Γmin.

5.2.6 Calculating environment properties

In order for the robot to discover a mapping between sensor readings and environment properties, a unique numerical approximation of the underlying property must be calculated. See Figure 5.4 for a graphical overview of how the coefficient of friction and compliance are calculated, and Figure 5.5 for a photo of each of the environmental properties.

Mass  The mass of each unit is approximated using an off-the-shelf kitchen scale.

Friction coefficient  We first place the foam block atop a surface lined with the frictional material we are measuring. We then gradually incline the surface until the block begins to slide, and capture the inclination at the point of sliding as θ. We finally approximate the coefficient of static friction of the frictional material to be

    µ = tan(θ),

which we also assume to be a fair approximation to the coefficient of kinetic friction.

Compliance  As an estimate of the compliance of each type of foam, we set a rigid solid of known mass m atop a solid block of the foam we are measuring. The dimensions of the rigid solid and the foam solid are identical. We then measure the compressional displacement, d, of the top of the foam solid using standard calipers. Finally, we approximate the compliance of the foam to be

    c = d / (m · g),

where g is the acceleration due to gravity.

Figure 5.4: Calculating numerical representations of environment properties: (a) coefficient of friction, µ = tan(θ), and (b) compliance, c = d / (m · g).

5.3 Results

In what follows below, we comment on topple-slide experiments carried out with the robot hand, as well as with the spherical probe.
We also encourage the reader to watch the supplemental video associated with this thesis.

Γ selection helps focus attention on specific sensors and motion-phases that are particularly likely to provide information useful for predicting environment properties. Figure 5.6 illustrates the selected features for the topple-slide task as executed by the robot arm with either the Hand or the spherical probe as end effector (see Figure 5.7). Notice how clusters of x∗ can be interpreted as defining important sensory events in the task sequence, to which the robot should pay most attention. The yellow shaded region corresponds to the topple phase and the blue shaded region corresponds to the slide phase. The motion phases where the arm is not in contact with the object are identified as being unimportant, as are the phases that mark the beginning and end of both the topple and slide phases. In terms of sensors, the joint velocities, jvn, are generally unimportant, with the exception of joint 6. Joints 2, 4, and 6 provide task-relevant information in their sensed torques and positions.

Figure 5.5: Environment properties used in experiments: (a) eight units of 0.225 kg mass, which are used to vary the mass of the manipulated foam block from 0.425 kg to 2 kg; (b) six surface frictions (from left to right): paper, plastic, wood, fine sandpaper, coarse sandpaper, cloth; (c) three levels of compliance (from left to right): ethafoam, seafoam, greyfoam.

Similar results are also obtained with the full hand attached to the robot arm, in which case there are over a hundred sensors sampled across 18 time phases of the motion. With the hand in place, the key sensors are determined to be the task wrench ω, as measured by the force-torque sensor, the fingertip torques, f, and the fingertip tactile readings, a.

It may be noted that a simple contact/no-contact feature identifier might yield similar segmentations of the overall task in this case.
However, this would require an explicit model that extracts contact information from sensory inputs. Our method identifies key points in the motion without any manually-tuned sensory features.

To determine the impact of Γ feature selection, we compare mass predictions obtained using all features, i.e., no feature selection, with those obtained when Γ feature selection is used to select a subset of 5% of the original features. This is applied to the manipulation task as executed using the spherical probe. In both cases, a non-reduced partial least squares model is constructed and a leave-one-out cross-validation (LOOCV) test is considered for performance evaluation. As shown in Table 5.2, the result produced using the significantly reduced subset of input features is in most tests better than that obtained when using all the features, and accuracy improves to within 1 measurement unit (± 112.5 g) for all tests. Also, as can be seen in Table 5.3, applying up to 20% Γ feature selection to the incoming data streams enables real-time operation in terms of both data-transfer bandwidth and prediction runtime. Tradeoff calculations assume a 1 Mbit CAN bus, a 16 MHz dedicated processing speed and a real-time control loop frequency of 500 Hz. FLOPs are calculated using the standard inner-product vector multiplication complexity of 2n − 1 for each of the three property predictions.

If desired, a fixed subset of the largest computed partial least squares coefficients can be used for the final prediction, instead of the full set, β. In practice, a 40% reduction in the number of coefficients yields only a minimal reduction in the quality of the prediction.

To validate our choice of partial least squares (PLS), we compare the results against three other methods: principal component regression (PCR),
least squares regression (LSR), and naive Bayes classification (NBC). For LSR, we regularize the solution using ridge regression. For NBC, a new sensory stream is treated as input to a classification problem, and the classifier is constructed using naive Bayes, which assumes that all features in x are independent. Using the repeated trials for the given set of environment properties, a normal distribution is constructed for each element of x, and the likelihood of a new value of x belonging to the same class is simply modeled as the product of the individual element likelihoods. The environment properties of the most likely class are then returned as the prediction. All four methods are evaluated using LOOCV, and are applied to x∗, i.e., after Γ feature selection. The results for mass prediction show that PLS yields the best predictions, with respective mean errors for PLS, PCR, LSR and NBC of 33.3, 56.4, 84.9 and 282.6, and respective standard deviations of 4.2, 8.4, 14.6 and 83.2, as measured in grams.

Figure 5.6: Visualization of features selected according to the task variance ratio, Γ, of data collected using (a) the robot with spherical probe, for which the joint velocities are not selected due to high noise, and (b) the robot with BarrettHand, for which the joint torques, Cartesian wrench and Hand tactile sensors are selected as providing the most task-relevant information to the system. Colour is added to signify task-phases. Gaps between task-phases signify an absence of task-relevant information within the corresponding time-phases.

Figure 5.7: WAM robot with (a) spherical probe and (b) BarrettHand as end-effector.

                 Prediction RMSE (g)
    Mass (g)     PLS + Γ     PLS Only
    425          2.40        75.7
    650          13.9        86.9
    875          1.40        51.1
    1100         15.8        40.8
    1325         7.20        43.1
    1550         0.20        37.2
    1775         9.00        147
    2000         11.0        7.20

Table 5.2: Effect of 5% Γ feature selection on LOOCV block mass prediction root mean squared error (RMSE).

Figure 5.2 illustrates online mass prediction results.
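The NBC baseline reduces to fitting a per-class, per-feature Gaussian and scoring a new trial by the product of the element likelihoods, which is conventionally computed as a sum of log-likelihoods for numerical stability. A minimal sketch (our own illustration with hypothetical names, not the thesis code):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Per-feature Gaussian parameters for one class (one environment property set),
// estimated from the repeated trials for that class.
struct ClassModel {
    std::vector<double> mean;
    std::vector<double> var;
};

// Log-likelihood of feature vector x under a class's independent Gaussians.
// Summing logs is equivalent to taking the product of element likelihoods.
double logLikelihood(const std::vector<double>& x, const ClassModel& c) {
    double ll = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        const double diff = x[i] - c.mean[i];
        ll += -0.5 * std::log(2.0 * M_PI * c.var[i])
              - diff * diff / (2.0 * c.var[i]);
    }
    return ll;
}

// Return the index of the most likely class; the environment properties of
// that class are then reported as the prediction.
std::size_t classify(const std::vector<double>& x,
                     const std::vector<ClassModel>& classes) {
    std::size_t best = 0;
    for (std::size_t k = 1; k < classes.size(); ++k)
        if (logLikelihood(x, classes[k]) > logLikelihood(x, classes[best]))
            best = k;
    return best;
}
```

The independence assumption is what makes this tractable for high-dimensional x, but it is also why NBC trails the regression methods here: correlations between sensors are ignored.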
The robot is able to make predictions at any key time-phase, t∗, each characterized by a high Γ for many sensors. This is accomplished by building multiple models, each spanning the data from the start of the motion to some t ∈ t∗.

These results are obtained for the case of training on data for m = {425, 650, 875, 1100, 1550, 1775, 2000}, as measured in grams, which is then tested using sensory data obtained for m = 1335 g. The results show predictions being made using increasingly fewer selected features or reduced PLS dimensions, as noted in the caption. Also, the predictions improve as the motion progresses and more selected features are observed.

Our framework is also robust to feature noise. To demonstrate this, we run experiments in which we introduce large amounts of synthetic noise into the sensory data streams before feature selection and after data whitening (see Section 5.2.2). As shown in Table 5.4, LOOCV prediction RMSE increases smoothly as feature noise increases. Note that for even small amounts of additive noise (σ² ≈ 0.1), PLS fails to produce meaningful results in the absence of Γ feature selection.

    Γ data selection:        5%     10%    20%    40%    70%    100%
    # FLOPs (approx.):       32     65     131    263    461    659
    Runtime (ms):            0.25   0.5    1.0    2.1    3.6    5.1
    Bandwidth (bit):         352    704    1408   2816   4928   7040
    Maximum error (g):       69.5   112.7  96.3   111.4  107.9  188.3
    Real-time satisfied?     T      T      T      T/F    F      F
    Bandwidth satisfied?     T      T      T      F      F      F
    Accuracy satisfied?      T      T/F    T      T/F    T      F

Table 5.3: Bandwidth/runtime/accuracy tradeoff following different amounts of Γ selection. The optimal tradeoff is achieved when between 5% and 20% of the data is selected using Γ.

5.4 Discussion

Our prediction framework uses unlabeled sensory data streams, collected during a manipulation task, to make reliable real-time predictions about environment properties that cannot be visually observed, i.e., mass, friction, and compliance, given the existence of relevant training examples.
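The runtime figures in Table 5.3 follow from the fact that, once the β coefficients are fixed, each of the three property predictions is a single inner product over the selected features, costing 2n − 1 FLOPs for n features. A minimal sketch (our own illustration; the function name is hypothetical):

```cpp
#include <vector>
#include <cstddef>

// Predict one environment property from the selected feature vector xStar
// using fixed PLS regression coefficients beta (plus an optional intercept).
// Cost: n multiplies and n - 1 adds, i.e. 2n - 1 FLOPs per property.
double predictProperty(const std::vector<double>& xStar,
                       const std::vector<double>& beta,
                       double intercept = 0.0) {
    double y = intercept;
    for (std::size_t i = 0; i < xStar.size(); ++i)
        y += xStar[i] * beta[i];
    return y;
}
```

This is why aggressive Γ selection pays off twice: fewer features mean both less data to stream over the CAN bus and a proportionally cheaper inner product inside the 500 Hz control loop.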
Sensors or motion phases that are observed to be noisy are readily discounted by our method. The results show that the task variance ratio, Γ, provides a simple means for feature selection, identifying important sensors and motion-phases that support real-time predictions, and it furthermore improves the resulting partial least squares predictions.

    σ²     RMSE (g)
    0.0    9.4
    0.5    55.1
    1.0    160.7
    1.5    172.8
    2.0    255.5

Table 5.4: Effect of different levels of additive Gaussian noise N(0, σ²) on the sensory input data for LOOCV mass prediction RMSE (m = 1335 g). Accuracy degrades smoothly as sensor noise increases.

While predicting environment properties from labeled training data could be an obvious application of linear regression, this is in practice problematic because the training data for our scenario consists of a relatively small sample size (low hundreds) embedded in a high-dimensional space: x can contain thousands of sensory measurements. Furthermore, the large number of measurements required to make accurate predictions prohibits real-time operation.

Although PLS employs a dimension reduction technique by using a few latent factors, it cannot avoid the sample-size issue, since it has been proven that a reasonable sample size relative to the number of parameters is required to estimate sample covariances consistently [8]. Thus, PLS works best under the conditions of large sample sizes and/or small numbers of input variables.

When combined with Γ feature selection, our results show PLS to be superior to other regression algorithms that do not leverage input and output correlations in their calculations. Unlike PCR, PLS uses y (in addition to x) to construct its principal directions; thus, its solution path is a nonlinear function of y [23]. In addition to outperforming the other benchmark prediction methods for the task, PLS also provides a further opportunity for dimensionality reduction if desired.
Due to the model-free nature of our approach, the prediction framework works for virtually any combination of sensor modalities, including tactile pressure, Cartesian wrench and joint torque, which enables easy experimentation to determine the optimal tradeoff between sensor usage and prediction accuracy.

One limitation of our approach is that Γ can be misleading, such as in the case of noise-free features that also exhibit significant non-linearities with respect to the properties being predicted. Another drawback is that the learned predictive model remains specific to the prescribed motion used for the task and to the specific kinematics and dynamics of the robot and environment it was trained in. The current remedy is to incorporate further training data from which to build the model when changes to the robot or its motion take effect. In future work, we intend to examine how the predictive model can be transferred to new settings [52].

Our framework can also be leveraged in multiple ways in order to detect anomalous events. A rapid change in the predicted environment properties, such as object compliance, is a signal of an anomaly. Also, implicit in the computation of Var_enviro is a model of what value a sensory feature should have at a given point in the motion. This allows a motion anomaly to be signaled if a number of sensors each begin to report anomalous values at a given point in time, or a sensor anomaly to be signaled if a single sensor begins to consistently produce anomalous readings.

Chapter 6

Conclusions and Future Work

In this thesis, we present the challenge of predicting properties of real-world environments using high-dimensional haptic sensory data, from the perspectives of both biological and robotic systems.

6.1 Summary

We begin in Chapter 2 by presenting insights into the problem from the established robotics, neuroscience and physiology literature. Next, we define the prediction problem more formally in Chapter 3.
In Chapter 4 we provide an overview of the software framework used to control the physical WAM/Hand system and to collect data in support of the experiments. We then introduce, in Chapter 5, a model-free approach to the prediction of example environment properties – namely object mass, friction, and compliance – during the course of a non-prehensile topple-slide manipulation task.

Given appropriate data from example manipulations with known environment properties, the method presented in this thesis extracts information from unlabeled sensory data collected over the course of a new manipulation. We demonstrate that our novel metric, the Task Variance Ratio (TVR), identifies important features, sensors and motion-phases. Using the TVR metric combined with the PLS regression method, we obtain accurate predictions in real time using only 3% of the sensory input data from the robot.

6.2 Limitations and Future Directions

One significant limitation of the predictive framework is the need for a prescribed motion that can already succeed at the task despite variations in the environment properties that we seek to predict.

An important direction for future work will be to investigate the tight integration of prediction and adaptation into the framework. With knowledge (learned or provided) of how to adapt the topple-slide task for heavier or more compliant blocks, the framework could readily be used to enlarge the range of variations that can be coped with.

Surprise-and-adapt

We have devised a preliminary model-based motion adaptation approach to succeed in environments where the prescribed motion fails.
Under the assumption that the robot has access to tactile pressure and/or fingertip torque sensors, we modify the orientation of the robot's wrist about the axis parallel to the manipulated block, so that the pressure readings at the fingertips track an appropriate profile provided by an expert.

To deal with sensor noise, we consider a history of sensor readings of size Kr. If sensor readings fall below a threshold for at least Kr timesteps, the orientation of the wrist increases, thus applying more pressure to the block. Similarly, if the sensors consistently read above a threshold, the orientation of the wrist decreases, releasing pressure. See Appendix A for pseudocode of this operation.

This scheme is successful for the topple-slide task when particularly small perturbations in the environment are experienced, such as a relatively small (within two units) change in object mass, but it fails with large perturbations.

An interesting future direction is to devise a model-free approach in which the robot discovers for itself that a lack of pressure at its fingertips means that its wrist orientation should increase (in addition to other relevant mappings between sensor readings and adaptive motions). This knowledge would have to come from the data and would most likely require additional sensors and/or some form of supervision, e.g., from a human or a camera.

Time-synchronized motions

A related limitation is that our current sensory features are all time-indexed, i.e., there is an assumption that the current phase of the motion is tightly coupled to the current time. In future work, we would like to couple the phase estimate more tightly to the actual motion via available sensory observations.

Task-parameter generalization

A last key limitation is that, because of the model-free nature of the current approach, the prediction procedures do not generalize well to changes in the task kinematics or dynamics.
We aim to develop parameterized versions of the predictive model in order to allow for such generalization.

Leveraging physics-based simulation

We are also interested in exploring how simulations with only qualitative accuracy might be used to identify suitable sensors and motion phases in advance. This could be used to inform the types of sensors and their placement, as well as to provide insight with respect to the relevant time-steps at which to record data. Initial results in this direction are promising and thus point to a new use for simulations that are not necessarily tightly calibrated to the true kinematics and dynamics of the plant.

Bibliography

[1] Charu C Aggarwal. A framework for diagnosing changes in evolving data streams. In Management of Data, Proceedings of the 2003 ACM SIGMOD International Conference on, pages 575–586, 2003.

[2] Anelia Angelova, Larry Matthies, Daniel M Helmick, and Pietro Perona. Dimensionality reduction using automatic supervision for vision-based terrain learning. In Proc. Robotics: Science and Systems, 2007.

[3] Eric Bair, Trevor Hastie, Debashis Paul, and Robert Tibshirani. Prediction by supervised principal components. Journal of the American Statistical Association, 101(473), 2006.

[4] Antonio Bicchi, Davide Dente, and Enzo Pasquale Scilingo. Haptic illusions induced by tactile flow. In EuroHaptics, Proceedings of, pages 314–329, 2003.

[5] MKO Burstedt, Benoni B Edin, and Roland S Johansson. Coordination of fingertip forces during human manipulation can emerge from independent neural networks controlling each engaged digit. Experimental Brain Research, 117(1):67–79, 1997.

[6] Varun Chandola, Arindam Banerjee, and Vipin Kumar. Anomaly detection: A survey. ACM Computing Surveys (CSUR), 41(3):15:1–15:58, July 2009.

[7] Eris Chinellato. Visual neuroscience of robotic grasping. PhD thesis, Universitat Jaume I, 2008.

[8] Hyonho Chun and Sündüz Keleş.
Sparse partial least squares regression for simultaneous dimension reduction and variable selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(1):3–25, 2010.

[9] Mark R Cutkosky. On grasp choice, grasp models, and the design of hands for manufacturing tasks. Robotics and Automation, IEEE Transactions on, 5(3):269–279, 1989.

[10] Mark R Cutkosky and Robert D Howe. Human grasp choice and robotic grasp analysis. In Dextrous Robot Hands, pages 5–31. Springer, 1990.

[11] Mark R. Cutkosky, Robert D. Howe, and William R. Provancher. Force and tactile sensors. In Bruno Siciliano and Oussama Khatib, editors, Springer Handbook of Robotics, pages 455–476. Springer Berlin Heidelberg, 2008.

[12] Hendrik Dahlkamp, Adrian Kaehler, David Stavens, Sebastian Thrun, and Gary R Bradski. Self-supervised monocular road detection in desert terrain. In Proc. Robotics: Science and Systems, volume 38, 2006.

[13] Hao Dang and Peter K Allen. Tactile experience-based robotic grasping. In Workshop on Advances in Tactile Sensing and Touch based Human-Robot Interaction (HRI), 2012.

[14] Hao Dang, Jonathan Weisz, and Peter K Allen. Blind grasping: Stable robotic grasping using tactile feedback and hand kinematics. In Robotics and Automation (ICRA), Proceedings of the 2011 IEEE International Conference on, pages 5917–5922, 2011.

[15] Tamraparni Dasu, Shankar Krishnan, Suresh Venkatasubramanian, and Ke Yi. An information-theoretic approach to detecting changes in multi-dimensional data streams. In Interface of Statistics, Computing Science, and Applications, Proceedings of the Symposium on, 2006.

[16] Peter Dayan and Nathaniel D Daw. Decision theory, reinforcement learning, and the brain. Cognitive, Affective, & Behavioral Neuroscience, 8(4):429–453, 2008.

[17] R. Detry, D. Kraft, O. Kroemer, L. Bodenhagen, J. Peters, N. Krüger, and J. Piater. Learning grasp affordance densities.
Paladyn, Journal of Behavioral Robotics, 2(1):1–17, 2011.

[18] John M Elliott and KJ Connolly. A classification of manipulative hand movements. Developmental Medicine & Child Neurology, 26(3):283–296, 1984.

[19] Ronald S Fearing. Tactile sensing for shape interpretation. In Dextrous Robot Hands, pages 209–238. Springer, 1990.

[20] Javier Felip, Jose Bernabé, and Antonio Morales. Emptying the box using blind haptic manipulation primitives. In Intelligent Robots and Systems (IROS), Proceedings of the 2011 IEEE/RSJ International Conference on, 2011.

[21] J Randall Flanagan, Miles C Bowman, and Roland S Johansson. Control strategies in object manipulation tasks. Current Opinion in Neurobiology, 16(6):650–659, 2006.

[22] J Randall Flanagan and Roland S Johansson. Action plans used in action observation. Nature, 424(6950):769–771, 2003.

[23] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. The Elements of Statistical Learning, volume 1. Springer Series in Statistics, 2001.

[24] Zoubin Ghahramani, Geoffrey E Hinton, et al. The EM algorithm for mixtures of factor analyzers. Technical Report CRG-TR-96-1, University of Toronto, 1996.

[25] Corey Goldfeder, Matei Ciocarlie, Hao Dang, and Peter K Allen. The Columbia grasp database. In Robotics and Automation (ICRA), Proceedings of the 2009 IEEE International Conference on, pages 1710–1716, 2009.

[26] Stacey L Gorniak, Vladimir M Zatsiorsky, and Mark L Latash. Manipulation of a fragile object. Experimental Brain Research, 202(2):413–430, 2010.

[27] G. Heidemann and M. Schopfer. Dynamic tactile sensing for object identification. In Robotics and Automation (ICRA), Proceedings of the 2004 IEEE International Conference on, volume 1, pages 813–818, April 2004.

[28] Albert Hein and Thomas Kirste. Unsupervised detection of motion primitives in very high dimensional sensor data. In Proceedings of the 5th Workshop on Behaviour Monitoring and Interpretation, BMI, 2010.

[29] JJ Hopfield.
Pattern recognition computation using action potential timing for stimulus representation. Nature, 376(6535):33–36, 1995.

[30] Rachel Hornung, Holger Urbanek, Julian Klodmann, Christian Osendorfer, and Patrick van der Smagt. Model-free robot anomaly detection. In Intelligent Robots and Systems (IROS), Proceedings of the 2014 IEEE/RSJ International Conference on, pages 3676–3683, 2014.

[31] Robert D. Howe. Tactile sensing and control of robotic manipulation. Advanced Robotics, 8(3):245–261, 1993.

[32] Cheng-Lung Huang and Chieh-Jen Wang. A GA-based feature selection and parameters optimization for support vector machines. Expert Systems with Applications, 31(2):231–240, 2006.

[33] Roland S Johansson and Kelly J Cole. Grasp stability during manipulative actions. Physiology and Pharmacology, Canadian Journal of, 72(5):511–524, 1994.

[34] Roland S Johansson and J Randall Flanagan. Coding and use of tactile signals from the fingertips in object manipulation tasks. Nature Reviews Neuroscience, 10(5):345–359, 2009.

[35] Roland S Johansson, Göran Westling, Anders Bäckström, and J Randall Flanagan. Eye-hand coordination in object manipulation. Neuroscience, 21(17):6917–6932, 2001.

[36] RS Johansson and G Westling. Roles of glabrous skin receptors and sensory memory in automatic control of precision grip when lifting rougher or more slippery objects. Experimental Brain Research, 56(3):550–564, 1984.

[37] Makoto Kaneko, M Wada, H Maekawa, and K Tanie. A new consideration on tendon-tension control system of robot hands. In Robotics and Automation (ICRA), Proceedings of the 1991 IEEE International Conference on, pages 1028–1033, 1991.

[38] Daniel Kifer, Shai Ben-David, and Johannes Gehrke. Detecting change in data streams. In Very Large Data Bases, Proceedings of the 30th International Conference on, volume 30, pages 180–191, 2004.

[39] Teuvo Kohonen. The self-organizing map. Proceedings of the IEEE, 78(9):1464–1480, 1990.

[40] Mark H Lee and Howard R Nicholls.
Tactile sensing for mechatronicsastate of the art survey. Mechatronics, 9(1):1–31, 1999.[41] Fabio Leoni, Massimo Guerrini, Cecilia Laschi, Davide Taddeucci,Paolo Dario, and Antonina Starita. Implementing robotic graspingtasks using a biological approach. In Robotics and Automation (ICRA),Proceedings of the 1998 IEEE International Conference on, volume 3,pages 2274–2280, 1998.[42] Nathan F Lepora, Uriel Martinez-Hernandez, Hector Barron-Gonzalez,Mat Evans, Giorgio Metta, and Tony J Prescott. Embodied hyperacu-ity from bayesian perception: Shape and position discrimination withan icub fingertip sensor. In Intelligent Robots and Systems (IROS),Proceedings of the 2012 IEEE/RSJ International Conference on, pages4638–4643, 2012.[43] David Lieb, Andrew Lookingbill, and Sebastian Thrun. Adaptive roadfollowing using self-supervised learning and reverse optical flow. InProc. Robotics: Science and Systems, pages 273–280, 2005.61Bibliography[44] K.M. Lynch. Toppling manipulation. In Robotics and Automation(ICRA), Proceedings of the 1999 IEEE International Conference on,volume 4, pages 2551 –2557 vol.4, 1999.[45] Andrew T Miller and Peter K Allen. Graspit! a versatile simulator forrobotic grasping. Robotics & Automation Magazine, IEEE, 11(4):110–122, 2004.[46] Pabitra Mitra, CA Murthy, and Sankar K. Pal. Unsupervised featureselection using feature similarity. IEEE transactions on pattern analysisand machine intelligence, 24(3):301–312, 2002.[47] Chellappa Muthukrishnan, David Smith, Donald Myers, Jack Rebman,and Antti Koivo. Edge detection in tactile images. In Robotics andAutomation (ICRA), Proceedings of the 1987 IEEE International Con-ference on, volume 4, pages 1500–1505, 1987.[48] Yoshihiko Nakamura, Kiyoshi Nagai, and Tsuneo Yoshikawa. Mechan-ics of coordinative manipulation by multiple robotic mechanisms. InRobotics and Automation (ICRA), Proceedings of the 1987 IEEE In-ternational Conference on, volume 4, pages 991–998, 1987.[49] John R Napier. 
The prehensile movements of the human hand. Journalof bone and Joint surgery, 38(4):902–913, 1956.[50] Kenneth J Overton and Thomas Williams. Tactile sensation for robots.In Artificial Intelligence (IJCAI), International Joint Conferences on,pages 791–795, 1981.[51] Lucia Pais, Keisuke Umezawa, Yoshihiko Nakamura, and Aude Billard.Learning robot skills through motion segmentation and constraints ex-traction. In HRI Workshop on Collaborative Manipulation, 2013.[52] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. Knowl-edge and Data Engineering, IEEE Transactions on, 22(10):1345–1359,2010.62Bibliography[53] Sivalogeswaran Ratnasingam and T Martin McGinnity. A spiking neu-ral network for tactile form based object recognition. In Neural Net-works (IJCNN), The 2011 International Joint Conference on, pages880–885, 2011.[54] Sivalogeswaran Ratnasingam and TM McGinnity. Object recognitionbased on tactile form perception. In Robotic Intelligence In Informa-tionally Structured Space (RiiSS), 2011 IEEE Workshop on, pages 26–31, 2011.[55] ND Ring and DB Welbourn. Paper 8: A self-adaptive gripping device:Its design and performance. In Institution of Mechanical Engineers,Proceedings of the, volume 183, pages 45–49, 1968.[56] Joseph M Romano, Kaijen Hsiao, Gu¨nter Niemeyer, Sachin Chitta,and Katherine J Kuchenbecker. Human-inspired robotic grasp controlwith tactile sensing. Robotics, IEEE Transactions on, 27(6):1067–1079,2011.[57] Lawrence K Saul and Sam T Roweis. Think globally, fit locally: unsu-pervised learning of low dimensional manifolds. The Journal of MachineLearning Research, 4:119–155, 2003.[58] N Sawasaki, M Inaba, and H Inoue. Tumbling objects using a multi-fingered robot. In Proceedings of the 20th International Symposium onIndustrial Robots and Robot Exhibition, pages 609–616, 1989.[59] ClaudioGentili Scilingo, Lorenzo Sani, Vincenzo Positano, Filomena MSantarelli, Mario Guazzelli, James V Haxby, Luigi Landini, AntonioBicchi, and Pietro Pietrini. 
Perception of visual and tactile flow acti-vates common cortical areas in the human brain. In EuroHaptics 2004,Proceedings of, pages 290–292, 2004.[60] David M Siegel. Finding the pose of an object in a hand. In Roboticsand Automation (ICRA), Proceedings of the 1991 IEEE InternationalConference on, pages 406–411, 1991.63Bibliography[61] Pavan Sikka, Hong Zhang, and Steve Sutphen. Tactile servo: Controlof touch-driven robot motion. In Experimental Robotics III, pages 219–233. Springer, 1994.[62] Boris Sofman, Ellie Lin, J Andrew Bagnell, John Cole, Nicolas Van-dapel, and Anthony Stentz. Improving robot navigation through self-supervised online learning. Journal of Field Robotics, 23(11-12):1059–1075, 2006.[63] Xiuyao Song, Mingxi Wu, Christopher Jermaine, and Sanjay Ranka.Statistical change detection for multi-dimensional data. In KnowledgeDiscovery and Data Mining, Proceedings of the 13th ACM SIGKDDInternational Conference on, pages 667–676, 2007.[64] J.C. Spall and John A. Cristion. Model-free control of nonlinear stochas-tic systems with discrete-time measurements. Automatic Control, IEEETransactions on, 43(9):1198–1210, 1998.[65] Jan Steffen, Robert Haschke, and Helge Ritter. Experience-based andtactile-driven dynamic grasp control. In Intelligent Robots and Systems(IROS), Proceedings of the 2007 IEEE/RSJ International Conferenceon, pages 2938–2943, 2007.[66] Barrett Technology. Libbarrett., 2011. Accessed: 2014-09-30.[67] Barrett Technology. Wam internal pc/104 configuration., 2012. Accessed: 2014-09-30.[68] Johan Tegin and Jan Wikander. Tactile sensing in intelligent roboticmanipulation–a review. Industrial Robot, 32(1):64–70, 2005.[69] Joshua B Tenenbaum, Vin De Silva, and John C Langford. A globalgeometric framework for nonlinear dimensionality reduction. Science,290(5500):2319–2323, 2000.64[70] Randall D Tobias et al. An introduction to partial least squares regres-sion. 
In SAS Users Group International (SUGI), Proceedings of the20th, pages 2–5, Orlando, FL, USA, 1995.[71] William Townsend. The barretthand grasper–programmably flexiblepart handling and assembly. Industrial Robot: An International Jour-nal, 27(3):181–188, 2000.[72] William T Townsend and J Kenneth Salisbury. Mechanical design forwhole-arm manipulation. In Robots and Biological Systems: Towards aNew Bionics?, pages 153–164. Springer, 1993.[73] Marc R Tremblay and Mark R Cutkosky. Estimating friction usingincipient slip sensing during a manipulation task. In Robotics and Au-tomation (ICRA), Proceedings of the 1993 IEEE International Confer-ence on, pages 429–434, 1993.[74] A˚ B Vallbo, RS Johansson, et al. Properties of cutaneous mechanore-ceptors in the human hand related to touch sensation. Human Neuro-biology, 3(1):3–14, 1984.[75] Kenji Yamanishi and Jun-ichi Takeuchi. A unifying framework for de-tecting outliers and change points from non-stationary time series data.In Knowledge Discovery and Data Mining, Proceedings of the 8th ACMSIGKDD International Conference on, pages 676–681, 2002.[76] Yuan-Fei Zhang and Hong Liu. Tactile sensor based varying contactpoint manipulation strategy for dexterous robot hand manipulatingunknown objects. In Intelligent Robots and Systems (IROS), 2012IEEE/RSJ International Conference on, pages 4756 –4761, Oct 2012.[77] Dmitrii Nikolaevich Zubarev, P Gray, and PJ Shepherd. Nonequilibriumstatistical thermodynamics. 
Appendix A

Surprise-and-adapt Pseudocode

void operate() // repeat at 500Hz
{
    // Consider a history of thresholded differences.
    if ( sensor_value - expected_value > threshold_p )
        history.append_front ( 1 )
    else if ( sensor_value - expected_value < threshold_n )
        history.append_front ( -1 )
    else
        history.append_front ( 0 )

    // 'Surprise' iff at least K contiguous unexpected readings.
    if ( sum ( history [ 0 : K ] ) == K )   // too much pressure
        decrease_wrist_angle()
    if ( sum ( history [ 0 : K ] ) == -K )  // too little pressure
        increase_wrist_angle()
}

Program 2: Realtime operate method pseudocode for the surprise-and-adapt realtime system module. See Section 4.5 for general details on Libbarrett realtime systems development.
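The thresholded-history logic of Program 2 can be sketched as runnable Python. This is a minimal illustration only, not the Libbarrett implementation: the class name, threshold values, and wrist-angle step size below are all hypothetical, and the real controller runs inside a 500 Hz realtime loop rather than a plain method call.

```python
from collections import deque


class SurpriseAndAdapt:
    """Hypothetical sketch of the surprise-and-adapt heuristic:
    threshold the prediction error each tick, and adapt the wrist
    angle after K contiguous unexpected readings of the same sign."""

    def __init__(self, threshold_p, threshold_n, k=5, step=0.01):
        self.threshold_p = threshold_p  # upper surprise threshold
        self.threshold_n = threshold_n  # lower surprise threshold
        self.k = k                      # contiguous surprises required
        self.step = step                # illustrative wrist-angle step
        # Keep only the K most recent thresholded differences.
        self.history = deque(maxlen=k)
        self.wrist_angle = 0.0

    def operate(self, sensor_value, expected_value):
        """One control tick (called at 500 Hz in the pseudocode)."""
        diff = sensor_value - expected_value
        if diff > self.threshold_p:
            self.history.appendleft(1)
        elif diff < self.threshold_n:
            self.history.appendleft(-1)
        else:
            self.history.appendleft(0)

        # 'Surprise' iff all K entries agree in sign.
        if len(self.history) == self.k:
            total = sum(self.history)
            if total == self.k:       # too much pressure
                self.wrist_angle -= self.step
            elif total == -self.k:    # too little pressure
                self.wrist_angle += self.step
```

A bounded `deque` stands in for the pseudocode's `history[0:K]` window: because `maxlen=k`, summing the whole deque is equivalent to summing the K most recent entries, and any zero or sign change in the window automatically breaks the contiguous run.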

