UBC Theses and Dissertations
A robotic workstation vision-based safety system for persons with physical disabilities Visser, Mitchell Dean 1996



A ROBOTIC WORKSTATION VISION-BASED SAFETY SYSTEM FOR PERSONS WITH PHYSICAL DISABILITIES

by

Mitchell Dean Visser
B.Sc., University of Calgary, 1993

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS OF THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE STUDIES, Department of Mechanical Engineering

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
September, 1996

© Mitchell Dean Visser, 1996

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

ABSTRACT

The overall goal of this research project was to investigate and develop a vision-based safety system that would either solely, or in combination with another system, provide an acceptable level of safety for a user with a severe physical disability while operating a rehabilitative robotic workstation developed by the Neil Squire Foundation. A system was developed that uses a single camera to track the user in a horizontal plane and uses feedback from the robot controller to calculate the position of the robot using kinematic equations. The system is controlled by a computer that can communicate with the robot controller and stop the robot if it detects a safety zone violation. Prior to the development of a vision-based safety system, a safety analysis was performed considering general rehabilitative robotic equipment and a user with a severe physical disability.
This analysis revealed injury mechanisms such as collisions, pinning, and pinching, and identified levels of injury ranging from life-threatening injuries to undesired contact with the robot. The safety analysis was then specifically targeted to the Neil Squire Foundation robot and used to determine the performance requirements of a variety of safety systems, including a vision-based safety system. Testing of the vision-based safety system on the robotic workstation with potential users showed that the system fulfilled all the project specifications, including preventing all unintentional contact between the user and the robot.

TABLE OF CONTENTS

ABSTRACT ii
LIST OF TABLES v
LIST OF FIGURES vi
ACKNOWLEDGEMENTS viii
DEDICATION ix
INTRODUCTION 1
  1.1 Project Objectives 4
CURRENT TECHNOLOGIES AND SAFETY POLICIES 5
  2.1 Vision 5
    2.1.1 Background 5
    2.1.2 Camera and Marker Systems 11
  2.2 Current Safety Standards and Policies 13
    2.2.1 Government Standards 14
    2.2.2 Current Safety Research 17
  2.3 Other Safety Systems for the Assistive Robot 19
SPECIFICATION OF A SAFETY SYSTEM 21
  3.1 The Robot 21
  3.2 The User 27
  3.3 Safety Analysis 29
    3.3.1 Acceptable Safety Limits 33
  3.4 Summary of System Specifications 34
A VISION-BASED SAFETY SYSTEM 35
  4.1 Robot Tracking 36
    4.1.1 Robot Kinematics 37
  4.2 User Tracking 43
    4.2.1 Image Analysis 43
  4.3 System Software 45
  4.4 System Calibration 51
  4.5 System Improvements 53
TESTING AND RESULTS 56
  5.1 System Verification 57
    5.1.1 System Tracking Accuracy 57
    5.1.2 Sampling Difference and Safety Zone Violations 59
    5.1.3 Confirming the Safety System 63
  5.2 User Tests 65
    5.2.1 Subject Descriptions 66
    5.2.2 User Trial Results 67
CONCLUSIONS AND RECOMMENDATIONS 70
  6.1 Meeting the Project Specifications 71
  6.2 Comparison of the Light Curtain and Vision System 73
  6.3 Recommendations for Future Work 75
REFERENCES 77
APPENDIX A - ROBOT TECHNICAL SPECIFICATIONS 81
APPENDIX B - EQUIPMENT USED IN SAFETY SYSTEM 82
APPENDIX C - PARAMETER CURVES FOR NEIL SQUIRE ROBOT 83
APPENDIX D - USER QUESTIONNAIRE 84
APPENDIX E - VISION SYSTEM COSTS 96
APPENDIX F - COMPUTER CODE 97

LIST OF TABLES

Table 2.1.1-1: Current Vision Systems Being Researched 6
Table 2.1.1-2: Comparison of Two- and Three-Dimensional Tracking Systems 10
Table 2.1.1-2: Camera / Marker Systems 12
Table 3.3-1: Potential Safety Systems and their Protection Level 32
Table 4.1.1-1: Denavit-Hartenberg Parameters 38
Table 4.5-1: System Improvement Summary 55
Table 5.1.1-1: Accuracy of the Calculated Position of Robot 58
Table 5.1.1-2: Accuracy of the Camera 59
Table 5.1.3-1: Safety System Results 64
Table 5.2.2: Summary of User Responses in Questionnaire 67
Table A: Robot Technical Specification 81

LIST OF FIGURES

Figure 2.1.1-1: Typical Three-Dimensional Post-Processed Tracking System 8
Figure 3.1-1: Robotic Workstation 22
Figure 3.1-2: Close-up of the Robot Gripper with Mouse Emulator 23
Figure 3.1-3: Schematic Diagram of Robot 24
Figure 3.1-4: Control Schematic for Robot 25
Figure 3.1-5: Points of Concern 26
Figure 3.3-1a: Injury Mechanisms and Injury Rankings 30
Figure 3.3-1b: Injury Mechanisms and Injury Rankings 30
Figure 3.3-1c: Injury Mechanisms and Injury Rankings 31
Figure 4-1: Schematic of the Vision-Based Safety System 36
Figure 4.1.1-1: Denavit-Hartenberg Coordinate Convention for the Robot 39
Figure 4.2.1-1: Information Reduction for Image Analysis 45
Figure 4.3-1: Programming Flow Chart for the Modified Software 47
Figure 4.3-2: Flow Chart of Tracking Subroutine 48
Figure 4.3-3: Computer / Robot Controller Interface 51
Figure 4.4-1: Robot Calibration Curves 52
Figure 4.4-2: Camera Calibration Curves 52
Figure 4.5-1: Gripper Endpoint Sampling Difference 54
Figure 5.1.2-1: Path Plot of Gripper Motion 60
Figure 5.1.2-2: Sampling Differences 61
Figure 5.1.2-3: Time and Distance Plot with Safety Zone 62
Figure 5.1.2-4: Closing Distance and Safety Violations 63
Figure C-1: Robot Calibration Curves 83
Figure C-2: Camera Calibration Curves 83

ACKNOWLEDGEMENTS

There have been many people involved in the development of the vision-based safety system, and I would like to take this opportunity to thank those people and organizations that contributed to the success of this project. First and foremost, I would like to thank the Neil Squire Foundation and the BC Science Council, who provided funding for the project, thus making the research possible. Funding is not where the Neil Squire Foundation's help ended; they also answered my questions, offered advice, and helped devise a practical solution to the problem. Members of the research arm of Neil Squire, in particular Dr. Gary Birch (Co-Supervisor) and Markus Fengler, were also instrumental team members in developing a means of assessing the safety concerns involved in dealing with individuals with physical disabilities.

I would also like to thank my supervisor, Dr. Douglas P. Romilly, who provided financial assistance and the necessary mechanical engineering expertise to make this research a success. This project also involved a large amount of programming. I can't thank Gerry Rohling enough for the much-needed help and guidance in the development of the Sentinel software. Without his help I have no idea how long this project would have taken to complete. I would also like to thank the Center for Integrated Computer System Research for providing the laboratory and office space required for this research to be completed.

DEDICATION

"All I really need to know about how to live and what to do and how to be I learned in kindergarten. Wisdom is not at the top of the graduate-school mountain, but there in the sandpile at Sunday School."
- Robert Fulghum

I would like to dedicate this thesis to my parents, who ensured I learned the lessons at the Sunday School sandpile so that I could climb the graduate-school mountain.
INTRODUCTION

Every year, 50 people in British Columbia survive accidents that result in injuries causing paralysis from the neck down [1]. These injuries result in the individuals losing the use of their arms and legs. These accident victims join a growing population of people classified as persons with disabilities who have only their voices, minds, and head motion to assist them in making a meaningful life for themselves. It is statistics like this that drove the development of the Neil Squire Foundation, and more specifically that caused this particular research project to be undertaken.

The Neil Squire Foundation is a Canadian non-profit organization responsive to the needs of individuals who have severe physical disabilities. Their purpose is to create opportunities for greater independence in all aspects of life. One of the Foundation's research projects is aimed at getting individuals with below-neck paralysis back to work. The rationale behind this project is that many persons with physical disabilities, although their bodies no longer function as they were designed to do, still have functioning minds and are capable of doing productive work if a means to overcome their disability can be developed. To this end, the Neil Squire Foundation has developed a desktop-mounted robotic workstation that provides enough environmental manipulation (e.g., holding a coffee cup, turning pages, putting a disk in a computer) to make it feasible that an individual could perform a productive day of work. (Details about this unique vocational robot can be found in Chapter 3.)

The unique application of this robot provides the user with the necessary independence, but in doing so it causes another problem: a concern about the safety of the user while operating the robot. The robot, in order to perform useful work, must be able to generate sufficient force to manipulate objects in the workspace.
This required strength means that the robot also has enough energy to inflict injury on the user. Combine this with the uniqueness of the users, which requires them to be in the robot workspace and prevents them from easily moving out of harm's way, and the potential for user injury becomes a valid concern. It is this potential for injury that makes implementation of this robot in a work environment a difficult task.

The individuals with physical disabilities who have used this robot during its development and testing phases have been pleased with the increased independence that the robot provides and have determined that their independence is worth the safety risk involved in using the robot [2]. Unfortunately, the decision to use the robot in a work environment is not completely the user's decision. When this robot moves out of the laboratory and into the workplace, it comes under a whole other series of rules and guidelines that are in place to protect both individuals from workplace injuries and companies from lawsuits associated with products used at work. When the robotic workstation is used in a vocational setting, four different groups of people become involved in the robot's safety performance:

• the user
• the employer
• the robotic workstation designers (also manufacturers and distributors)
• the government

Each of these parties views the safety situation from a slightly different perspective. The user, as mentioned previously, enjoys the opportunities that the robot provides and is willing to accept a reasonable risk of injury. The employer is worried about liability related to allowing a potentially dangerous piece of equipment to be used by an employee. Employers also have to comply with government regulatory bodies such as the Workers' Compensation Board (WCB) and the Canadian Standards Association (CSA). Products with the potential to cause injuries to an operator are not new in the workplace.
There are many examples where workers, such as forklift operators, loggers, and welders, are placed at risk when operating equipment during work. What is unique about the robotic workstation is that, unlike most workstations where the operator is able-bodied, in this situation the user is severely disabled. This is a unique situation for employers and the government, and few standards have been developed which address it.

The Neil Squire Foundation realized that to make the robotic workstation acceptable to potential employers and satisfy the government's concerns, they would have to develop a safety system for the robot. As a result, the Neil Squire Foundation undertook a project to investigate a variety of potential safety technologies ranging from torque sensors to light curtains. As a contribution to this effort, the University of British Columbia proposed the research and development of a vision-based safety system to be used with the robot. A vision-based system has the advantage of being a "non-contact" safety system, in contrast to a torque sensor, which requires contact with the user before being activated. The vision system was also perceived as being more "flexible" (e.g., more adjustable for different users, workstation setups, and safety zone sizes) than a light curtain, which provides a single barrier between the robot and user, preventing unintentional contact. Thus, the goal of this research project was to investigate and develop a vision-based safety system that would either solely, or in combination with another system, provide an acceptable level of safety for the user.

1.1 Project Objectives

The objectives of this project can be summarized in the following steps:

1. Conduct a review of the literature to document the existing safety requirements for rehabilitative robots and the level of existing vision tracking technology (Chapter 2).
2. Define the specific requirements of the safety system for the Neil Squire Foundation rehabilitation robot. This will involve examining and becoming familiar with the robot, identifying the potential users, and conducting a safety analysis on the robotic workstation (Chapter 3).
3. Select and/or develop the necessary hardware and software to construct a vision-based safety system capable of performing the specified requirements (Chapter 4).
4. Perform laboratory and clinical testing of the system utilizing potential users to validate the performance requirements of the system (Chapter 5).
5. Evaluate the results of the testing in comparison to other developed alternatives and provide recommendations for safety system selection and further development (Chapter 6).

A review of the existing technology related to vision technology and applicable robot safety standards is provided in the next chapter.

CURRENT TECHNOLOGIES AND SAFETY POLICIES

2.1 Vision

Vision is a very large field of study and is growing constantly. Research in the area of vision ranges from image analysis and processing to improvements in image quality and camera development. The following two sections will focus on the small portion of vision technology that relates to tracking an object using vision. These sections will cover the necessary background information required to understand the use of vision in the safety system and will mention some of the other research that is going on in this area.

2.1.1 Background

Vision tracking technology is not a new field of study. The ability to use successive images to provide a history of an object's motion has been around for over a hundred years. As early as 1876, a camera was used to take pictures of the movement of the planet Venus as it crossed in front of the Sun [3]. In 1882, a camera was used to track the motion of a horse as it jumped over an obstacle [3].
These early studies used successive still images that were analyzed after the film was developed. Although vision is not a new field of study, it is a developing and growing field. An ever increasing number of areas use vision as a research tool and are undertaking research and development of vision technology. Examples of this increased use of vision can be seen in areas such as automobile tracking, manufacturing, robotics, welding, measurement, digitizing, and biomechanics [3, 4, 5, 6, 7, 8, 9]. Present technology no longer relies on developed film, but rather utilizes captured video images and high-speed computer analysis techniques to extract the desired information.

A literature review on vision and object tracking indicates that there are four major areas of active research in this field. These four systems can be seen by examining the matrix in Table 2.1.1-1. While vision is actively being researched to improve the existing level of technology, it is also being used as a tool in research and as a sensor in a variety of systems. This research project uses vision as part of the system.

                     Post-Processed    Pseudo-Real-Time*
  Two Dimensions          A                   C
  Three Dimensions        B                   D

Table 2.1.1-1: Current Vision Systems Being Researched

* Pseudo-Real-Time - In this project, "real-time" (or "pseudo-real-time") refers to a sampling and processing rate that is sufficient to gather the necessary information on the movement of the object being tracked. Real-time for tracking the movement of a mountain is a sample point every century, whereas tracking a bullet fired from a gun would require a sampling rate in excess of 500 Hz. Chapter 3 defines the required real-time sampling rate for this project.

While a post-processed system is not feasible for a vision-based safety system, most real-time vision tracking systems have been developed using a post-processed system as a starting point.
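The footnote's notion of an adequate "real-time" rate can be expressed as a one-line calculation: if the fastest object in the scene moves at speed v, and the system must never let it travel more than a distance d between samples, the sampling rate must be at least v / d. A minimal sketch in Python, with illustrative numbers that are not the specifications derived in Chapter 3:

```python
def min_sampling_rate(max_speed_m_per_s, allowed_travel_m):
    """Minimum sampling rate (Hz) so that an object moving at
    max_speed_m_per_s never travels more than allowed_travel_m
    between consecutive samples."""
    return max_speed_m_per_s / allowed_travel_m

# Hypothetical example: a robot tip moving at up to 1.0 m/s, allowed to
# close at most 5 cm on the user between samples, needs at least 20 Hz.
print(min_sampling_rate(1.0, 0.05))  # -> 20.0
```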
It therefore seemed pertinent to briefly discuss all four types of vision systems. The four systems A, B, C, and D are the combinations of the intersecting rows and columns of Table 2.1.1-1.

• System A: post-processed, two-dimensional vision systems. These systems capture the images of an event on film or video tape; then, after the event is completed, the film or tape is analyzed and the positions of the objects of interest are determined. When video tape is used to record the event, the analysis process is often done automatically using a computer system. Photographic film is usually manually digitized. Technology has developed to a point where more video tape is being used than photographic film due to lower costs and ease of use. Film is normally used only when extremely high sampling rates (>500 Hz) or high resolution are required. Video tape typically has a sampling rate of 30 to 60 Hz, but continued advancement in video technology has increased this sampling rate and will likely continue to do so.

• System B: post-processed, three-dimensional vision systems. These systems are very similar to the post-processed two-dimensional vision systems in that the three-dimensional coordinate space is developed by using at least two two-dimensional systems. This system works by setting up, prior to the event being examined, a calibration frame that has at least six identifiable positions that are known in three-dimensional space. This calibration frame is filmed by at least two cameras, and by using a method like the Direct Linear Transformation (DLT) [10] (see equations below) the calibration coefficients a_kj can be determined. The event is then recorded with the two cameras, both sets of two-dimensional images can be post-processed, and the object's location can be determined in three dimensions.
x_ij = (a_1j·x_i + a_2j·y_i + a_3j·z_i + a_4j) / (a_9j·x_i + a_10j·y_i + a_11j·z_i + 1)

y_ij = (a_5j·x_i + a_6j·y_i + a_7j·z_i + a_8j) / (a_9j·x_i + a_10j·y_i + a_11j·z_i + 1)

where for a given marker i:

x_ij = x coordinate of marker i on the film measured with camera j
y_ij = y coordinate of marker i on the film measured with camera j
x_i = x coordinate of marker i in the three-dimensional space
y_i = y coordinate of marker i in the three-dimensional space
z_i = z coordinate of marker i in the three-dimensional space
a_kj = coefficient k in the transformation formulas for camera j

Figure 2.1.1-1 shows a typical setup for a post-processed three-dimensional tracking system. This setup is very similar to the two-dimensional system except that it uses at least two cameras instead of a single camera.

Figure 2.1.1-1: Typical Three-Dimensional Post-Processed Tracking System

A post-processed tracking system was developed in the Department of Mechanical Engineering at UBC by a research team looking at human arm motion in three-dimensional space. This team developed a computer program called Shadow™ [11] for tracking identifiable points on objects in three-dimensional space. This tracking system and associated software were used as a starting point for this project, as outlined later in Chapter 4.

• Systems C and D: real-time vision systems. This area is where most of the tracking research is being done. These systems are similar to the post-processed systems except that they do not have the delay in receiving the object's position. The uses for such systems are tremendous, and it is this real-time feature that makes vision a viable candidate for a safety system. Currently there are readily available products on the market that perform pseudo-real-time two-dimensional tracking.
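To make the DLT relations concrete, the following sketch in Python applies the two equations above to project a three-dimensional marker position into one camera's two-dimensional film coordinates. The coefficient values are illustrative placeholders, not values from this thesis.

```python
def dlt_project(a, x, y, z):
    """Apply the DLT equations for one camera: map the 3-D marker
    position (x, y, z) to 2-D film coordinates using the 11 calibration
    coefficients a[0]..a[10] (i.e., a_1j .. a_11j)."""
    denom = a[8] * x + a[9] * y + a[10] * z + 1.0
    u = (a[0] * x + a[1] * y + a[2] * z + a[3]) / denom
    v = (a[4] * x + a[5] * y + a[6] * z + a[7]) / denom
    return u, v

# Illustrative coefficients: with this choice the mapping reduces to a
# simple perspective scaling u = x / (z + 1), v = y / (z + 1).
a = [1, 0, 0, 0,
     0, 1, 0, 0,
     0, 0, 1]
print(dlt_project(a, 2.0, 3.0, 1.0))  # -> (1.0, 1.5)
```

In an actual calibration, at least six known marker positions are filmed by each camera and the eleven coefficients are solved by a least-squares fit; with two calibrated cameras, the pair of image measurements can then be inverted to recover (x, y, z).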
Three-dimensional pseudo-real-time tracking systems have only recently become available on the market. A typical setup for real-time processing would be similar to the one shown in Figure 2.1.1-1, except that there would be no video recorders; the cameras would be directly connected to the computer, with no post-processing.

Table 2.1.1-2 compares the two- and three-dimensional tracking systems available and summarizes some key points such as cost and accuracy.

Table 2.1.1-2: Comparison of Two- and Three-Dimensional Tracking Systems

Availability:
• Two-dimensional - Post-processed: multiple commercial systems available (e.g., Peak*, SELSPOT*, VICON*). Real-time: video boards capable of real-time tracking available (e.g., Sharp GBP-1).
• Three-dimensional - Post-processed: multiple commercial systems available (e.g., Peak*, SELSPOT*, VICON*). Real-time: systems have just recently become available (e.g., Optotrak or MacReflex).

Cost:
• Two-dimensional - Commercial systems: $20,000 and up for complete systems. A video camera and processing board can be < $10,000.
• Three-dimensional - Commercial systems: approximately $60,000 and up for complete systems.

Markers:
• Two-dimensional - Only two for calibration of the system.
• Three-dimensional - At least 6 markers for the calibration frame. (Typically more are used, and a least-squares fit is used to determine the required calibration constants.)

Accuracy (both):
• Very dependent on the quality of the cameras, camera lens quality, distance of the event from the camera, number of cameras, calibration technique, marker placement, marker type, and whether the event is static or dynamic.
• A properly set up system can have reported accuracies on static tests of less than 1% [11]. Micron accuracy has also been reported in another study [5].

* These are brand names of commercial systems available for purchase.

2.1.2 Camera and Marker Systems

The four systems (A, B, C, D) have two things in common: each includes two basic components, cameras and markers. The camera is used to record the event, and the markers are used to identify key features in the recorded image. Markers constitute a very important part of a tracking system, as their characteristics can affect the precision, accuracy, and operating speed of the system [12]. Using markers allows key points of interest to be identified and unimportant information to be disregarded. For a pseudo-real-time system, speed is an important concern: with less of the image to analyze, the system can operate faster. The use of markers is very important when automatic digitizing is being used, because the computer needs to be able to easily identify the points of interest. If the image is being manually digitized, markers may not be required.

There are ways other than placing markers on the subject of interest to eliminate information from an image. Some systems have used edge detection as a means of tracking the object [13]. Another technique is to identify the region of the image that has changed from one frame to the next and ignore the remaining portion of the captured image [14, 15]. Markers, edge detection, and regions of interest all perform the same function: to limit the amount of information to be analyzed. When dealing with markers there are concerns about lost markers (i.e., specific markers not visible in the image), markers crossing over each other and being incorrectly identified, "noise," and unwanted markers (i.e., other objects in the recorded image that might be mistaken for markers) [8, 16]. These concerns must be taken into account when a marker system is being chosen.

There are three main camera/marker systems available on the market; these are shown in the matrix of Table 2.1.1-2 [3].
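Before turning to the individual camera/marker configurations, the information-reduction role of a passive marker can be sketched in a few lines of Python (illustrative code, not the thesis software): a bright reflective marker on a dark background is isolated by a simple intensity threshold, and its centroid is taken as the tracked point, so only a handful of pixels out of the whole frame need further analysis.

```python
def find_marker_centroid(image, threshold):
    """Return the centroid (row, col) of all pixels brighter than
    `threshold`, or None if no pixel qualifies. `image` is a list of
    rows of grey-level intensities (0-255)."""
    row_sum, col_sum, count = 0.0, 0.0, 0
    for r, row in enumerate(image):
        for c, value in enumerate(row):
            if value > threshold:
                row_sum += r
                col_sum += c
                count += 1
    if count == 0:
        return None  # "lost marker": not visible in this frame
    return row_sum / count, col_sum / count

# A dark scene with a single bright 2x2 "marker" centred at (1.5, 1.5):
frame = [
    [10,  12,  11, 10],
    [11, 240, 235, 12],
    [10, 238, 241, 11],
    [12,  10,  11, 10],
]
print(find_marker_centroid(frame, 128))  # -> (1.5, 1.5)
```

The `None` return illustrates the lost-marker concern noted above; a real system would also need to guard against "noise" pixels crossing the threshold.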
There are three main camera marker systems available in the market. These are shown in the matrix of Table 2.1.1-2. [3] The matrix in Table 2.2.1-2 shows the three possible systems. -11-Chapter 2: Current Technology and Safety Policies Systems I and II are the most commonly used configuration. System III has been used in only a very few instances. [17] Passive Camera Active Camera Passive Markers I III Active Markers II Table 2.1.1-2: Camera I Marker Systems Passive Cameras - This type of camera is the most popular type of camera. This camera has a shutter that opens and closes allowing light from the event being photographed to enter through the lens and be recorded on either film or video tape. These cameras are like video cameras, movie cameras, or your typical snap shot camera that are used by most people to take pictures. A passive camera is the most popular camera available and is most often used in research and projects using vision. Active Cameras - Active cameras are not a popular camera. This type of camera emits a light beam (sometimes from more than one emission source) onto an passive marker like a photo-diode. The camera system only records the location of the marker, not an image containing the marker. Passive Markers - Markers that are passive do not emit any signal. These markers are designed to stand out from the surrounding sources and be easily identifiable on the recorded image. Examples of this type of marker would be items like reflective tape or white dots on a black background. Many systems use this type of marker because it is a simple and cheap way of identifying key features of the event being examined. A disadvantage of this type of marker is that for best results usually the event must be specially illuminated to insure the markers are well highlighted. -12-Chapter 2: Current Technology and Safety Policies • Active Markers - These markers emit a signal. 
In most cases this is a light (infra-red or visible) that makes the desired point of interest identifiable from its surroundings. This system of marking is also very popular, but the markers are more complicated (larger and heavier) and may interfere with the events being examined and may cost more money. These extremely visible markers are sometimes used because they reduce the need for special lighting. 2.2 Current Safety Standards and Policies The concept of needing safety standards for robots has been around since robots were first invented. Isaac Asimov, a science fiction writer in the 1950's, wrote the following three laws: The Three Laws of Robotics 1. A robot may not injure a human being or through inaction allow a human being to come to harm 2. A robot must obey the orders given it by a human being except where such orders conflict with the First Law 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law Isaac Asimov When Isaac Asimov wrote these three laws in the 1950's he imagined robotics to be far more advanced by the year 1995 [18] than reality proved them to be. Isaac Asimov imagined independently thinking artificially-intelligent robots that would be constructed and used in all aspects of human culture. He realized that if man made robots, he would also need to develop laws governing robots to protect humans from a robot's superior speed and strength. Although robots have not advanced as far as Isaac Asimov imagined, robots are now used in -13-Chapter 2: Current Technology and Safety Policies society, and certain safety standards need to be developed to insure that the users of these robots are protected from injury. Current developments in robot and rehabilitative equipment safety standards will be examined in the following two sections. 
First, government standards that have been developed to handle this situation will be examined and their shortcomings with regard to the issue of safety and rehabilitative equipment will be outlined; second, current areas of safety system research will be examined to determine the level of current safety technology and how this applies to the development of a vision-based safety system.

2.2.1 Government Standards

Government safety standards have been written to protect those people who, without expertise, are unable to differentiate a safe product from an unsafe product. Standards have also been written to provide a minimum safety standard for all products, providing consistency across the country. These standards are often developed by experienced experts and will continue to evolve as the knowledge in a field expands. Standards can be used as a guide by designers to remind them of particular concerns with regard to product development, but they are never intended to be a design manual and should never be used to replace sound engineering judgment.

With this concept of what standards are and how they are to be used, the current CSA safety standards were examined for information regarding safety standards for rehabilitative equipment. ASME/ANSI standards were also examined, but for the most part they parallel the CSA standards, so for this project only the Canadian standards are mentioned. The immediate conclusion is that there is no standard that deals with the subject of safety for individuals with severe disabilities using rehabilitative equipment [19]. This does not mean that there is nothing useful in the standards that applies to this unique situation. What it does mean is that rehabilitative equipment is a relatively new field, and since standards are usually developed based on experience, nothing has yet been developed to handle the issue of safety with this type of product.
CSA standard Z432-94: Safeguarding of Machinery examines the issue of protecting users from a piece of machinery. According to this standard's definition of machinery, the Neil Squire assistive robot is classified as a piece of machinery. This standard supplies guidelines for the development of a safety system for machinery. Unfortunately, the standard deals with things like barriers, machine guards, lockouts, and emergency stop buttons. It is an excellent guide for non-rehabilitative equipment, as it deals with the safety situation of robots that are designed never to come into contact with the robot user: it protects the user from accidental contact with the robot by outlining a means to prevent the potential for any contact at all. The difference with a rehabilitative robot is that it is designed to come into contact with the user. In the past, robots were developed to remove the user from harm's way by providing a means to manipulate the environment from a distance. With the development of rehabilitative robots (and in some cases medical robots), the user has become an integral part of the system that the robot is designed to manipulate. Isolating the user from the potential of contact with machinery is not possible in this situation; thus standard CAN/CSA Z432-94 does not provide a means to deal with it. The standard does deal with design methodology by insisting that safety issues must be considered during the design of a product rather than at the end of development, a policy that has been used by the Neil Squire Foundation throughout the entire development of the rehabilitative robot.

A standard dealing with medical equipment (CAN/CSA C22.2 No. 601.1-M90) does recognize that robots and equipment have the potential of coming into contact with a patient. This standard covers issues ranging from electrical requirements to the mechanical strength of
The limitation of this standard for our situation can be summed up by the following statement from the standard (Section 22.4):

    Movements of Equipment or Equipment parts which may cause physical injury to the Patient shall be possible only by continuous activation of the control by the operator of these equipment parts.

This statement implies, along with the rest of the standard, that the equipment will be used by a highly qualified, able-bodied operator on a patient. The standard does not address the user of the robot being physically disabled or the particular safety needs of these unique users.

There are several standards that deal with the special needs of the physically disabled. One group of these standards deals with wheelchair design; an example is CAN/CSA Z323.4.3-M89, which deals with the design of wheelchairs for static stability. These standards deal with the design of a particular piece of fairly common rehabilitative equipment, but fail to provide any general requirements addressing the special safety needs of the user. Another standard with the same failing is CAN/CSA Z323.1.2-94: Automotive Adaptive Driving Controls (AADC) for Persons with Physical Disabilities. Although this standard does discuss some of the special needs of users with physical disabilities, it fails to mention the reasoning behind particular statements that would be crucial to a more general standard. The standard is specific to a certain situation and still does not cover the special safety needs and abilities of the physically disabled who use rehabilitative equipment.

There are some general safety design standards that cover a wide variety of safety concerns. The first is CAN/CSA Q634-91: Risk Analysis Requirements and Guidelines. This standard, although not dealing with the special considerations for the safety of rehabilitative equipment, does develop a methodology built around answering the following three questions:
1. What can go wrong?
2. How likely is it?
3. What are the consequences?

This standard is a guideline for planning, executing and using risk analysis, and it provides a means to qualify the results. By its own admission, specialized knowledge is required before proceeding with a risk analysis of a system, and the standard does not provide criteria for identifying when a risk analysis is needed. It has many good ideas and guidelines but still falls short of solving the problem.

Another safety standard is the Industrial Health and Safety Regulations published by the Workers' Compensation Board of British Columbia [20]. As the title of these regulations suggests, they are the rules and requirements that deal with safety in a working situation. The responsibilities of the worker and employer are outlined, and a large variety of working situations are covered. What is not covered is the situation of people with disabilities in the workplace using rehabilitative equipment such as the Neil Squire assistive robot. The special needs and considerations for the safety of employees with disabilities using robotic equipment are not addressed in the regulations.

In summary, the existing standards provide a great deal of information and guidance regarding safety and the interaction (or lack of interaction) between human and machine. These regulations should not be overlooked when designing a safety system for an assistive robot, but they fall short of covering the special needs of people with physical disabilities who are using rehabilitative equipment. A safety analysis of the safety situation involved in using the Neil Squire assistive robot is completed in Chapter Three.

2.2.2 Current Safety Research

The state of the art for safety analysis in rehabilitative robotics is minimal. This is not to say that no one considers safety when developing specialized equipment for people with disabilities. The truth is quite the opposite.
Almost everyone mentions that safety is a concern and that any designer must consider the safety of the user. Many designs include a variety of safety systems, and some research is even directly targeted at building safety systems [21, 22]. What fails to happen, preventing further development in this area, is publication: if a safety analysis is conducted, there does not appear to be any published evidence of it. There were only a couple of examples [23, 24] where the issues of safety and robotics were mentioned and current safety systems were summarized, but no method of doing the safety analysis was given. There were also papers indicating that a complete safety analysis had to be performed on specified rehabilitation equipment [25, 26]. Most rehabilitative equipment developers mention that safety is a big concern and include a variety of safety mechanisms in their designs. What is missing is a presentation of an analysis of the safety situation outlining why particular devices were chosen. Certainly many of the safety issues are very easy to identify, such as preventing the robot motors from applying undesired movement by means of engaging and disengaging clutch plates [27], avoiding the possibility of equipment/user contact altogether [28], or providing redundant encoders in the event of malfunction [29]. What is not presented is what safety problems were a concern at the beginning of the project and what safety issues were eliminated by installing these safety devices. Some researchers make general safety comments like "the device must never injure the user" but then fail to tell how this was achieved [22, 30]. Other projects never mention safety at all when it should at least be covered; a paper describing a procedure to evaluate a wheelchair-mounted manipulator arm failed to mention the importance of safety or to rate the safety risks as part of the evaluation [31].
Safety has been cited as an important issue in the prolonged use of rehabilitation robots, but again, an approach to tackling the issue was never mentioned [32].

In summary, safety is a big issue in the development of rehabilitative equipment. Most (probably all) researchers and designers consider it during their design. Unfortunately, the safety analyses, methodologies, and results do not appear to have been published, and have certainly not been published in any comprehensive manner, preventing further development of this key area of concern. In Chapter Three a safety analysis is shown that was completed on the Neil Squire Foundation's assistive robot to identify the safety problems and outline what safety concerns the various safety systems would hope to solve.

2.3 Other Safety Systems for the Assistive Robot

While research on a vision-based safety system for the Neil Squire robot was conducted, other safety systems were being examined and tested with the robotic workstation. A brief description of these systems follows.

• A torque sensor: This system is being incorporated into the robot controller and will monitor the magnitude and rate of change of the current being supplied to the robot motors. Any unexpected changes in the current requirements for the motors, such as those encountered when the robot hits an object, will trigger the system to stop the robot arm and back it away. This is a contact solution to the safety problem and is aimed at reducing the severity of an injury, not at preventing one.

• A light curtain: This system is being developed in parallel with the vision system as a potential non-contact solution to the safety problem. The light curtain provides a fixed light barrier between the user and robot. The system should be able to detect an object penetrating the barrier and shut the robot off.
This system does not track the position of the user or robot and assumes that the user is in a fixed location. It cannot differentiate what object has broken the light curtain and therefore shuts the robot off if anything crosses the light barrier.

• A deadman switch: This is a switch that must be continually depressed for the robot to operate. It provides no independent safety monitoring (i.e., no safety monitoring that does not depend on the user).

• A panic stop button: This is a button that, if pressed, will shut the robot off. This system relies on the user's ability to detect a safety concern in enough time to press the button to stop the robot. It does not provide any independent safety monitoring.

Chapter 3: Specification of a Safety System

Completion of the second objective of the project involves determining the specific requirements of the vision-based safety system for the Neil Squire Foundation rehabilitation robot. To determine these specifications, three things had to be investigated:
1. The robot, the robot controller, and the user control input had to be examined and an understanding of their operation had to be acquired (Section 3.1).
2. The abilities and limitations of the potential users of the robotic workstation had to be outlined (Section 3.2).
3. Finally, a safety analysis of the robot and user had to be conducted to determine what level of safety the vision-based safety system had to provide (Section 3.3).

3.1 The Robot

The Neil Squire Foundation robot is currently designed as a desktop-mounted robot. The robot in a typical workstation environment is shown in Figures 3.1-1 and 3.1-2, followed by a schematic of the robot in Figure 3.1-3. This is a six degree-of-freedom (DOF) robot capable of manipulating objects and moving payloads in a cylindrical volume of space centered around the main horizontal axis.
As mentioned in the introduction, this robot has been designed to assist a user (described in Section 3.2) in a work environment. The robot is designed to give a user sufficient independence to perform computer-related, office-type work and is therefore capable of manipulating objects such as books, file folders, coffee cups, computer disks and other non-fragile and relatively lightweight objects. The purpose of this robot is to provide individuals with a tool that will assist them in finding meaningful work, thus providing them with a personal income and removing part of society's financial responsibility for their well-being. Since this robotic workstation is designed to operate in an office environment, any safety system developed must also be appropriate for an office situation.

Figure 3.1-1: Robotic Workstation

Another important feature of the robot is that it has been designed to be "low-cost" as compared to other robots. For example, the cost of a PUMA™ robot (e.g. the DeVAR robotic workstation, using the PUMA™ 260, from Palo Alto, CA) is over $100,000, whereas the Neil Squire robotic workstation is priced at under $40,000. The two robots are intended for different purposes: the PUMA arm is a precise, repeatable, high-speed robot arm capable of performing a large variety of tasks (e.g. a PUMA robot arm could perform welding operations on an assembly line), whereas the Neil Squire Foundation robot sacrificed some speed and precision to provide a low-cost option specifically for the severely physically disabled. The Neil Squire Foundation robot was designed with low cost in mind from its conception. Many of the robot's parts can be purchased directly and require little or no modification. It is estimated that the robot has a three-year payback on the initial investment.
Since a lot of effort went into the development of a low-cost robot, a logical specification for any safety system is that it must provide the required safety for the user at the lowest possible cost. It is hoped that a vision-based safety system can be developed for less than $6,000, which is only 15 percent of the cost of a robotic workstation.

Figure 3.1-2: Close-up of the Robot Gripper with Mouse Emulator

The configuration of the robot is such that it has seven DC rotary motors. Six motors are used to move the main joints of the robot (θ1 to θ6 in Figure 3.1-3) and are used for positioning. Each of the six joints powered by these motors is equipped with a potentiometer used for servo positioning of the joint. The seventh motor operates the opening and closing of the gripper. These motors are connected to a PID controller, which is connected by a serial cable to a personal computer running the controller software within a Windows™ 3.1 operating environment.

Figure 3.1-3: Schematic Diagram of Robot

The software is operated by the user, who interfaces with the computer using any system that will emulate a mouse. Figure 3.1-2 shows a mouth-controlled mouse (Jouse™), also developed by the Neil Squire Foundation. The user is able to program tasks that will move the robot through a set of joint movements to perform a specified job. A schematic of the robot and controller is shown in Figure 3.1-4, and a table describing the technical specifications of the robot is provided in Appendix A. Any safety system developed must not interfere with the user's control of the robot and must be able to interface with the robot controller in order to shut the robot off in the event of a safety violation.
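The serial report format between the controller and the safety computer is not specified in this chapter. Purely as an illustration, the following Python sketch assumes a hypothetical line format of six comma-separated potentiometer counts (one per positioning joint) and shows how such a report might be parsed defensively before any safety decision is made:

```python
def parse_joint_report(line):
    """Parse one hypothetical telemetry line of six comma-separated
    potentiometer counts into a tuple of ints, or None if malformed.

    A safety system should treat a malformed report as a fault rather
    than silently reuse stale joint data.
    """
    fields = line.strip().split(",")
    if len(fields) != 6:
        return None
    try:
        return tuple(int(f) for f in fields)
    except ValueError:
        return None
```

Rejecting a malformed report (rather than ignoring it) lets the monitor treat a communications fault as a reason to stop the robot, in keeping with the fail-safe intent of the system.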
Figure 3.1-4: Control Schematic for Robot

While operating the robot, moving it through its full range of motion and becoming familiar with the computer software, four points on the robot were identified that must be monitored to ensure that no part of the robot unintentionally contacts the user. The four points are the gripper, wrist, motor unit bottom (MUB), and motor unit top (MUT). These four points are shown in Figure 3.1-5. If the user is positioned as shown in Figures 3.1-1 and 3.1-2, monitoring these four points and the user's position will ensure that the robot does not contact the user unintentionally.

Figure 3.1-5: Points of Concern

Any safety system will have to monitor the four points of concern at a rate that makes the safety system effective. The faster the sampling rate of the vision-based safety system, the more accurately the system will track the robot and prevent accidents. The sampling rate is affected by the processing limits of the computer and the accuracy required. As mentioned before, a video tracking system was previously developed at UBC, and the equipment used in that project was also used in the development of this vision-based safety system. The processing speed is limited by the computer (a 66 MHz 486) and video card (Sharp GPB-1). An initial estimate was made that the robot should move less than 100 mm between the robot entering the safety zone (i.e. a region of space around the user that the robot should not enter without the safety system shutting the robot off) and the robot being shut down. The robot moves at a maximum speed of 150 to 200 mm/s; keeping the travel after a violation under 100 mm therefore implies a minimum sampling rate of 2 to 4 Hz.

The initial estimate of 100 mm of allowable movement was based on several factors:
1.
The size of the safety zone that a user would feel comfortable with when operating the robot was estimated at about 350 mm, and allowing no more than 100 mm of movement would still provide a safety factor of over 3.
2. 100 mm seemed like a realistic maximum target value after observing the robot performing a variety of tasks. (The robot was operated in the laboratory, and video tape of actual users working with the robot was watched.)
3. Realizing that the computer available to operate the vision-based safety system could not provide an unlimited sampling rate, a value of 100 mm seemed realistic for a prototype system.

3.2 The User

To determine the specifications for the vision-based safety system, it is important to know the abilities and limitations of the potential users of the robotic workstation. The robot was not designed to be used by all individuals with physical disabilities; making a robot versatile enough to serve all people with disabilities would be an extremely difficult task. The Neil Squire Foundation developed this robot for a specific subset of the disabled population, specifically individuals with severe physical disabilities who no longer have the use of their arms or legs. The following list describes the users that the robot was originally designed for. The users have:
• limited or no use of their arms or legs.
• the ability to interact with a computer.
• continuous lucidity.
• reliable short-term memory.
• vision sufficient to see the robot at all times.
• the capacity for activating an emergency stop consistently and reliably at all times.
Some of the users may have the following conditions:
• dependence on life support equipment (e.g.
ventilator equipment)
• restricted head motion
• dependence on electric wheelchairs
• spasticity
All of these user characteristics were taken into account during the development of the robot and must be taken into account during the development of any safety system for it. During the design of the vision-based safety system the following issues were considered:
• The users are unable to move up or down, and their movement is mostly restricted to a horizontal plane. Because of this restricted movement it is only necessary to track the users in the horizontal plane. To do this tracking, a camera and marker system (as referenced in Chapter 2) had to be chosen that would provide a reliable and acceptable means of identifying the user in the workspace.
• Any safety system developed for the users must also be acceptable to them. It would be easy to protect the users by outfitting them in body armor, but in the users' opinion this would be an unacceptable solution and therefore it is not acceptable.
• The safety solution must not interfere with any of the user's life support equipment.
• The system must be flexible enough to allow for different conditions of the user (e.g., a spastic user, different wheelchairs, restricted head motion).
• Since the users are able to communicate with a computer through Windows™, for platform consistency the safety system should, if practical, also operate in a Windows™ environment.

3.3 Safety Analysis

An extensive safety analysis was done on the robotic workstation to outline the minimum requirements that any safety system designed for the robot would have to meet. This was done through a series of meetings with the entire design team. Everyone at the meetings had experience with the operation of the robot and familiarity with safety issues.
The members of the design team from the Neil Squire Foundation provided assistance based on their extensive experience in dealing with the special concerns of users with physical disabilities.

The first step of the safety analysis was to look at the injury mechanisms, i.e. the sources of potential injury. To identify the injury mechanisms, knowledge of what the robot and the user could do was required. These injury mechanisms were then ranked according to the following scale (see Figures 3.3-1a to 3.3-1d for the listing of injury mechanisms and injury rankings):
• Life threatening - The user's life is in actual jeopardy.
• Permanent injury - The user's life is not in danger, but they could be left with permanent damage.
• Tissue damage - The user could be hurt, but any injury would heal completely.
• Undesired contact - The robot contacts the user without consent but causes no injury.
• Equipment damage - The robot or workstation equipment could be damaged.
The following terms are used in this analysis:
• Impact - a high-energy collision
• Contact - a low-energy collision

Figure 3.3-1a: Injury Mechanisms and Injury Rankings (collision, pinning, and pinching mechanisms: impact or contact with the head, thorax, torso, extremities, eyes, digits, or life support equipment; ranked from life threatening to tissue damage)

Figure 3.3-1b: Injury Mechanisms and Injury Rankings (spillage, shrapnel, and burn mechanisms: spillage onto the person (including eyes), life support equipment, or control equipment; shrapnel into the user's or other people's eyes, into the user or a third party, into the robot, or into life support equipment; flame and heat; ranked from life threatening to equipment damage)

Figure 3.3-1c: Injury Mechanisms and Injury Rankings (electrical shock, electromagnetic energy, and toxic substance mechanisms: severe shock and burns; mild shock and neuromuscular system effects; pacemaker and other electromagnetic effects; toxic substances released from components (e.g. formaldehyde) or present on equipment from manufacturing processes; ranked from life threatening to equipment damage)

After the injury mechanism and injury ranking portions of the analysis were completed, the minimum requirements of each injury ranking were identified. With the minimum requirements identified, a list of potential safety systems that would help with or completely fulfill the minimum requirements for each of the injury ranking categories was developed. These safety systems were then ranked according to how well they would fulfill the minimum requirements for each injury ranking. This portion of the analysis can be seen in Table 3.3-1.
The following is a list of the levels of protection for injury ranking fulfillment used in Table 3.3-1:
I - complete fulfillment of the minimum requirements
II - major reduction in the chance of occurrence of an injury
III - major reduction in the severity of an injury, but an injury still likely to occur
IV - minor fulfillment of the minimum requirements
V - minimal effect unless combined with another solution
VI - no effect

Injury Ranking     Minimum Requirements                    Safety System               Level
Life Threatening   No high pressure applied to thorax;     Torque Sensor               III
                   no contact with life support            Mechanical Limits           V
                   equipment                               Light Curtain               II
                                                          Panic Stop                  III
                                                          Deadman Switch              III
                                                          Vision                      II
                                                          Shielding                   I
                   No induced (EMI) effects on life        Electrical Code Compliance  II
                   support equipment
                   No open flames                          Do not permit open flames   I
                   No toxic substances                     Do not permit toxic         I
                                                          substances
Permanent Injury   No contact with eyes                    Eye Protection              I
                                                          Shielding                   I
                                                          Vision                      II
Tissue Damage      No high pressure contact with the       Torque Sensor               II
                   body                                   Light Curtain               II
                                                          Panic Stop                  III
                                                          Deadman Switch              III
                                                          Vision                      II
                                                          Shielding                   I
                   No high moments (reduces chance of      Software Limits             V
                   shrapnel and spillage)                  Mechanical Limits           V
Undesired Contact  No unintentional contact with the       Light Curtain               II
                   user                                   Panic Stop                  III
                                                          Deadman Switch              III
                                                          Vision                      II
                                                          Shielding                   I
Equipment Damage   Prevent shrapnel and spillage from      Shielding                   I
                   contacting equipment

Table 3.3-1: Potential Safety Systems and their Protection Level

Table 3.3-1 shows that a vision-based safety system and a light curtain will provide a major reduction in the chance of an injury occurring for the injuries that would result from impact or contact between the user and the robot. The only system suggesting better performance is shielding between the user and robot, which is not always a practical or acceptable solution.
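The selection logic implied by Table 3.3-1 can be made explicit. As an illustration only (the dictionary below transcribes two rows of the table; the function and variable names are my own, not part of the thesis), the table can be encoded and queried for the systems that give at least a required level of protection for a given injury ranking:

```python
# Protection levels from Table 3.3-1; a lower Roman numeral is better.
LEVELS = {"I": 1, "II": 2, "III": 3, "IV": 4, "V": 5, "VI": 6}

# Two rows of Table 3.3-1, transcribed for illustration.
TABLE_3_3_1 = {
    "Undesired Contact": {"Light Curtain": "II", "Panic Stop": "III",
                          "Deadman Switch": "III", "Vision": "II",
                          "Shielding": "I"},
    "Permanent Injury": {"Eye Protection": "I", "Shielding": "I",
                         "Vision": "II"},
}

def systems_at_least(ranking, level):
    """Safety systems whose protection for `ranking` is `level` or better."""
    return sorted(s for s, l in TABLE_3_3_1[ranking].items()
                  if LEVELS[l] <= LEVELS[level])
```

For example, querying the "Undesired Contact" row at level II returns the light curtain, shielding, and vision, matching the conclusion drawn from the table above.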
During the testing of the vision-based safety system with actual users, the two technologies (vision and light curtain) were evaluated and compared through a user questionnaire and experimental observations.

3.3.1 Acceptable Safety Limits

The safety analysis also revealed another important concept that directly relates to the safety of an individual using the robotic workstation: the concept of an acceptable safety limit. In many fields of work the operators of equipment are exposed to a certain level of risk. What is different here is that the intended users of this robotic workstation have a different view of an acceptable safety limit than an able-bodied user. An able-bodied user will not accept being injured by the robot because they can easily move out of harm's way. A user with a severe physical disability who has lost this mobility is inherently at a greater risk to their personal safety. The Neil Squire Foundation, having contacts with a large number of potential users, has determined that most individuals feel that the benefits of using this robot outweigh the potential safety problems. Based on the intended users' requirements, the decision was that, as a minimum, a vision-based safety system for the robotic workstation must provide a level of protection that prevents any life threatening injuries, permanent injury, or tissue damage. It is also hoped that a vision-based safety system will provide a level of protection that prevents unintentional contact while allowing desired contact. Due to limitations inherent in a vision-based tracking system, it cannot prevent injuries resulting from shrapnel, toxic substances, open flames, or EMI effects.

3.4 Summary of System Specifications

The following list is a summary of all the functional specifications determined by examining the robot and user and performing the safety analysis.
The vision-based safety system must:
• prevent contact between the user and robot that results in a life threatening situation, causes permanent injury, or causes tissue damage (it would be advantageous to also prevent undesired contact). To accomplish this it must:
1. monitor the four points of concern on the robot.
2. be able to interface with the controller.
3. monitor the user moving in a horizontal plane.
4. have a minimum sampling rate of 2 Hz.
• not interfere with the user's operation of the robot.
• have user acceptance.
• be flexible enough to account for different conditions of the user.
• be as low-cost as possible (max. $6,000).
• be designed to be suitable for an office environment.
• if practical, run in a Windows™ environment to interface with the user software.
The following chapter outlines the development of the vision-based safety system in accordance with the above specifications.

Chapter 4: A Vision-Based Safety System

Using the specifications determined in Chapter 3, a vision-based safety system was developed. This chapter details the development of the system. The vision-based system has to perform the following tasks:
• The system must monitor the four points of concern on the robot and determine their positions in three-dimensional space. This was done by using information from the robot controller about the individual joint positions and converting it to real-world positional information for the four points (Section 4.1).
• It has to track the user in a horizontal plane to determine where the user is positioned. Since tracking was only required in one plane, a single camera mounted above the user could be used, eliminating the need for a multi-camera three-dimensional vision tracking system (Section 4.2).
• A safety zone has to be assigned around the user's position.
Then the position of the robot has to be compared to the safety zone, and if the robot is inside the safety zone a signal has to be sent to the robot controller indicating that the robot must be shut off (Sections 4.3-4.5).

It was initially planned to send a variety of signals to the robot controller: first, a signal indicating that the robot should be slowed down as it gets closer to the user; second, a signal to stop the robot at another predefined distance from the user until prompted by the user to continue, thereby allowing only desired contact between the user and robot. Unfortunately, the robot controller could not be programmed in time to allow multiple signals to be received, so the initial system only stops the robot as it enters the safety zone and does not allow the user to intentionally contact the robot. The above three tasks must be performed at least two times a second to prevent the robot from moving more than the specified 100 mm after it has entered the safety zone. Figure 4-1 shows a schematic of the vision-based safety system developed to meet these requirements. Details of how the three tasks were completed can be found in Sections 4.1-4.5.

Figure 4-1: Schematic of the Vision-Based Safety System (controller serial port data carrying the robot's joint positions, the robotic workstation, and the safety computer)

4.1 Robot Tracking

To track the robot's four points of concern, the robot controller was modified to output the potentiometer values of each joint, indicating their local relative positions. These values are read by the computer running the safety system and calibrated to represent the real-world joint positions in meters (linear joints) and degrees (rotary joints). These values can then be used to determine the required points of concern in three-dimensional space.
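The calibration constants themselves are not given in the text, but the conversion described is a linear map from potentiometer counts to joint units. A minimal sketch, assuming hypothetical two-point calibration data (the endpoint values below are invented for illustration):

```python
def make_calibration(count_lo, count_hi, unit_lo, unit_hi):
    """Return a function mapping raw potentiometer counts to joint units
    (degrees for rotary joints, metres for linear joints) by linear
    interpolation between two measured calibration points."""
    scale = (unit_hi - unit_lo) / (count_hi - count_lo)
    return lambda counts: unit_lo + (counts - count_lo) * scale

# Hypothetical example: a rotary joint reading 0 counts at -90 degrees
# and 1023 counts at +90 degrees.
theta1_deg = make_calibration(0, 1023, -90.0, 90.0)
```

One such function would be built per joint from that joint's measured calibration points, and applied to each telemetry sample before the kinematic equations are evaluated.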
The process used to convert the joint positions to three-dimensional positions in space is known as robot kinematics and is explained in Section 4.1.1.

A simplification was made during the development of the prototype vision-based safety system: to avoid complications, the gripper was assumed to be empty. This avoids having to determine a varying object end point while tracking the robot. It reduces the effectiveness of the safety system if there are hard objects in the workstation that can be picked up by the gripper, which could then contact the user before the safety system detects a safety violation based on the gripper location.

4.1.1 Robot Kinematics

To determine the position of the robot, a method of matrices was used which was pioneered by Denavit and Hartenberg and is now referred to as the DH convention [33]. This method is used to calculate the position of a link in a robot arm with respect to a reference frame based on the positions of all of the robot's previous links. It is a powerful method that allows any position on the robot to be determined if all the joint positions are known. To use this method a reference frame (coordinate frame (x0 y0 z0) in Figure 4.1.1-1) is assigned at a convenient location, and each sequential coordinate frame (coordinate frames (x1 y1 z1) to (x6 y6 z6)) is arranged according to a set of rules. Axes are assigned at each joint, change in geometry, or point of interest. After these axes are assigned, the four geometric parameters θi, αi, ai and di used by the DH method can be determined by following the procedure outlined by the DH method [34], where (using the right-handed convention):

θi = the joint angle from the x(i-1) axis to the xi axis about the z(i-1) axis
αi = the offset angle from the z(i-1) axis to the zi axis about the xi axis
ai = the offset distance from the intersection of the z(i-1) axis with the xi axis to the origin of the i-th frame along the xi axis
di = the distance from the origin of the (i-1)-th coordinate frame to the intersection of the z(i-1) axis with the xi axis along the z(i-1) axis

These assigned axes and geometric parameters can be seen in Figure 4.1.1-1, and the geometric parameters are summarized in Table 4.1.1-1.

Joint Number   θi     αi     ai     di
1              θ1     0      0      0
2              180°   90°    a2     d2
3              -90°   90°    0      d3
3'             0      0      0      -d3'
4              θ4     -90°   a4     0
4'             0      0      a4'    0
5              θ5     -90°   0      0
6              θ6     0      0      d6

Table 4.1.1-1: Denavit-Hartenberg Parameters

Figure 4.1.1-1: Denavit-Hartenberg Coordinate Convention for the Robot

where θ1, d2, d3 (and d3'), θ4, θ5 and θ6 are the changing variables and

a2 = 0.0493 m
a4' = 0.15 m
a4 = 0.0452 m
d6 = 0.1524 m

are the fixed variables based on the dimensions of the robot.

Using the above set of geometric parameters, a series of transformation matrices T(i-1,i) can be generated that relate a link i to the previous link i-1, as given below:

T(i-1,i) = [ cos(θi)   -cos(αi)·sin(θi)   sin(αi)·sin(θi)   ai·cos(θi) ]
           [ sin(θi)    cos(αi)·cos(θi)  -sin(αi)·cos(θi)   ai·sin(θi) ]
           [ 0          sin(αi)           cos(αi)           di         ]
           [ 0          0                 0                 1          ]

These transformation matrices can be multiplied to relate the link or position of interest to the reference frame. Substituting each row of Table 4.1.1-1 into this general form gives the eight transformation matrices T01, T12, T23, T23', T34, T3'4', T45 and T56 corresponding to the coordinate frames ((x0 y0 z0) to (x6 y6 z6)) shown in Figure 4.1.1-1; for example,

T01 = [ cos(θ1)  -sin(θ1)  0  0 ]
      [ sin(θ1)   cos(θ1)  0  0 ]
      [ 0         0        1  0 ]
      [ 0         0        0  1 ]

The transformation matrices were multiplied in the following ways to determine the positions of the four points of concern identified in Section 3.1, Figure 3.1-5:

Gripper:            T06  = T01 · T12 · T23 · T34 · T45 · T56
Wrist:              T05  = T01 · T12 · T23 · T34 · T45
Motor Unit Bottom:  T03' = T01 · T12 · T23'
Motor Unit Top:     T04' = T01 · T12 · T23' · T3'4'

The resulting matrices T06, T05, T03' and T04' are 4x4 matrices. Each can be divided into a 3x3 matrix that describes the orientation of the point of concern and a 3x1 vector that describes its position (X, Y, Z) in three-dimensional space with respect to the reference frame:

[                    X ]
[  (3x3 orientation) Y ]
[                    Z ]
[  0      0      0   1 ]

It is the position vector of this matrix that is used in the computer software to determine the positions of the critical points. The equations resulting from the 3x1 vector, used to determine the positions of the points of concern, are shown below (θ1, d2, d3, d3', θ4 and θ5 are the measured joint positions).
Gripper:
X = d6*sin(θ1)*sin(θ4)*sin(θ5) - d6*cos(θ1)*cos(θ5) - a4*sin(θ1)*sin(θ4) - d3*sin(θ1) - a2*cos(θ1)
Y = -d6*cos(θ1)*sin(θ4)*sin(θ5) - d6*sin(θ1)*cos(θ5) + a4*cos(θ1)*sin(θ4) + d3*cos(θ1) - a2*sin(θ1)
Z = d6*cos(θ4)*sin(θ5) - a4*cos(θ4) + d2

Wrist:
X = -d3*sin(θ1) - a2*cos(θ1)
Y = d3*cos(θ1) - a2*sin(θ1)
Z = d2

Motor Unit Top:
X = a4'*cos(θ1) + d3'*sin(θ1) - a2*cos(θ1)
Y = a4'*sin(θ1) - d3'*cos(θ1) - a2*sin(θ1)
Z = d2

Motor Unit Bottom:
X = d3'*sin(θ1) - a2*cos(θ1)
Y = -d3'*cos(θ1) - a2*sin(θ1)
Z = d2

The above equations have been implemented in the computer software (see Appendix F) so that the positions of the robot's four points of concern can be determined in three-dimensional space, where they are checked to see if they are within the safety zone.

4.2 User Tracking

A passive camera / passive marker approach (ref. Chapter 2) was used to perform the user tracking because it requires only a common camera, and one was available from previous research conducted in the laboratory. A single 1/2" CCD camera with a wide-angle lens was mounted approximately two metres directly above the user to provide horizontal-plane tracking of the user. The three-dimensional position of the user's head can then be determined because the users are seated in wheelchairs and their height is essentially fixed. A marker was placed on the user's head to help the computer identify the user in the video image. The safety system uses two reflective markers*: 1) one to track the user's position, and 2) one to indicate a reference location (0,0) that corresponds to the reference frame used by the robot kinematics (x0 y0 z0).

* A passive marker (in this case a reflective marker) was used instead of an active marker because, during initial experimentation with an LED light source as the marker, the intensity of the light varied too much as the marker was moved about to provide a reliable, consistently sized marker.
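The forward-kinematics chain of Section 4.1.1 can be sketched in a few lines of code. The following is a minimal illustrative sketch, not the Sentinel implementation: the function names and the plain-list matrix representation are choices made here, while the fixed link dimensions are the values quoted in the thesis.

```python
import math

def dh(theta, alpha, a, d):
    """Denavit-Hartenberg link transform (angles in radians)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -ca * st,  sa * st, a * ct],
            [st,  ca * ct, -sa * ct, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0]]

def matmul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Fixed link dimensions from the thesis (metres)
A2, A4, D6 = 0.0493, 0.0452, 0.1524

def gripper_position(t1, d2, d3, t4, t5, t6):
    """Position column of T06 = T01*T12*T23*T34*T45*T56."""
    frames = [(t1,           0.0,       0.0, 0.0),   # joint 1
              (math.pi,      math.pi/2, A2,  d2),    # joint 2
              (-math.pi/2,   math.pi/2, 0.0, d3),    # joint 3
              (t4,          -math.pi/2, A4,  0.0),   # joint 4
              (t5,          -math.pi/2, 0.0, 0.0),   # joint 5
              (t6,           0.0,       0.0, D6)]    # joint 6
    T = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    for f in frames:
        T = matmul(T, dh(*f))
    return T[0][3], T[1][3], T[2][3]
```

Composing the six link transforms in this sketch reproduces the closed-form gripper X, Y, Z equations given above, which is a useful cross-check on the algebra.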
The reflective markers are also cheaper and simpler to use than active markers. The requirements for using reflective markers are explained below.

4.2.1 Image Analysis

To automatically track the user in the video image (with a minimum 2 Hz sampling rate), the amount of information in the image has to be reduced, leaving only the user's position and the reference mark for the computer to analyze. If the information is not reduced before the computer analyzes the image, the computer must spend considerable time determining where the user and reference point are, and this would introduce too much of a time lag for the system to meet its required sampling frequency. Having too much information in the image is a common problem, especially when a computer must analyze the image, and a variety of solutions have been used to address it. In many motion-tracking systems a reflective marker placed on the point of interest is illuminated by a full-spectrum spotlight; with the camera's aperture stopped down, the markers stand out and the computer can easily identify the points of interest. This passive camera / passive marker solution is used by systems like the PEAK Motion Measurement System™. It is not ideal for this project because users are not willing to sit under a full-spectrum spotlight for an entire day. Another solution, one not requiring the user to be under a spotlight, is to make the markers active. This is usually done by putting infrared emitters in place of the reflective markers and filtering the image received by the camera through an infrared filter, eliminating all ambient light and leaving only the active markers in the image. This type of system is similar to a commercially available system called SELSPOT.
This system was initially examined but rejected because:
• markers could be lost due to the limited range over which they could be seen, a consequence of the directionality of the infrared diodes; and
• the markers had to be larger because of the batteries powering the infrared LEDs, which would lower user acceptance of the system.

A combination of the two approaches was used in the final system. A reflective marker was used, but instead of a full-spectrum spotlight lighting the marker, an infrared spotlight was constructed so that the user would be unaware of being under a spotlight. Using this system removes most of the erroneous information. The markers are made to stand out even further by converting the video image to a binary image, making the markers white and the rest of the image black. This eliminates the gray range and leaves only the reflective markers visible in the frame. This information reduction can be seen in Figure 4.2.1-1, which shows an image of the robotic workstation a) with no infrared filter (the image is hard to convert to a binary image because the markers do not stand out), b) with the infrared filter and infrared spotlight (the markers stand out and the image can be converted to a binary image), and c) as a binary image (only the markers stand out). Qualisys™ sells a camera system very similar to this one for approximately $9,000, but it requires their video processor, bringing the total package to about $19,000. It was not used for this project because an existing camera was already available, and it was determined that a camera and infrared spotlight could be constructed for far less than $19,000.
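The information-reduction step described above, thresholding the infrared-filtered frame to a binary image and then locating each marker, can be sketched as follows. This is an illustrative sketch, not the Sharp GPB-1 library code: the frame is represented as a plain 2-D list of gray levels, and the threshold value is an assumption.

```python
def find_marker_centroids(image, threshold=128):
    """Threshold a grayscale frame to a binary image, then return the
    centroid (x, y) of each 4-connected blob of bright (marker) pixels."""
    h, w = len(image), len(image[0])
    binary = [[image[y][x] >= threshold for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # flood-fill one marker blob, collecting its pixels
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    py, px = stack.pop()
                    pixels.append((py, px))
                    for ny, nx in ((py-1, px), (py+1, px), (py, px-1), (py, px+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cx = sum(p[1] for p in pixels) / len(pixels)
                cy = sum(p[0] for p in pixels) / len(pixels)
                centroids.append((cx, cy))  # screen coordinates (column, row)
    return centroids
```

With two markers in the frame, the user's planar position then follows by subtracting the reference-marker centroid from the head-marker centroid and applying the camera calibration.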
Figure 4.2.1-1: Information reduction for image analysis

4.3 System Software

The software used to control the vision-based safety system is a Windows™-compatible program that coordinates the robot motion and user tracking and performs the necessary comparisons to determine if there has been a safety violation (i.e., if the robot has entered the safety zone surrounding the user).

A Windows™ program was used to comply with the project specifications. In addition to making the safety system accessible to the users, there were several other reasons for making it run within Microsoft Windows™:
• It was felt that the amount of graphics associated with the image processing could be handled more easily in a Windows™ environment than in a DOS environment.
• The video card used for image processing within the prototype system (a Sharp GPB-1) is for an IBM-compatible computer, and there was some local expertise in programming for this particular video card under Windows.
• Windows™ is a popular operating system for computers.

It was originally intended that the Shadow™ software mentioned in Chapter 2 would provide the basic structure of the required software code, and all that would be required would be a subroutine to do the real-time processing for the vision-based safety system. This seemed like an excellent choice because the existing program already interfaced with the available video card, so any subroutine attached to the program would not have to go through the initialization required for the card (i.e., there would be less programming). The existing program also had the advantage of a pre-built Windows environment, again meaning less programming. A flowchart of the proposed enhancement to the Shadow™ software is shown below in Figure 4.3-1.
Figure 4.3-1: Programming Flow Chart for the Modified Software

Initial examination of the Shadow™ software showed that it was written in "C" and compiled in Quick C (not an object-oriented compiler). The program looked as if it would easily adapt to having a real-time tracking subroutine added to the main program. Figure 4.3-2 shows the flowchart that was developed to outline what the software had to accomplish.

Figure 4.3-2: Flow Chart of Tracking Subroutine
(The real-time portion of the program must operate at the specified frequency; it initializes the safety system, reads the video source and robot controller, compares positions against the safety envelope, and handles lost or extra markers and data not received from the controller.)

A converter software routine was written to change the serial-port output of the robot controller into a usable format. The output is in 16-bit two's-complement hexadecimal format, and this had to be converted to decimal values representing the readings of the potentiometers at all six of the robot's joints. The decimal potentiometer values were then converted to actual positions of the various joints. Using the kinematic equations developed in Section 4.1.1, the positional values were transformed into three-dimensional positions of the gripper, the wrist, and the top and bottom of the motor end unit in real space.
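The conversion the routine performs, from a 16-bit two's-complement hexadecimal string to a signed decimal value, amounts to the following. The function name is illustrative, not the name used in the thesis code.

```python
def pot_value(hex_str, bits=16):
    """Decode a two's-complement hexadecimal string (as read from the
    controller's serial output) into a signed decimal value."""
    value = int(hex_str, 16)
    if value >= 1 << (bits - 1):   # sign bit set, so the reading is negative
        value -= 1 << bits
    return value
```

The signed decimal value is then mapped through the joint calibration curve (Section 4.4) to obtain the actual joint position used in the kinematic equations.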
A Windows™ software interface for the program was also developed. This interface allows the operator to control the safety subroutine and provides visual information regarding the status of the safety system. The operator interface was carefully developed to ensure that it was easy to use and that it clearly indicated if the robot had breached the safety envelope. An easy-to-use, understandable interface is very important for eliminating as many human errors as possible [35]. The interface software also indicates other problems that might occur in the system, such as lost markers, extra markers, and communication problems between the computer and the robot controller.

The next step was to program the computer to sample the robot controller information from the serial output port and bring these character strings into the program for conversion and manipulation. It was at this point that a problem with the concept of modifying the Shadow™ software showed up. Windows™ takes control of the serial ports, and port access (i.e., reading and writing) has to be run through the Windows™ environment. Somewhere in either the Shadow™ software or the old compiler software, something was preventing the serial port from sequencing correctly with the incoming data stream, making the information from the serial port unusable. This particular problem could not be solved, and it was decided that an independent piece of software had to be written with an updated compiler to overcome the problem.

The new program, named "Sentinel", was written using an object-oriented "C++" compiler (Visual C++™). Since the previous programming was done in "C", most of the software previously written could be reused with the "C++" compiler. A new Windows™ interface was then developed using the new compiler.
Using an object-oriented compiler allowed the use of a precompiled communication subroutine (a Visual Basic extension, or VBX) that could properly sequence with the serial-port data stream from the robot controller.

The video portion of the software was written using the libraries supplied by Sharp™ with their video card. This programming was fairly simple to complete, since the programming libraries were set up to do what was required. The video image was captured into computer memory, where it was converted to a binary image leaving only the two markers visible. The centroids of these markers were calculated and the screen coordinates of the two markers were returned. These values were converted to real-space coordinates, and the position of the head was determined by subtracting the value of the reference marker from that of the head marker. The Sentinel software compares the head and robot positions to determine if the robot is in the user's safety zone; if so, it alerts the user via the Windows™ computer display interface and sends a stop signal to the robot controller. (A complete copy of the computer code can be found in Appendix A.)

To communicate with the robot controller, a warning signal is sent using a simple open/closed signal that is opened when the robot enters the safety zone and closed when the robot is outside it. Communication is achieved by a simple relay circuit activated by the computer software: pin 2 of the printer port drives an NPN transistor (2N3904) through a 2.7K resistor, which switches a 12 V SPDT relay wired to the robot controller. A schematic of the relay circuit is shown in Figure 4.3-3.

Figure 4.3-3: Computer / Robot Controller Interface

4.4 System Calibration

To relate the potentiometer decimal values received from the controller to the actual "real-world" values that these decimal values represent, it was necessary to calibrate the system.
The calibration curves shown in Figure 4.4-1 were determined by measuring actual values for a variety of robot positions (one joint at a time) and plotting them against the values returned by the robot controller. These calibration curves are used by the Sentinel software to convert the decimal values output by the robot controller into actual joint positions that can be used in the robot kinematic equations. A similar procedure, comparing the "real-world" locations of markers to the pixel values recorded by the vision system, was used to calibrate the camera so that it provides actual positions of the user and reference markers. These plots are shown in Figure 4.4-2.

Figure 4.4-1: Robot Calibration Curves (joint position versus potentiometer decimal value)
Figure 4.4-2: Camera Calibration Curves (slope of lines: Camera Z, f(x) = 4.00E-3*x; Camera Y, f(x) = 4.28E-3*x, versus pixel values)

4.5 System Improvements

The Sentinel software was put through a variety of tasks to see if it performed as expected. The results of these tests and the overall performance of the system are examined in Chapter 5. Even though the system performed as expected, the initial sampling rate of the system was lower than hoped, so Sentinel was examined for potential improvements to increase the sampling rate. As determined when defining the project specifications in Chapter 3, the robot must not move more than 100 mm into the safety zone before it is shut off; to achieve this, a sampling rate greater than 2 Hz is required. This 100 mm includes the system response time, robot momentum, and the sampling difference (i.e., the distance the robot travels between sample points). To test the sampling difference, the robot was programmed with a task that involved moving all of the robot's joints into a variety of positions.
This task was replayed and the four points of concern were recorded for the complete cycle. The sampling difference of each of these points was calculated, and Figure 4.5-1 shows a plot of the sampling difference versus the percent of the cycle for the robot gripper endpoint movement. Version 1 of the vision-based safety system (blue line) shows that at about 98% of the cycle the gripper had a sampling difference of about 117 mm. This exceeded the project specification of 100 mm, so the system was examined to try to reduce the sampling difference to comply with the specifications. The Sentinel software was optimized for speed by writing more efficient code and having the compiler optimize the software for speed. The task was replayed and the sampling differences were plotted for Version 2 of the system (green line); a maximum sampling difference of 105 mm was recorded (a 10.2% reduction). The system was examined again in an effort to further reduce the 105 mm sampling difference. Without rewriting the Windows™ serial drivers or hiring a professional Windows™ programmer, the Sentinel software could not be optimized further. The only remaining way to reduce the sampling difference was to adjust the robot's maximum velocity. The difference was reduced to 70 mm by reducing the maximum robot speed in joints 1 and 2 by approximately 23 percent; the overall performance of the robot is only marginally affected by this reduction. Version 3 of the system has a 41.6% reduction in the sampling difference and meets the project specification of less than 100 mm of movement into the safety zone. This version of the vision-based safety system was used for the remaining tests performed on the system.

Figure 4.5-1: Gripper Endpoint Sampling Difference (sampling difference versus percent of cycle for Versions 1, 2, and 3)

Table 4.5-1 is a summary of the three versions of the vision-based safety system.
Version   | Sampling Frequency (Hz) | Maximum Difference (mm) | Percent Improvement (%)
Version 1 | 2.0                     | 117                     | -
Version 2 | 3.5                     | 105                     | 10.2
Version 3 | 3.5                     | 70                      | 41.6

Table 4.5-1: System Improvement Summary

Chapter 5: Testing and Results

Testing of the vision-based safety system was important to determine whether the system developed meets the required project specifications. The system was designed to meet all of the system specifications, but how well it protects the user, user acceptance, and flexibility still had to be determined. Testing of the system was divided into two parts. Initial laboratory testing, described in Section 5.1, was done to verify the performance of the vision-based safety system; the safety system was tested to ensure that it provided the required protection before it was tested with a user. Once the performance of the safety system was proven, potential users of the robotic workstation tested and evaluated the vision-based safety system. This testing is described in Section 5.2.

The system development, initial testing, and system improvements mentioned in Section 4.5 were performed on a robot in a laboratory setting at the university. The testing described in the remainder of this chapter was performed at the Neil Squire Foundation research facility. The safety system was moved to the new location to provide more convenient testing with actual users of the system. The robot and camera were recalibrated at the new site; the new joint and camera calibrations can be seen in Appendix C. The only noticeable difference between the two setups is that the sampling rate is lower at the new location (3.0 Hz versus 3.5 Hz) due to a different controller configuration, but this is offset by slower robot motors, so the new setup still operated within the specified limits.

5.1 System Verification

Verification of the safety system was assured via three tests.
The first test determined the accuracy of the kinematics used to establish the robot's position and of the camera used to track the user. The second test determined the sampling difference of the system (which must be < 100 mm to meet project specifications) and also checked that the system was detecting safety zone violations. The final verification test exercised the safety system by repeatedly moving the robot towards the user (in this case a mannequin) from all directions and having the safety system shut off the robot. The results of these tests are reported in Sections 5.1.1 to 5.1.3, which follow.

5.1.1 System Tracking Accuracy

The accuracy of the safety system's calculation of the position of the four points of concern on the robot (Figure 3.1-5) was determined by moving the robot to seven different positions. These positions were achieved by moving each of the five robot joints (Rotation, Left/Right, In/Out, Yaw, and Pitch) to a different location. The positions of the four points of concern were then measured with respect to the frame of reference and compared to the values calculated by the safety system. The results of this test can be seen in Table 5.1.1-1. It should be noted that system accuracy was a concern at the centimetre level, not the millimetre level; the distances were therefore measured and reported to the nearest half centimetre.
Gripper (mm)
Position | Measured | Calculated | % diff
1        | 1450     | 1450       | 0.000
2        | 1105     | 1101       | 0.362
3        | 915      | 912        | 0.327
4        | 735      | 727        | 1.088
5        | 780      | 770        | 1.282
6        | 850      | 840        | 1.176
7        | 1120     | 1120       | 0.000

Motor Bottom (mm)
Position | Measured | Calculated | % diff
1        | 1490     | 1480       | 0.671
2        | 1065     | 1068       | 0.282
3        | 945      | 947        | 0.212
4        | 780      | 780        | 0.000
5        | 725      | 726        | 0.138
6        | 885      | 901        | 1.808
7        | 1068     | 1073       | 0.468

Wrist (mm)
Position | Measured | Calculated | % diff
1        | 1390     | 1390       | 0.000
2        | 1010     | 1012       | 0.198
3        | 840      | 845        | 0.595
4        | 635      | 641        | 0.945
5        | 695      | 694        | 0.144
6        | 870      | 875        | 0.575
7        | 1070     | 1074       | 0.374

Motor Top (mm)
Position | Measured | Calculated | % diff
1        | 1490     | 1490       | 0.000
2        | 1070     | 1071       | 0.094
3        | 950      | 952        | 0.211
4        | 790      | 785        | 0.632
5        | 725      | 731        | 0.828
6        | 885      | 891        | 0.678
7        | 1079     | 1083       | 0.371

Mean: μ = 4.3 mm; Standard Deviation: σ = 3.9 mm (difference between measured and calculated position)

Table 5.1.1-1: Accuracy of the Calculated Position of the Robot

The maximum percent difference between the actual and calculated values, considering all four points of concern, was 1.808 percent (16 mm). With a mean of 4.3 mm and a standard deviation of 3.9 mm, 16 mm is the maximum error between the calculated and measured positions that can be expected with 99.9% confidence (i.e., μ + 3σ = 16 mm). A similar test was performed with the camera and markers to verify that the user's position was correctly pinpointed. These results can be seen in Table 5.1.1-2. The maximum percent difference was 1.538 percent (5 mm). Using a 99.9% confidence interval, the maximum expected error is 11.2 mm.

Position | Y Measured (m) | Y Calculated (m) | % diff | Z Measured (m) | Z Calculated (m) | % diff
1        | 0.640          | 0.632            | 1.250  | 0.750          | 0.748            | 0.266
2        | 0.480          | 0.484            | 0.833  | 0.565          | 0.561            | 0.707
3        | 0.620          | 0.621            | 0.161  | 0.325          | 0.320            | 1.538

Mean: μ = 4.0 mm; Standard Deviation: σ = 2.4 mm (difference between measured and calculated position)

Table 5.1.1-2: Accuracy of the Camera

In a worst-case scenario, the calculated positions of the user and robot will have a maximum combined error of 27.2 mm (i.e., 16 mm + 11.2 mm).
This value is relatively small: with an estimated safety zone of 350 mm, a specified maximum movement of 100 mm, and a potential calculated-position error of 27.2 mm, the robot is still 222.8 mm (i.e., 350 mm - 100 mm - 27.2 mm) away from the marker indicating the user's position. The error in determining the user's and robot's positions only becomes a concern when a user selects a safety zone small enough that the 27.2 mm possible measurement error plus the distance the robot moves after entering the safety zone (< 100 mm) would cause the robot to contact the user. During the user trials the selected safety zones were adjusted to ensure that this could never occur.

5.1.2 Sampling Difference and Safety Zone Violations

This test moved the robot through a series of programmed positions that involved moving all of the robot's joints. The safety system monitored the situation but was prevented from actually stopping the robot when it detected a safety zone violation. The path of the robot gripper and the location of the user (for this test a mannequin) can be seen in Figure 5.1.2-1, and the sampling difference shown in Figure 5.1.2-2 is calculated from this prescribed motion. The motion of all of the critical points was monitored and all safety zone violations were recorded.

The purpose of this test was to determine the sampling rate of the safety system and to ensure that the robot travels less than 100 mm between sampling points. Not stopping the robot allowed the safety system to be checked by determining whether, every time the robot entered the safety zone, the system would have sent a signal to stop the robot. Figure 5.1.2-2 is a plot of the sampling difference versus the percent of the cycle for the four points of concern on the robot. The sampling frequency of the system was recorded as 3.01 Hz. The maximum movement of the robot between sampling points was 63.2 mm, well below the specified maximum of 100 mm.
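The sampling difference, the distance a tracked point travels between successive samples, can be computed directly from a recorded trajectory. The sketch below uses an illustrative three-sample path; the real data would be the recorded positions of a point of concern over a full cycle.

```python
import math

def sampling_differences(samples):
    """Distance the tracked point moves between consecutive samples;
    each sample is an (x, y, z) position in metres."""
    return [math.dist(a, b) for a, b in zip(samples, samples[1:])]

# Illustrative trajectory (metres); a real trace would hold one position
# per sampling interval for the whole programmed task.
path = [(0.0, 0.0, 0.0), (0.03, 0.04, 0.0), (0.03, 0.04, 0.10)]
max_step = max(sampling_differences(path))
```

The maximum of this list, taken over every point of concern, is the figure compared against the 100 mm specification.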
This value does not include the robot's momentum and system response time; these two issues are addressed in Section 5.1.3.

Figure 5.1.2-1: Path Plot of Gripper Motion
Figure 5.1.2-2: Sampling Differences (sampling difference versus percent of cycle for the gripper, motor bottom, wrist, and motor top)

Figures 5.1.2-3 and 5.1.2-4 were generated to check the accuracy of the safety system's ability to detect penetration of the specified safety zone. Figure 5.1.2-3 shows three different views (top, front, and side) of the motion plotted in Figure 5.1.2-1, with colours used to indicate the sequence of movement; the user's position and the safety zone are also shown on the graph, and all positions are in metres. To trigger a safety violation the robot has to be inside the safety zone in all three views at the same time. Examining Figure 5.1.2-3 indicates that the gripper violated the safety zone at two different times (when the dark blue and red lines are in all three safety zones). Figure 5.1.2-4 plots the system's response: the distance between the gripper and the user is plotted against time, and the red line on the graph indicates when the safety system would have warned the robot controller of a safety violation. As predicted, the safety system was triggered on two separate occasions.

Figure 5.1.2-3: Time and Distance Plot with Safety Zone
Figure 5.1.2-4: Closing Distance and Safety Violations

5.1.3 Confirming the Safety System

Knowing that the system accurately determines the position of the robot and the user, and that it correctly triggers when a safety violation occurs, leads to a final verification test. The safety system was reconnected to the robot controller, allowing the system to automatically shut the robot off when a safety zone violation was detected. The robot was then moved towards the mannequin until it was shut off by the safety system. The first four robot actions each involved a single joint moving at its maximum speed towards the user; the last action involved all of the robot's joints moving the gripper towards the user. The safety system provided two values for the closing distance between the user and the robot: the first is the distance when the system was initially triggered, and the second is the distance when the robot finally stopped. The testing was done to determine how much the robot's momentum and the system response time affected the stopping distance of the robot. The results of this test are shown in Table 5.1.3-1.

Action               | Trial | Initial Detection (mm) | Final Position (mm)
In/Out (Joint 3)     | 1     | 320                    | 310
                     | 2     | 330                    | 310
                     | 3     | 340                    | 320
                     | 4     | 330                    | 320
                     | 5     | 350                    | 330
Move Left (Joint 2)  | 1     | 310                    | 300
                     | 2     | 330                    | 320
                     | 3     | 320                    | 310
Move Right (Joint 2) | 1     | 320                    | 310
                     | 2     | 350                    | 330
                     | 3     | 350                    | 340
Move Down (Joint 1)  | 1     | 350                    | 330
                     | 2     | 340                    | 330
                     | 3     | 350                    | 330
All Joints           | 1     | 340                    | 320
                     | 2     | 330                    | 320
                     | 3     | 350                    | 340

Table 5.1.3-1: Safety System Results

The safety zone was set at 350 mm for the above test (i.e., 350 mm - 100 mm still prevents the robot from contacting the mannequin). The zone was set at this size knowing that the sampling rate might allow 63.2 mm of movement between sampling points. With this possible error the safety system should never initially trigger closer than 286.8 mm (i.e., 350 mm - 63.2 mm).
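The trigger condition and the worst-case first-detection distance can be sketched as follows. This sketch simplifies Sentinel's zone (which is checked view by view, as in Figure 5.1.2-3) to a closing-distance test; the function name is illustrative, while the 350 mm zone and 63.2 mm worst-case sampling step are the values from this test.

```python
import math

SAFETY_ZONE_MM = 350.0       # zone radius selected for this test
WORST_CASE_STEP_MM = 63.2    # measured maximum travel between samples

def zone_violation(point_mm, user_marker_mm, zone=SAFETY_ZONE_MM):
    """True when a point of concern lies inside the user's safety zone."""
    return math.dist(point_mm, user_marker_mm) < zone

# Worst case: the previous sample sat just outside the zone and the robot
# covered one full sampling step before the next check, so the first
# detection can occur no closer than zone - step from the user.
earliest_trigger_mm = SAFETY_ZONE_MM - WORST_CASE_STEP_MM
```

This reproduces the 286.8 mm bound quoted above, and is consistent with the measured closest initial detection of 310 mm in Table 5.1.3-1.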
Table 5.1.3-1 shows that the closest the robot ever got before the system detected it was 310 mm. The difference between the point where the safety system initially detected the robot entering the safety zone and the final position of the robot shows that the robot's momentum and the system response time caused a maximum of 20 mm of movement after the safety system sent the signal to the robot controller to turn the power off. Combining this result with the previous tests means that the robot can move a maximum of 83.2 mm (63.2 mm because of sampling error, 20 mm due to momentum) into the safety zone before it is stopped, which is below the project specification of 100 mm.

5.2 User Tests

Four users were brought in for testing with the vision-based safety system. The users had all previously used the robot, and all but one had tested the light curtain safety system. Due to privacy considerations, the users are referred to as Subjects A, B, C, and D; a description of each test subject can be found in Section 5.2.1. The users operated the robot while being monitored by the vision-based safety system. When the users were brought in for testing, the vision-based safety system was described to them in detail. Each user was asked to bring the robot as close to themselves as they felt comfortable, and the distance between the user and the robot was measured to define the safety zone. The software was modified based on the user's specification and tested to ensure that it stopped at the specified distance. The users were then instructed to try to "beat the system": they attempted to move the robot towards themselves and thwart the vision-based safety system. The users were never in any danger, since another person with the ability to turn the robot off was always monitoring the situation. The tests were videotaped for post-analysis in the event a problem with the system was detected.
After testing was completed, the users filled out a questionnaire. A copy of the questionnaire, along with the users' answers, can be found in Appendix D; the responses are summarized in Section 5.2.2.

5.2.1 Subject Descriptions

The following is a brief description of the test subjects.

Subject A
• A high-level quadriplegic with very little upper-body movement.
• Full range of head motion (not always in total control).
• Female.

Subject B
• A high-level quadriplegic with some limited upper-body movement.
• Full range of head movement and very limited arm motion.
• Prone to forward spasms, but can detect them early enough to back out of the way.
• Male.

Subject C
• A high-level quadriplegic with no upper-body or head motion.
• Extremely limited hand movement.
• Uses a ventilator for breathing.
• Does not have any spasms.
• Male.

Subject D
• A high-level quadriplegic with no upper-body or hand movement.
• Very little neck movement.
• Occasional neck spasms causing the head to move backwards.
• Male.

5.2.2 User Trial Results

Table 5.2.2 summarizes the yes/no questions completed by the test subjects. Questions that required more detailed explanation are summarized below the table. Some of the questions in the survey (numbers 11, 14, 15, 19, and 20) do not relate to the vision safety system or its comparison, so they are not included in this summary.

Question | Yes | No | Comments
Did you generally feel safe in the vicinity of the robot? | 4 | 0 | There was an attendant present in case the safety system failed (A & C). Comfortable with the technology (B & D).
Do you think other disabled persons would feel safe using this system as is? | 4 | 0 |
Did the robot always stop safely? | 3 | 1 | A cable came loose during Subject A's trial and the signal to stop the robot never reached the controller; the system did detect the safety violation.
Did you have confidence in the safety system? | 4 | 0 | Subject A was confident because an attendant was there; this subject had less time than the other users to gain confidence in the system (1 hour instead of 2).
With this current vision system payloads are not detectable; does this make the system too unsafe to use? | 0 | 4 | Objects in the workstation are going to be limited, so the subjects did not see this as a concern.
Does it bother you to have a camera (non-recording) monitoring your actions? | 0 | 4 |

Table 5.2.2: Summary of User Responses in Questionnaire

Except for the one incident with the loose cable, the robot stopped safely every time. The system even exceeded the project specification: not only did it prevent tissue damage, it also prevented unintentional contact. (The users were unable to let the robot make intentional contact due to limitations of the robot controller.) All the users were pleased with the system and had nothing to suggest to improve the vision-based safety system.

Questions 5, 6, 7, and 8 from the questionnaire dealt with the issue of markers for tracking the user's movement. The consensus was that, if it is required for the system to operate, the users would wear the markers in order to use the robot. When asked if a smaller marker would be acceptable, all the users agreed that anything giving the marker more aesthetic appeal would be an improvement. The question of how they would get the marker onto their head when they arrived at the robotic workstation was raised as a potential concern. When asked if a wheelchair-mounted marker would be acceptable, the answers varied by subject. Subjects C and D felt that this would be an acceptable alternative: both have limited head movement, so a displaced marker would provide the same level of protection as one on their head. In addition to limited head motion, Subject C has some arm motion and is able to back his wheelchair out of the way in the event of a spasm to prevent injury to himself.
Subject A felt that the best spot for the marker was on her head, because her large range of head movement takes her head far from any standard location. When asked if the marker could be placed on their mouth-controlled mouse emulator, only Subject D felt that it would be acceptable, because his mouse emulator is always in the same position relative to his head.

In the questions regarding the users' preference for the light curtain or the vision system (Questions 9 and 12), the test subjects unanimously chose the vision system (Subject C did not comment on this because he did not participate in the light curtain testing). The reasons were that the vision system had more flexibility than the light curtain, since the vision-based safety zone traveled with the user and was not fixed in one location as the light curtain was, and that its safety zone could be adjusted to each user's individual preferences. They also felt that the vision system was less likely to have false triggers, since the light curtain, as is, was prone to go off for reasons other than the robot or user penetrating it. (The table shaking or being moved would put the light curtain receiver out of alignment and set off the safety system.) The users also felt that the Neil Squire Foundation should focus their efforts on the further development of the vision system. Subject A suggested that the light curtain would be a good backup if the vision-based system failed, and would therefore like both systems installed with the robot.

In summary, the user trials were extremely useful in determining the success of the vision-based safety system and in identifying the areas for immediate improvement. The testing provided a clear indication that the vision system will be an excellent safety system, preferred over the light curtain.
It should be kept in mind that the testing environment is complicated: the test subjects were not using the safety system in isolation; they were dealing with the robot and its controlling software, and interacting with the computer using a mouse emulator. The test subjects are also affected by a variety of conditions (e.g. mood swings, personal preferences, medical complications) that are difficult to evaluate with regard to how they felt about the system.

Chapter 6

CONCLUSIONS AND RECOMMENDATIONS

The overall goal of this research project was to investigate and develop a vision-based safety system that would, either solely or in combination with another system, provide an acceptable level of safety for a user with severe physical disabilities operating the rehabilitative robotic workstation. To meet this goal, prior to the development of the vision-based safety system, a safety analysis was performed on the robotic workstation and user to determine the performance requirements of a vision safety system. A system was developed that uses a single camera to track the user in a horizontal plane and utilizes feedback from the robot controller to calculate the position of the robot using kinematic equations. The system is controlled by a computer that can communicate with the robot controller and stop the robot if it detects a safety zone violation.

When the vision-based safety system was completed and tested, it exceeded even the project specifications: the system not only prevented tissue damage and permanent injuries, but also prevented unintentional contact between the robot and the user.

6.1 Meeting the Project Specifications

A series of specifications was developed in the early stages of the project to outline what the vision-based safety system had to do in order to be an acceptable safety system.
All of these specifications were met.

Prevention of Tissue Damage

The system was to prevent contact between the robot and the user that would result in a life-threatening situation, cause permanent injury, or cause tissue damage. During the testing of the system on both a mannequin and four different potential users, the system not only provided the specified level of protection but also prevented all undesired contact between the user and the robot. Objects in the gripper were not used during the testing, and it is possible to have tissue damage as a result of a hard object in the gripper, although the users of the robot did not feel this would be a big problem.

User Acceptance

For the safety system to be effective, the users of the robotic workstation must be willing to use it. Simple solutions which adversely affect aesthetics, such as placing the user in body armor, are not acceptable to the users. All the users were willing to use the vision-based safety system. Some concerns were raised about the marker used to identify the user, but most of these concerns are addressable. The system is also flexible enough to account for a variety of different users' needs. The marker placement can vary depending on the user's particular needs (e.g. users with little head movement could have the marker located on their wheelchair), and the safety zone size is adjustable for each user, depending on their individual preference.

Another feature that makes the system attractive is that the vision safety system is unobtrusive. Except for a camera mounted above the user, the system is essentially invisible, making it more suitable for an office environment. Any piece of equipment that does not draw attention to a user's disability is appreciated by the users. A system that requires special shielding or protective equipment would emphasize the user's disabilities instead of focusing attention on the user's abilities.
Low Cost

Another requirement for the development of the vision-based safety system was that it be as low-cost as possible while still providing the required level of protection. A target price of less than $6,000 was set in the project specifications. Appendix E shows a breakdown of the price for a system that can be purchased for about $4,000. (This equipment was not purchased and tested, but the product specifications indicated that it would work.) This price is less than the maximum specified price.

6.2 Comparison of the Light Curtain and Vision System

Another important part of the evaluation was to compare the vision-based safety system to the light curtain safety system. The two technologies are compared in two different ways:
• user acceptance and preference.
• cost and performance.

Prior to testing the vision-based safety system, test subjects A, B, and D had all used the robot with the light curtain. After the users tested the robot with the vision system, they were asked to compare the two technologies. The general consensus was that the users felt safe using both systems, but felt that the vision-based safety system was more flexible and should be the safety system that the Neil Squire Foundation focuses their effort on in the future.

The cost of a light-curtain-based safety system is about $2,500, compared to about $4,000 for the vision-based safety system. This $1,500 difference represents about a 3.5 percent increase in the entire cost of the workstation and must be weighed against the performance of the safety systems before a decision can be made on which should be implemented in the final design. An important consideration when comparing the two systems is to look at other potential benefits beyond what the prototypes have provided.
The light curtain is limited to just what it does now: creating an invisible wall between the user and the robot that prevents an object (either the robot or the user) from crossing the barrier, and alerting the operator to a potential safety concern. The vision-based safety system is already more advanced than the light curtain in that it creates an adjustable moving box around the user, allowing the robot to operate to the left and right of, and above, the user's head while providing the same level of protection from injury.

It is realized that the vision system does not currently detect objects in the gripper, while the light curtain can detect most objects. Thin or transparent objects, however, can pass through the light curtain without detection and cause injury to the user, and at present the light curtain cannot be modified to adjust for these objects. The vision system, on the other hand, although not currently capable of doing so, could have a feature that either increases the safety zone around the user or adjusts the kinematic equations to account for objects in the gripper, depending on what the user has picked up. It should be noted that the users tested did not think that objects in the gripper were a concern for their safety. The vision-based safety system could also be programmed to slow the robot down at a certain distance from the user and completely stop the robot at another preprogrammed distance. The safety system also has the potential to detect unusually rapid head movements of the user as an indication of a spasm and stop the robot until the user has regained control. The increased flexibility, the potential for future advancement, and the users' acceptance of and preference for the vision system over the light curtain outweigh the roughly 3.5 percent cost increase that the vision-based safety system represents.
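The "adjustable moving box" behaviour described above, including a payload margin of the kind proposed for objects in the gripper, can be sketched as follows. All names, dimensions, and the payload-margin parameter are illustrative assumptions, not the thesis implementation:

```python
from dataclasses import dataclass

@dataclass
class SafetyBox:
    """Axis-aligned zone that travels with the user's tracked marker.

    The marker gives the user's (x, y) position in the horizontal plane;
    the top of the zone comes from the measured user height.  Units: mm.
    """
    half_x: float   # user-adjustable horizontal extents
    half_y: float
    top_z: float    # the robot is allowed to pass above this height

    def violated(self, marker_xy, robot_xyz, payload_margin=0.0):
        # payload_margin could grow the zone when an object is in the gripper.
        mx, my = marker_xy
        rx, ry, rz = robot_xyz
        return (abs(rx - mx) <= self.half_x + payload_margin and
                abs(ry - my) <= self.half_y + payload_margin and
                rz <= self.top_z + payload_margin)

zone = SafetyBox(half_x=250.0, half_y=250.0, top_z=1400.0)
print(zone.violated((0.0, 0.0), (100.0, 0.0, 1200.0)))  # beside the user -> True
print(zone.violated((0.0, 0.0), (100.0, 0.0, 1600.0)))  # above the user -> False
```

Because the zone is anchored to the marker position rather than to the workstation, it moves with the user, which is the flexibility the test subjects preferred over the fixed light curtain.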
The recommendation of this thesis is that any future work on the development of a safety system for the robotic workstation should include the vision system as a potential safety device to protect the users from injury.

6.3 Recommendations for Future Work

A quote by Cardinal Newman sums up a problem with many projects that are in the process of being researched and developed:

    "Nothing would be done at all if a man waited till he could do it so well that no one could find fault with it." - Cardinal Newman

This project had to follow the Cardinal's advice: for the purpose of the initial investigation into the feasibility of a vision-based safety system, many options for the system had to be set aside so that the idea could be examined in a timely manner. Since the testing and evaluation of the vision-based safety system gave a clear indication that it is a viable option to protect the user from injury, the following are some recommendations for future work on the system.

Develop the system so that it not only calculates the positions of the user and robot, but also determines their respective velocities. The velocities can be used to detect whether the user is having a spasm, which causes a sudden (high-velocity) movement, and to turn the robot off until the user has recovered. The closing velocity between the robot and user could also be used to help identify an unsafe situation. The velocities can also be used in path projection, so that the future location of the robot can be estimated and used to help avoid collisions. To monitor the robot joint velocities, the controller could be modified to output joint velocities as well as positions. To implement the velocity calculations and increase the accuracy of the current system, the software should be rewritten and optimized for speed, thus increasing the sampling rate.
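The velocity estimate recommended above can be sketched with a simple finite difference over consecutive marker samples. The 400 mm/s spasm threshold and the function names are illustrative assumptions, not values from the thesis:

```python
import math

def marker_speed(p_prev, p_curr, dt):
    """Finite-difference speed estimate (mm/s) from two marker samples."""
    return math.dist(p_prev, p_curr) / dt

SPASM_SPEED_MM_S = 400.0  # illustrative threshold for a sudden head movement

def spasm_detected(p_prev, p_curr, dt):
    # A sudden, fast marker movement is treated as a possible spasm;
    # the safety system would then halt the robot until the user recovers.
    return marker_speed(p_prev, p_curr, dt) > SPASM_SPEED_MM_S

print(spasm_detected((0.0, 0.0), (5.0, 0.0), 0.1))    # 50 mm/s  -> False
print(spasm_detected((0.0, 0.0), (60.0, 0.0), 0.1))   # 600 mm/s -> True
```

A higher sampling rate directly improves this estimate, which is one reason the software optimization recommended above matters.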
This optimization step will involve rewriting some of the Windows™ drivers to optimize serial port information transfer, as well as replacing the communication VBX component of the code with a dedicated communication package aimed specifically at interfacing between the safety system and the robot controller.

The robot controller should also be modified to allow for more advanced communication between the safety system and the robot controller. Currently the safety system only reads joint-position information from the controller and trips a switch in the controller to indicate a safety zone violation. If the controller were able to indicate what object the gripper was carrying, the kinematic equations could be updated to protect the user no matter what object was in the gripper. A two-stage safety zone could also be implemented, in which the safety system initially slows the robot down and finally stops it at programmable distances from the user.

APPENDIX A - ROBOT TECHNICAL SPECIFICATIONS

Payload Capacity: 5.0 lb.
mass, with the center of mass at or less than 9.0 in. from the pitch/roll axis.
Maximum Speed: 15 cm/sec (at the center of mass of the payload).
Accuracy: location and orientation within 0.2 in. and 1.0 degrees of desired values.
Repeatability: location and orientation within 0.1 in. and 0.3 degrees.
Component Life: minimum of 5 years of use.
Control: user dependent - voice, mouth joystick.
Other:
• All parts designed for interchangeability.
• Mechanical drawings to conform to CAN/CSA-B78.2-M91.
• Will maintain position under full load if power is removed from the drive system.
• This device is for indoor use only.
• Linear drive tracks are to be designed for final length fitting in the field.
• Easily transportable.
• Fixed installation while in use.
Gripper Force: 40 lb.
Electrical Safety: CSA guidelines.
Cost: complete for under $40,000, including the controller, training, software, and robot.

Table A: Robot Technical Specifications

APPENDIX B - EQUIPMENT USED IN SAFETY SYSTEM

The following equipment was used in the prototype vision-based safety system:
• Panasonic SVHS OmniMovie HQ video camera - post-performance analysis
• Computer (Intel 486, 66 MHz) - safety system control
• Sharp GPB-1 video card - frame grabber and image processing
• Pulnix TM-545i CCD camera - image acquisition
• Sony SVO-9500MD SVHS VCR - image recording
• Elenko precision power supply - infrared floodlight power supply

APPENDIX C - PARAMETER CURVES FOR NEIL SQUIRE ROBOT

[Figure C-1: Robot Calibration Curves - linear fits of actual joint values (cm, deg) against potentiometer decimal values:
Rotation: f(x) = -3.41E-1*x + 8.93E+1 (deg)
Left/Right: f(x) = -2.94E-1*x + 8.48E+1 (cm)
In/Out: f(x) = -7.27E-2*x + 3.37E+1 (cm)
Yaw: f(x) = -3.42E-1*x + 9.16E+1 (deg)
Pitch: f(x) = -3.24E-1*x + 9.15E+1 (deg)]

[Figure C-2: Camera Calibration Curves - slopes of the linear fits against pixel values:
Camera Z: f(x) = 3.62E-3*x
Camera Y: f(x) = 3.35E-3*x]

APPENDIX D - USER QUESTIONNAIRE

USER:            DATE:            (yy/mm/dd)

1. Did you generally feel safe in the vicinity of the robot? (Yes or No) Specify incidences in which you felt unsafe.
2. Do you think other disabled persons would feel safe using this system as is? (Yes or No) Why?
3. Did the robot always stop safely? (Yes or No)
4. Did you have confidence in the safety system? (Yes or No)
5. Is it acceptable to have this marker on your head? (Yes or No) Why?
6. Would a smaller marker be acceptable? (Yes or No) If yes, how small?
7. Would a wheelchair-mounted marker be an acceptable alternative? (Yes or No)
8. Would a Jouse™-mounted marker be an acceptable alternative? (Yes or No)
9. Do you like the vision system better than the light curtain? (Yes or No) Why?
10. With this current vision system payloads are not detectable; does this make the system too unsafe to use? (Yes or No)
11. Given that the light curtain can detect payloads other than thin or transparent objects, do you think that this is an adequately safe safety system? (Yes or No)
12. If this robotic workstation was installed in your home, would you prefer it with a:
• light curtain
• vision system
• both systems
• neither system
Why?
13. Which of the two safety technologies should Neil Squire focus their efforts on for further improvement of the robotic workstation?
14. Can you think of any other ways to make this robotic workstation safer?
15. Is there anything else you need to know about the workstation to make it more comfortable to use?
16. Does it bother you to have a camera (non-recording) monitoring your actions? (Yes or No)
17. What improvements would you suggest for the vision safety system?
18. What did you like about the vision safety system?
19.
If you had the opportunity to get a robotic workstation like this for yourself, would you? (Yes or No) Why?
20. Are there particular concerns you would have about owning such a system?

USER RESPONSES

Subject A - 96/08/19

1. Did you generally feel safe in the vicinity of the robot?
   Yes, because there was an attendant. Felt uneasy with the robot when it was over my head.
2. Do you think other disabled persons would feel safe using this system as is?
   Yes.
3. Did the robot always stop safely?
   No. [Communication problem with robot controller - loose cable, fixed.]
4. Did you have confidence in the safety system?
   Yes, only because there was a safety attendant.
5. Is it acceptable to have this marker on your head?
   Yes.
6. Would a smaller marker be acceptable?
   Yes.
7. Would a wheelchair-mounted marker be an acceptable alternative?
   No. Felt it would be much safer with the marker on the head.
8. Would a Jouse™-mounted marker be an acceptable alternative?
   No. [See Question 7.]
9. Do you like the vision system better than the light curtain?
   Yes, because the safety zone is flexible and travels with the user.
10. With this current vision system payloads are not detectable; does this make the system too unsafe to use?
   No.
11. Given that the light curtain can detect payloads other than thin or transparent objects, do you think that this is an adequately safe safety system?
   Yes - but only if you sit in front of the light curtain.
12. If this robotic workstation was installed in your home, would you prefer it with a:
   Both systems - in the event that one of the systems fails you have a backup.
13. Which of the two safety technologies should Neil Squire focus their efforts on for further improvement of the robotic workstation?
   Vision.
14. Can you think of any other ways to make this robotic workstation safer?
   Biggest concerns were the robot hitting the user and failure of the safety system.
15.
Is there anything else you need to know about the workstation to make it more comfortable to use?
   Make the workstation higher.
16. Does it bother you to have a camera (non-recording) monitoring your actions?
   No.
17. What improvements would you suggest for the vision safety system?
   None.
18. What did you like about the vision safety system?
   Its flexibility.
19. If you had the opportunity to get a robotic workstation like this for yourself, would you?
   Yes - gives the user more independence and could help get the user a job.
20. Are there particular concerns you would have about owning such a system?
   How to fix the robot if something went wrong.

Observer's Comments
• The user being tested was unable to gain confidence in the vision-based safety system because of the short period of time available for testing.
• Definitely felt a safety system (or two) was required for the successful operation of the robot system.
• User height = 1.36 m.
• Safety zone = 35 cm (originally), 25 cm (after a little familiarity with the system).
• Test subject was a high-level quadriplegic with very little upper body movement and very good head motion.

Subject B - 96/08/20

1. Did you generally feel safe in the vicinity of the robot?
   Yes. Very familiar with the operation of the robot from previous visits.
2. Do you think other disabled persons would feel safe using this system as is?
   Yes. Felt that if he was comfortable with the system, others would be as well.
3. Did the robot always stop safely?
   Yes.
4. Did you have confidence in the safety system?
   Yes. Safety was not a large concern for the subject, since he had enough arm mobility to back out of the robot's way if the system ever failed.
5. Is it acceptable to have this marker on your head?
   Yes. While using the robot it was not a big deal to be wearing the marker; concern was raised about losing independence by having to have someone put on or take off the marker in order to use the robot.
   Would not want to have the marker on his head when he was not operating the robot. Willing to use the marker if it was required to use the robot.
6. Would a smaller marker be acceptable?
   Yes. A marker that was not so obvious would be better suited to his needs. Would not mind having a marker on his person if it would not look out of the ordinary. Commented that he did not like having a mark in the middle of his forehead when using the IR joystick in another project.
7. Would a wheelchair-mounted marker be an acceptable alternative?
   Yes. The loss of accurate tracking of his head was not a large concern, because his limited upper body movement could provide a means to move out of harm's way.
8. Would a Jouse™-mounted marker be an acceptable alternative?
   No. Preferred on the wheelchair - a Jouse™ marker provides no user tracking.
9. Do you like the vision system better than the light curtain?
   Yes. The vision system provided a more user-adjustable safety zone and shut off less often than the light curtain. Found that the light curtain shut off too often.
10. With this current vision system payloads are not detectable; does this make the system too unsafe to use?
   No. The user felt that most objects in the workspace would be controlled and that it was unlikely he would be carrying a knife in the gripper. Also, his mobility would allow for some object avoidance.
11. Given that the light curtain can detect payloads other than thin or transparent objects, do you think that this is an adequately safe safety system?
   Yes.
12. If this robotic workstation was installed in your home, would you prefer it with a:
   Both systems - during training with the system; neither system - once familiar with the workstation. Similar experience with his wheelchair - initially lots of safety systems, but now only a single kill switch. He figured that after becoming familiar with the system he would only require a kill switch for safety.
13.
Which of the two safety technologies should Neil Squire focus their efforts on for further improvement of the robotic workstation?
   Vision - liked it better than the light curtain.
14. Can you think of any other ways to make this robotic workstation safer?
   Improve the user interface that controls the robot.
15. Is there anything else you need to know about the workstation to make it more comfortable to use?
   A better user interface.
16. Does it bother you to have a camera (non-recording) monitoring your actions?
   No.
17. What improvements would you suggest for the vision safety system?
   See marker comments.
18. What did you like about the vision safety system?
   Did not shut off as often as the light curtain (see previous comments).
19. If you had the opportunity to get a robotic workstation like this for yourself, would you?
   Immediately - no, no use for it right now. When he has a job - yes, in a work situation it would be good.
20. Are there particular concerns you would have about owning such a system?
   Cost, upkeep, tune-ups, warranty issues.

Observer's Comments
• Test subject was a quadriplegic with some upper body movement and some (limited) arm movement.
• User height = 1.41 m.
• Safety zone = 32 cm (originally), then 26 and 21 cm (after a little familiarity with the system).

Subject C - 96/08/22

1. Did you generally feel safe in the vicinity of the robot?
   Yes. There was an attendant at the deadman switch.
2. Do you think other disabled persons would feel safe using this system as is?
   Yes, because he knew that the robot was going to stop before it hit you.
3. Did the robot always stop safely?
   Yes.
4. Did you have confidence in the safety system?
   Yes.
5. Is it acceptable to have this marker on your head?
Yes - required. It looks a little strange, and due to his inability to speak easily it would be difficult to tell other people why it is there. Felt the current marker was a little embarrassing - not very aesthetically pleasing. Would still wear it in order to safely operate the robot.

6. Would a smaller marker be acceptable? Yes
Anything that makes the marker less obvious - still would wear it even if it couldn't be made smaller

7. Would a wheelchair mounted marker be an acceptable alternative? Yes
- if the same level of safety could still be assured
- this would work for this user due to virtually no head movement, and lack of spasms

8. Would a Jouse™ mounted marker be an acceptable alternative?
Only if the Jouse™ would never be moved and the user was always in the same location - limiting

9. Do you like the vision system better than the light curtain? N/A

10. With this current vision system payloads are not detectable, does this make the system too unsafe to use? No/Yes
Depends on what is available for possible payloads. Should not be a problem for this workstation because of the limited objects in the workspace

11. Given that the light curtain can detect payloads other than thin or transparent objects, do you think that this is an adequately safe safety system? N/A

12. If this robotic workstation was installed in your home would you prefer it with a:
• vision system - an important feature to ensure safety - never tried the light curtain

13. Which of the two safety technologies should Neil Squire focus their efforts on for further improvement for the robotic workstation? N/A

14. Can you think of any other ways to make this robotic workstation safer?
- Flexible Jouse™
- Hard to get to the stop button - make it larger and more central

15. Is there anything else you need to know about the workstation to make it more comfortable to use?
No - familiar with the robot from previous visits

16. Does it bother you to have a camera (non-recording) monitoring your actions? No

17.
What improvements would you suggest for the vision safety system? None

18. What did you like about the vision safety system?
- never used the light curtain, but liked using the vision system because the robot needs a safety system

19. If you had the opportunity to get a robotic workstation like this for yourself, would you?
Yes - greater independence

20. Are there particular concerns you would have about owning such a system? Breakdowns

Observer's Comments
• Test subject was a quadriplegic with no upper body movement and very, very little hand movement - also had a ventilator
• User Height = 1.370 m
• Safety Zone = 35 cm
• Never had used the Windows™ interface before

Subject D 96/08/23

1. Did you generally feel safe in the vicinity of the robot? Yes
Comfortable with technology. With the implementation of the torque sensor, if the vision system failed the user would not get severely injured

2. Do you think other disabled persons would feel safe using this system as is? Yes
The robot was stopping correctly, and he felt that his confidence in the system would transfer to other users

3. Did the robot always stop safely? Yes

4. Did you have confidence in the safety system? Yes

5. Is it acceptable to have this marker on your head? Yes
Felt the prototype marker was not overly comfortable. Worried about how he would get the marker on by himself. Would wear it if required to operate the robot

6. Would a smaller marker be acceptable? Yes
Smaller and lighter would be better

7. Would a wheelchair mounted marker be an acceptable alternative? Yes
- Wouldn't have to wear something
- Felt that his small amount of head motion would be enough to overcome not being able to track his head motion - i.e. not prone to large forward head spasms

8. Would a Jouse™ mounted marker be an acceptable alternative? Yes
He is always in the same position relative to the Jouse™

9. Do you like the vision system better than the light curtain?
Yes
Vision had fewer problems - the light curtain had more false triggers than the vision system. Also the system will not go out of alignment if the table is moved - by either him running into the table or the cleaners. He felt that when working, both stopped the robot equally well. Vision was more flexible

10. With this current vision system payloads are not detectable, does this make the system too unsafe to use? No
Felt that users of the robot would be smart enough to realize that the objects in the gripper can be different sizes. Suggested an adjustable safety zone for when you are programming a task

11. Given that the light curtain can detect payloads other than thin or transparent objects, do you think that this is an adequately safe safety system? Yes
Felt safe with the system

12. If this robotic workstation was installed in your home would you prefer it with a:
• vision system - felt safe with this system
• light curtain - too many false triggers of the safety system
Would want at least one safety system when operating the robot. The stop button is a back-up for either system - this needs to be more central and larger

13. Which of the two safety technologies should Neil Squire focus their efforts on for further improvement for the robotic workstation?
Vision seems more feasible and flexible; in the long run it seems more likely to work with fewer failures (table movement). Vision would give greater independence because it would fail less

14. Can you think of any other ways to make this robotic workstation safer?
Better interface

15. Is there anything else you need to know about the workstation to make it more comfortable to use? No

16. Does it bother you to have a camera (non-recording) monitoring your actions? No

17. What improvements would you suggest for the vision safety system? None

18. What did you like about the vision safety system?
Fewer false triggers

19.
If you had the opportunity to get a robotic workstation like this for yourself, would you?
Yes - if he could afford it - not a lot of use for it now, but in future he could adapt and use it

20. Are there particular concerns you would have about owning such a system? Theft

Observer's Comments
• Test subject was a quadriplegic with no upper body movement or hand movement
• User Height = 1.510 m
• Safety Zone = 27 cm to 22 cm to 20 cm to 17 cm
• The closest the robot was measured was 70 mm from the front of the user's head.
• User had extensive experience with Windows™ and the Jouse™, which allowed for much more time to experiment with the robot and safety system

APPENDIX E - VISION SYSTEM COSTS

Below is a cost breakdown of a potential system that could be used in a final product.

Frame Grabbers:
• Coreco OC-Mx 1Mb                      $1,895  or a
• Coreco OC-TCi Ultra 1Mb Video         $1,795

Camera:
• Pulnix TM7CN 1/2" CCD Camera          $1,375

Power Supply                            $160

For cables and a camera lens allow for $300 to $500.

Approximate Total Cost                  $4,000

To program new software for use with the frame grabber, some developer software and programming libraries would probably be required. These costs would be one-time costs and not required for each system.

Dos Developers Kit                      $350
Cool Vision Library                     $1,895
Oculus TCi - VGA Ultra software         $35

APPENDIX F - COMPUTER CODE

The following is the main computer code used for the kinematic/vision system developed for this project. The other files necessary to compile this code are not included here since they were mostly generated automatically by the Windows™ compiler.

/* safetydl.cpp : implementation file                                 */
/*                                                                    */
/* A program to calculate the Neil Squire Foundation's assistive      */
/* robot gripper and robot arm's back end position using kinematic    */
/* equations generated by the Denavit-Hartenberg Representation.
*/
/*                                                                    */
/* The code also calculates the user's head position using a video    */
/* card (Sharp GPB-1) and CCD camera                                  */
/*                                                                    */
/* Product of REACT Laboratories and Neil Squire Foundation           */
/*                                                                    */
/* Written by:                                                        */
/* Mitchell Visser, Dept. of Mechanical Engineering, UBC, 1996 &      */
/* Gerry Rohling, Dept. of Mechanical Engineering, UBC, 1996          */
/*                                                                    */

#include "stdafx.h"
#include "sentinel.h"
#include "safetydl.h"
#include "c:\gpb_1\gpb.h"
#include "c:\gpb_1\windw.h"
#include <windows.h>
#include <math.h>
#include <fstream.h>
#include <iomanip.h>

#define ID_LOOP (WM_USER+10)

#define P1 1   // plane 1
#define P2 2
#define P3 3
#define B1 1   // bank 1
#define B2 2
#define B3 3
#define B4 4
#define BD 5

#ifdef _DEBUG
#undef THIS_FILE
static char BASED_CODE THIS_FILE[] = __FILE__;
#endif

/////////////////////////////////////////////////////////////////////////////
// CSafetyDlg dialog

CSafetyDlg::CSafetyDlg(CWnd* pParent /*=NULL*/)
    : CDialog(CSafetyDlg::IDD, pParent)
{
    //{{AFX_DATA_INIT(CSafetyDlg)
    m_CVBWrongMarker = NULL;
    m_CVBLostMarker = NULL;
    m_CVBComError = NULL;
    m_CVBSafetyZone = NULL;
    m_CVBSerial = NULL;
    //}}AFX_DATA_INIT
}

void CSafetyDlg::DoDataExchange(CDataExchange* pDX)
{
    CDialog::DoDataExchange(pDX);
    //{{AFX_DATA_MAP(CSafetyDlg)
    DDX_VBControl(pDX, IDC_WRONGMARKER, m_CVBWrongMarker);
    DDX_VBControl(pDX, IDC_LOSTMARKER, m_CVBLostMarker);
    DDX_VBControl(pDX, IDC_COMERROR, m_CVBComError);
    DDX_VBControl(pDX, IDC_SAFETYZONE, m_CVBSafetyZone);
    DDX_VBControl(pDX, IDC_SERIAL, m_CVBSerial);
    //}}AFX_DATA_MAP
}

BEGIN_MESSAGE_MAP(CSafetyDlg, CDialog)
    //{{AFX_MSG_MAP(CSafetyDlg)
    ON_BN_CLICKED(IDC_START, OnClickedStart)
    //}}AFX_MSG_MAP
    ON_MESSAGE(ID_LOOP, OnLoop)
END_MESSAGE_MAP()

/////////////////////////////////////////////////////////////////////////////
// CSafetyDlg message handlers

BOOL CSafetyDlg::OnInitDialog()
{
    CDialog::OnInitDialog();
    s_gpbinit(0);            // initialize the video card
    s_clearall();
    s_caminit(0, 0, 100);
    // TODO: Add extra initialization here
    return TRUE;             // return TRUE unless you set the focus to a control
}

void CSafetyDlg::OnClickedStart()
{
    // Send the program into a loop
    PostMessage(ID_LOOP);
}

LRESULT CSafetyDlg::OnLoop(WPARAM wParam, LPARAM lParam)
{
    /******************************************************************/
    /* Kinematic subroutine to determine position of gripper and      */
    /* motor end unit                                                 */
    /* Uses a Borland VBX to handle the serial port communications    */
    /******************************************************************/

    /* Declaring the variables */
    MSG msg;
    CString results;        // initial data read from serial port
    CString answer;         // message box response
    CString raw;            // string in which the new line has been identified
    char compot[40];        // raw data - 2s-complement hexadecimal
    long int decpot[6];     // decimal pot values from serial port
    float degpot[6];        // degree pot values from serial port
    int accept;             // initial string acceptance code (0 or 1)
    int rawlen, rawpos;     // length of raw and position of \n in raw
    long int i;             // delay loop counter
    int d, c;               // counting variables
    float a2, a4, d6, a41;  // fixed variables
    float arm;              // robot extension arm length
    float pos[12];          // position of robot (both ends) + velocity
    int decint, power;      // variables used in serial conversion
    double base = 16.0;     // variables used in serial conversion
    char ascihex;           // used in 2s-complement to decimal conversion

    /* Initializing the variables */
    raw.Empty();
    accept = 0;             // raw string is not acceptable
    rawlen = 0;
    i = 0;
    d = 0;
    c = 0;
    arm = (float)0.7889;
    a2  = (float)0.0493;
    a4  = (float)0.0452;
    a41 = (float)0.15;
    d6  = (float)0.1524;
    while (d < 39)
    {
        compot[d] = 0;
        ++d;
    }

    ofstream fOutput("OUTPUT.TXT", ios::ate);   // initiating the output file

    /* Sampling the serial port and obtaining raw positional data.    */
    /* The raw data is in 16-bit 2s-complement hexadecimal format.    */
    while (accept != 1)
    {
        m_CVBSerial->SetNumProperty("MaxReceiveLen", 0);   // read the entire buffer
        results = m_CVBSerial->GetStrProperty("Receive");
        raw = results;
        rawlen = raw.GetLength();
        if (rawlen >= 80)
        {
            accept = 1;                                    // raw string is acceptable
            m_CVBComError->SetNumProperty("Value", TRUE);
            while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
            {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
        }
        if (rawlen < 35)
        {
            // Creating a time delay for the buffer to fill with new data
            m_CVBComError->SetNumProperty("Value", FALSE);
            while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
            {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            while (i < 300000)
            {
                ++i;
            }
        }
    }

    /* Identifying the end of a complete line of data */
    rawpos = raw.Find('\n');
    if (rawpos < 40)            // i.e. not a complete line of data
    {
        raw.SetAt(rawpos, 'k');
    }
    rawpos = raw.Find('\n');    // the end of a complete line

    /* Extracting a complete row of data from the raw array */
    d = rawpos;
    c = 31;
    compot[32] = '\0';
    while (c >= 0)
    {
        compot[c] = raw.GetAt(d);
        c = c - 1;
        d = d - 1;
    }

    /* Converting data (2s-complement hexadecimal) to a decimal value */
    c = 0;
    while (c <= 5)
    {
        decpot[c] = 0;
        for (power = 3; power >= 0; power--)
        {
            decint = 0;
            ascihex = compot[((28 - power) - (c * 5))];
            switch (ascihex)
            {
                case '0': break;
                case '1': decint = decint + 1;  break;
                case '2': decint = decint + 2;  break;
                case '3': decint = decint + 3;  break;
                case '4': decint = decint + 4;  break;
                case '5': decint = decint + 5;  break;
                case '6': decint = decint + 6;  break;
                case '7': decint = decint + 7;  break;
                case '8': decint = decint + 8;  break;
                case '9': decint = decint + 9;  break;
                case 'a': decint = decint + 10; break;
                case 'b': decint = decint + 11; break;
                case 'c': decint = decint + 12; break;
                case 'd': decint = decint + 13; break;
                case 'e': decint = decint + 14; break;
                case 'f': decint = decint + 15; break;
            }
            decpot[c] = decpot[c] + (long int)(pow(base, (double)power) * decint);
        }
        if (decpot[c] > 32767)
        {
            decpot[c] = decpot[c] - 65535;
        }
        ++c;
    }

    /* Converting pot values to radians and meters */
    degpot[0] = (float)((-0.341*decpot[0] + 89.4)*(3.14159/180));    // joint one (rad)
    degpot[1] = (float)(0.849 + 0.003*decpot[1]);                    // joint two (m)
    degpot[2] = (float)(-0.000729*decpot[2] + 0.337);                // joint three (m)
    degpot[3] = (float)((-0.342*decpot[3] + 91.498)*(3.14159/180));  // joint four (rad)
    degpot[4] = (float)((-.324*decpot[4] - 91.542)*(3.14159/180));   // joint five (rad)

    /* Determining the position of the gripper.  Only 3 terms required */
    /* from the transformation matrix: T14 (x), T24 (y), T34 (z)       */
    pos[0] = (float)(d6*sin(degpot[0])*sin(degpot[3])*sin(degpot[4])
             - d6*cos(degpot[0])*cos(degpot[4]) - sin(degpot[0])*a4*sin(degpot[4])
             - sin(degpot[0])*degpot[2] - cos(degpot[0])*a2);
    pos[1] = (float)(-d6*cos(degpot[0])*sin(degpot[3])*sin(degpot[4])
             - d6*sin(degpot[0])*cos(degpot[4]) + cos(degpot[0])*a4*sin(degpot[4])
             + cos(degpot[0])*degpot[2] - sin(degpot[0])*a2);
    pos[2] = (float)(cos(degpot[3])*sin(degpot[4])*d6 - a4*cos(degpot[3]) + degpot[1]);

    /* Determination of the robot arm's bottom motor unit corner */
    pos[3] = (float)((arm - degpot[2])*sin(degpot[0]) - a2*cos(degpot[0]));
    pos[4] = (float)(-(arm - degpot[2])*cos(degpot[0]) - a2*sin(degpot[0]));
    pos[5] = degpot[1];

    /* Determination of the robot wrist unit */
    pos[6] = (float)(-degpot[2]*sin(degpot[0]) - a2*cos(degpot[0]));
    pos[7] = (float)(degpot[2]*cos(degpot[0]) - a2*sin(degpot[0]));
    pos[8] = degpot[1];

    /* Determination of the robot arm's top motor unit corner */
    pos[9]  = (float)(a41*cos(degpot[0]) + (arm - degpot[2])*sin(degpot[0])
              - a2*cos(degpot[0]));
    pos[10] = (float)(a41*sin(degpot[0]) - (arm - degpot[2])*cos(degpot[0])
              - a2*sin(degpot[0]));
    pos[11] = degpot[1];

    /******************************************************************/
    /* The video portion of the program                               */
    /* This portion of the code uses the Sharp video card             */
    /*                                                                */
    /* The video sampling routine to get the position of the          */
    /* reference marker and head.                                     */
    /* Purpose: To grab the frame from the VCR or camera, display it  */
    /* on the screen, and find all the centroids > min_area and       */
    /* < max_area.                                                    */
    /******************************************************************/

    /* Declare the variables */
    int label_count, count;
    int LUT_one[256];
    long m00[25], m01[25], m10[25];
    double z[25], y[25], zabs, yabs;
    int threshold;

    threshold = 150;

    // FIRST WE LOAD THE IMAGE
    s_clearall();
    s_selcam(0, 0, 100);
    s_thrucpy(P1, BIN, P1, B1, 0, 0, 0, 0, 0, 0, 0);
    s_blutl(threshold, 255, 0, 0, LUT_one);
    s_stlut(0, LUT_one);
    s_bwlut(P1, B1, P1, B2, 0, 0, 0, 0);
    s_disp(P1, B2, 'R');

    // NOW WE PROCESS THE IMAGE
    s_label(P1, B2, P1, B3, 0, 0, 0, P1, B4, 8); // label all the predominant features
                                                 // from P1B2 to P1B3 using P1B4 for
                                                 // a scratch area
    s_labelcnt(P1, B3, 0, &label_count);
    s_area(P1, B3, 5, 0);                        // get area of each feature in P1B3
                                                 // and store in PEC0 and PEC1
    s_lload(0, 0, 25, m00);                      // load data stored in PEC0 and PEC1
                                                 // into m00 (number_of_globes values)
    s_mcntroid(P1, B3, 5, 0);                    // get first moments of each feature
                                                 // in P1B3 and store in PECs 0,1,2 & 3
    s_lload(0, 0, 25, m10);                      // load z data from PEC0 and PEC1
                                                 // into m10 (number_of_globes values)
    s_lload(1, 0, 25, m01);                      // load y data from PEC2 and PEC3
                                                 // into m01 (number_of_globes values)

    count = 1;                                   // initialize the count of globes found
    for (i = 0; i < 25; i++)
    {
        if ((m00[i] > 10) && (m00[i] < 400))
        {
            s_cntroidl(m00[i], m10[i], m01[i], &z[count], &y[count]); // compute centroid (z,y)
            ++count;
        }
    }

    /* Check for the correct number of markers */
    if (count < 3)
    {
        m_CVBLostMarker->SetNumProperty("Value", FALSE);
        m_CVBWrongMarker->SetNumProperty("Value", TRUE);
        fOutput << "Lost Marker\n";
    }
    if (count > 3)
    {
        m_CVBWrongMarker->SetNumProperty("Value", FALSE);
        m_CVBLostMarker->SetNumProperty("Value", TRUE);
        fOutput << "Too Many Markers\n";
    }
    if (count == 3)
    {
        m_CVBWrongMarker->SetNumProperty("Value", TRUE);
        m_CVBLostMarker->SetNumProperty("Value", TRUE);
    }

    /* Converting pixel values to meters */
    i = 1;
    while (i < count)
    {
        z[i] = z[i]*0.00415;
        y[i] = y[i]*0.0040;
        ++i;
    }

    /* Converting head values to coordinate frame */
    zabs = z[2] - z[1];
    yabs = y[2] - y[1];

    /******************************************************************/
    /* Checking for the robot in the safety zone.                     */
    /******************************************************************/

    /* Declare the variables */
    double SafeZone, SZ;
    double PHieght;
    double OHieght;
    double Xupper, xabs;
    double Zright;
    double Zleft;
    double Yinner;
    int colour1, colour2, colour3, colour4, colour5;  // 0 = green, 1 = red
    double disg, disw, dismt, dismb;

    /* Initializing the variables */
    colour1 = colour2 = colour3 = colour4 = colour5 = 0;
    SafeZone = 0.30;                          // safety zone around the user's head
    SZ = 0.30;
    PHieght = 1.4;                            // height of the person from ground
    OHieght = 1.520;                          // height of the origin from ground
    xabs = (PHieght - OHieght) - .10;         // middle of head
    Xupper = (PHieght - OHieght) + SafeZone;  // top height of the safety zone
    Zright = zabs + SafeZone;                 // right edge of safety zone
    Zleft  = zabs - SafeZone;                 // left edge of safety zone
    Yinner = yabs - SafeZone;                 // inner edge of safety zone

    /* Closing distances */
    disg  = sqrt((pos[0]-xabs)*(pos[0]-xabs) + (pos[1]-yabs)*(pos[1]-yabs)
            + (pos[2]-zabs)*(pos[2]-zabs));
    dismb = sqrt((pos[3]-xabs)*(pos[3]-xabs) + (pos[4]-yabs)*(pos[4]-yabs)
            + (pos[5]-zabs)*(pos[5]-zabs));
    disw  = sqrt((pos[6]-xabs)*(pos[6]-xabs) + (pos[7]-yabs)*(pos[7]-yabs)
            + (pos[8]-zabs)*(pos[8]-zabs));
    dismt = sqrt((pos[9]-xabs)*(pos[9]-xabs) + (pos[10]-yabs)*(pos[10]-yabs)
            + (pos[11]-zabs)*(pos[11]-zabs));

    /* Start the checking */

    // ROBOT IN SAFETY ZONE
    // gripper (pos[0], pos[1], pos[2])
    if (pos[0] <= Xupper)
    {
        if (pos[1] >= Yinner)
        {
            if (pos[2] >= Zleft && pos[2] <= Zright)
            {
                m_CVBSafetyZone->SetNumProperty("Value", FALSE);
                colour1 = 1;
            }
            else
            {
                colour1 = 0;
            }
        }
    }

    // motor unit bottom (pos[3], pos[4], pos[5])
    if (pos[3] <= Xupper)
    {
        if (pos[4] >= Yinner)
        {
            if (pos[5] >= Zleft && pos[5] <= Zright)
            {
                m_CVBSafetyZone->SetNumProperty("Value", FALSE);
                colour2 = 1;
            }
            else
            {
                colour2 = 0;
            }
        }
    }

    // wrist unit (pos[6], pos[7], pos[8])
    if (pos[6] <= Xupper)
    {
        if (pos[7] >= Yinner)
        {
            if (pos[8] >= Zleft && pos[8] <= Zright)
            {
                m_CVBSafetyZone->SetNumProperty("Value", FALSE);
                colour3 = 1;
            }
            else
            {
                colour3 = 0;
            }
        }
    }

    // motor unit top (pos[9], pos[10], pos[11])
    if (pos[9] <= Xupper)
    {
        if (pos[10] >= Yinner)
        {
            if (pos[11] >= Zleft && pos[11] <= Zright)
            {
                m_CVBSafetyZone->SetNumProperty("Value", FALSE);
                colour4 = 1;
            }
            else
            {
                colour4 = 0;
            }
        }
    }

    // ROBOT IN SAFETY ZONE by closing distance
    if (disg <= SZ || dismb <= SZ || disw <= SZ || dismt <= SZ)
    {
        m_CVBSafetyZone->SetNumProperty("Value", FALSE);
        colour5 = 1;
    }
    else
    {
        colour5 = 0;
    }

    // ROBOT OUT OF SAFETY ZONE
    if (colour1 == 0 && colour2 == 0 && colour3 == 0 && colour4 == 0 && colour5 == 0)
    {
        m_CVBSafetyZone->SetNumProperty("Value", TRUE);
    }

    /******************************************************************/
    /* Writing to the output file for data retrieval                  */
    /******************************************************************/
    fOutput << disg << "\t" << dismb << "\t" << disw << "\t" << dismt << "\t"
            << yabs << "\t" << zabs << "\t" << xabs << "\t" << Xupper << "\t"
            << Yinner << "\t" << Zleft << "\t" << Zright << "\t";
    fOutput << pos[0] << "\t" << pos[1] << "\t" << pos[2] << "\t" << pos[3] << "\t"
            << pos[4] << "\t" << pos[5] << "\t" << pos[6] << "\t" << pos[7] << "\t"
            << pos[8] << "\t" << pos[9] << "\t" << pos[10] << "\t" << pos[11] << "\t";
    fOutput << colour1 << "\t" << colour2 << "\t" << colour3 << "\t"
            << colour4 << "\t" << colour5 << "\n";

    /* Checking for new messages in the message queue */
    while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }

    /* Send the program into a repeating loop */
    PostMessage(ID_LOOP);
    return 0;
}

void CSafetyDlg::OnCancel()
{
    // TODO: Add extra cleanup here
    // set ROI back to 512 x 512
    s_gpbroi(0, 0, 0, 0, 512, 512);
    s_clearall();
    CDialog::OnCancel();
}
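The listing above decodes each 4-character hexadecimal field from the serial stream one digit at a time with a large switch statement and pow(). As a point of comparison, here is a minimal standalone sketch of the same 16-bit two's-complement decode using the standard library; the function name decodePot is mine (not from the thesis code), and it uses the conventional two's-complement offset of 65536.

```cpp
#include <cstdlib>
#include <cstring>

// Decode a 4-character hexadecimal field as a signed 16-bit
// two's-complement value (hypothetical helper, not the thesis API).
long decodePot(const char *field)
{
    char buf[5];
    std::memcpy(buf, field, 4);     // copy exactly the 4 hex characters
    buf[4] = '\0';
    long value = std::strtol(buf, 0, 16);   // 0 .. 65535
    if (value > 32767)
        value -= 65536;             // map the upper half onto -32768 .. -1
    return value;
}
```

strtol performs the digit-by-digit accumulation that the switch statement spells out by hand, and the subtraction maps unsigned values 32768..65535 onto the negative range.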
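The safety check in the listing tests four tracked robot points (gripper, wrist, and the two motor-unit corners) against both a box around the user's head and a closing-distance threshold. A minimal standalone sketch of the closing-distance part alone, under assumed names (Point3 and withinSafetyZone are mine, not from the thesis code):

```cpp
#include <cmath>

struct Point3 { double x, y, z; };

// True if any of the n tracked robot points comes within `zone`
// metres of the head position (hypothetical helper, not the thesis API).
bool withinSafetyZone(const Point3 *robot, int n, Point3 head, double zone)
{
    for (int i = 0; i < n; ++i)
    {
        double dx = robot[i].x - head.x;
        double dy = robot[i].y - head.y;
        double dz = robot[i].z - head.z;
        if (std::sqrt(dx*dx + dy*dy + dz*dz) <= zone)  // Euclidean closing distance
            return true;
    }
    return false;
}
```

With the thesis's 0.30 m safety zone, a point 0.1 m away on each axis (distance ≈ 0.17 m) would trip the check, while a point 1 m away on each axis would not.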
