UBC Undergraduate Research

A novel application of real-time video streaming and recording to wheelchair skills training. Lu, Daniel L.; Liang, Anson; Douglas, Alec. Jan 14, 2013.

A Novel Application of Real-Time Video Streaming and Recording to Wheelchair Skills Training

ENPH 479 - Group 1252

Daniel L. Lu
Anson Liang
Alec Douglas

January 14, 2013

Executive Summary

This project implements a novel application of wireless real-time video streaming technology for use in the remote training of new wheelchair users. Professional training for new wheelchair users can significantly improve their quality of life and reduce the risk of injury during wheelchair operation. However, such training is often expensive, time-consuming, and difficult to access in many areas. An alternative to one-on-one training with a therapist is to provide wheelchair users with a portable device which displays instructional videos about proper wheelchair operation technique. To this end, Dr. Ian Mitchell and Andy Kim of UBC Computer Science and Dr. William Miller and Ed Giesbrecht of UBC Occupational Sciences & Occupational Therapy have developed an application called EPICWheelS (Enhancing Participation In the Community by improving Wheelchair Skills) for tablet devices running the Android operating system. In addition to displaying instructional videos, EPICWheelS provides users with the ability to communicate with a remote wheelchair therapist through voicemail.

This project explores the possibility of adding important functionality to the EPICWheelS app to allow a remote therapist to more effectively evaluate the performance of the wheelchair trainee. Our solution allows the app to wirelessly stream video from a camera mounted on a secondary Android device to a tablet running the application. This enables the user to record themselves demonstrating new skills. The video is transmitted in real time from the camera device to the tablet over a WiFi network, and is simultaneously recorded by the tablet.
The recorded video can then be uploaded to a wheelchair therapist for evaluation. Our solution achieves a satisfactory performance of 15 frames per second for videos at the standard VGA resolution (640 × 480 pixels) with jpeg compression applied to each frame. The delay associated with transmitting the video from the camera device to the tablet is less than 300 ms, which is acceptable for our purposes.

Contents

Executive Summary
List of Figures
List of Tables
1 Introduction
  1.1 Background
  1.2 Statement of Problem
  1.3 Project Objective
  1.4 Organisation of this Report
2 Technical Discussion
  2.1 Comparison of Different Approaches to Video Streaming
    2.1.1 USB Video Camera
    2.1.2 Bluetooth
    2.1.3 Network Streaming
  2.2 Brief Overview of Streaming and Recording
    2.2.1 Transmission of video data over a network
    2.2.2 Recording of video data
  2.3 Software Used
    2.3.1 IP Camera
    2.3.2 MJPEG Libraries
    2.3.3 FFMPEG Library
3 Experimental Equipment
  3.1 Structure and Operation of Demo Application
    3.1.1 Connecting
    3.1.2 Displaying Video
    3.1.3 Recording Video
    3.1.4 Encoding Video
  3.2 Experimental Procedure
    3.2.1 Latency Tests
    3.2.2 Framerate Tests
    3.2.3 Practical Usage Tests
4 Results and Discussion
  4.1 Latency
  4.2 Framerate
  4.3 Practical Usage Tests
5 Conclusions
6 Project Deliverables
  6.1 List of Deliverables
  6.2 Financial Summary
  6.3 Ongoing Commitments by Team Members
7 Recommendations
8 Appendix: FFmpeg build instructions
References

List of Figures

1  Viewing an instruction video on wheelchair usage in the EPICWheelS application.
2  Photo of an example of a wireless USB camera [8].
3  Photo of an example of a USB to micro-USB adapter.
4  Screenshot of IP Webcam application upon startup.
5  General overview flowchart illustrating the basic operation of the demo application developed for this project.
6  Screenshot of the demo application on startup (top), and after the "Connect" button has been pressed (bottom).
7  Screenshot of the demo application during video streaming. At this stage the user may either begin recording by pressing "Record" or disconnect from the camera by pressing "Disconnect".
8  Screenshot of the demo application immediately after pressing the "Record" button. The application prompts the user for the name of the video to be saved.
9  Screenshot of the demo application while it is recording the video. The amount of elapsed time is shown at the top centre, and the user may stop the recording at any time by pressing "Stop Recording".
10 Screenshot of the demo application while it is encoding the video after recording.
11 A flowchart containing the display, recording, and encoding processes, where items in green are handled by the MjpegView class and items in blue are handled by the MjpegInputStream class.
12 Diagram illustrating the experimental set-up for measuring latency, and the latencies from the two sources of latency.
13 One of the photos from the latency test. In the background is a stopwatch; on the right is a device running the IP Webcam application, and on the left is a device running the demo application.
14 Graph of latency times for three different sets of settings. For more details, please refer to Tables 1, 2, 3.
15 Frames extracted from the sample videos, respectively, in clockwise order.

List of Tables

1 Test results for latency in milliseconds from stopwatch to camera for different settings of video resolution and frame compression quality.
2 Test results for latency in milliseconds from camera to tablet for different settings of video resolution and frame compression quality.
3 Test results for total system latency in milliseconds for different settings of video resolution and frame compression quality.
4 Test results for average number of frames per second for different settings of video resolution and frame compression quality. Due to fluctuations in wireless network bandwidth, the framerate can vary with an uncertainty of ±2 fps. For results labeled N/A, the application became unresponsive.
5 Financial summary of components used in this project.

1 Introduction

1.1 Background

New users of wheelchairs benefit greatly from professional instruction in efficient and safe operation of a wheelchair. A low-intensity systematic course on wheelchair use over seven weeks yields a noticeable increase in the mechanical efficiency of wheelchair users' propulsion techniques [1]. Wheelchair skill may be enhanced even further by a brief period of extra training, such as the Wheelchair Skills Training Program, in addition to the standard amount of initial rehabilitation [2]. As such, it is desirable for new wheelchair users to receive as much training as possible. However, training programs are very time-consuming and expensive for hospitals and rehabilitation centers. Furthermore, wheelchair users who live in remote areas can find it extremely difficult to access the nearest training facility.

A natural solution to these problems is to provide the trainee with a portable device which displays training material. A tablet computer is the ideal candidate since it is affordable, lightweight, and easy to use. Using apps on such mobile devices has been shown to have many benefits, such as reducing the cost of healthcare provision [3]. The proliferation of tablet computers running the Android operating system or iOS in recent years has rendered them sufficiently affordable to be provided to users for training purposes.
For example, the vitfiz system [4] uses the accelerometer in Android devices to provide feedback to patients as they perform physical exercises. The StrokeLink application for the iPad plays videos to stroke patients to teach them rehabilitation exercises [5].

An Android application called EPICWheelS (Enhancing Participation In the Community by improving Wheelchair Skills) has been developed by Dr. Ian Mitchell and Andy Kim of UBC Computer Science and Dr. William Miller and Ed Giesbrecht of UBC Occupational Sciences & Occupational Therapy. The EPICWheelS application allows a wheelchair user to learn new wheelchair skills and proper operation technique by watching a series of short, pre-recorded videos (Figure 1). The application also contains voicemail functionality to allow the trainee to communicate with their therapist, and it tracks usage time.

The EPICWheelS application has several advantages over traditional one-on-one training. Firstly, it is capable of providing wheelchair skills training to persons residing in areas without easy access to rehabilitation centres and therapists. Secondly, the email and voicemail functionality in EPICWheelS allows a therapist to review and evaluate the trainee's progress without necessarily being present at the time at which the trainee is performing the training exercises.

1.2 Statement of Problem

A key shortcoming of having only email and voicemail communication is that the remote therapist cannot directly observe the trainee performing the training routines. Consequently, the therapist would be unable to evaluate the performance and accuracy of the wheelchair user applying the newly learnt techniques. As such, the therapist cannot provide useful feedback with the same level of helpfulness as that which can be provided by a one-on-one therapist.

Figure 1: Viewing an instruction video on wheelchair usage in the EPICWheelS application.

This problem is resolved by implementing a system to allow the wheelchair user to record videos of themself performing the training exercises, which can then be sent to the remote therapist. Since wheelchair users are limited in their mobility, special care must be taken to design the system such that a wheelchair user can easily take videos of themself.

After setting an external camera in a stationary position, the user must be able to see what the camera is recording, so as to ensure that the frame of view is sufficient to capture the entire training routine (which may involve a large area). The camera must also be wireless, because the training routine may involve a large amount of motion that would entangle and damage any wires connecting the tablet to the camera.

In our solution, the process of recording oneself performing wheelchair training routines consists of the following: First, the tablet running EPICWheelS is situated on the wheelchair with the user (e.g. on the user's lap). After the wheelchair user places the camera device on a stable surface (e.g. a shelf or desk), they can begin streaming live video from the camera to the tablet. They may then start or stop the recording simply by touching the respective controls on the tablet.

After researching different options for wireless video cameras, we decided to use a second Android device with a built-in camera, such as a smartphone, as the external camera. This is used in conjunction with a free and open-source application that transforms the device into an IP camera, allowing it to stream video over a WiFi network.

1.3 Project Objective

The objective of our project is to develop software that allows an Android application to receive an incoming stream from an IP camera whilst simultaneously displaying and recording it. The application should have a simple user interface that allows the user to easily connect to the IP camera and start or stop the recording at any time. At the end of the recording, the software should output a compressed video file in a suitable format, which may then be sent to a remote therapist for evaluation as needed.

1.4 Organisation of this Report

This paper details our project by first providing an overview of the theory behind the technical implementation of video streaming. Then, our choice of specific technologies and a comparison of alternative methods are presented. Next, the organisation and functionality of the demonstration Android application we developed for this project are discussed. Following that are the details of the experimental procedure performed to validate our solution, the results of performance testing of our solution, and an analysis of said results. The paper concludes with a list of project deliverables, a financial summary, ongoing commitments by team members, and recommendations for future work.

2 Technical Discussion

2.1 Comparison of Different Approaches to Video Streaming

Several approaches exist to solve the problem of streaming video from an external camera to any digital device, wirelessly or not.
Here we present a brief discussion of the advantages and disadvantages of three of the most common methods: USB video cameras, Bluetooth, and streaming over the network.

2.1.1 USB Video Camera

A class of webcams and other portable cameras known as USB video cameras connect to the host using a Universal Serial Bus (USB) interface [7]. Although most such devices require a USB cable, rendering them unsuitable for our desired wireless functionality, some USB video cameras are wireless. These often rely on transmitting video data in a proprietary format from the wireless camera to a USB receiver that is plugged into the host device. One example of such a camera is depicted in Figure 2; it features a wireless antenna that transmits the video stream to a USB receiver using Gaussian Frequency-Shift Keying modulation of a radio signal. Most commercially available wireless USB cameras have a range of 10 m to 100 m in open space and are often used as surveillance cameras.

Since most Android tablets do not have a full-size USB port but instead possess a micro-USB port, it is necessary to obtain an adapter in order to use an Android tablet with a USB device (Figure 3).

Figure 2: Photo of an example of a wireless USB camera [8].

Figure 3: Photo of an example of a USB to micro-USB adapter.

Although the hardware is readily available, a significant challenge lies in making the Android operating system recognise and capture the video stream from a USB camera. Because most Android devices have a built-in camera, there is little or no support for an external camera. We were unable to find any existing API or driver that would allow a USB camera to be detected by the Android operating system. Implementing a driver and API for a USB camera is difficult and time-consuming, since this type of implementation needs to be done at a very low level and may require modifications to the kernel of the Android operating system itself. This would in turn require rebuilding and installing a custom build of the operating system on the device, which is not guaranteed to be compatible with all Android devices, and is a significant amount of work. As such, the USB camera option was not chosen for this project.

2.1.2 Bluetooth

Bluetooth is a wireless technology that allows data to be transmitted between compatible devices at short range. There are many Bluetooth devices, such as headsets and peripherals, that transmit data within a short range of no more than 10 m. In theory, it should be possible to stream video wirelessly using Bluetooth as well. However, despite recent advances in Bluetooth speed [9], Bluetooth is designed for low-bandwidth data transmission, and the transmission speed can decrease dramatically when the distance is increased by even a few metres.

There are, however, commercially available Bluetooth cameras, such as the Looxcie Bluetooth camera [10]. These solutions typically rely on proprietary communication protocols and video codecs, making it impossible to integrate their functionality into our application. We were unable to find any open-source solutions that could be utilised for our purposes, and in any case the limited bandwidth of Bluetooth severely restricts the video quality, so this option was not chosen.

2.1.3 Network Streaming

An obvious method to transmit data is to send it over the Internet. Data of any kind can be transmitted over a local network such as a wireless WiFi network. To stream live video from a camera, the camera must be configured similarly to a web server that continually uploads new video data. Such cameras may have an Internet Protocol address (IP address) and are referred to as IP cameras. A convenient way to create an IP camera is to simply use a secondary Android device and run an application which allows it to stream video from its built-in camera.
This approach is more feasible than the previous two because of the availability of software libraries and application programming interfaces (APIs). See Section 2.2 for more information about streaming video over a network and Section 2.3 for more details about the specific software packages used.

The only downside to network streaming is that the user must have a local WiFi network. However, most homes nowadays have such networks, and if they do not, WiFi routers are very cheap and a technician may be sent to help the wheelchair user set up the network. Note that the WiFi network need not be connected to the internet for local video streaming to work.

2.2 Brief Overview of Streaming and Recording

2.2.1 Transmission of video data over a network

A video stream may be transmitted over the network in real time as a sequence of image frames. However, the naive approach, which is to send each frame in bitmap format, requires a large amount of bandwidth and is not feasible. Thus, each frame must be compressed. A common approach is the Motion jpeg format (mjpeg), in which each frame of the digital video sequence is separately compressed as a jpeg image. Each frame is then sent over the local network using the Hypertext Transfer Protocol (http).

Since any network can contain many other agents that are transmitting data, the available bandwidth for video streaming can fluctuate. Packet loss can also lead to loss of data. As such, not all outgoing frames from the camera device will be received by the tablet. However, because each frame is compressed independently and transmitted separately, the loss of a few single frames does not lead to failure and is in fact not noticeable.

2.2.2 Recording of video data

During recording, each of the received frames is saved to a temporary folder. The final output video must be one single file for ease of handling.
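To make the frame-oriented nature of an mjpeg stream (Section 2.2.1) concrete, the following minimal sketch splits a buffer of mjpeg data into individual jpeg frames by scanning for the jpeg Start Of Image marker (0xFF 0xD8) and End Of Image marker (0xFF 0xD9). This is illustrative only; the class and method names are not taken from the demo application.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper illustrating how a receiver can delimit the jpeg
// frames inside a buffer of mjpeg data: each frame begins with the jpeg
// SOI marker (0xFF 0xD8) and ends with the EOI marker (0xFF 0xD9).
public class MjpegFrameSplitter {
    public static List<byte[]> splitFrames(byte[] data) {
        List<byte[]> frames = new ArrayList<>();
        int start = -1;
        for (int i = 0; i + 1 < data.length; i++) {
            int a = data[i] & 0xFF, b = data[i + 1] & 0xFF;
            if (a == 0xFF && b == 0xD8) {
                start = i;                       // SOI: remember frame start
            } else if (a == 0xFF && b == 0xD9 && start >= 0) {
                byte[] frame = new byte[i + 2 - start];
                System.arraycopy(data, start, frame, 0, frame.length);
                frames.add(frame);               // EOI: emit completed frame
                start = -1;                      // wait for the next SOI
            }
        }
        return frames;
    }
}
```

Because every frame carries its own markers, a frame lost in transit simply never completes, and the scan resumes at the next SOI marker; the remaining frames stay decodable, which is why dropped frames do not break the stream.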
There exist a variety of open-source libraries that can concatenate and encode separate frames into a single video file, such as ffmpeg (see Section 2.3.3). After the frames are processed into a single video file, the temporary frames are deleted.

2.3 Software Used

2.3.1 IP Camera

Internet protocol cameras (IP cameras) are a class of video cameras or webcams that are connected to an internet network. For example, the IP camera may be attached to a wireless WiFi network in the vicinity of the Android device. An IP camera has a unique IP address on the network and functions as a web server which feeds a stream of video frames to any client that is connected to it through a specific port.

A free and open-source Android application called IP Webcam [6] transforms any Android device with a camera into an IP camera (Figure 4). This application sends its outgoing stream in mjpeg format. It allows the user to choose several parameters for the outgoing video stream, including the following:

- Video resolution (labelled "Resolution"). This determines the pixel dimensions of each video frame. The higher the resolution, the finer the detail that one can resolve, but the slower the performance, since more computation is necessary to process each frame. The user may select from several common presets, such as VGA (640 × 480), QVGA (320 × 240), WVGA (800 × 480), and WXGA (1280 × 720).

- Compression quality for jpeg compression of outgoing frames (labelled "Quality"). The user may choose a positive integer not exceeding 100, where 100 is the best quality and 1 is the worst. A higher compression quality preserves finer detail and produces fewer jpeg compression artifacts, at the expense of greater file size and possibly slower performance.

- Username and password (labelled "Login/password"). Optionally, the video stream may be protected by a username and password which must be entered upon connection. This gives the user improved privacy.

- Maximum framerate (labelled "FPS Limit"). The higher the framerate, the smoother motion appears and the better one is able to discern fast movements. However, a higher framerate causes more video frames to be generated, which can increase the storage space requirements for the video. If a maximum framerate is not set, the video runs at the greatest framerate afforded by hardware computation capabilities and network bandwidth.

There are also some other advanced features which it is not necessary to change.

Figure 4: Screenshot of IP Webcam application upon startup.

Upon establishing a connection, the IP Webcam application displays the video from the camera on the camera device's own screen in real time, as well as its IP address, which is required for connecting to it (see Section 3.1.1).

2.3.2 MJPEG Libraries

Two classes are used in this project for receiving mjpeg video streams. The MjpegInputStream and MjpegView classes are used to parse and display the input stream asynchronously. They are modified versions of open-source classes with the same names that can be found in multiple locations online but have no clear original author (they are therefore assumed to be in the public domain).

2.3.3 FFMPEG Library

Fast forward mpeg (ffmpeg) is an open-source project that aims to provide a cross-platform multimedia handling solution. It contains libraries for decoding, encoding, transcoding, muxing, demuxing, streaming, filtering, and playing a large variety of multimedia files. A branch of the ffmpeg build from the official Git (a version control system) repository, developed specifically for the ARM architecture, is used in our project [11]. Most Android devices run on ARM processors. The detailed build instructions for the ffmpeg library are listed in Section 8. For our project we include the ffmpeg executables in our Android application.

The command that we use from ffmpeg is called ffmpeg, a command line tool that is used for converting multimedia files between formats and for concatenating and encoding multiple still frames into a video. We therefore only need to build ffmpeg from the source code, and need not include other unrelated tools such as ffplay (a media player) and ffprobe (a multimedia stream analyser). The ffmpeg tool calls various libraries included in the ffmpeg source code, so it is necessary to include all required libraries along with ffmpeg in the Android application that we are building. The list of required dependent libraries is detailed in the ffmpeg build instructions in Section 8.

Our application invokes the executable as follows:

    ffmpeg -vcodec mjpeg -i frames_%05d.jpeg -r 15 video.mov

Here, the -vcodec option specifies the video output codec, which is specified to be mjpeg as discussed earlier. The -i option specifies the input files, which comprise all files with filenames of the form frames_%05d.jpeg, where %05d indicates integers with five digits. That is, all captured frames from frames_00000.jpeg to frames_99999.jpeg (or whatever the highest number is) will be used. The -r option specifies the frame rate, which is determined by our application automatically based on the number of frames received over the time period. The last parameter is the file name of the output video. Here the output file name video.mov is shown as an example, but it could be any .mov file. The .mov extension is a typical extension for mjpeg videos, and such files can be played with a variety of media players, such as Apple QuickTime.

The command line can be invoked from within a Java Android application by using the built-in class java.lang.Process.

3 Experimental Equipment

To test the feasibility of our solution, an Android application that fulfills the desired functionality has been created for demonstration purposes.
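The java.lang.Process invocation of ffmpeg described in Section 2.3.3 might look like the following sketch. This is illustrative only; the binary path, class name, and method names are hypothetical placeholders, not the demo application's actual code.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Hedged sketch of launching the bundled ffmpeg executable from Java.
public class FfmpegRunner {
    // Build the argument list for encoding numbered jpeg frames into a
    // .mov file at the measured frame rate (mirrors the command above).
    public static String[] buildCommand(String ffmpegPath, String framePattern,
                                        int fps, String outputFile) {
        return new String[] {
            ffmpegPath, "-vcodec", "mjpeg",
            "-i", framePattern,
            "-r", Integer.toString(fps),
            outputFile
        };
    }

    public static int run(String[] cmd) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(cmd);
        pb.redirectErrorStream(true);            // merge stderr into stdout
        Process p = pb.start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                // ffmpeg status lines arrive here and could be parsed
                // for encoding progress
            }
        }
        return p.waitFor();                      // 0 indicates success
    }
}
```

Reading the process output while waiting, as above, also prevents the child process from blocking on a full output pipe.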
This will hereafter be referred to as the demo application.

3.1 Structure and Operation of Demo Application

The demo application can be broken up into four main processes: connecting to the video stream, displaying the video, recording video, and encoding video. These processes constitute the majority of the code produced during this project, with the rest of the code handling UI elements and transitions between the four main processes. The basic functionality of the application is summarised in Figure 5.

Figure 5: General overview flowchart illustrating the basic operation of the demo application developed for this project.

3.1.1 Connecting

The IP address of the camera is required to establish a connection between the tablet and the camera. An IP address consists of four integers from 0 to 255, inclusive, separated by dots. In order to connect, the user must first press the button labelled "Connect" and then, when prompted, enter the fourth value of the IP address provided by the IP Webcam application. Only the fourth value is required because the first three values can be detected automatically from the local IP address of the device running the display app (they will be the same on both devices), and the port number is always the same (8080). If the app has already been run, the IP address value that was previously entered is filled in automatically in the prompt for faster reconnection. Refer to the screenshots in Figure 6.

Figure 6: Screenshot of the demo application on startup (top), and after the "Connect" button has been pressed (bottom).

After the connection prompt has been passed, the application attempts to connect to the specified address and access the video feed. If the http status code returned from the connection attempt is 401, it indicates an authorisation error. In this case, the application alerts the user that the feed requires a username and password and allows the user to enter them before trying again.
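The address handling described above can be sketched as follows: keep the first three octets of the tablet's own local IP address and substitute the user-entered fourth octet. Note that only the port number (8080) is stated in this report; the "/video" path below is an assumption made for illustration, as is the class name.

```java
// Hypothetical helper assembling the camera URL from the tablet's local
// IP address and the fourth octet entered by the user.
public class CameraAddress {
    public static String buildUrl(String localIp, int fourthOctet) {
        // e.g. "192.168.1.17" + 23 -> "http://192.168.1.23:8080/video"
        String prefix = localIp.substring(0, localIp.lastIndexOf('.') + 1);
        return "http://" + prefix + fourthOctet + ":8080/video";
    }
}
```

This works because both devices sit on the same WiFi subnet, so their addresses share the first three octets by construction.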
If no status code is returned before the 5 second timeout, thenthe user is told the connection attempt has failed and the app returns to the first screen sothey may try again. If the connection is successful, the application now has an input streamthat can be displayed.In order to maintain the responsiveness of the user interface during the attempt to con-nect, the connection is completed asynchronously and the ?Connect? button is temporarilydisabled during the process to prevent the user from attempting a second connection beforethe first one is complete.153.1.2 Displaying VideoAfter the application has successfully connected to the input stream, the stream is passedto the MjpegView class to display it, which in turn uses the MjpegInputStream class. Oncethe MjpegView class has the input stream, it creates a new thread that is used to read inframe data and draw it to the screen.The way the IP Webcam app streams mjpeg video is by sending the data for individualjpeg image frames with a short header and boundary string in between each frame. Theshort header contains the content type (?image/jpeg?), while some other mjpeg sourcessend both the content type and the content length (number of bytes in the current jpegimage) in the header. The boundary string is a random combination of letters and numbersprefixed by two dashes and stays the same over the duration of the connection. An exampleillustrating the format of the data from the IP Webcam app for the first three frames is asfollows:--dkj2DNdzZUX9d (boundary string)Content -type: image/jpeg[JPEG image data for frame 1]--dkj2DNdzZUX9d (boundary string)Content -type: image/jpeg[JPEG image data for frame 2]--dkj2DNdzZUX9d (boundary string)Content -type: image/jpeg[JPEG image data for frame 3]Once the header is parsed for useful information (i.e. 
content length), the MjpegInputStream class reads bytes from the stream until the JPEG Start Of Image (SOI) marker is found. Once the SOI marker is encountered, bytes from the stream are read and stored until the JPEG End Of Image (EOI, also called EOF) marker is read. At this point, the frame data between the SOI and EOI markers is decoded into a bitmap, which is returned to the MjpegView class and subsequently drawn to the screen. This process is repeated very quickly (typically 10-30 times per second, depending on the resolution of the source video and the device running the demo app) to produce the video seen on the device, until the user either disconnects from the stream or closes the application. Refer to the screenshot in Figure 7.

At this point, there is a successful connection and video is being streamed and drawn to the display, so the user can begin recording the video.

3.1.3 Recording Video

Once the video stream is displayed on the screen, the user can press the "Record" button to begin the process of recording video. After pressing the button, a prompt is shown to allow the user to enter a file name to save the video as. The prompt has a default save name containing a unique string of numbers generated from the current date and time, which the user may choose to use if a more descriptive file name is not required. Refer to the screenshot in Figure 8.

Figure 7: Screenshot of the demo application during video streaming. At this stage the user may either begin recording by pressing "Record" or disconnect from the camera by pressing "Disconnect".

Figure 8: Screenshot of the demo application immediately after pressing the "Record" button. The application prompts the user for the name of the video to be saved.

Once the video save name has been entered and the prompt is closed by pressing "OK", a flag is set that triggers every frame to be saved as a JPEG image to the device's storage. These files are given a unique numeric ID so they can be identified later when they are combined and encoded into a video. The saving of each frame is done asynchronously from the display process so that the display framerate is not reduced while waiting for the JPEG files to be saved during recording.

The asynchronous function that writes the frame data to a JPEG file uses built-in Java IO and Android graphics classes; the main classes used are Bitmap, FileOutputStream, and BufferedOutputStream. The Bitmap class is used to compress the bitmap image of the frame into JPEG format given a quality value ranging from 0 to 100, where 0 produces the smallest file size and lowest quality while 100 produces the largest file size and highest quality. Currently, we are using a quality value of 30. The two OutputStream classes are used to save the compressed JPEG to the device's storage.

The file naming convention for these JPEG files is as follows:

    [recording ID]_frame_[frame number].jpg

where the frame number is simply the number of the current frame, starting from zero and incrementing every time a frame is saved as a JPEG. These JPEG files are saved in a temporary directory that is cleared every time the application is started.

During recording, the "Disconnect" button is disabled so that recording cannot be interrupted by simply disconnecting. This is done in order to avoid having the user accidentally disconnect partway through a recording, which can lead to unexpected behaviour (since the JPEG frames would not be encoded into a proper video file). The user must stop the recording first before disconnecting, or close the application if the need to stop is urgent.
Also displayed on screen near the top is the current duration of the recording, in mm:ss format, where mm is the number of minutes elapsed and ss is the number of seconds. Refer to the screenshot in Figure 9.

Figure 9: Screenshot of the demo application while it is recording the video. The amount of elapsed time is shown at the top center, and the user may stop the recording at any time by pressing "Stop Recording".

3.1.4 Encoding Video

When the user wants to stop recording, they must press the "Stop Recording" button. Once this button is pressed, any subsequent incoming frames are no longer saved as JPEG files, and the process that encodes the saved JPEG files into a playable video begins. The actual encoding is done by the ffmpeg library discussed in Section 2.3.3. On the UI side of the process, both the "Disconnect" and "Record" buttons are disabled, and a progress bar is overlaid on the screen to inform the user of the current progress of the encoding process. Refer to the screenshot in Figure 10.

The progress percentage displayed on the progress bar is calculated by parsing the standard output of the ffmpeg library during encoding for the current frame number and comparing that with the total number of frames that need to be encoded.

Figure 10: Screenshot of the demo application while it is encoding the video after recording.

Once encoding is complete, the temporary JPEG files are permanently deleted. This deletion process also has a progress bar, because it can take multiple seconds to delete the hundreds of temporary files that are created during recording. However, the deletion process typically takes only a tenth of the time the encoding process requires.

The video file is saved under the name specified at the beginning of recording, in the QuickTime (.mov) file format. After the encoding is complete, the application returns to the video display mode to display the live stream, allowing the user to start a second recording or disconnect.
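The progress computation described above can be sketched as follows. The report does not give the app's exact ffmpeg invocation or output parsing, and ffmpeg's status-line format varies between versions; the command in the comment and the line shape "frame=  150 ..." below are typical examples, not the app's verbatim ones:

```python
import re

# A typical ffmpeg command for the JPEG-sequence-to-video step of Section 3.1.4
# (file names are illustrative):
#     ffmpeg -r 15 -i REC123_frame_%d.jpg output.mov
# While encoding, ffmpeg periodically prints status lines such as:
#     frame=  150 fps= 30 q=2.0 size=  512kB time=00:00:10.00 ...

FRAME_RE = re.compile(r"frame=\s*(\d+)")

def encode_progress(status_line, total_frames):
    """Percentage of frames encoded so far, or None if the line has no frame count."""
    match = FRAME_RE.search(status_line)
    if match is None:
        return None
    done = int(match.group(1))
    return min(100, round(100 * done / total_frames))
```

The demo application performs the equivalent parsing on the Java side to drive its progress bar.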
The display, recording, and encoding process, from the time a successful connection is made to when the stream is disconnected, is summarised in Figure 11.

Figure 11: A flowchart containing the display, recording, and encoding processes, where items in green are handled by the MjpegView class and items in blue are handled by the MjpegInputStream class.

3.2 Experimental Procedure

3.2.1 Latency Tests

The latency we are interested in measuring for our system is the time between the video being captured by the camera and its display on the screen of the demo application. If it is too high, there can be a noticeable difference between what the user is doing and what appears on their screen, resulting in a system that is difficult to use, with complications involving when the user should start and stop recording.

There are two main sources of latency in the system. The first is on the camera device: the time it takes for the IP Webcam app to capture the video from its camera. While this delay is not directly seen by the user, it does add to the overall latency. The second source of latency is the time it takes for the frames to be sent across the network from the IP Webcam app to the demo application and then displayed on the user's device. While the first source of latency is outside of our control, we can attempt to minimize the second. In order to measure each of these latencies, we used a stopwatch and a second camera to get a snapshot of the latencies from each source.
By pointing the IP Webcam app at the stopwatch, holding the device running the demo app next to it, and taking a picture with the second camera that shows the stopwatch, the first camera, and the demo app all together, it is possible to get a timestamp at each stage and measure the latencies from that. A pictorial example of this test is shown in Figure 12, and Figure 13 is a photo from the test.

Figure 12: Diagram illustrating the experimental set-up for measuring latency, and the latencies from the two sources of latency.

Latency data was collected for three different sets of settings. These settings were changed on the IP Webcam app, and included resolution and image quality. The settings tested were:

- Resolution: 320 × 240; Compression Quality: 50
- Resolution: 800 × 480; Compression Quality: 25
- Resolution: 800 × 480; Compression Quality: 50

Figure 13: One of the photos from the latency test. In the background is a stopwatch; on the right is a device running the IP Webcam application, and on the left is a device running the demo application.

The tests were performed using a Sony Ericsson Xperia as the camera running IP Webcam and a Samsung Galaxy S III running the demo application. Note that the screen refresh rate and response time for both the camera device's screen and the receiver device's screen are on the order of 10 ms, so any measured result should have an uncertainty of about 10 ms.

3.2.2 Framerate Tests

The framerate is the rate at which video frames are recorded. It is primarily limited by the network bandwidth, the speed at which the camera device can capture new frames, and the rate at which the tablet running the demo application can save the frames.

The framerate of the video stream is continuously monitored by the demo application. While the framerate may fluctuate slightly during the recording, in part due to random network effects such as packet loss, this is not anticipated to cause any problems for our purposes.
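Returning briefly to the latency test of Section 3.2.1: each photo captures three clock readings at a single instant (the physical stopwatch, the stopwatch as shown on the camera device's screen, and the stopwatch as shown in the demo app), and the two latency components follow by subtraction. A minimal sketch, with made-up readings rather than measured data:

```python
def split_latency(stopwatch_ms, camera_screen_ms, demo_app_ms):
    """Decompose total latency into its two sources.

    stopwatch_ms     -- time shown on the physical stopwatch (most recent)
    camera_screen_ms -- older time shown on the camera device's screen
    demo_app_ms      -- oldest time shown in the demo application
    """
    capture = stopwatch_ms - camera_screen_ms   # stopwatch -> camera latency
    network = camera_screen_ms - demo_app_ms    # camera -> tablet latency
    return {"capture": capture,
            "network_and_display": network,
            "total": capture + network}
```

For example, readings of 5000 ms, 4880 ms, and 4840 ms would give a 120 ms capture latency, a 40 ms network-and-display latency, and a 160 ms total.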
At the end of the recording, the application automatically determines the average framerate f, which is defined as:

    f = N / t        (1)

where N is the number of frames recorded for the video and t is the time duration of the video. The output video file is encoded with framerate f. The framerate was measured for resolutions 320 × 240, 640 × 480, 800 × 480, and 1280 × 720 at compression qualities of 1, 25, 50, 75, and 100. The tests were performed using a Sony Ericsson Xperia as the camera running IP Webcam and a Samsung Galaxy S III running the demo application.

3.2.3 Practical Usage Tests

Since the ultimate goal of the project is to allow a human therapist to remotely view a wheelchair trainee performing new skills, it is important to show that, in addition to the benchmark figures, the video is also of sufficient subjective quality to provide a pleasant viewing experience which allows one to clearly see what is going on. To test the system under real-world conditions, three videos were recorded:

- Video 1 (1.76 MiB, 14 seconds): a person in a wheelchair navigates around a perpendicular bend in a corridor.
- Video 2 (3.89 MiB, 31 seconds): a person in a wheelchair navigates around two stools placed in a corridor in a figure-eight pattern.
- Video 3 (2.67 MiB, 21 seconds): a person in a wheelchair travels along a corridor towards the camera.

From the tests of framerate and latency, the IP Webcam settings of 640 × 480 resolution and quality 50 were chosen to ensure acceptable performance. These tests were performed using a Samsung Galaxy S II as the camera running IP Webcam and a Nexus 7 as the tablet running the demo application.

4 Results and Discussion

4.1 Latency

For each of the three sets of settings, 14 to 16 data points were collected; the full tables of data can be found in Tables 1, 2, and 3. The results are graphed in Figure 14.

From Figure 14, it can be seen that a change in image quality from 25 to 50 had a negligible effect on latency.
Table 1: Test results for latency in milliseconds from stopwatch to camera for different settings of video resolution and frame compression quality.

    Resolution   Qual.   Mean ± σ          (min, max)   Trials
    320 × 240    50      115.50 ± 22.00    (88, 132)    16
    800 × 480    25      167.14 ± 24.72    (132, 220)   14
    800 × 480    50      163.60 ± 27.11    (122, 220)   15

Table 2: Test results for latency in milliseconds from camera to tablet for different settings of video resolution and frame compression quality.

    Resolution   Qual.   Mean ± σ          (min, max)   Trials
    320 × 240    50      40.12 ± 51.09     (0, 176)     16
    800 × 480    25      93.71 ± 97.75     (0, 396)     14
    800 × 480    50      88.00 ± 81.77     (0, 274)     15

Table 3: Test results for total system latency in milliseconds for different settings of video resolution and frame compression quality.

    Resolution   Qual.   Mean ± σ          (min, max)   Trials
    320 × 240    50      155.63 ± 56.71    (88, 308)    16
    800 × 480    25      260.86 ± 101.32   (176, 572)   14
    800 × 480    50      251.60 ± 61.43    (214, 396)   15

Figure 14: Graph of latency times for the three different sets of settings. For more details, please refer to Tables 1, 2, and 3.

However, a change in resolution had a much greater effect, presumably because a higher resolution requires more data to be sent in each frame, causing a corresponding increase in the amount of processing time per frame. It is worth noting again that the latency from the stopwatch to the camera is out of our control, and it makes up approximately two thirds of the overall latency in the system. The total latency rarely rises above one third of a second, which is an acceptable amount of delay for the purposes of our demo application. While the latency may be noticeable by the user, it should not significantly affect the ability of the wheelchair user to record videos of themselves.
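As an aside on how the summary rows in Tables 1-3 were aggregated: each row is a mean, a spread σ, the extremes, and a trial count over the individual photo measurements. A sketch of that aggregation, assuming σ is the sample standard deviation (the report does not state which convention it uses) and using illustrative trial values rather than the experiment's raw data:

```python
from statistics import mean, stdev

def summarize(trials_ms):
    """Summary row in the style of Tables 1-3 from a list of latency trials (ms)."""
    return {"mean": mean(trials_ms),
            "sigma": stdev(trials_ms),   # sample standard deviation (n - 1 divisor)
            "min": min(trials_ms),
            "max": max(trials_ms),
            "trials": len(trials_ms)}
```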
In the worst case, the user may prematurely stop the recording a third of a second before the entire training routine is recorded, but under normal circumstances the last third of a second of any training routine is of little or no importance.

4.2 Framerate

The results from the framerate test are shown in Table 4. We see that for a resolution of 640 × 480, the framerate is around 15 fps. The higher the framerate, the smoother the appearance of moving entities in the video. If the framerate were insufficient, fine nuances in the actions and movements of the wheelchair trainee might not be perceptible. Conversely, a very high framerate requires a large amount of bandwidth and has diminishing returns on visual clarity, owing to the limited ability of the human eye to perceive motion at high framerates. The flicker fusion limit, the minimum flicker rate beyond which the human eye is less likely to discern additional motion detail, is generally accepted to be approximately 15 Hz [12]. That is, a flickering light that turns on and off faster than 15 Hz will generally be perceived as a steady light. This limit is intrinsically related to the acceptable framerate, and our experimental results show that our video approximately meets it.

In our practical usage test, it is our subjective opinion that the resulting video is smooth enough to easily capture wheelchair operation.

Table 4: Test results for the average number of frames per second for different settings of video resolution and frame compression quality. Due to fluctuations in wireless network bandwidth, the framerate can vary with an uncertainty of ±2 fps. For results labeled N/A, the application became unresponsive.

    Resolution    Q=1   Q=25   Q=50   Q=75   Q=100
    320 × 240     30    30     29     29     25
    640 × 480     16    15     15     15     N/A
    800 × 480     16    15     15     15     N/A
    1280 × 720    8     8      8      7      N/A

4.3 Practical Usage Tests

From visual inspection, the videos were acceptably smooth and detailed.
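The 640 × 480 setting used in these practical tests is one of the resolutions that, per the measured values in Table 4, still sustains roughly the 15 fps flicker-fusion threshold; 1280 × 720 does not. That filter over Table 4 can be sketched as follows (None stands for the N/A entries where the application became unresponsive):

```python
# Measured average framerates from Table 4, keyed by (width, height) and quality.
TABLE_4 = {
    (320, 240):  {1: 30, 25: 30, 50: 29, 75: 29, 100: 25},
    (640, 480):  {1: 16, 25: 15, 50: 15, 75: 15, 100: None},
    (800, 480):  {1: 16, 25: 15, 50: 15, 75: 15, 100: None},
    (1280, 720): {1: 8,  25: 8,  50: 8,  75: 7,  100: None},
}

def meets_fps(quality=50, min_fps=15):
    """Resolutions whose measured framerate at the given quality reaches min_fps."""
    return sorted((w, h) for (w, h), by_quality in TABLE_4.items()
                  if by_quality[quality] is not None
                  and by_quality[quality] >= min_fps)
```

At quality 50 this leaves 320 × 240, 640 × 480, and 800 × 480 as candidates; 640 × 480 was the one chosen for the practical tests.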
The average framerate was 15 fps, which is enough to capture motion at the speeds expected of a wheelchair. It was immediately clear to the viewer what the wheelchair user was doing, and nuances such as arm and wrist positioning were easily discernible as long as the wheelchair was not more than 8 m away. Even in the narrow corridor in which this test was performed, the field of view was sufficient to capture all the manoeuvres that a wheelchair trainee is expected to perform.

Figure 15: Frames extracted from the three sample videos, respectively, in clockwise order.

In addition to allowing the details of the training routine to be perceived effectively, the video files were reasonably small (1.76 MiB, 3.89 MiB, and 2.67 MiB). This allows them to be uploaded quickly. For example, an internet connection with an upload speed of 1 Mbit/s would require roughly 15-30 seconds to upload such video clips.

5 Conclusions

Wheelchair training can be costly and time-consuming, and in many places, difficult to access. EPICWheelS is an Android application that aims to provide a low-cost and easily accessible method of training wheelchair users in techniques and skills for safe and efficient wheelchair usage. Although EPICWheelS contains much of the functionality needed to teach skills to new wheelchair users, it is missing the ability for a user to easily record a video of themselves demonstrating new skills for evaluation by a remote wheelchair therapist. Our demo application has been designed to fill in that missing functionality.

By streaming video from one Android device's camera over a local network to another Android device, our demo application allows the user to see and record themselves in near real time (less than 300 ms delay) as they perform new wheelchair skills. Running on a Samsung Galaxy S III phone, the demo application can display and record video with a resolution of 640 ×
480 pixels at a frame rate of approximately 15 frames per second. Although it was designed for use with a specific Android IP Webcam application, the source code requires minimal modification to be used with any other MJPEG video stream. The simple interface of the demo application makes this functionality easy to use, even for users with little experience handling touch-interface devices.

If the functionality from our demo application is incorporated into the EPICWheelS application, EPICWheelS will be one step closer to achieving its goal of providing the benefits of working with a physical therapist to wheelchair users who do not have direct access to rehabilitation training.

6 Project Deliverables

6.1 List of Deliverables

The following is a list of deliverables originally planned for this project:

- An addition to the application's source code that allows video to be streamed in real time from an external camera and displayed on the tablet's screen.
  - This has been changed to writing a standalone application that contains the video streaming and recording feature as a proof of concept, because the current version of EPICWheelS is slated to undergo trials and is therefore in feature freeze.
  - Our sponsor will receive this deliverable by checking out the source code located in the Mercurial repository on Bitbucket, and he will receive a document containing instructions for compiling and building the application from the source code.

- A commercially available camera capable of streaming video to the tablet.
  - Our solution uses a secondary Android device as the camera. There exist many such commercially available devices that can be used for this purpose. An IP camera application is required for the Android device to output an MJPEG stream. The Android online app store is replete with free applications for this purpose; the one that has been tested for our project is called IP Webcam [6].
This application is also open source, so we can customise it for our specific use.

The next two deliverables were originally planned as extra objectives to be completed only if we had additional time. However, we have decided to focus on the first two deliverables instead.

- Modifications to the EPICWheelS application source code that implement an SQLite database to store user and file information locally, replacing the manifest files currently being used.

- A redesigned graphical user interface that improves usability, is more visually appealing, and delivers a better user experience.

6.2 Financial Summary

Refer to Table 5 for a list of components used in this project.

Table 5: Financial summary of components used in this project.

    Component        Quantity   Vendor          Total cost ($)   Purchased by
    Xoom Tablet      1          Motorola        approx. 350      Sponsor
    Nexus 7 Tablet   1          Google          250              Sponsor
    Nexus 7 Tablet   1          Google          250              Self
    Galaxy S III     1          Samsung         N/A              Self
    Galaxy S II      1          Samsung         N/A              Self
    Xperia           1          Sony Ericsson   N/A              Self
    Wheelchair       1          Unknown         N/A              Project Lab
    WiFi Router      1          Various         N/A              Self

6.3 Ongoing Commitments by Team Members

The following is a list of current ongoing commitments by team members:

- Return all hardware to the sponsor.
- Documentation of the demo application source code.
- Technical support for the sponsor.

7 Recommendations

Future work may include merging the demo application source code into the full EPICWheelS application so as to integrate the video streaming and recording capability into the final product. The voicemail sending capabilities can be augmented to include the ability to send video clips.
A fork of the IP Webcam application could be created and customised for an easy and consistent user experience, for example by automatically determining sensible default settings and displaying the IP address prominently for the user to enter (see Section 3.1.1). Furthermore, the graphical user interface theme may be improved for aesthetic appeal and ease of use.

8 Appendix: FFmpeg build instructions

The following instructions have been tested on Linux and Mac OS X and should work on most Unix-like operating systems.

- Download ffmpeg for Android from [11]:

      …-4f7d2fe-android-2011-03-07.tar.gz

  and decompress the downloaded file.

- In terminal, navigate to the decompressed directory ffmpeg_android and run the extraction script included in the archive.

- Download android-ndk-r5c (r8 may possibly also work, although it is untested) and decompress the downloaded file.

- In terminal, execute the following after replacing <PATH_TO_NDK> and <PATH_TO_ffmpeg_android> with their respective correct paths:

      $ <PATH_TO_NDK>/build/tools/make-standalone-toolchain.sh --platform=android-3 \
            --install-dir=<PATH_TO_ffmpeg_android>/toolchain
      $ export PATH=<PATH_TO_ffmpeg_android>/toolchain/bin:$PATH
      $ export CC=arm-linux-androideabi-gcc

- In terminal, navigate to the ffmpeg folder in ffmpeg_android.

- In terminal, execute the following after ensuring that the file path for sysroot has been set correctly:

      $ ./configure --target-os=linux --cross-prefix=arm-linux-androideabi- \
            --arch=arm \
            --sysroot=/Users/ansonliang/Desktop/android-ndk-r5c/platforms/android-3/arch-arm \
            --soname-prefix=/data/data/com.example.demoapp/ \
            --enable-shared --disable-symver --enable-small \
            --optimization-flags=-O2 \
            --enable-encoder=mpeg2video --enable-encoder=nellymoser \
            --enable-protocol=file --prefix=../build/ffmpeg/armeabi \
            --disable-doc --extra-cflags= --extra-ldflags=
      $ make
      $ make install

  NOTE: com.example.demoapp corresponds to the specific package name specified in AndroidManifest.xml for the intended Android application.
For example, in this case it is:

      <manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.demoapp"
          android:versionCode="1"
          android:versionName="1.0" >

- Look for the following required files in ../build/ffmpeg/armeabi:

      ffmpeg

- Place these files into the assets directory in your Android source code directory. Refer to Section 2.3.3 for usage details.

References

[1] S. de Groot, M. de Bruin, S.P. Noomen, L.H.V. van der Woude. Mechanical efficiency and propulsion technique after 7 weeks of low-intensity wheelchair training. Clinical Biomechanics, Volume 23, Issue 4, May 2008, Pages 434-441.

[2] A.H. MacPhee, R.L. Kirby, A.L. Coolen, C. Smith, D.A. MacLeod, D.J. Dupuis. Wheelchair skills training program: a randomized clinical trial of wheelchair users undergoing initial rehabilitation. Archives of Physical Medicine and Rehabilitation, Volume 85, Issue 1, January 2004, Pages 41-50.

[3] J. Moore. The benefits of mobile apps for patients and providers. British Journal of Healthcare Management, Volume 18, Issue 9, September 2012, Pages 465-467.

[4] B. Caulfield, J. Blood, B. Smyth, D. Kelly. Rehabilitation exercise feedback on Android platform. WH '11: Proceedings of the 2nd Conference on Wireless Health, Article No. 18, October 2011.

[5] A.M. Paquette, M. Moe, S. Jalbert, B. Docksteader. StrokeLink: empowering survivors. http://strokelink.ca Accessed 22 September 2012.

[6] P. Khlebovich. IP Webcam. Accessed 22 September 2012.

[7] Approved class specification documents: video class. Accessed 22 September 2012.

[8] EZ-Robot shop. EZ-Robot Inc. Accessed 22 September 2012.

[9] G. Kewney. High speed Bluetooth comes a step closer: enhanced data rate approved. News Wireless. Accessed 22 September 2012.

[10] Looxcie. Looxcie, Inc. http://www.looxcie.com Accessed 22 September 2012.

[11] H. Eriksson. Open Source Bambuser. Accessed 7 January 2013.

[12] W.H. Swanson, T. Ueno, V.C. Smith, J. Pokorny. Temporal modulation sensitivity and pulse-detection thresholds for chromatic and luminance perturbations.
Journal of the Optical Society of America, Volume 4, Issue 10, 1987, Pages 1992-2005.

