UBC Theses and Dissertations

A Hybrid P2P Pre-Release Distribution Framework for Flash Crowd Avoidance in P2P Video on Demand Streaming. Chiu, Stanley Kai Him, 2008.


Full Text

A Hybrid P2P Pre-Release Distribution Framework for Flash Crowd Avoidance in P2P Video on Demand Streaming

by

Stanley Kai Him Chiu

B.Sc., The University of British Columbia, 2005

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in The Faculty of Graduate Studies (Computer Science)

The University of British Columbia (Vancouver)

July, 2008

© Stanley Kai Him Chiu 2008

Abstract

In recent years, the high maintenance cost of centralized video on demand systems has led to the development of peer to peer video on demand systems. These peer to peer systems help to remove the cost and bandwidth limitations of a centralized group of servers. In a peer to peer scenario, the publisher and a small set of peers who were published to must handle all video requests. If many peers request a video after it is released, the small set of peers with the video cached may not have enough bandwidth to satisfy all requests. This situation is known as a flash crowd. We propose a hybrid peer to peer framework that allows publishers to publish videos before release time. For marketing purposes, it is common for videos that are ready for distribution to be kept from being released until a preset release time. By distributing a video before the release time, more peers will have the video at release time, thus allowing more requests to be handled. A hybrid peer to peer encryption management system is used to prevent users from viewing videos before release time. In order to determine who to distribute a video to before users are allowed to view the video, we design a hybrid peer to peer subscription system. In this system, users may specify interest in sets of videos and are notified of new videos matching the interest so that retrieval may start.
Finally, we modify an existing peer to peer video on demand framework to better handle concurrent streaming and downloading. Our experiments show that this framework can greatly increase a peer to peer streaming system's ability to handle flash crowd situations.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
1 Introduction
  1.1 Motivation
  1.2 Thesis Contributions
  1.3 Thesis Organization
2 Background Information and Related Works
  2.1 Background Information
    2.1.1 Traditional Television
    2.1.2 Internet Video Distribution
    2.1.3 Internet Video Streaming
    2.1.4 Peer to Peer Systems
  2.2 Related Works
    2.2.1 Proxy-based Systems
    2.2.2 Peer to Peer Multicast
    2.2.3 Peer to Peer Video on Demand
    2.2.4 Commercial Video Scheduling Systems
3 System Design
  3.1 Introduction
  3.2 System Operation
  3.3 Hybrid Peer to Peer Storage System
  3.4 Video Subscription
    3.4.1 Subscription Granularity
    3.4.2 Subscription Information Storage
    3.4.3 User Notification System
  3.5 Encryption Management System
    3.5.1 Encryption
    3.5.2 Encryption Information Storage
    3.5.3 Encryption Information Retrieval
  3.6 Video Retrieval
    3.6.1 Video Downloading
    3.6.2 Stream Prioritization
4 System Implementation
  4.1 Architecture
    4.1.1 Framework
    4.1.2 Component Details
  4.2 Implementation Details
    4.2.1 Related Software
    4.2.2 System Flow
    4.2.3 Graphic User Interface
5 Evaluation
  5.1 Theoretical
  5.2 Simulation
    5.2.1 Simulation Setup
    5.2.2 Simulation Methodology
    5.2.3 Simulation Results
6 Conclusion and Future Work
  6.1 Conclusions
  6.2 Future Work
Bibliography

List of Tables

3.1 Video Information
3.2 Storage Schema for Schedule Information
3.3 Storage Schema for Video Information of Channels and Series
3.4 Storage Schema for Encryption Information
4.1 Framework Layers

List of Figures

4.1 Sequence diagram of video publication
4.2 Sequence diagram of schedule maintenance
4.3 Sequence diagram of schedule and download display
4.4 GUI for publication of a video
4.5 GUI for displaying schedules and downloads
5.1 Flash crowd request pattern
5.2 Periodic flash crowd request pattern
5.3 Constant crowd rejection ratio
5.4 Flash crowd rejection ratio
5.5 Periodic flash crowd rejection ratio
5.6 Stream prioritization effects with RWT = 0 hours
5.7 Stream prioritization effects with RWT = 2 hours
5.8 Stream prioritization effects with RWT = 4 hours
5.9 Stream prioritization effects with RWT = 6 hours
5.10 Subscription rate effects with RWT = 0 hours
5.11 Subscription rate effects with RWT = 2 hours
5.12 Subscription rate effects with RWT = 4 hours
5.13 Subscription rate effects with RWT = 6 hours
5.14 Completion ratio for RWT = 0 hours
5.15 Completion ratio for RWT = 2 hours
5.16 Completion ratio for RWT = 4 hours
5.17 Completion ratio for RWT = 6 hours
5.18 Rejection ratio for 1 movie
5.19 Rejection ratio for 50 movies
5.20 Rejection ratio for 100 movies

Acknowledgements

I would like to thank my supervisor, Dr. Son T. Vuong, for his support and encouragement. I would also like to express my gratitude to my second reader, Dr. Charles Krasic, for his support and suggestions.

I would also like to thank all the members of the NIC lab, who made the lab a great environment to work in. Their comments and suggestions are greatly appreciated.

Finally, I express thanks to my family and my relatives who have supported me throughout the years.

Chapter 1

Introduction

1.1 Motivation

Video on demand systems have gained popularity in recent years. These systems allow users to watch any part of any video available on the system whenever they want. In the past, these systems were implemented by having a central server or group of servers provide videos for all other users of the system.
However, this approach is limited by the high cost and bandwidth required to maintain the central servers.

The emergence of the peer to peer computing model promises to solve this problem by removing the need for a central server. This model has proven successful in solving file sharing problems [3, 5]. Many researchers have attempted to create peer to peer systems suitable for video on demand streaming [10, 20, 28]. In peer to peer systems, each peer acts as both a server and a client, allowing for the sharing of resources amongst peers without a centralized server.

Without an expensive server with large amounts of bandwidth, the quality and smoothness of the video playback is often limited by the limited upstream bandwidth available to peers on the internet. These issues are especially apparent during flash crowd situations, where a great number of users attempt to watch the same video or small set of videos in a short amount of time. This often occurs when a popular video is released. In peer to peer approaches, only the publisher and the few peers published to have the video at release time. Since these few peers have limited bandwidth, it is unlikely and often impossible for these few peers to smoothly stream the video information to all peers who requested the video.

Commercial peer to peer video streaming services, such as Joost [4] and BabelGum [1], alleviate this problem by maintaining a set of servers to help distribute the video. However, like a central server solution, this method may be costly to maintain.

In recent years, centralized schedule systems have been introduced to a few commercial systems, including Veoh [7] and BBC iPlayer [2]. These systems display schedules of videos for users and allow them to subscribe to them.
Streaming quality may be improved in situations where only small numbers of users are subscribed to or watch the video after release. However, in the case of popular videos, large numbers of subscribed viewers may automatically attempt to download the video after release, increasing the demand on the senders. This may cause video startup delay and video quality to become worse.

In this thesis, we attempt to alleviate the problem of flash crowds without requiring the maintenance of a set of costly servers for video distribution. Popular videos, such as television series and movies, often have a preset release time for marketing and advertising purposes, even if the video data itself is ready to be released. For example, a season of a television series that has been edited and is ready to be viewed will often be released slowly over the duration of a season. Movies that are ready for digital distribution are also normally delayed in order to synchronize with the distribution of physical media such as DVDs, which take time to produce and transport.

We propose allowing the distribution of videos before release time to alleviate flash crowd problems. To support this, the system must decide: (1) how to prevent users from watching videos before release time, (2) who to distribute early releases of videos to, and (3) how to distribute video. We design a hybrid peer to peer encryption management system and a hybrid peer to peer subscription system, and modify an existing peer to peer video on demand system, BitVampire [20], to solve these problems. By using a hybrid peer to peer approach, the load on the central server can be kept to a very low level while supporting a large number of users in the system.

1.2 Thesis Contributions

This thesis attempts to solve bandwidth and flash crowd problems by allowing pre-release distribution of videos using information provided by the user about their video interests.
This is achieved with the following contributions:

• Design a suitable hybrid peer to peer video subscription and notification system that allows users to: (1) view schedules, (2) subscribe to sets of videos, and (3) receive notification when a subscribed video is released.

• Design a hybrid peer to peer distributed encryption management system that allows publishers to publish encrypted videos with a set date after which viewing is allowed. Decryption keys are automatically published onto the distributed system upon release time, allowing streaming peers to retrieve them and decrypt the video.

• Design a retrieval system that prioritizes requests from streaming peers over downloading peers to allow smoother streaming of videos and more efficient use of aggregate bandwidth for a video in the network.

• Design a system architecture for implementing the system and describe implementation details. We also implement a simple GUI prototype to demonstrate publisher and user interactions with the system.

• Evaluate the proposed systems and algorithms both theoretically and through simulations.

1.3 Thesis Organization

This thesis is divided into six chapters. Chapter 2 describes the background information for peer to peer video streaming technologies and details related works in the field. Chapter 3 presents the design for the proposed framework. In Chapter 4, we translate the design into an architecture and describe implementation details. We also implement a simple prototype with a graphical user interface (GUI) to show how a user interacts with the proposed system.
Chapter 5 evaluates the framework both theoretically and through simulations. Lastly, we conclude the thesis and discuss possible research directions for further improvement of the framework in Chapter 6.

Chapter 2

Background Information and Related Works

This chapter provides some background information concerning video streaming and peer to peer technologies.

2.1 Background Information

2.1.1 Traditional Television

Video streaming was originally made popular by television. Television video streaming allows a centralized authority to distribute multiple channels of video to users. Users may select a channel and view what the centralized channel administrators have chosen to be displayed at the time. This streaming of video information is achieved by multicasting the video data over certain preset frequencies, either over the air or through a physical cable.

These systems offered users little flexibility. The only choice a user may make is what channel he/she wants to watch. Over time, personal video recorders (PVRs) emerged. These new systems offer the user greater flexibility. While the channel administrators get to control at what time an episode is available, users can now see a schedule beforehand and mark videos for recording. By recording a video, a user is able to watch it at any time after it has been released. Pausing, fast forwarding, and jumping to specific parts of videos are also possible now since the entire video is stored.

2.1.2 Internet Video Distribution

Because traditional television multicasting methods required specific frequencies in the airwaves or specialized cables to be reserved for transmitting video information, the number of video streaming companies in an area is often limited. These same limitations also make it hard for companies to operate worldwide.
With the emergence of the internet, a new distribution method capable of transferring videos became available. The internet is a global network in which any connected computer can communicate with any other connected computer. With this new communications method, information such as videos can now be transferred to anyone in the world as simple files.

2.1.3 Internet Video Streaming

Centralized video streaming services [6, 8] gained popularity and demonstrated the feasibility of distributing video over the internet. A central server or a set of centralized servers stored the videos, allowing users to access and stream them directly. For consumers, these systems offer great flexibility. Unlike traditional television streaming, it is possible to watch videos that were released in the past without having recorded them. Watching any video ever released onto the central server and skipping to any part of a video are often allowed on such systems. Services offering these capabilities are commonly known as video on demand services.

2.1.4 Peer to Peer Streaming Systems

The central server approaches required large amounts of bandwidth, storage, and computing power. These requirements meant that it is often expensive to maintain a popular video on demand service. As hardware technologies improved, the bandwidth available to consumers also increased. This allowed another video streaming technology, peer to peer video streaming, to emerge. By allowing the viewers, also known as peers in peer to peer systems, to use their upstream bandwidth to share videos with each other, the central server can be eliminated. However, without the costly central servers, these systems are susceptible to problems such as flash crowds, in which many peers want to watch a video in a short amount of time. This is because bandwidth capacity is often asymmetric, with each peer having more download bandwidth than upload bandwidth.
In this situation, peers can easily consume more bandwidth than they contribute.

2.2 Related Works

This section describes previous works attempting to solve video streaming distribution problems.

2.2.1 Proxy-based Systems

Early video on demand systems attempted to alleviate server demand by using proxies [9, 11, 22, 25]. Two approaches for implementing these systems are (1) prefix-based caching [22, 25] and (2) segment-based caching [9, 11]. Proxies in prefix-based caching cache the beginning parts of popular videos. When viewers request these videos, the proxies send the cached frames while requests are concurrently sent to the central server to retrieve the remaining parts of the video. These proxies help to decrease startup latency for viewing videos and may also decrease the load on the central server. Segment-based caching systems split a video into parts called segments. These segments are distributed to various proxies, which can help serve viewers, decrease the load on the central server, and decrease startup latency.

2.2.2 Peer to Peer Multicast

Peer to peer video multicasting is another category of video streaming research [10, 12-14, 27]. These systems are similar to a peer to peer version of traditional television, allowing senders to stream specific video streams to large numbers of viewers. One such system, SpreadIt [14], operates by creating and maintaining a single tree. Video is streamed by having nodes in the tree relay video streams to their children nodes. However, this approach does not handle node failures very well, especially if the failure is at the source. Narada [12, 13], NICE [10], and ZIGZAG [27] focus on multiple sender and multiple receiver streaming applications. In these systems, trees are formed when a sender wants to stream video to multiple receivers.

2.2.3 Peer to Peer Video on Demand

Peer to peer video on demand systems are yet another category of research [20, 28].
In these systems, viewers have greater flexibility. Viewers are generally allowed to start watching any part of any video existing on the system. Mohamed et al. [28] propose a system in which a set of relatively stable peers, named seed peers or super peers, store all videos initially. As peers stream videos from seed peers, they cache the video. Caching the video allows the peers to help the super peers in further distributing videos. However, these systems are vulnerable to flash crowds. In this situation, only the seed peers have the video, making it difficult to stream to all peers at a rate that allows smooth playback of the video.

BitVampire [20] improves peer to peer video on demand streaming by allowing viewers to aggregate bandwidth from multiple senders and splitting videos into smaller parts called segments.

PALS [23] uses layered coding for peer to peer streaming. When congestion occurs, PALS may still be able to display a lower quality version of the video utilizing the bandwidth still available from senders.

2.2.4 Commercial Video Scheduling Systems

In recent years, commercial peer to peer video on demand systems [2, 4, 7] have appeared. Very recently, scheduling systems have been introduced to Veoh [7] and BBC iPlayer [2]. These systems allow retrieval of schedules and allow users to subscribe to video series or centralized video lists. Clients periodically poll the server, and thus are able to automatically retrieve videos upon release.

Chapter 3

System Design

This chapter presents the design of the video on demand system. By allowing the publication and distribution of videos before release time, we can increase the video quality and the smoothness of the video playback.

3.1 Introduction

As described in Chapter 1, peer to peer video streaming technologies are vulnerable to flash crowds. Commercial products generally attempt to solve this problem by setting up large numbers of servers with high bandwidth, but maintenance of the servers can be costly.
We propose another solution. By allowing publishers to publish and distribute videos before release time, we increase the duration of time the video can propagate through the peer to peer network before any peer watches the video. As it propagates through the network to subscribers, it is cached by these subscribed peers, thus increasing the availability of the video. Having cached the video or parts of the video, this set of subscribers will need to retrieve less data, or no data at all, when viewing of the video occurs after the video release time.

In order for this framework to improve video streaming performance, videos must be published before release time. Fortunately, this seems to be the case for many types of videos, such as television series and movies. For television series, there is normally a set time for each episode to be displayed during a
By allowing subscribedpeers to automatically retrieve videos, published videos will automaticallybepropagated through the peer to peer network even before release time, increasing availability. To prevent peers from watching distributed videos before therelease time is reached, the encryption management system does not distributedecryption keys for encrypted videos before release time. The actual distribution of peer to peer videos is based on Bitvampire [20]. However, by allowing peers to automatically download videos, situations where downloadingpeers and streaming peers compete for bandwidth might occur. We modify thealgorithms described in Bitvampire to handle these situations by prioritizingstreaming peers.11Chapter 3. System Design3.2 System OperationWhen a peer publishes a video,information about the video and decryptioninformation is sent to a light weightcentral server. The publishingpeer maythen start distributing an encryptedversion of the video dataonto the peerto peer video on demandsystem. After receiving the publicationinformation,the central server first stores thevideo information locally.The central servernow updates the video scheduleto include the published video andstores thisinformation onto the hybridpeer to peer storage systemdescribed in Section3.3. When the release time forthe video is reached, the centralserver publishesthe decryption informationonto the hybrid peer to peer storagesystem.Schedule information is accessibleby all peers in the network throughthehybrid peer to peer storagesystem. A peer may subscribeto a set of videos orwatch any released video. Whena peer watches a video,a modified version ofthe Bitvampire [20) streaming algorithmis used to retrieve thevideo. 
Before displaying the video, which is only allowed after release time, the decryption key is retrieved from the hybrid peer to peer system and used to decrypt the video.

By periodically polling the hybrid peer to peer storage system, peers have access to up to date schedules. When a new video in the schedule matches a subscription, the peer will automatically retrieve the video using a modified version of the BitVampire streaming algorithm described in Section 3.6.

3.3 Hybrid Peer to Peer Storage System

We first describe the storage system used in this thesis. Hybrid peer to peer frameworks [21, 28, 29] use a mix of different centralized or peer to peer architectures to improve performance. B. T. Loo et al. [21] investigate the performance of various hybrid peer to peer infrastructures. These hybrid peer to peer architectures often increase performance by dynamically choosing between various algorithms, such as searching through flooding or searching through structured peer to peer lookups, depending on the query. By supporting multiple infrastructures, the best features of different algorithms may be combined.

We propose a simple hybrid peer to peer storage system which is easy to implement and is suitable for our system. Information stored in our system, such as decryption information, should never be lost. In our system, we use a central server to create schedules and control publishing. This allows the administrators of the central server to have local access to information about all videos, and allows them to control the publication of videos. The central server also manages decryption keys before publication to ensure normal untrusted peers do not get access to these keys before release. Since a central server already exists, it is logical to use the central server for retrieval of this information. However, we do not want to cause great processor and bandwidth load on the central server.
Therefore, we combine a peer to peer infrastructure with the central server for distribution of information, using the central server only as a backup of information. When information is lost in the peer to peer infrastructure, a request is sent to the central server. The central server not only responds to this request, but also republishes this information onto the peer to peer infrastructure. By using this hybrid architecture, we hope to achieve greater stability by restoring the values on the distributed system when a value is lost, and by allowing viewing to work using the distributed system even when the central server is down. We also hope to decrease both the bandwidth and processor requirements of the central server by making peer streaming and scheduling requests use the peer to peer system unless problems occur.

Distributed hash tables (DHTs) [24, 26] are well suited to become the peer to peer infrastructure for our system. DHTs allow the storage of <key, value> pairs and allow lookup of a value by supplying a key. DHTs have the advantage of having fast lookups, being scalable, and being fault tolerant. One main disadvantage of DHTs is that queries have to be exact key lookups. Complex queries such as wildcards are hard to support over DHTs. Fortunately, all queries in this thesis can be designed to be lookups of exact keys without any loss of features.

3.4 Video Subscription

A video subscription system allows users to specify interest in sets of videos, which is referred to as subscription. It also allows the client program to automatically detect when a video matching a subscription is published. This allows videos to be downloaded before viewing of the video occurs.
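The read path of the hybrid storage system in Section 3.3 (query the DHT first; on a miss, fall back to the central server and republish the value onto the DHT) can be sketched as below. The Store interface and the class and method names are illustrative assumptions, not the thesis implementation:

```java
// Sketch of the hybrid lookup from Section 3.3: the DHT serves normal
// reads, and the central server acts only as a backup that repopulates
// the DHT when a value has been lost. Interfaces are illustrative.
public class HybridStore {
    interface Store {
        String get(String key);
        void put(String key, String value);
    }

    private final Store dht;           // unreliable, serves most traffic
    private final Store centralServer; // reliable backup, kept lightly loaded

    public HybridStore(Store dht, Store centralServer) {
        this.dht = dht;
        this.centralServer = centralServer;
    }

    public String get(String key) {
        String value = dht.get(key);
        if (value == null) {                // value lost from the DHT
            value = centralServer.get(key); // fall back to the backup
            if (value != null) {
                dht.put(key, value);        // republish so later reads stay P2P
            }
        }
        return value;
    }
}
```

With this shape, steady-state reads never touch the central server; it is contacted only when the DHT has lost a value, and the republish keeps subsequent reads on the peer to peer side.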
It alleviates the flash crowd problem by propagating the video throughout the peer to peer network as subscribers cache downloaded videos, and by allowing subscribers to have time to retrieve the video data before streaming.

3.4.1 Subscription Granularity

In order to allow simple and intuitive specification of sets of videos, a method of specifying subscriptions needs to exist. Personal video recorder systems have been popular in recent years. Such systems normally allow users to see the schedule before the release date. Users are then able to either select a specific video to record, or select a recurring time slot to record. Advanced PVR systems can also allow the automatic recording of a television series.

Based on this popular system, this thesis proposes to allow 3 types of subscriptions: specific video, series, and channel. Subscribing to a specific video allows the video to be retrieved before viewing. This method of subscription is
Whena publisherpublishes a video, the publisher must specific a time for when thevideo will bereleased. This allows schedules to be created and updated. Thepublisher mustalso specify the channel, series, episode number, and videoname of the video. Ifthe channel, series, or episode number are not specified,it is automatically setto a special keyword notapplicable. The storage of channel and seriesnumbersallows users who have subscribed to them to be notified properly,while thespecification of episode number allows the retrieval of only one copy ofthe videoshould multiple publishers publish different encodings of the same episodeondifferent channels. The name and episode numbers are also informationtheuser is likely to find useful when searching for a video. Afterreceiving theinformation, the central server assigns the video a unique video identifier(VID)15Chapter 3. System DesignName DescriptionVideo Identifier Uniqueidentifier for the video.Name The nameof the video.Release time Thetime after which users maywatch the video.Series name The seriesthis video belongs to.Episode name The episodethis video belongsto.Channel name__- Thechannel this videobelongs to.Table 3.1: Video Informationin order to distinguishbetween different videoswith the same name. Table3.1shows the schema thatthe central server storesto allow this systemto operate.With this information, thecentral server can maintaina schedule containingthe information about videosand when they willbe released. This is basicallya list of the video informationfor a set of videos, which willbe referred to asthe schedule in the restof this thesis.We can now use the hybridpeer to peer storage systemto store the schedule.By putting this informationonto the hybrid system,peers may periodicallypoiito find new schedules. Inorder to store theschedule on our hybridpeer to peerstorage system, we mustmap the information toa < key, value > pair. 
Since there is currently only one large schedule containing all videos, any universally known keyword, such as "schedule", may be used as the key. The value for this key will be the schedule for all videos in the system.

However, each key is often managed by only one node or a small set of nodes in DHT algorithms [24, 26]. With one node managing the schedule information for all videos and handling all user requests for the schedule, the processor and bandwidth load on this peer will be very high. Reliability issues might also become a problem, since peers on a DHT are often unreliable. Even though the hybrid peer to peer approach allows the central server to be used as a backup, the central server will spend great amounts of bandwidth sending the full schedule to a new peer every time a node handling the schedule leaves the network. Therefore, we optimize the design by splitting the single schedule with information about all videos into schedules with information about all videos in a certain time range. The time range, r, controls the size of the schedules. If r is set to a very high value, certain peers on the DHT may have to handle great amounts of requests. If r is set to a very low value, many peers might need to be contacted in order to get the schedule of videos in a certain time period, such as a day, since each peer is only responsible for a very short time range.

To map this to a <key, value> pair, we can simply concatenate "schedule" with the start time, s, of the time range, r, of the schedule. We will refer to a schedule storing information for videos between s and s + r as schedule_s. The value of r and one value of s, such as 0, must be globally known so users may map any time to the correct schedule.

Key             Value       Description
"schedule" + s  schedule_s  DHT pair allowing storage of a schedule containing information for videos between s and s + r.

Table 3.2: Storage Schema for Schedule Information
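Assuming, as above, that 0 is a known valid start time s (so valid start times are the multiples of the globally known range r), the time-to-key mapping can be written directly. The class and method names below are illustrative, not from the thesis:

```java
// Maps an arbitrary time t to the DHT key of the schedule covering it.
// Assumes 0 is a known valid start time s, so every valid start time is
// a multiple of the globally known time range r.
public class ScheduleKeys {
    public static String keyFor(long t, long r) {
        long s = (t / r) * r;  // equals floor(t / r) * r for non-negative t
        return "schedule" + s; // key of the schedule whose range contains t
    }
}
```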
For example, if it is known that 0 is a valid s and we want to find the schedule containing the time 100005, we may simply search the DHT using the key "schedule" concatenated with the integer value of floor(100005/r) * r. Table 3.2 shows the <key, value> pair stored onto the DHT.

3.4.3 User Notification System

Automatic retrieval of videos matching subscriptions by users allows videos to propagate through the network. One simple approach is to have users poll the hybrid peer to peer storage system periodically for updates to all schedules. Periodically matching updated videos in the schedules to a locally stored list of
Without knowledge of the videos released in that period, videos that match the user's subscriptions might not be found. As previously mentioned, retrieving schedules for every time period since the start of the system to obtain information for all videos requires large amounts of bandwidth and processor overhead and is likely infeasible.

Recall that this thesis supports three granularities for subscribing to videos: specific video subscriptions, series subscriptions, and channel subscriptions. Specific video subscriptions are unique in that the user must have found the video on a schedule, which means the video information is already known from the schedule. Therefore, missed notifications are a concern only for channel and series subscriptions.

Key                       Value          Description
"channel" + channelname   List<videos>   DHT pair allowing storage and retrieval of a
                                         list of videos in channel channelname.
"series" + seriesname     List<videos>   DHT pair allowing storage and retrieval of a
                                         list of videos in series seriesname.

Table 3.3: Storage Schema for Video Information of Channels and Series

With the hybrid peer to peer system already available, we propose simply putting lists of video information for channels and series directly onto the storage system. We map these lists of videos to the storage system by generating unique keys, which are "channel" + channelname for channels and "series" + seriesname for series. Table 3.3 shows the new information stored by the storage system.

With this information available, users may simply retrieve updates of video information for channels and series they have subscribed to when they start the program. There is no need to periodically poll for updates to these lists, since upcoming and current schedules are periodically retrieved.

Program 3.1 illustrates the pseudocode for this algorithm. The program first retrieves a list of videos for series the user has subscribed to. The list of videos matching series subscriptions is then added to the subscribedVideos list.
This is repeated for channels. Periodically, schedules for the time period between currentTime − pollRange and currentTime + pollRange are updated. The videos in these schedules are then matched against the subscriptions, and matching videos are added to the subscribedVideos list.

Program 3.1 Algorithm for retrieving schedules and finding subscribed videos.

    // @param pollRange            The number of time units before and after
    //                             the current time to poll for
    // @param pollRate             The number of time units between polls for
    //                             updated schedules
    // @param seriesSubscriptions  List of series subscriptions
    // @param channelSubscriptions List of channel subscriptions
    // @param videoSubscriptions   List of video subscriptions
    // @param subscribedVideos     List of videos matching subscriptions
    public void retrieveSubscribedVideos(long pollRange, long pollRate,
            List seriesSubscriptions, List channelSubscriptions,
            List videoSubscriptions, List subscribedVideos) {
        List seriesVideoList = getSeriesVideoList(seriesSubscriptions);
        updateSubscribedSeriesVideos(subscribedVideos, seriesVideoList,
                seriesSubscriptions);
        List channelVideoList = getChannelVideoList(channelSubscriptions);
        updateSubscribedChannelVideos(subscribedVideos, channelVideoList,
                channelSubscriptions);
        while (!quit) {
            long startTime = currentTime - pollRange;
            long endTime = currentTime + pollRange;
            List schedules = getSchedules(startTime, endTime);
            List matchedVideos = findSubscribedVideos(schedules,
                    seriesSubscriptions, channelSubscriptions,
                    videoSubscriptions);
            subscribedVideos.addAll(matchedVideos);
            Thread.sleep(pollRate);
        }
    }

3.5 Encryption Management System

Without encrypting videos, early distribution of videos allows users to view videos before release time.
This is obviously undesirable behaviour and is likely unacceptable to publishers. In order to prevent viewing of videos before release time, we introduce a simple encryption management system.

3.5.1 Encryption

When a video is published, we encrypt the video before distributing it to prevent viewing of the video. Encryption transforms the video data into normally unusable data. However, with the correct decryption key, a decryption algorithm can transform this unusable data back into the original video data. Our design allows the use of any encryption algorithm that requires only a decryption key for decryption.

In BitVampire [20], videos are split into smaller parts, segments, before publication occurs. These segments are further split into smaller pieces, blocks. By choosing a random key and an encryption method, the publisher may encrypt these small blocks before distribution. Users who retrieve these encrypted blocks will not be able to watch them. Since these blocks are very small, encryption generally does not affect the startup delay of the video stream.

3.5.2 Encryption Information Storage

Normal operation of the system requires that users be able to watch the video after release time. In order to support this, we propose that the publisher sends the decryption key and the encryption method name to the central server during publication. The central server does not distribute this information to anyone before release time, thus ensuring that viewing of the video does not occur. When the release time is reached, the decryption key may be put onto
the hybrid peer to peer storage system for users to retrieve. Table 3.4 shows how this information may be mapped into <key, value> pairs using the unique video identifier VID.

Key              Value                   Description
"key" + VID      decryption key          DHT pair allowing storage and retrieval of the
                                         decryption key for a video with unique
                                         identifier VID.
"method" + VID   encryption method name  DHT pair allowing storage and retrieval of the
                                         encryption method name for a video with unique
                                         identifier VID.

Table 3.4: Storage Schema for Encryption Information

3.5.3 Encryption Information Retrieval

When a user chooses to watch a video after release time, the user first retrieves the decryption key and encryption method name from the storage system. As each block arrives through the underlying peer to peer streaming application, it can simply be decrypted before being displayed by the peer to peer streaming application.

Using this approach, peers will decrypt the video every time they view it. If the encryption method is very demanding on processing power, an optimization might be useful to shift when the video is decrypted and to decrease the number of times a video is decrypted. This allows for smoother playback of the video even for users who have older and slower computers.

We optimize the algorithm by having users store the decrypted version of the video whenever a video is downloaded or viewed after release time. When we store pieces of the movies, segments, we also store whether or not the segment is encrypted.

When senders distribute segments to receivers, the sender must specify whether or not the segment is encrypted. Before release time, it will always be encrypted. However, after release time, if it is encrypted, the user may decrypt it before storage. By using this optimization, copies of the videos on the network will become decrypted versions over time and will propagate as decrypted versions.
Each viewer will still only need to decrypt each block at most once, and users viewing after release time may not need to decrypt at all.

Program 3.2 presents the pseudocode for the retrieval of blocks. After the retrieval of a block, the user first checks to see if the video block is encrypted. If it is not, the block may simply be stored as a decrypted block. If it is encrypted and the video has not been released, the user stores the encrypted version of the block. If it is encrypted and the video has been released, the user retrieves the decryption information from the hybrid peer to peer storage system and uses it to decrypt the block. This block is then stored in its decrypted form.

Program 3.2 Algorithm for handling the decryption of blocks.

    // @param releaseTime The release time of the video
    // @param vid         The unique identifier for the video
    // @param block       The block of video data
    // @param encrypted   True if the transferred block is encrypted,
    //                    else false
    public void receivedBlockEncryptionHandler(long releaseTime, int vid,
            Block block, boolean encrypted) {
        long currentTime = getCurrentTime();
        if (!encrypted) {
            storeDecryptedBlock(vid, block);
        } else {
            if (currentTime < releaseTime) {
                storeEncryptedBlock(vid, block);
            } else {
                byte[] decryptionKey = getDecryptionKey(vid);
                String decryptionMethod = getDecryptionMethod(vid);
                Block decryptedBlock = decrypt(block, decryptionKey,
                        decryptionMethod);
                storeDecryptedBlock(vid, decryptedBlock);
            }
        }
    }
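The per-block encrypt and decrypt steps could look like the following self-contained sketch. This is our illustration using Java's standard javax.crypto API with AES as the assumed method; the thesis itself permits any algorithm that decrypts with a single key:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

// Illustrative per-block encryption/decryption, assuming AES is the chosen
// encryption method. The publisher encrypts each block with a random key;
// after release time, viewers fetch the key and method name and decrypt.
public class BlockCrypto {
    // Encrypts or decrypts one block with the given 16-byte key.
    static byte[] apply(int mode, byte[] key, byte[] block) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(mode, new SecretKeySpec(key, "AES"));
        return cipher.doFinal(block);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "0123456789abcdef".getBytes(); // randomly chosen in practice
        byte[] block = "one small video block".getBytes();
        byte[] encrypted = apply(Cipher.ENCRYPT_MODE, key, block);
        byte[] decrypted = apply(Cipher.DECRYPT_MODE, key, encrypted);
        System.out.println(Arrays.equals(block, decrypted)); // prints "true"
    }
}
```

Because each block is only a small piece of a segment, encrypting and decrypting at this granularity keeps the per-block work small, which is consistent with the claim that encryption does not noticeably affect startup delay.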
3.6 Video Retrieval

With the help of the subscription and encryption systems, videos may now be distributed onto the peer to peer video on demand streaming network. This section investigates optimizations to BitVampire's peer to peer streaming algorithm that allow downloading requests and streaming requests to coexist without affecting streaming quality.

3.6.1 Video Downloading

There are now two kinds of data transfers, video streams and video downloads. Video streams occur when a user plays a video, while video downloads occur when videos match user subscriptions. The video streams are handled by the peer to peer streaming algorithm. The download transfers can use traditional peer to peer file transferring algorithms or peer to peer streaming algorithms. Peer to peer file transferring algorithms generally work by sending blocks out of order. If a user decides to watch a subscribed video that has not completed downloading, out-of-order blocks already cached may not help in lowering the bandwidth requirements for smooth playback of the video.

For example, imagine a situation where a peer has downloaded 10 minutes of a 20-minute video with a bitrate of 100 KBps. The peer now decides to start watching the video. If the peer has downloaded the last 10 minutes of the video, the data downloaded does not help in streaming the first 10 minutes of the video, and the peer requires a constant 100 KBps for smooth playback. If the peer has downloaded the first 10 minutes, the user only requires (20 minutes − 10 minutes) * 100 KBps / 20 minutes = 50 KBps for smooth playback of the video. This is because the peer already has the video information required to display the first 10 minutes, and can use the 20 minutes of playback time to download the last 10 minutes while maintaining smooth video playback.

As illustrated by the example, in-order video transfers are preferable for our design.
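The arithmetic in the example can be written out as a short check (a sketch using the illustrative numbers above; the formula assumes the cached data is a contiguous prefix of the video):

```java
// Required download rate for smooth playback when a contiguous prefix of the
// video is already cached: the remaining data must arrive within the full
// playback time.
public class PrefixRate {
    // lengthMin: total video length in minutes, prefixMin: cached prefix in
    // minutes, rateKBps: video bitrate in KBps.
    static double requiredRateKBps(double lengthMin, double prefixMin,
                                   double rateKBps) {
        return (lengthMin - prefixMin) * rateKBps / lengthMin;
    }

    public static void main(String[] args) {
        // First 10 minutes of a 20-minute, 100 KBps video already cached.
        System.out.println(requiredRateKBps(20, 10, 100)); // prints 50.0
        // Nothing cached: the full bitrate is needed.
        System.out.println(requiredRateKBps(20, 0, 100)); // prints 100.0
    }
}
```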
Therefore, our design makes use of peer to peer streaming algorithms for both streaming and downloading.

3.6.2 Stream Prioritization

With both downloading and streaming transfers occurring at the same time, downloads may compete with video streams for bandwidth when there is not enough bandwidth in the system. Slow video transfer in video download situations simply means the download will go slower. In the case of video streaming, slow video transfer can cause video playback to stutter and pause. We therefore optimize the system to prioritize video streams.

Receiver Side Prioritization

On the receiver side, if a user chooses to stream a video while downloading subscribed videos, the receiver may not have enough download bandwidth to support both. As previously described, the user will likely prefer a smooth video stream over faster video downloads. Thus, we modify the BitVampire algorithm to prioritize video streaming.

Videos in BitVampire [20] are split into smaller pieces called segments. Retrieval of videos is achieved by locating peers with a specific segment and requesting the segment from those peers. Peers in BitVampire keep track of their download bandwidth. If the incoming rate from sender s decreases for a period T, or the receiver is notified by sender s to switch, the receiver switches away from sender s by sending the unfinished part of the request to another peer with the segment cached.

By keeping track of whether a video transfer is a stream or a download, the retrieval algorithm can be improved to prioritize streaming. Using the algorithm described by BitVampire to monitor download speeds, we can determine whether the video stream is being retrieved at the requested speed.
If it is, then the downloading of other videos must not be affecting the speed of the stream, and nothing needs to be done. However, if the speed of the stream transfer is lower than the speed requested from the suppliers, the downloading of other videos may be affecting the retrieval speed of the stream, possibly causing pauses in the video playback.

We modify the BitVampire supplier switching algorithm to solve this issue. When it is detected that the incoming rate for the video stream transfers is lower than it should be for a period T, instead of switching senders, any video transfers for downloading videos are first stopped. The receiver then waits another period T, and if the incoming rate is still too low, the supplier is switched. After the user finishes watching the video stream, video downloading is resumed.

Supplier Side Prioritization

In BitVampire, each peer sets the maximum upload bandwidth, band_max, that it can handle. When peers retrieve videos, they retrieve a list of peers with the segment they want. The scheduling algorithm in BitVampire schedules amongst peers with the segment using information about each peer's maximum upload bandwidth, band_max, and available bandwidth, band_avail. Without modification of this algorithm, the supplier would treat downloads and streams of videos the same way. Video playback smoothness is directly affected by the bandwidth of video streams, but not by the bandwidth of video downloads. We therefore modify this algorithm to prioritize video streams.

We allow peers retrieving videos to send a flag to video suppliers, specifying whether the video transfer is a download or a stream. In order to allow suppliers to replace downloaders with streamers when a supplier is unable to serve a stream request,
each supplier keeps track of the bandwidth reserved by each downloader. A supplier also maintains information about the amount of bandwidth reserved for downloads, band_downloads, along with the previously mentioned band_max and band_avail.

When a peer attempts to stream a video, it first tries the original algorithm by finding suppliers with the segment and reserving bandwidth from suppliers using knowledge of band_max and band_avail for each supplier. If it is unable to find enough suppliers to reserve the required bandwidth, the peer then assumes each supplier has band_avail = band_avail + band_downloads and reserves bandwidth amongst suppliers using this new information. Program 3.3 illustrates this algorithm. When there is not enough bandwidth for smooth streaming, the peer starts requesting that suppliers free bandwidth by stopping transfers to downloaders. This repeats until there is enough bandwidth to stream the video smoothly or the list of suppliers is depleted.

Program 3.4 presents how a supplier can free bandwidth for streaming requestors. The supplier stops transfers to downloaders one by one until it has freed the requested amount of bandwidth. The supplier also updates the available bandwidth, the bandwidth reserved for downloads, and the list of downloaders.
Program 3.3 Algorithm for requesting the stream of a video segment.

    // @param vid               The unique identifier for the video
    // @param segmentNum        The segment of the video that is being
    //                          requested
    // @param requiredBandwidth The bandwidth required to stream smoothly
    public void streamSegment(int vid, int segmentNum, int requiredBandwidth) {
        List suppliers = findSuppliers(vid, segmentNum);
        int totalAvailBandwidth = 0;
        for (int i = 0; i < suppliers.size(); i++) {
            totalAvailBandwidth += suppliers.get(i).availBandwidth;
        }
        if (totalAvailBandwidth > requiredBandwidth) {
            requestSegment(vid, segmentNum, requiredBandwidth, suppliers);
        } else {
            for (int i = 0; i < suppliers.size(); i++) {
                int missingBandwidth = requiredBandwidth - totalAvailBandwidth;
                if (missingBandwidth <= 0) {
                    streamSegment(vid, segmentNum, requiredBandwidth);
                    return;
                }
                if (missingBandwidth < suppliers.get(i).reservedDownloadBandwidth) {
                    suppliers.get(i).freeFromDownloaders(missingBandwidth);
                    totalAvailBandwidth += missingBandwidth;
                } else {
                    suppliers.get(i).freeFromDownloaders(
                            suppliers.get(i).reservedDownloadBandwidth);
                    totalAvailBandwidth += suppliers.get(i).reservedDownloadBandwidth;
                }
            }
            streamSegment(vid, segmentNum, requiredBandwidth);
        }
    }

Program 3.4 Algorithm for freeing bandwidth by stopping transfers to downloaders.

    // @param requiredBandwidth The extra bandwidth required by the
    //                          streaming peer
    // @param downloaders       The list of downloaders being transferred to
    public void freeFromDownloaders(int requiredBandwidth, List downloaders) {
        while (requiredBandwidth > 0 && !downloaders.isEmpty()) {
            Downloader d = downloaders.get(0);
            reservedDownloadBandwidth -= d.reservedBandwidth;
            availBandwidth += d.reservedBandwidth;
            requiredBandwidth -= d.reservedBandwidth;
            stopTransfer(d);
            downloaders.remove(0);
        }
    }

Chapter 4

System Implementation

This chapter describes the implementation details of the system. In section 4.1, we describe the general architecture of the system.
Details for implementing this architecture using existing technologies are described in section 4.2.

4.1 Architecture

4.1.1 Framework

Our system is composed of multiple layers based on a simplified version of the RTG framework presented in BitVampire [20], which is inspired by the JXTA framework [16]. The RTG framework decouples the layers, separating the system logic and allowing algorithms to be replaced with relative ease. Table 4.1 describes each layer and lists the components that belong to each layer.

Layer          Description                                        Components
Application    This layer contains application specific           GUI, Application Logic
               components, which often includes the GUI.
Abstraction    This layer provides a simple interface that        Controller
               allows the application layer to access the
               service layer. Service layer modules can thus
               be swapped easily without changes to the
               application layer.
Service        The service layer contains common services.        Encryption, Subscription,
               Different algorithms to provide a service may      Automatic Retrieval, Peer
               be implemented here to affect the performance      to Peer Streaming
               of the system.
Core           This layer contains the high level peer to peer    Hybrid peer to peer
               communication models.                              storage system
Communication  This layer contains the low level peer to peer     DHT, Sockets
               communication details.

Table 4.1: Framework Layers

4.1.2 Component Details

At the highest level of this architecture, interaction with the user is supported through a graphical user interface (GUI). Initiation of and interaction with the GUI triggers the Application Logic component, which processes the user's requests.

In order to handle these requests, the Application Logic component calls the
abstraction layer Controller component. This component is normally only an adaptor, containing no real logic. By calling the right service layer component, the controller delegates the logical computations to the correct service layer components.

In this system, there are four main service components: Encryption, Subscription, Automatic Retrieval, and Peer to Peer Streaming. Encryption and decryption of video data is handled by the Encryption component. The Subscription component handles user subscriptions, detection of videos matching user subscriptions, and updating of schedules. When a video matching a user's subscriptions is detected, it is handled by the Automatic Retrieval component. The Automatic Retrieval component maintains a list of videos that match subscriptions together with their download statuses. The role of this component is to provide the GUI with a list of downloading videos, as well as automatically starting and stopping downloads to control the number of videos being downloaded concurrently. The last major component, Peer to Peer Streaming, allows for the streaming of video data. In this system, this is provided by BitVampire.

Communication with other peers is required for the operation of many of these components. The core layer abstracts this in order to facilitate the sharing and reuse of the core layer components by higher levels. In this system, the major component in this layer is the Hybrid Peer to Peer component, which allows data to be retrieved from the DHT if the information is on the DHT, or from the central server if the DHT has lost the information requested.

The DHT and central server communication is supported by the final layer, communication. The DHT component allows the storage of <key, value> pairs on the network, while the Central Server component allows communication between the central server and the client.
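The fallback behaviour of the Hybrid Peer to Peer component can be sketched as follows. This is our illustration only; the DhtClient and CentralServerClient interfaces are hypothetical stand-ins for the real DHT and central server handlers:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the hybrid lookup: try the DHT first; on a miss, fall back to the
// central server and restore the value onto the DHT so later lookups succeed.
public class HybridStorage {
    interface DhtClient { String get(String key); void put(String key, String value); }
    interface CentralServerClient { String get(String key); }

    private final DhtClient dht;
    private final CentralServerClient server;

    HybridStorage(DhtClient dht, CentralServerClient server) {
        this.dht = dht;
        this.server = server;
    }

    // Returns the value for key, repairing the DHT after a miss.
    public String get(String key) {
        String value = dht.get(key);
        if (value == null) {
            value = server.get(key);   // authoritative backup copy
            if (value != null) {
                dht.put(key, value);   // restore the pair onto the DHT
            }
        }
        return value;
    }

    public static void main(String[] args) {
        Map<String, String> dhtStore = new HashMap<>();
        Map<String, String> serverStore = new HashMap<>();
        serverStore.put("key42", "decryption-key-bytes");
        HybridStorage storage = new HybridStorage(
                new DhtClient() {
                    public String get(String k) { return dhtStore.get(k); }
                    public void put(String k, String v) { dhtStore.put(k, v); }
                },
                serverStore::get);
        System.out.println(storage.get("key42"));          // served by the central server
        System.out.println(dhtStore.containsKey("key42")); // restored: prints "true"
    }
}
```

Note that each DHT miss costs two messages (one answer plus one restore), which is exactly the 2 * messagesize assumption used in the theoretical evaluation of Chapter 5.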
4.2 Implementation Details

4.2.1 Related Software

This prototype makes use of two other software frameworks: BitVampire [20] and FreePastry [15]. BitVampire provides the peer to peer streaming algorithms, while FreePastry provides the distributed hash table algorithms. Since both FreePastry and BitVampire are built using the Java programming language, Java is also used to implement this prototype.

4.2.2 System Flow

This section details the flow of the high level requests with which a user may interact with the system. Viewing of videos is not shown here because it is handled by the underlying peer to peer video streaming architecture, BitVampire.

Publication

Figure 4.1 shows the UML sequence diagram for the publication of a video. When a user publishes a video using the GUI, the logic in the application layer GUI classes checks the input information for errors. If there are no errors, the central server is informed of the publication through the communication layer CentralServerHandler class. As part of the core layer, this class handles the communication between the central server and the client. A response containing a unique video identifier for the video is expected from the central server, and is passed back to the application logic in the GUI when it is received. To publish the actual video data, the abstraction layer Controller class is now called in order to gain access to a service. The Controller class passes the request to the appropriate service, MediaService. MediaService is part of the BitVampire code and provides peer to peer streaming services to the application. One of its
functions, publish, is now called to handle the distribution of the video data. We modify MediaService to call the Encryption service class for encryption of data before distributing the data.

Figure 4.1: Sequence diagram of video publication

Schedule Updates and Automatic Video Retrieval

Figure 4.2 shows the sequence for updating schedules and downloading videos. The service layer SubscriptionManager class calls the communication layer HybridStorage class to retrieve updated video lists for channels and series the user has subscribed to. The service layer RetrievalManager class is called to match this updated video list with its current list of video downloads. If new videos to download are found, they are queued by the RetrievalManager and automatically downloaded using the MediaService class. Updates for recent schedules are then retrieved by the SubscriptionManager using the HybridStorage class. This list of videos is sent to the RetrievalManager, which checks for matches to subscriptions and queues any matched videos for download. Periodic polling for recent schedules is done by the SubscriptionManager.

Figure 4.2: Sequence diagram of schedule maintenance

View Schedules and Downloads

The flow for viewing schedules and viewing downloads is illustrated in Figure 4.3. After a user decides to view the schedule using the application layer GUI, the GUI simply retrieves the schedules from the service layer SubscriptionManager and the download list from the service layer RetrievalManager. Since both SubscriptionManager and RetrievalManager keep these lists up to date, there is no need to contact the network.
Figure 4.3: Sequence diagram of schedule and download display

4.2.3 Graphic User Interface

We implement a prototype with a Graphical User Interface (GUI) to demonstrate how publishers and clients may interact with this system. The GUI for input of publication details and the display of the schedule information are shown in Figure 4.4 and Figure 4.5 respectively.

Figure 4.4: GUI for publication of a video

In Figure 4.4, the publisher is asked to input the information which our framework requires, such as the title ("Keywords" in the BitVampire GUI), series ("MediaName" in the BitVampire GUI), release date, and release time. Other information, including the bitrate, is required by BitVampire.

Figure 4.5: GUI for displaying schedules and downloads

Figure 4.5 displays a user's GUI after he has subscribed to a series named "Found". This is done by first clicking on any video in the schedule, shown on the left side of the interface. On the right side, various information about the video, such as the channel name and series name, is displayed, along with buttons for each type of subscription. These buttons allow the user to subscribe to sets of videos.
The screenshot shows that both the released episode 3 and the unreleased episode 4 are automatically detected as subscribed videos and are displayed in the download area. The statuses of both episodes are also set to "Downloading", signifying that automatic retrieval has started.

Chapter 5

Evaluation

This chapter evaluates the proposed system both theoretically and through simulations.

5.1 Theoretical

We evaluate the hybrid peer to peer approach by comparing it to a central server approach. In a traditional central server system, each request goes directly to the central server, introducing bandwidth_t units of bandwidth overhead, where

    bandwidth_t = requests_p * messagesize_p
                + requests_r * messagesize_r                              (5.1)
                + numpollschedules * pollRate * averagemessagesize_s

requests_p represents the number of publish requests and requests_r represents the number of retrieval requests. messagesize_p is the size of a publish message, while messagesize_r and averagemessagesize_s are the sizes of video information messages and schedule messages respectively. numpollschedules represents the number of schedules within the pollRange, and pollRate is the rate at which users poll for updates to schedules.

In the hybrid peer to peer storage system, the system overhead is defined as

    bandwidth_h = requests_p * messagesize_p
                + lostrequests_r * (2 * messagesize_r)                    (5.2)
                + lostrequests_s * (2 * averagemessagesize_s)

where lostrequests_r is the number of retrieval requests not retrievable from the DHT and lostrequests_s is the number of schedule requests not retrievable from the DHT. Note that we assume the DHT overhead is negligible, since the high frequency and size of viewing requests and schedule requests should far outweigh the DHT overhead in systems like FreePastry. In the hybrid system, we assume each lost request on the DHT results in 2 * messagesize because the central server answers the request and also restores the information onto the DHT.

We evaluate each part of the equation independently.
For publishing messages, we see that both systems require requests_p * messagesize_p units of data transfer. The DHT is not used for publishing requests and thus there is no difference in bandwidth incurred. However, publishing messages are likely to be the least common since publishing only occurs once per video.

We now evaluate the retrieval related cost. We first find the difference between the hybrid system and the traditional system, as shown below:

    lostrequests_r * (2 * messagesize_r) − requests_r * messagesize_r
        = (2 * lostrequests_r − requests_r) * messagesize_r               (5.3)

We see that if (2 * lostrequests_r) < requests_r, then the hybrid peer to peer system requires less bandwidth. In other words, if 50% or more of the DHT requests are not lost, then the hybrid peer to peer system performs better. The schedule related cost comparison is equivalent to the retrieval related cost. It is easy to see that if 50% or more of the DHT requests are not lost, then the hybrid peer to peer system has an advantage. We also conclude that the percentage of lost messages in the DHT is proportional to the amount of bandwidth required by the hybrid storage system.

Simulations done by Rowstron et al. have achieved greater than 95% success rates in many cases with feasible parameters using PAST, which is a storage system built on top of Pastry [24]. At 95%, the hybrid peer to peer system only uses requests_r / (2 * 0.95 * requests_r) * 100% = 52.6% of the bandwidth used by a traditional central server approach for video information and schedule distribution.

5.2 Simulation

We implement a simulator with which we evaluate the performance of the subscription and pre-release distribution system. The simulator used is a modified version of the original BitVampire [20] simulator.

5.2.1 Simulation Setup

We simulate a network with 4500 peers and 6 seed peers. Seed peers in BitVampire are peers who are more stable, have more bandwidth available, and have more storage space. Most parameters in the simulation are equivalent to the BitVampire simulation parameters.
Peers have a maximum bandwidth ranging between 512 kbps and 2 Mbps and a maximum delay ranging from 4 ms to 10 ms. While peers only allow a maximum of 128 kbps to 1 Mbps to be used for upload, seed peers have maximum upstream bandwidth ranging from 1 Mbps to 2 Mbps. We allow seed peers to store a maximum of 1000 to 3000 segments, while peers may store 24 to 36 segments. In BitVampire's simulator, each segment is 5 minutes long and roughly 19 MB. Each video consists of 12 segments and is 60 minutes in length. The bitrate of each video is 512 kbps. In BitVampire's simulation, 500 movies exist in the system. In this simulation, we choose to use only 50 movies. This allows us to simulate a flash crowd over a small set of movies more easily. We will vary this parameter in one of the simulations.

Peers in peer to peer networks often leave and rejoin. To properly simulate this, the BitVampire simulator simulates 20 peers leaving per minute, with peers rejoining the network after 15 minutes to 3 hours.

Three different video request arrival patterns are used to simulate different kinds of user behaviour in the real world: constant arrival, flash crowd arrival, and periodic flash crowd arrival. To simulate a constant arrival pattern, 20 requests arrive per minute constantly. This kind of behaviour can occur for popular old videos, which attract a constant stream of interested viewers. Another kind of arrival pattern, the flash crowd, often occurs when a new video is released and great numbers of people want to view it at the same time. To simulate this, we first simulate peer requests at a rate of 20 requests per minute, then increase the rate to 120 requests per minute, then decrease it back to 20. The third kind of traffic, periodic flash crowds, can occur when different popular videos are released one after another. We simulate this by alternating between high request rates of 120 requests per minute and low request rates of 20 requests per minute.
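The three arrival patterns can be expressed as simple request-rate functions of simulated time (our sketch; only the 20 and 120 requests-per-minute rates come from the setup above, and the 40-minute phase lengths are illustrative):

```java
// Request arrival rate (requests per minute) at a given minute of simulated
// time, for the three simulated crowd patterns.
public class ArrivalPatterns {
    static final int LOW = 20, HIGH = 120;

    // Constant arrival: a steady 20 requests per minute.
    static int constantRate(int minute) {
        return LOW;
    }

    // One flash crowd: low for the first 40 minutes, high for the next 40,
    // then low again (phase lengths are illustrative).
    static int flashCrowdRate(int minute) {
        return (minute >= 40 && minute < 80) ? HIGH : LOW;
    }

    // Periodic flash crowds: alternate low and high every 40 minutes.
    static int periodicFlashCrowdRate(int minute) {
        return ((minute / 40) % 2 == 0) ? LOW : HIGH;
    }

    public static void main(String[] args) {
        System.out.println(flashCrowdRate(60));         // prints 120
        System.out.println(periodicFlashCrowdRate(50)); // prints 120
        System.out.println(constantRate(999));          // prints 20
    }
}
```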
Figure 5.1 and Figure 5.2 show the flash crowd pattern and the periodic flash crowd pattern respectively.

Figure 5.1: Flash crowd request pattern

Figure 5.2: Periodic flash crowd request pattern

In this simulator and the BitVampire simulator, the popularity of videos follows a Zipf-like distribution. In a Zipf distribution, the ith most popular video has a popularity proportional to 1/i^a, where a is a parameter. This is chosen because Cherkasova et al. [19] collected statistics of video access frequencies on the HP Labs media server and the HP corporate media solutions server over a period of time and found that file access frequencies closely resembled a Zipf-like distribution with a value of a between 1.4 and 1.6. Following BitVampire's settings and the results of Cherkasova et al., a is set to 1.4 in the simulator.

5.2.2 Simulation Methodology

In our simulation, we have 3 phases: publishing, subscribing, and watching. Following BitVampire's simulation methodology, we first enter the publishing phase and allow all videos to be published. In the subscription phase, a percentage of the peers are chosen to be subscribers. These subscribers subscribe to videos and automatically start the downloading process. After a preset period of time, video releases are simulated by allowing peers to watch videos. Requests are generated following the request crowd patterns previously discussed. Each pattern generates video requests for 2 hours.

5.2.3 Simulation Results

This section describes the results from the simulations. We first simulate and evaluate the effect of subscriptions on streaming capacity.
We then examine the effects of our stream prioritization algorithm. Next, we vary the percentage of peers who watch subscribed videos and measure its effects. The streaming quality difference between subscribers and non-subscribers is investigated next. Finally, we investigate the effects of varying the number of movies involved in the flash crowd.

Subscription Effects on Streaming Quality

We first explore the effects of allowing subscriptions on the quality of video streams. To measure the effect, we follow Bitvampire's methodology and measure the segment rejection ratio for viewers. The rejection ratio is defined as the percentage of segment requests for streaming videos which do not succeed in reserving bandwidth. The rejection ratio is a good measure of the performance of the streaming system because a high rejection ratio means a high percentage of peers are likely to experience pauses in the video, while a low rejection ratio means few peers are likely to experience pauses.

Figure 5.3, Figure 5.4, and Figure 5.5 show the simulation results for the constant crowd, flash crowd, and periodic flash crowd request patterns respectively.

Figure 5.3: Constant crowd rejection ratio

For each crowd, data is collected for the original Bitvampire algorithm and the Bitvampire algorithm with subscriptions. We use 4 different parameters for the time period between publication and release time, which we will call the release wait time (RWT) in the rest of the thesis.

All three crowd patterns are affected by subscriptions in the same way.
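As a concrete illustration, the rejection ratio defined above can be computed from a log of segment requests. This small sketch is ours, not the simulator's code; the record format (one boolean per streaming segment request) is an assumption.

```python
def rejection_ratio(requests):
    """Fraction of streaming segment requests that failed to reserve
    bandwidth.

    `requests` is a list of booleans: True if the request succeeded
    in reserving bandwidth, False if it was rejected.
    """
    if not requests:
        return 0.0
    rejected = sum(1 for ok in requests if not ok)
    return rejected / len(requests)

# 3 rejections out of 10 requests -> rejection ratio of 0.3
print(rejection_ratio([True] * 7 + [False] * 3))
```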
With RWT = 0, which is the case in a subscription system without any pre-release distribution capabilities, the subscription system increases the rejection ratio. This means users are more likely to experience pauses and long wait times. This occurs because downloaders who would have been idle in a system without subscriptions now compete with streaming peers for bandwidth.

However, as the RWT is increased to 2 hours, the capacity of the system becomes greater than that of the original Bitvampire system without subscription support. In this situation, peers have 2 hours to download the video before release time, and therefore increase the number of copies of segments on the network. These peers have also cached some segments, and do not need to request them after release time. As RWT is increased to 4 hours and 6 hours, the rejection ratio becomes lower and lower until it is almost 0.

Figure 5.4: Flash crowd rejection ratio

Stream Prioritization Effects on Streaming Quality

The next set of simulations evaluates the effects of the stream prioritization algorithm. In the previous section, we saw that the subscription system has similar effects on all three crowd patterns, so in all following experiments we will use only the flash crowd pattern. Like the previous simulations, we will use the rejection ratio as a measure of the streaming quality experienced by users.

Figures 5.6, 5.7, 5.8, and 5.9 show the rejection ratios for RWT settings of 0 hours, 2 hours, 4 hours, and 6 hours respectively. In all settings, the optimized algorithm maintains a lower rejection ratio than the unoptimized version.
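The idea behind stream prioritization — a supplier favours bandwidth reservations from streaming (watching) peers over those from pre-release downloaders, dropping downloaders when necessary — can be sketched as follows. This is our illustrative reconstruction, not BitVampire's actual reservation code; the class and method names are assumptions, and a real implementation would also restore dropped reservations when a preempting request still cannot be satisfied.

```python
class Supplier:
    """Toy bandwidth reservation with stream prioritization: when no
    spare upload bandwidth remains, a streaming request may preempt a
    downloader's reservation, but never another streamer's."""

    def __init__(self, upload_kbps):
        self.free = upload_kbps
        self.reservations = []  # list of (peer, kbps, is_stream)

    def reserve(self, peer, kbps, is_stream):
        if self.free >= kbps:
            self.free -= kbps
            self.reservations.append((peer, kbps, is_stream))
            return True
        if is_stream:
            # drop downloaders until enough bandwidth is freed
            for r in list(self.reservations):
                if not r[2]:
                    self.reservations.remove(r)
                    self.free += r[1]
                    if self.free >= kbps:
                        break
            if self.free >= kbps:
                self.free -= kbps
                self.reservations.append((peer, kbps, is_stream))
                return True
        return False  # the request is rejected

s = Supplier(1024)
s.reserve("downloader-1", 512, is_stream=False)
s.reserve("downloader-2", 512, is_stream=False)
print(s.reserve("viewer-1", 512, is_stream=True))  # True: preempts a downloader
print(s.reserve("viewer-2", 512, is_stream=True))  # True: preempts the other
print(s.reserve("viewer-3", 512, is_stream=True))  # False: no downloaders left
```

Under this policy a streaming request is only rejected once all of the supplier's upload bandwidth is committed to other streamers, which is why the optimized algorithm's rejection ratio tracks the original system so closely at RWT = 0.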
At RWT = 0 hours, we see that the stream prioritization algorithm is able to perform almost as well as the original Bitvampire even though many downloaders are attempting to reserve bandwidth. All following simulations will use the optimized version of the algorithm.

Figure 5.5: Periodic flash crowd rejection ratio

Figure 5.6: Stream prioritization effects with RWT = 0 hours

Figure 5.7: Stream prioritization effects with RWT = 2 hours

Figure 5.8: Stream prioritization effects with RWT = 4 hours

Figure 5.9: Stream prioritization effects with RWT = 6 hours

Subscription Rate Effects on Streaming Quality

With the following set of experiments, we evaluate the effects of the subscription rate on streaming quality. We define the subscription rate as the percentage of peers who watch subscribed videos in the simulation.

Figure 5.10 shows that with 0 hours between publication and release time, the rejection rates for all subscription percentages are similar. Peers with subscriptions to videos have not had a chance to download the video before other peers start watching it. Since watching peers have priority and there is not enough bandwidth in the network for all requests, only the watching peers succeed in requesting videos. This means that all of the bandwidth available from suppliers is used for streaming peers, so subscriptions and downloaders do not affect the system or the streaming rejection ratio.

With 2 hours between publication and release time, as shown in Figure 5.11, we see a result that might appear strange. While having subscriptions results in a significantly lower rejection ratio than not having subscriptions, a lower percentage of subscriptions also results in a lower rejection ratio. The cause is that with only 2 hours between publication and release time, a high subscription percentage results in a flood of download requests that do not finish before watching starts. As a result, the later segments in each video have not had a chance to propagate through the network and get cached.

Figure 5.12 shows the rejection ratios with 4 hours between publication and release time. We see that in this situation, a 25% subscription rate results in the lowest rejection ratio. This shows that a subscription rate of 25% is the optimal point in this situation, where many downloading peers are able to complete downloading videos before the videos are released. As the subscription rate is increased or decreased from this optimal point, the rejection ratio increases.

As we increase the RWT further to 6 hours, we see that the optimal point is at a 100% subscription rate. This is shown in Figure 5.13. In this case, the RWT is long enough that almost all subscribers are capable of downloading the complete video before release.

From these results, we can conclude that there is an optimal percentage of subscribers depending on the network capacity and the number of requests. If there is not enough bandwidth for subscribers to download significant amounts of the video before release time, then a lower subscription rate may result in a lower rejection ratio. However, we note that with RWT >= 2 hours, the rejection ratio is significantly lower than in the system without subscription capabilities.
Incentives for Subscribing to Videos

We examine the streaming quality difference between subscribers and non-subscribers. In order to evaluate this, we run the simulation and measure the percentage of total segments transferred for subscribers and non-subscribers using various parameters for RWT. This percentage, which we will refer to as the complete rate, is calculated as percent_t = requests_t / requests_end, where requests_t is the number of requests at time t and end is the time at which all peers have stopped streaming videos. A higher percentage of total segments transferred signifies that the set of peers has been more successful in retrieving requested videos.

Figure 5.10: Subscription rate effects with RWT = 0 hours

Figure 5.11: Subscription rate effects with RWT = 2 hours

Figure 5.12: Subscription rate effects with RWT = 4 hours

Figure 5.13: Subscription rate effects with RWT = 6 hours

Figures 5.14, 5.15, 5.16, and 5.17 show the simulation results for RWT settings of 0 hours, 2 hours, 4 hours, and 6 hours respectively. In all of the results, subscribers have a higher percentage of total segments transferred at all points in time. This shows that subscribers are more likely to finish streaming videos earlier with fewer disruptions in the video playback.

We also see that with an RWT setting of 0 hours, the difference is negligible. As previously mentioned, in this situation with stream prioritization in effect, subscribers' download requests do not get accepted, so there is little difference between subscribers and viewers. However, as RWT increases, we note that there is a noticeable difference between subscribers and non-subscribers. The smoother playback of video gives peers a natural incentive to subscribe to videos.

Figure 5.14: Completion ratio for RWT = 0 hours

Figure 5.15: Completion ratio for RWT = 2 hours

Figure 5.16: Completion ratio for RWT = 4 hours

Figure 5.17: Completion ratio for RWT = 6 hours

Effect of Varying the Number of Movies

We measure the rejection ratio for 1, 50, and 100 movies to evaluate the effect of varying the number of flash crowd movies on streaming quality. It is important to note that the number of movies in this setting is the number of movies which receive a large number of requests after they are released. This number does not include movies which few people request but which exist on the network; these are not simulated, since we only want to see the effects of subscription on flash crowds.

Figure 5.18: Rejection ratio for 1 movie

Figure 5.19: Rejection ratio for 50 movies

The rejection ratios for 1, 50, and 100 movies are shown in Figures 5.18, 5.19, and 5.20 respectively. With 1 movie, the rejection ratio is decreased to 0 when RWT >= 2.
This shows the pre-release system is most effective with a flash crowd consisting of few movies. With few movies, the segments quickly propagate through the network and get cached by peers, thus increasing the bandwidth available for streaming these segments. With 100 movies, we see that the rejection ratio is higher than the rejection ratio for 1 movie. With a larger number of movies, segments do not propagate and get cached as quickly, since the requests are split amongst a greater number of segments.

Figure 5.20: Rejection ratio for 100 movies

In all of the results, the rejection ratios for no subscription and RWT = 0 are very close. As RWT is increased, the rejection ratio is lowered in all three simulations.

Chapter 6

Conclusion and Future Work

6.1 Conclusions

In this thesis, we propose a hybrid peer to peer framework for a subscription and a pre-release distribution system. This framework enhances peer to peer video on demand streaming by allowing publishers to publish videos before release time. This feature allows video playback to remain smooth even in flash crowd situations. We design a subscription system to allow peers with unreleased videos to know who to distribute the videos to. An encryption management system is proposed to prevent users from watching videos before release time. Finally, we modify the streaming algorithm in a peer to peer video on demand framework, Bitvampire [20], to support downloading of videos. We optimize this algorithm by prioritizing streaming peers over downloading peers.

We show the viability of this system by describing implementation details for creating a prototype using existing frameworks. These frameworks are Bitvampire, a peer to peer video on demand framework, and FreePastry, a DHT framework. A GUI prototype is implemented to investigate how users may interact with the system.

In order to evaluate our system, we modified the simulator used by Bitvampire to support subscriptions. The simulation results show that the stream prioritization algorithm increases the streaming quality experienced by peers. The results also show that allowing the pre-release distribution of videos can significantly increase the quality of video playback in flash crowd situations. This remains true as the percentage of subscribers and the number of movies affected by the flash crowd are varied. We also observe that subscribers on average experience better playback quality than non-subscribers, creating a natural incentive for users to subscribe.

6.2 Future Work

There are several areas that could be interesting to investigate. The first is the investigation of peer to peer continuous query systems, such as P2P-DIET [17]. Continuous query systems generally allow more complex queries to be made. For example, a user might be able to subscribe to all series starting with the letter A, or all comedies made by a certain director. Investigation into the performance difference, whether or not users want this level of subscription capability, and how to make a user friendly interface for it could lead to interesting results.

Scalable video is another interesting area of research. Priority drop [18] and other scalable video approaches allow a lower quality version of the video to be downloaded first. By combining more video data with the lower quality version, a higher quality version of the video may be displayed. Using this approach, as the available bandwidth varies, the playback of the video can still be smooth, albeit at a lower quality when the bandwidth of the transfer is low. Scalable video approaches can be integrated into the video download retrieval algorithm. Instead of always downloading the blocks in order, a scalable video approach would allow the downloading of a lower quality layer first. The advantage of
this is that if there is little or no bandwidth available for streaming the video and the download of the video is not complete, the user would still be able to play more of the movie using a scalable video approach.

Another possible area of exploration is supporting authentication and permission control. In traditional television systems, administrators are able to control which channels each peer has access to, allowing the traditional business model of charging for access to videos to be used. In the proposed architecture, the administrator is unable to prevent peers from transferring data to each other and is unable to prevent peers from modifying information on the DHT. One possible solution is for a trusted central server to generate a public key and a private key using an asymmetric cryptography algorithm such as RSA. Asymmetric cryptography can be used to allow the central server to sign data with its private key. With knowledge of the central server's public key, peers can check whether or not the central server has signed the data, and thus can determine if the data came from the central server. The algorithms in this framework can be modified to only accept video information and decryption keys signed by the central server, thereby preventing unauthorized peers from publishing videos by modifying the DHT. To allow permission control, each peer receives a signed certificate from the central server stating his/her permissions, name, and public key. Whenever a peer, PeerA, requests a video from another peer, PeerB, he/she must send their certificate to prove they have permission to access the video. PeerA also sends a personally signed message containing the current time and the name of PeerB, which PeerB can check using the public key from PeerA's certificate. This message allows PeerB to know the request was sent by PeerA, and is not an old request replayed by a malicious peer. This system allows the central server to control who has access to which channels, while requiring very little bandwidth from the central server.
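The sign-and-verify flow described above can be made concrete with a toy, self-contained example. This is textbook RSA with deliberately tiny parameters, written only to illustrate how peers would verify data published by the central server; it is not the proposed implementation. A real deployment would use a vetted cryptographic library, proper signature padding, and keys of at least 2048 bits. The record format and function names are illustrative assumptions.

```python
import hashlib

# Toy textbook-RSA key for the central server (tiny primes,
# illustration only -- never use parameters like these in practice).
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def digest(message: bytes) -> int:
    # hash the message and reduce it into the toy RSA modulus
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # only the central server, holding d, can produce this value
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # any peer can check the signature using the public key (n, e)
    return pow(signature, e, n) == digest(message)

# hypothetical DHT record carrying video information and a decryption key
record = b"video-id=42;decryption-key=abc123"
sig = sign(record)
print(verify(record, sig))              # True: accepted by peers
print(verify(record, (sig + 1) % n))    # False: forged signature rejected
```

Because verification needs only the public key, checking signatures places no load on the central server, which is what keeps its bandwidth requirements low in the scheme above.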
The central server only needs to be involved in the creation of user accounts and the changing of permissions.

Finally, abuse prevention is another interesting direction to investigate. For example, in algorithms such as stream prioritization, the requester is trusted and assumed to be telling the truth. However, a malicious or greedy peer may always send stream requests instead of download requests. A malicious peer may also flood the network with reservation messages or freeDownloadBandwidth messages, which cause suppliers to drop downloaders. Peers may also abuse the system by continuously downloading while only allowing very little upstream bandwidth to be utilized for sharing. Incentive mechanisms may be used to prevent this.

Bibliography

[1] Babelgum. http://www.babelgum.com.

[2] BBC iPlayer. http://www.bbc.co.uk/iplayer/.

[3] BitTorrent. http://www.bittorrent.com.

[4] Joost. http://www.joost.com.

[5] Napster. http://www.napster.com.

[6] Netflix. http://www.netflix.com.

[7] Veoh. http://www.veoh.com.

[8] YouTube. http://www.youtube.com.

[9] S. Acharya and B. Smith. MiddleMan: A Video Caching Proxy Server. Proceedings of NOSSDAV, 2000.

[10] S. Banerjee, B. Bhattacharjee, and C. Kommareddy. Scalable application layer multicast. Proceedings of the 2002 conference on Applications, technologies, architectures, and protocols for computer communications, pages 205-217, 2002.

[11] Y. Chae, K. Guo, M.M. Buddhikot, S. Sun, and E.W. Zegura. Silo, rainbow, and caching token: schemes for scalable, fault tolerant stream caching. Selected Areas in Communications, IEEE Journal on, 20(7):1328-1344, 2002.

[12] Y. Chu, S. Rao, S. Seshan, and H. Zhang. Enabling conferencing applications on the internet using an overlay multicast architecture. Proceedings of the 2001 SIGCOMM conference, 31(4):55-67, 2001.

[13] Y. Chu, S.G. Rao, S. Seshan, and H. Zhang. A case for end system multicast. Selected Areas in Communications, IEEE Journal on, 20(8):1456-1471, 2002.

[14] H. Deshpande, M. Bawa, and H. Garcia-Molina. Streaming Live Media over a Peer-to-Peer Network. Submitted for publication, 2002.

[15] P. Druschel, E. Engineer, R. Gil, Y.C. Hu, S. Iyer, A. Ladd, et al. FreePastry. Software available at http://www.cs.rice.edu/CS/Systems/Pastry/FreePastry.

[16] L. Gong. Project JXTA: A Technology Overview. Sun Microsystems, 2001.

[17] S. Idreos, M. Koubarakis, and C. Tryfonopoulos. P2P-DIET: an extensible P2P service that unifies ad-hoc and continuous querying in super-peer networks. Proceedings of the 2004 ACM SIGMOD international conference on Management of data, pages 933-934, 2004.

[18] C. Krasic, J. Walpole, and W. Feng. Quality-adaptive media streaming by priority drop. Proceedings of the 13th international workshop on Network and operating systems support for digital audio and video, pages 112-121, 2003.

[19] L. Cherkasova and M. Gupta. Characterizing Locality, Evolution, and Life Span of Accesses in Enterprise Media Server Workloads. Proceedings of NOSSDAV, 2002.

[20] X. Liu. BitVampire: A Cost-Effective Architecture for On-Demand Media Streaming in Heterogeneous P2P Networks. PhD thesis, The University of British Columbia, 2005.

[21] B.T. Loo, R. Huebsch, I. Stoica, and J.M. Hellerstein. The case for a hybrid P2P search infrastructure. Proceedings of the 3rd IPTPS, 2004.

[22] S. Ramesh, I. Rhee, and K. Guo. Multicast with cache (Mcache): an adaptive zero-delay video-on-demand service. Circuits and Systems for Video Technology, IEEE Transactions on, 11(3):440-456, 2001.

[23] R. Rejaie and A. Ortega. PALS: peer-to-peer adaptive layered streaming. Proceedings of the 13th international workshop on Network and operating systems support for digital audio and video, pages 153-161, 2003.

[24] A. Rowstron and P. Druschel. Pastry: Scalable, distributed object location and routing for large-scale peer-to-peer systems, 2001.

[25] S. Sen, J. Rexford, and D. Towsley. Proxy prefix caching for multimedia streams. INFOCOM '99. Eighteenth Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings. IEEE, 3, 1999.

[26] I. Stoica, R. Morris, D. Karger, M.F. Kaashoek, and H. Balakrishnan. Chord: A scalable peer-to-peer lookup service for internet applications. Proceedings of the 2001 SIGCOMM conference, 31(4):149-160, 2001.

[27] D.A. Tran, K.A. Hua, and T. Do. ZIGZAG: an efficient peer-to-peer scheme for media streaming. INFOCOM 2003. Twenty-Second Annual Joint Conference of the IEEE Computer and Communications Societies. IEEE, 2.

[28] D. Xu, H.K. Chai, C. Rosenberg, and S. Kulkarni. Analysis of a Hybrid Architecture for Cost-Effective Streaming Media Distribution. Proceedings of SPIE, 5019:87, 2003.

[29] B. Yang and H. Garcia-Molina. Comparing Hybrid Peer-to-Peer Systems. The VLDB Journal, pages 561-570, 2001.

