Object Tracking in Distributed Systems

by

Yingchun Xu

B.Sc., South China Institute of Technology, China, 1984
M.Sc., Beijing University, China, 1989

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF COMPUTER SCIENCE

We accept this thesis as conforming to the required standard

University of British Columbia
April 1993

© Yingchun Xu, 1993

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

The University of British Columbia
Vancouver, Canada

Abstract

Object mobility (or process migration) is an important feature of modern distributed systems. By allowing objects to migrate from one machine to another preemptively, the system can provide several benefits, including load sharing, reduced communication cost, increased availability, and improved invocation performance.

One problem raised by object mobility is object tracking. Several schemes exist to handle the problem. A forwarding address scheme is used in DEMOS/MP and Emerald. To reduce the number of forwarding messages, DEMOS/MP uses an urgent update policy to compress the message path: whenever an object moves, an update message is sent immediately to the last node on the path.

In Emerald, a lazy update policy is adopted. When an object migrates, no update message is sent; instead, the new location is piggybacked on each reply message. This also compresses the message path for later invocations.
The difference between the two is that DEMOS/MP places the cost on object migration while Emerald places the cost on object invocation. Both use the same update policy for all objects.

We adopt a philosophy in which objects with different behaviors use different update policies. Objects are divided into two groups: active objects and quiet objects. Active objects are defined as objects which move more often than they are invoked; quiet objects are invoked more often than they are moved. To optimize object moving and invoking, active objects should use a lazy update policy while quiet objects should use an urgent update policy. We call this the adaptive update policy.

The object tracking function is separated from Emerald and implemented as an independent protocol layer, OFP. A reliable datagram protocol, RDP, has been designed to support OFP. Experiments were conducted to validate the protocols' performance. The protocols are implemented on the x-kernel platform.

Contents

Abstract
Contents
List of Figures
List of Tables
Acknowledgement

1 Introduction

2 Background and Related Work
2.1 Object Migration
2.2 Object Tracking
2.2.1 DEMOS/MP
2.2.2 Charlotte
2.2.3 V
2.2.4 Locus
2.2.5 Sprite
2.2.6 Emerald
2.3 Conclusion

3 Reliable Datagram Protocol
3.1 Reliable Data Transmission
3.2 Implementation
3.3 Conclusion

4 Object Finding Protocol
4.1 Mechanism
4.1.1 Forwarding Address Scheme and Broadcasting
4.1.2 Message Types
4.2 Policies
4.2.1 Lazy Update Policy vs. Urgent Update Policy
4.2.2 Adaptive Update Policy
4.3 Conclusion

5 Experiments and Analysis
5.1 Experiment Design
5.2 Experimental Results of the Lazy Update Policy
5.3 Experimental Results with the Urgent Update Policies
5.4 Adaptive Update Policy

6 Conclusions and Future Work
6.1 Contribution
6.2 Future Work

List of Figures

1.1 Protocol Dependency Relationships
2.1 Emerald Language Primitives
3.1 Sender's State Transfer Diagram
3.2 Receiver's State Transfer Diagram
5.1 Lazy Update Policy. (a) Updating Cost Per Migration. (b) Front View of (a). (c) Forwarding Cost Per Invocation. (d) Front View of (c). (e) Updating Cost Per Operation. (f) Front View of (e). (g) Forwarding Cost Per Operation. (h) Front View of (g). (i) Total Cost Per Operation. (j) Front View of (i).
5.2 Urgent Update Policy. (a) Updating Cost Per Migration. (b) Front View of (a). (c) Forwarding Cost Per Invocation. (d) Front View of (c). (e) Updating Cost Per Operation. (f) Front View of (e). (g) Forwarding Cost Per Operation. (h) Front View of (g). (i) Total Cost Per Operation. (j) Front View of (i).
5.3 Urgent Update Policy. (a) Total Cost with 0% of Updating Cost. (b) Front View of (a). (c) Total Cost with 20% of Updating Cost. (d) Front View of (c). (e) Total Cost with 50% of Updating Cost. (f) Front View of (e). (g) Total Cost with 80% of Updating Cost. (h) Front View of (g).
5.4 Adaptive Update Performance Graphs. (a) Performance with 100% Updating. (b) Front View of (a). (c) Performance with 50% Updating. (d) Front View of (c).

List of Tables

5.1 Lazy Update Policy Update Cost Per Migration
5.2 Lazy Update Policy Forwarding Cost Per Invocation
5.3 Lazy Update Policy Update Cost Per Operation
5.4 Lazy Update Policy Forwarding Cost Per Operation
5.5 Lazy Update Policy Total Cost
5.6 Urgent Update Policy Version 1 Update Cost Per Migration
5.7 Urgent Update Policy Version 1 Forwarding Cost Per Invocation
5.8 Urgent Update Policy Version 1 Update Cost Per Operation
5.9 Urgent Update Policy Version 1 Forwarding Cost Per Operation
5.10 Urgent Update Policy Version 1 Total Cost
5.11 Urgent Update Policy Version 2 Total Cost
5.12 Urgent Update Policy Version 3 Total Cost
5.13 Urgent Update Policy Version 1 Total Cost
5.14 Urgent Update Policy Version 1 Total Cost
5.15 Urgent Update Policy Version 1 Total Cost
5.16 Urgent Update Policy Version 1 Total Cost
5.17 The Difference of Lazy Update Policy Performance and Urgent Update Policy Version 1 Performance
5.18 The Decision Table of Adaptive Update Policy: * - Lazy, + - Urgent, - - Both
5.19 The Difference of Lazy Update Policy Performance and Urgent Update Policy Version 1 Performance
5.20 The Decision Table of Adaptive Update Policy: * - Lazy, + - Urgent, - - Both
5.21 The Adaptive Update Policy Total Cost
5.22 The Adaptive Update Policy Total Cost

Acknowledgement

I would like to extend my special thanks to my supervisor, Dr. Norm Hutchinson, for his advice, encouragement and guidance throughout this work. He patiently directed every step of this research.

I would like to thank Dr. Gerald Neufeld for reading through this thesis and offering many valuable suggestions.

I would also like to thank Catherine Leung and Chris Healey for carefully reading this thesis and offering corrections.

Especially, I would like to thank my wife, Sophia, for her support and endless love.

Last but not least, I would like to thank my daughter, Iris, who fills my life and work with happiness.

Chapter 1

Introduction

As computer networks have developed and become more common, distributed systems have become more and more practical. One of the major goals of a distributed system is to support resource sharing.

Object mobility (or process migration) has become an important feature supported by several existing distributed systems, including Charlotte [Aea89], DEMOS/MP [PM83], Locus [PW85], Sprite [DO91], V [Che88], and Emerald [Hut87][RTL+91].
By allowing objects to migrate from one node to another preemptively, we obtain several advantages [Jul88]:

1. Load sharing — By moving objects around the system, one can take advantage of lightly used processors.

2. Communication performance — Objects that interact intensively can be moved to the same node to reduce communication cost for the duration of their interaction.

3. Availability — Objects can be moved to different nodes to provide better failure coverage.

4. Reconfiguration — Objects can be moved following either a failure or a recovery, or prior to scheduled downtime.

5. Utilizing special capabilities — A movable object can take advantage of unique hardware or software capabilities on a particular node.

In addition, Emerald's fine-grained mobility provides three further benefits:

1. Data movement — Mobility provides a simple way for the programmer to move data from node to node without having to explicitly package it. No separate message-passing or file-transfer mechanism is required.

2. Invocation performance — Mobility has the potential to improve the performance of remote invocation by moving parameter objects to the remote site for the duration of the invocation.

3. Garbage collection — Mobility can help simplify distributed garbage collection by moving objects to sites where references to them exist.

One major problem that must be solved in any system with object migration is object tracking. When objects are allowed to move without limitation, this problem becomes even more important. Simply keeping track of an object's current location at all times is expensive and unnecessary. Existing distributed systems deal with this problem using several different schemes.

Some systems depend on a third party, such as an object's original node, home node, or a name server, to get the object's new location when the old one becomes invalid. The single node that knows the location of an object causes unreliability and is a potential bottleneck.
Some systems use special communication tools such as broadcast or multicast to send an object's new location whenever the object migrates. Broadcast and multicast are unreliable and are effective only on a local area network. Finally, a forwarding address scheme is used in DEMOS/MP and Emerald.

Whenever an object moves, it leaves a forwarding address behind. This forwarding address can be used as a hint to the object's location; it may not be the object's current address, since the object may have moved again. Naively following forwarding addresses to send messages incurs a serious overhead, so some method has to be used to shorten the forwarding address path. Different strategies are discussed in [Fow85].

In DEMOS/MP, an urgent update policy is used. Whenever an object migrates, an update message is sent to the last node on the path. The new forwarding address path then goes from the last node directly to the correct node, jumping over the node which sent the update message.

In Emerald, a lazy update policy is adopted. When an object migrates, no update message is sent. Only the object's source and destination nodes update the object's location. When there is an invocation on an object, the object's new location is piggybacked on the reply message. Later invocations will then follow a shorter message transfer path.

The difference between DEMOS/MP and Emerald is that DEMOS/MP places the cost on the object moving stage while Emerald places the cost on the object invoking stage. Both use the same update policy for all objects.

Some intuition can be drawn from real life by comparing objects to people. Suppose there are two groups of people. The first group move frequently and receive very few letters from their friends. The second group move less often, but receive many letters from their friends.
People in the first group do not have to write letters telling their friends their new address every time they move: they know the new address is temporary, and they also know that their friends are unlikely to write to them. This group sends out change-of-address notifications lazily.

For the second group, things are different. Whenever they move, they write letters telling their friends their new address. They know that they will live at the new address for a long time, and they also know that their friends write to them often. Sending letters to inform friends of their new address is urgent for this group.

In some cases, people in the first group who want to keep in touch with their friends must notify them of each new location, even though they move a lot. Although it costs more to send these change-of-address notifications, if their friends have good news to share, they will receive it quickly.

Back in the object world, our point is that objects with different behaviors should use different update policies. Objects can be divided into two classes: active objects and quiet objects. Active objects are defined as objects which move more often than they are invoked; quiet objects are invoked more often than they move. To optimize object moving and invoking, it is not difficult to see that active objects should use a lazy update policy while quiet objects should use an urgent update policy. This combined policy is called an adaptive update policy.

Another component of the update policy is selecting which nodes should receive updated location information when an object moves. We do not want to update a node which has never invoked the migrating object. There are two kinds of relationships: object relationships and node relationships. When an object invokes another object, the first object is defined as a dependent of the second one. This is called an object dependency.
A node dependency is derived from object dependencies: when an object at one node depends on an object at another node, the first node is defined as a dependent of the second one. The dependency can be strong or weak; the more invocations performed, the stronger the dependency. In order to improve performance, only the dependent nodes are updated.

A metric, activity, which is a real number between 0 and 1, can be used to measure the level of activity of an object. When the activity of an object is near 1, the object is defined to be active; conversely, when the activity of an object is near 0, the object is defined to be quiet.

How should we choose the activity of an object? This depends on the performance requirements of the object. If we require an object to have good performance, we have to compute its activity according to its real behavior, and the only way to do this is to record its behavior. We may assign an object an initial activity estimate when the object is created and then adjust its activity based on its behavior.

If we require an object to exhibit some special behavior, we need to assign it an appropriate activity. Sometimes this is necessary and important. In some applications, the invoking objects require a quick response from an invoked object. If the invoked object's real behavior is active and lazy updating is used during migration, invocation messages will traverse a long transfer path to reach the invoked object, delaying the invocation response. In order to provide the invoking object with a quick invocation response, the invoked object can be assigned a low activity, which forces the system to use an urgent update policy. Although this increases the object migration cost, a quick invocation response is gained as a tradeoff.

The object tracking function is separated from the Emerald run-time system and implemented as an independent protocol layer, OFP.
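The bookkeeping behind the activity metric just described can be made concrete. Below is a minimal Python sketch (the thesis itself contains no code); the exponential-decay update rule, the 0.5 threshold, and all names are illustrative assumptions, since the thesis only requires that activity lie between 0 and 1, with values near 1 marking active objects.

```python
class TrackedObject:
    """Per-object bookkeeping for an adaptive update policy (sketch).

    The update rule and threshold are assumptions for illustration;
    the thesis specifies only that activity lies in [0, 1].
    """

    def __init__(self, initial_activity=0.5, threshold=0.5, decay=0.9):
        self.activity = initial_activity  # estimated activity in [0, 1]
        self.threshold = threshold        # boundary between active and quiet
        self.decay = decay                # weight given to the old estimate

    def record_move(self):
        # A migration pushes the estimate toward 1 (active).
        self.activity = self.decay * self.activity + (1 - self.decay) * 1.0

    def record_invocation(self):
        # An invocation pushes the estimate toward 0 (quiet).
        self.activity = self.decay * self.activity + (1 - self.decay) * 0.0

    def update_policy(self):
        # Active objects update lazily; quiet objects update urgently.
        return "lazy" if self.activity >= self.threshold else "urgent"
```

Under this rule an object that is repeatedly invoked drifts toward an urgent policy, and one that migrates repeatedly drifts back toward lazy, matching the intuition above; pinning the activity to 0 by hand reproduces the forced urgent behavior described for response-sensitive objects.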
A reliable datagram protocol, RDP, has also been designed and implemented to support the OFP protocol. RDP is built on UDP. The Emerald run-time system can also send messages directly through RDP; this is often used to reply to an invocation. The protocol relationships are given in Figure 1.1.

OFP presents its user, the Emerald run-time system, with the ability to send a message to an object. The semantics of RDP is to send a message to a node. The function of OFP is to map an object's reference to its current location; in this sense, it is a kind of distributed mapping.

When the distributed mapping fails, broadcast is used to query the nodes within a local area network. Because broadcast is unreliable, OFP forces every node on the local area network (except nodes which are down) to reply to the broadcast message. This ensures that we will get an object's location information if the object is on the local area network or if some node knows where the object resides.

One of the nodes in a local area network is used as an object tracking server. When an object migrates out of the local area network, the server must be informed; the same is true when an object migrates into the local area network. When a server receives a local broadcast message for an object that has left the local network and has registered with the server, the server replies to the broadcast message with the object's foreign address.

If a local server loses an object's address, the server should contact other internet servers to find the object. This part is not implemented in this thesis and is left as future work.

In the real world, object invocation patterns are not random: there is some amount of locality in real object invocation patterns. Locality is defined as the probability that the next object to be invoked is the same as the previous one invoked.
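This locality model has a simple operational reading. A Python sketch of an invocation-stream generator in the spirit of the definition follows; the uniform fallback choice and all names are my assumptions, since the thesis defines only the repeat probability.

```python
import random

def invocation_stream(objects, locality, n, rng=None):
    """Generate n invocation targets with the given locality.

    `locality` is the probability that the next invocation targets the
    same object as the previous one; otherwise a target is drawn
    uniformly at random (the uniform choice is an assumption here).
    """
    rng = rng or random.Random()
    stream = []
    for _ in range(n):
        if stream and rng.random() < locality:
            target = stream[-1]           # repeat the previous target
        else:
            target = rng.choice(objects)  # pick a fresh target uniformly
        stream.append(target)
    return stream
```

With locality 1.0 every invocation after the first hits the same object; with locality 0.0 successive targets are independent and uniform.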
Through experimentation, the effect of object invocation locality on the performance of the OFP protocol was studied.

All the protocols are implemented and tested on the x-kernel platform. The x-kernel is used to support protocol design and implementation; its uniform interface and powerful utilities made it easy to implement and test our protocols.

Experiments have been designed and conducted to study the performance of the OFP protocol with different update policies and different object invocation localities. The experiments are conducted on SPARCstations, using 12 x-kernel nodes.

The contributions of this thesis are:

• The use of an adaptive update policy in the forwarding address scheme to improve the performance of object tracking.

• The design and implementation of an object tracking protocol as a separate mechanism to transparently support Emerald object mobility.

• The design and implementation of a reliable datagram protocol to deliver reliable message transfer services to upper layer protocols or users.

• The extension of the object tracking protocol to the internet.

• The use of locality to model object invocation and a study of the effect of locality on OFP protocol performance.

The thesis is organized as follows: Chapter 2 surveys the various object tracking schemes used in existing distributed systems. Chapters 3 and 4 discuss the design and implementation of the protocols RDP and OFP in detail. The experimental results are given and analyzed in Chapter 5. Chapter 6 discusses the contributions of the thesis, future work and conclusions.

Chapter 2

Background and Related Work

Object tracking originates from object (or process) migration. Object migration has two components: policy and mechanism.
The policy of a system decides which objects should be moved, when they should move, and where they should be moved, while the mechanism decides how an object is moved.

2.1 Object Migration

Object migration in a distributed operating system is a facility to dynamically relocate an object from the processor on which it is executing (the source processor) to another processor (the destination processor). The basic idea of object migration is to move an object during its execution so that it continues executing on another processor, with continuous access to all its resources. This operation may be initiated without the knowledge of the running object or any object interacting with it. That is, the migration is transparent: it preserves all elements of computation and communication.

Here migration means both object mobility in object-oriented distributed systems such as Emerald and process migration in process-based distributed systems such as Charlotte, V, and Sprite. In fact, object mobility differs from process migration in two respects [JLHB88, Gos91]:

1. The unit of distribution and mobility is an object. Some of Emerald's objects contain processes; other objects contain only data, such as arrays, records and single integers. Emerald processes are lightweight threads. In process migration, the entire process address space is moved, so the unit of object mobility can be much smaller than in process migration systems. Thus, Emerald's object mobility subsumes both process migration and data transfer.

2. Emerald has language support for mobility, while process migration systems do not.

In the previous chapter, several advantages of object migration were mentioned: sharing load, improving communication performance, increasing object availability, reconfiguration, utilizing special capabilities, moving data, improving remote invocation performance and simplifying garbage collection.
There are many problems involved in object migration. The first group of problems occurs in the preparation stage. At this time, the migrating object and its destination have to be determined. For load sharing, there must be some mechanism to tell which nodes are heavily loaded and which are not. In Emerald, the user decides which objects should be migrated and where they should move.

The second group of problems occurs in the object moving stage. In this stage, the system has to decide how much of an object should be migrated, which parts of an object should be migrated, and when they should be detached and moved. These problems are even more serious in process migration, since in process-based systems the processes which are migrated include an entire address space.

Process migration facilities which move entire address spaces often suffer a serious problem: as a program grows, the migration cost grows linearly. Moving the contents of a large address space thus stands out as the bottleneck during process migration; it dominates all other costs [Zay87].

A process to be migrated must be frozen, which means its activity is suspended and communication is deferred. Because freezing generates some problems (for example, a process can be suspended too long), the V system uses a special mechanism, called a pre-copying technique, to copy most of the process state before freezing it [TLC85][Che88]. This can reduce freezing time. However, the technique suffers from one drawback: it may increase the total time for migration.

To deal with this problem, a solution which allows instant execution on the destination node has been proposed in [Zay87]. The approach performs a logical transfer, which requires only a small portion of the address space to be physically transmitted. Thus, instead of moving the entire contents at migration time, an individual on-demand page (IOU) for all or part of the data can be sent.
When a migrating process executing on the destination node attempts to reference memory pages which are not available locally, a request is transmitted to the source node to copy the desired blocks from their remote locations. This has two important consequences: context transmission times during migration are greatly reduced with this demand-driven copy-on-reference approach, and these times are virtually independent of the size of the address space.

Unfortunately, this technique also introduces new problems. A destination node comes to depend heavily on its source nodes. The load on a source node increases as the number of processes migrating from it increases. Also, the reliability of a destination node depends on that of its source nodes.

2.2 Object Tracking

Object tracking is an implementation problem. When there is no limitation on object migration, tracking objects becomes even more difficult. The cost of communicating with a migrated object depends largely on the cost of tracking the object's location.

In order to track a migrated object, its new location has to be made known. One option is to transmit information about the object's new location to other nodes immediately after it migrates. Another option is to defer the transmission of location information until another object wants to communicate with the migrated object; in this case, the information about the new location of a moving object is made available either through a location server or through broadcasting. Node-to-node transmissions are expensive. Some systems use the first option, some the second. Other systems use both, with the second option complementing the first, since in the first option it is expensive to update all the nodes.

The advantages of a location server are that the migration mechanism is simple and the moving cost is reasonable. However, system reliability will depend on the reliability of the location server.
It is also possible that the location server may become a performance bottleneck. Using only the second option to get a new location is expensive: it can increase communication time and cost.

If the system were aware of which nodes are likely to communicate with a migrating object, it could update only those nodes when moving the object. There are two approaches to choosing these communication buddies: either they are pre-determined, or they are decided dynamically by the system. Different objects may have different communication buddies, and it is unreasonable to assume that the system will have complete information. There may be some nodes which are not recognized as buddies of a particular object but which want to communicate with a migrating object. This means the second mechanism has to be used to complete the object tracking.

In the following subsections, the object tracking schemes used by several existing distributed systems are surveyed in detail.

2.2.1 DEMOS/MP

DEMOS/MP is a distributed operating system developed at the University of California, Berkeley [PM83]. It is a message-passing, process-based system. Messages are passed by means of links associated with a process. Links are managed by the kernel; the kernel participates in all link operations even though conceptual control remains with a process. Since a link always points to its creator process, links are context-independent. DEMOS/MP's migration facility uses the message forwarding approach to deal with the communication state and future communication.

DEMOS/MP implements message forwarding by leaving a forwarding address on the source computer. Messages sent to an object after it has left are resent to the object's new location by the source computer. One obvious problem is that indirect transmission of messages can degrade overall system performance because of long forwarding address paths. This conflicts with the use of process migration to improve overall distributed system performance.

There are two possible solutions.
In the first, all messages addressed to the migrated process are returned to their senders as not delivered. In this scheme no forwarding process is required on the source computer. However, the kernels of all senders have to perform some action to find the new location of the migrated process, for example through broadcasting.

The second solution is based on updating links. By updating a link we mean updating the process address of the link. As a result of this operation, the link's unique identifier is not changed, but the computer location is updated to specify the new location of the migrated process. Updating all links of a migrating process can be performed easily for processes which are not sending messages during migration. Since messages could be in transit, some forwarding mechanism is still required. It was suggested that links should be updated as they are used, rather than all at once.

This update operation is performed by the kernel of the source computer when forwarding messages. A special message is sent containing the process identifier of the sender, the process identifier of the intended receiver (the migrated process) and the new location of the receiver. These solutions, however, are not implemented in DEMOS/MP.

In conclusion, DEMOS/MP uses a type of urgent update policy. Whenever an object migrates, the source node updates all the links of the migrated process. This policy increases the cost of migration but decreases the cost of further communication. On average, if processes communicate often and migrate rarely, the total migration cost will be reasonable. On the other hand, with infrequent communication and frequent migration, the cost of individual migrations will be high. Overall, system performance will depend strongly on the processes' behavior.

2.2.2 Charlotte

Charlotte [Aea89] is a distributed operating system which offers a uniform, location-transparent, message-based communication facility.
Processes communicate with each other through links. One of the most important differences between DEMOS/MP and Charlotte is that in the latter, once a process migrates off the source computer, no forwarding addresses or trail of stubs is left behind to handle future communication. The source kernel deletes all data structures relevant to the migrating process. The process migration facility of this system requires readdressing of the process's communication links.

In Charlotte, after a process is selected, process migration is performed by the kernel in three phases: negotiation, transfer and clean-up. During the transfer phase, the kernel at each end of the migrating process's links is told the new logical address of the link. Each kernel that receives a link update message must acknowledge it. Once all acknowledgements have been received, it is safe to start transferring the process's data structures, since no more messages will be received across its links.

Processes which have no link with the migrating process establish new links through the use of a name server. This name server must be informed of the new location whenever a process migrates.

Charlotte's update policy is urgent update, but it differs from DEMOS/MP's policy. In Charlotte, the source node is not responsible for redirecting the messages of the migrated process; no forwarding address is left on the source node. But not all nodes are updated: nodes which do not have links with the process have to depend on a third party to learn where the process is. Therefore, whenever a process migrates, it is still necessary for the process to update a third party such as a name server.

The advantage of this policy is that future communication cost is low and better performance is achieved. However, there are also drawbacks: the dependency on a third node to establish a new link reduces reliability, and the cost of migration increases.
Charlotte may have good performance if processes move infrequently and communicate often.

2.2.3 V

The V [Che88] distributed system is an operating system designed for a collection of workstations connected by a high-performance network. The system consists of the V kernel, a set of service modules, a command set, and runtime libraries. The user workstations are diskless, hence they communicate with various servers using the V kernel's inter-process communication (IPC) facilities.

Process migration in the V system is the migration of the logical host containing the process. A logical host is defined to be an address space within which multiple V processes may run. In other words, when a process migrates, its child processes migrate too, unless a child process is already executing remotely from its parent process.

After finding the destination workstation, the logical host may be migrated through the following steps:

1. Initialization on the destination computer.

2. Pre-copying the state.

3. Freezing the state and completing the copy.

4. Unfreezing the new copy and rebinding references.

In step 3, once pre-copying of the state is completed, the logical host is frozen and the copying of its state is completed. Freezing is a very sensitive operation because although execution can be suspended within a logical host, messages can still be sent from remote computers.

Request messages are queued for the recipient process. A reply-pending packet is sent to the sender on each retransmission (as is done in the normal case). When the logical host is deleted from the source computer after migration, all queued messages are discarded; the sender has to retransmit its message to the destination host. Moreover, all senders use the new binding of logical host to destination host address, which is broadcast when the logical host is migrated.
Reply messages are handled by discarding them and relying on the retransmission capabilities of the IPC facilities.

The updating policy used by V is an urgent update. All the nodes on the local network are updated by broadcasting; updating thus depends heavily on V's broadcast facility. Each kernel which receives a broadcast message keeps the new binding in a cache for future communication.

One problem with this policy is the reliability of broadcast. There is no guarantee that all nodes will receive the broadcast message. Some nodes will lose these messages, and in order to communicate with the migrated process, those nodes have to query the migrated process's new location by broadcasting.

Another problem is that broadcast is only effective on a local network. For a long-haul network, the cost of broadcasting is prohibitive.

2.2.4 Locus

Locus [PW85] was a distributed version of UNIX developed at UCLA. It has become a commercial product of the Locus Computing Corporation. Locus executes on a network of VAX minicomputers and M68000-based microcomputers connected by 10-megabit Ethernet. The system features a high degree of network transparency, a flexible and reliable file system, and a recovery facility for system failures. It also provides support for accessing resources on other machines running different operating systems.

In Locus, each process identifier (PID) includes a site identifier (SID) which specifies the site on which the process was created. Locus uses this site as the synchronization site for the process. If the real original site, as specified by the SID, is not currently on the network, then a surrogate original site is used. All sites on the network agree in advance who will act as surrogate.

When a process migrates, its original site and surrogate are updated with the new location of the process. This is an urgent update; however, updates are done only on the original site and surrogate, not on all nodes.
Thus, when a process wants to send a message to the migrated process, it sends the message to the original site or its surrogate, and the original site or surrogate redirects the message to the migrated process.

The policy used here has good performance for both moving and future communication: for each migration, only one or two nodes are updated. But since future communication depends on the original node, the original node shoulders more of the burden. Second, the reliability of the original node directly affects the whole system. Moreover, there is also the possibility that the location of some processes may be lost forever if both the original node and the surrogate fail.

2.2.5 Sprite

Sprite [DO91] is an experimental network operating system developed at the University of California at Berkeley. It is part of the SPUR project, whose goal is to develop high-performance multiprocessor workstations.

Transparent process migration is one of Sprite's features. Sprite associates a home node with each migrating process to support transparency. In Sprite, there are two aspects of transparency, both of which can be defined with respect to a process's home node. When there is no migration, the process executes on the home node. If a process is migrated, it should appear as if the process were still executing on its home machine. In order to achieve transparency, Sprite uses four different techniques:

1. Location-independent kernel calls.
2. Transferring process state from the source machine to the target at migration time.
3. Forwarding kernel calls home and forwarding from home to remote.
4. A few kernel calls must update state on both sites: the home node and the current node.

Sprite's home node is similar to Locus's original site. Home nodes are used for forwarding messages for future communication. To improve performance, each machine caches recently-used process identifiers and their last known execution sites.
An incorrect execution site is detected when it is used again. The correct site is found by sending a message to the process's home node.

In general, Sprite uses an urgent update policy. Future communications pass through the home node. Sprite shares the same drawbacks as Locus, namely, reliability and dependency on the home node's performance.

2.2.6 Emerald

Emerald is an object-based language and system designed for the construction of distributed programs. It was developed at the University of Washington [BHJ+86][BHJL86][Hut87]. Object mobility is one of the important features of Emerald.

Object mobility has the potential for improving the performance of remote invocation by moving parameter objects to the remote computer for the duration of the invocation; it can also simplify distributed garbage collection by moving objects to computers where references exist.

The natural construct which allows processes and data to be treated uniformly is an object. An Emerald object has four components:

1. a unique network-wide name;
2. a representation, the data local to the object, which consists of primitive data and references to other objects;
3. a set of operations that can be invoked on the object;
4. an optional process.

Objects containing a process are active. Otherwise, they are passive data structures.

The Emerald mobility concepts are integrated into the language. This results in the possibility of extensive cooperation between the compiler and the run-time system.
This ultimately leads to a large improvement in efficiency over the previously discussed systems.

In particular, object mobility in Emerald is provided by a small set of language primitives. These primitives are shown in Figure 2.1.

Locate an object (for example, Locate X returns the node where X resides);
Move an object to another node (for example, Move X to Y collocates X with Y);
Fix an object at a particular node (for example, Fix X at Y);
Unfix an object and make it mobile again following a fix (for example, Unfix X);
Refix an object by atomically performing an Unfix, Move and Fix at a new node (for example, Refix X at Z);

Figure 2.1: Emerald Language Primitives.

Some of the primitives require transparent object location. The Emerald run-time system supports location transparency by tracking migrating objects. In Emerald, there is no limitation on object migration, so it is very important to track migrating objects efficiently. The principle used by Emerald is to put the migration cost onto invoking nodes, not onto the source nodes of migrating objects. In other words, Emerald adopts a lazy updating policy.

Emerald uses a forwarding address scheme to track migrating objects. Emerald's forwarding address is a tuple (timestamp, forwarding-address). The timestamp is used to determine whether a forwarding address is up-to-date. When updating a node, if the forwarding address of an object in the updating message is older than that of the object on the updated node, no updating is done.

Using clock time to generate timestamps is expensive because the protocol used to maintain time consistency on every node is not trivial. In Emerald, an object's hop count, or migration count, is used as the timestamp of the object. When an object is created, its timestamp is set to zero. Whenever the object migrates, its timestamp is incremented. This timestamp is good enough to maintain up-to-date forwarding addresses and discard delayed or duplicated updating messages [Fow85].
Another use of timestamps is to limit the moves of an object when its hop count reaches some threshold (e.g., the number of nodes in the system), to avoid a situation in which an object might "outrun" messages trying to reach it.

In fact, Emerald's updating policy is not completely lazy. When an object migrates, its source and destination nodes are updated, since this can be done without cost. Each node invoking an object gets the object's new location through piggybacking on a reply message [Jul88].

Emerald uses two protocols to locate an object. First it optimistically uses weak location. If that fails, it uses the more expensive strong location to ensure correctness. For example, when the kernel performs a remote invocation, it uses weak location to obtain a location hint and sends an invocation message to the presumed target node. If the hint turns out to be right, then the invocation is performed efficiently using a minimum number of network packets. If the hint is wrong, then strong location is used to locate the object and the invocation message is forwarded. Weak location works well in most cases, where objects are not moved frequently.

Emerald's strong location uses a forwarding address scheme. Every time an object moves, it leaves behind a forwarding address so that any reference to the object using its old location may be forwarded to the object's next location. In the absence of node crashes, a forwarding address can be used to provide strong location semantics. When a node crashes, its forwarding addresses are lost, but objects that have moved away from the node before the crash will still be accessible even though they cannot be found using an access path that includes the crashed node [Jul88].

Broadcasting is used as the next step of strong location when the forwarding address fails to locate an object. In case some nodes fail to reply to a broadcast message, node-to-node communication is used to force all nodes to reply.
A timeout is used to avoid unlimited blocking when waiting for a reply.

2.3 Conclusion

Although different schemes have been used by the above systems, one common feature of these schemes is that the same updating policy is applied by each scheme to all migrating objects. Its advantage is that the object tracking mechanism is simple. But the migration costs of some objects are higher, depending on their behaviors. In order to have good migration performance for every object, as mentioned in the last chapter, different updating policies should be used on objects with different behaviors.

A forwarding address scheme is also used in our object finding protocol. The relationship between object behavior and optimal updating policy is studied. The following chapters will discuss these protocols in detail.

Chapter 3

Reliable Datagram Protocol

Object tracking requires a reliable data transmission service. There are two types of protocols, connection-oriented and connectionless, which could be used to support our object tracking. Which one should be used? We consider a reliable packet-oriented protocol most appropriate for our problem, although we could also use TCP. RDP is a Reliable Datagram Protocol which is built on UDP. It adds reliability to UDP.

RDP is a protocol designed to provide a reliable datagram transfer service for OFP, the Object Finding Protocol. TCP is more suitable for large, continuous data transmission. The data transmission unit in TCP is a byte. For RDP, the data transmission unit is a packet. Packets are completely independent of each other. The semantics of RDP is to reliably send a message to a destination with a given internet address.

RDP could also be built directly on IP. However, for portability, we choose to build RDP on UDP.

3.1 Reliable Data Transmission

In order to reliably send a packet, the sending side must receive an acknowledgement for each packet.
Any packets which are not acknowledged within a specific period of time are assumed to be lost and are retransmitted. To prevent the sender from waiting forever for a reply, a timer is used for each transmission of a data packet. If the timer expires before an acknowledgement is received, the sender resends the packet and resets the timer. After some number of failed retransmissions, the sender stops trying and reports the failure to the upper layer.

A sequence number is used for each data packet. An acknowledgement includes the acknowledged data packet's sequence number. A sequence number is four bytes long, which is suitable for future high-speed networks: in a high-speed network, a two-byte sequence number can easily wrap around within the network's lifetime, making it impossible to know whether a message is a duplicate or a new message with the same sequence number (see the TCP extensions in RFC 1323).

The first sequence number is randomly generated. It is then incremented and used as the sequence number of the next packet sent. Each RDP session has its own sequence number.

A sliding window mechanism is used on both the sender and receiver sides. These windows are used to control transmission throughput. Currently the window size is fixed on both sides. A sender creates its own sliding window when a packet is sent by an upper layer.
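The sender-side bookkeeping just described can be sketched as follows. This is an illustrative reduction, not the thesis's x-kernel code: it assumes a 32-entry window whose occupancy bits fit in a single word, and the unsigned offset arithmetic stays correct even when the four-byte sequence number wraps around.

```c
#include <stdint.h>

#define W_SIZE 32  /* fixed window size, as described in the text */

/* Sender-side sliding window: l_seq_num (L_SEQ_NUM) is the sequence
 * number at the window base; bit i of win_bits (WIN_BITS) is set once
 * packet base+i has been acknowledged. */
struct rdp_window {
    uint32_t l_seq_num;
    uint32_t win_bits;
};

/* Record an acknowledgement. Returns 0 if seq lies outside the window.
 * The subtraction is modulo 2^32, so sequence wraparound is handled. */
static int win_ack(struct rdp_window *w, uint32_t seq)
{
    uint32_t off = seq - w->l_seq_num;
    if (off >= W_SIZE)
        return 0;
    w->win_bits |= 1u << off;
    while (w->win_bits & 1u) {   /* slide past contiguous acked packets */
        w->win_bits >>= 1;
        w->l_seq_num++;
    }
    return 1;
}
```

An out-of-order acknowledgement only sets a bit; the window base advances once the gap before it is filled, at which point the lower-bound sequence number is updated as the text describes.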
At the receiving side, when a first data packet is received, the window is initialized based on the received packet.

The RDP sending window mechanism consists of a start sequence number, L_SEQ_NUM, which is initialized to a random number generated by an RDP session; the window size, W_SIZE; an array of bits, WIN_BITS, which represents the status of the packets that have been sent; and a base pointer which points to the start of the window.

An RDP receiving window mechanism consists of a base sequence number, B_SEQ_NUM, which is initialized to the sequence number of the first data packet; the starting sequence number, L_SEQ_NUM, which is initialized to B_SEQ_NUM; the window size, W_SIZE; an array of bits, WIN_BITS, which represents the status of the packets that have been received; and a base pointer which points to the start of the window.

There are several message types defined in RDP: FIRSTDATA, NACK, MSG_SND, MSG_ACK and MSG_SYN. They are explained in detail as follows:

• FIRSTDATA: The first data packet sent is of type FIRSTDATA. This type is used to transfer the first data packet between two nodes and triggers the construction of the receiver's sliding window.

• NACK: A packet of this type is sent to the sender whenever the receiver receives a FIRSTDATA packet with the same base sequence number as that of the receiving window mechanism. The data packet may be a duplicate, or one that was sent after the sender recovered from a crash and happened to use the same sequence number as the one used last time. If the sender has crashed and recovered, it resends the FIRSTDATA packet with a new sequence number after it receives the NACK message.

• MSG_SND: The data packets sent after a FIRSTDATA packet have this type.

• MSG_ACK: This type of packet is sent as an acknowledgement of received data packets, including both MSG_SND and FIRSTDATA packets.

• MSG_SYN: This type of packet is sent in reply when a node receives a MSG_SND packet before receiving a FIRSTDATA message.
This may happen when the FIRSTDATA message got lost. In this case, the sender will resend all the outstanding data packets, with the first one sent as a FIRSTDATA packet.

Each time a new RDP session is created, the first packet sent has type FIRSTDATA; the rest of the data packets sent through the session have type MSG_SND. When a machine receives a FIRSTDATA packet and a window mechanism has not been established with this sender, a new one is created and a MSG_ACK is returned to the sender. Otherwise, the receiver checks whether the packet has the same base sequence number as that of the existing window. If this is the case, the packet can either be a duplicate or one that was sent after the sender recovered from a crash. A NACK packet is then sent back to the sender to let the sender confirm that it is establishing a new connection. If the sender receives a NACK message for an unacknowledged FIRSTDATA packet, it resends a FIRSTDATA packet with a different base sequence number. If a FIRSTDATA packet with a base sequence number different from that of the receiver's window is received, the old window is destroyed and a new one is created.

When a MSG_SND data packet is received by a receiver with no window, a MSG_SYN packet is returned to the sender. If a window exists and the packet falls into the window, the corresponding bit of the window is set and the window slides if possible. Whenever the window slides, its lower-bound sequence number is updated. If the packet falls beyond the upper bound of the window, the packet is dropped. If the packet falls below the lower bound, the packet is dropped and a MSG_ACK message is sent back to the sender.

After receiving a MSG_SYN message, the sender resends all the outstanding data packets, with the first one sent as FIRSTDATA. If there are no outstanding data packets, the MSG_SYN message is simply ignored.

3.2 Implementation

RDP is implemented in the x-kernel. In the x-kernel, all protocols have the same interface.
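Before turning to the implementation details, the packet-handling rules of Section 3.1 can be condensed into a small receiver-side decision function. This is an illustrative reduction under stated simplifications (the window-bit bookkeeping and the silent drop of out-of-window data are omitted), not the thesis's x-kernel code; the value -1 stands for "no immediate reply".

```c
enum rdp_type { FIRSTDATA, NACK, MSG_SND, MSG_ACK, MSG_SYN };

/* Receiver's reply choice, given the incoming packet type, whether a
 * window already exists for this sender, and (for FIRSTDATA) whether
 * the base sequence number matches the existing window's. */
static int rdp_reply_for(enum rdp_type in, int have_window, int same_base)
{
    switch (in) {
    case FIRSTDATA:
        if (have_window && same_base)
            return NACK;     /* duplicate, or sender reusing an old base */
        return MSG_ACK;      /* create (or replace) the window, then ack */
    case MSG_SND:
        if (!have_window)
            return MSG_SYN;  /* FIRSTDATA was lost: ask sender to restart */
        return MSG_ACK;      /* acknowledge data covered by the window    */
    default:
        return -1;           /* control packets draw no immediate reply   */
    }
}
```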
The x-kernel has a group of utilities to manipulate messages and to manage mappings; therefore it is easy to implement protocols in the x-kernel (see [HP91]).

In the x-kernel, a protocol may depend on one or more other protocols. A protocol may also deliver services to one or more other protocols. There are two kinds of objects in the x-kernel corresponding to each protocol: protocol objects and session objects. Each protocol has exactly one protocol object and may have zero or more session objects. A protocol object contains the information about the protocol. A session object contains the information about a specific communication channel through the protocol. Protocol objects are created statically while session objects are created dynamically.

RDP is built on UDP. The header structure of an RDP packet is given as follows:

struct header {
    UDPport sport;    /* source port */
    UDPport dport;    /* destination port */
    u_long  sqnum;    /* sequence number */
    u_short ulen;     /* message length */
    u_short msgtype;  /* the type of the message */
} HDR;

RDP uses a port number to identify the upper layer protocol. In other words, each upper layer protocol has to have its own unique port number registered with RDP so that a received packet can be sent to the correct upper layer protocol. A port number is a two-byte short integer. The sequence number is a four-byte integer. Each packet has a unique sequence number. The message length is a two-byte integer which contains the sum of the header size and the message size. The message type is a two-byte integer.

To send a message through RDP, an upper layer user has to tell RDP its local port number and the message's destination port number. The local and destination port numbers are used to uniquely create an RDP session at the local side as well as the remote side.

In RDP, a sender has five states. Its state transfer diagram is given in Figure 3.1. A receiver has three states, as shown in Figure 3.2.

In the state transfer diagrams, neither the sender nor the receiver has a close state.
Currently there is no close phase in RDP: as soon as an RDP session is opened, it exists forever. If we chose to have a close phase, we would have to decide who should be responsible for closing the RDP session. There are two choices: one is to let RDP users take the responsibility, and the other is to let RDP itself do so. In the latter case, a garbage collector can be used to automatically check each session. If a session has not been touched for a specific amount of time, it is collected, and a user-supplied function can be called through the garbage collector. In our current RDP, this function is null and nothing is done. This works well for RDP because RDP can recover itself from a crash of either side; collecting a session is similar to a node crash.

Figure 3.1: Sender's State Transfer Diagram

Figure 3.2: Receiver's State Transfer Diagram

3.3 Conclusion

RDP is a reliable packet-oriented protocol. It is built on UDP and has a window mechanism similar to that of TCP. The window mechanism is used to control packet replies and retransmission. Communication is started by sending a FIRSTDATA packet, which is a data message as well as a control message.

A garbage collector is used to close sessions which have not been used for a certain amount of time, so that a closing phase is not necessary in RDP.

Sliding windows have been used to control data throughput and message retransmission. In TCP, the data unit is a byte, but in RDP the data unit is a packet, and the size of a packet may vary. So sliding windows only roughly control communication throughput: they control the number of packets transmitted, not the real data size.

Chapter 4

Object Finding Protocol

OFP is a protocol designed to send a message to an object. It is built on RDP. RDP sends a message to a node with a given address, while OFP sends a message to an object with a given ID.
OFP will map the ID to the node address at which the object currently resides.

If objects did not migrate, it would be easy for OFP to map an object ID to a node address by simply keeping a static mapping table. Since objects may move arbitrarily, the mapping process becomes more complicated: whenever an object migrates, its old address must be updated.

A forwarding address scheme is used in the OFP protocol to locate objects. Whenever an object migrates, it leaves its new address at its last location. The new location is called the forwarding address. It is used as a hint to locate the object.

A mapping table is necessary on each node. The forwarding address of an object is kept in the mapping table. Each object in the mapping table has a dependent set. The dependent set of an object contains the locations of the other objects which have recently invoked the object.

There are several schemes used in distributed systems to track object migration. In general, they can be classified into two categories. In the first category, a lazy update policy is used. In the second category, an urgent update policy is used. Both policies have disadvantages if used alone. A better policy, called the adaptive update policy, combines the lazy and urgent update policies in one system.

How do we distinguish a lazy policy from an urgent policy? There is no absolute definition. Generally speaking, a lazy update policy puts more cost on object invocation while an urgent update policy puts more cost on object migration.

The adaptive policy is based on an object's activity, which was defined previously as a real number between 0 and 1. As the activity increases, there are more object migrations and fewer object invocations. The activity of an object determines which policy is used on the object.

In the following sections we first describe the mechanism of the OFP protocol.
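One concrete reading of the activity measure just described can be sketched as follows. The formula and the threshold rule are illustrative assumptions for this sketch; the thesis does not fix them at this point.

```c
enum policy { URGENT_UPDATE, LAZY_UPDATE };

/* One plausible definition of "activity": the fraction of an object's
 * recent operations that were migrations. An object that only moves
 * has activity 1; an object that is only invoked has activity 0. */
static double activity(unsigned moves, unsigned invokes)
{
    unsigned total = moves + invokes;
    return total ? (double)moves / (double)total : 0.0;
}

/* Active objects (activity above the threshold) take the lazy policy;
 * quiet objects take the urgent policy. The threshold is a tunable. */
static enum policy choose_policy(double act, double threshold)
{
    return act > threshold ? LAZY_UPDATE : URGENT_UPDATE;
}
```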
Following that, we describe the various update policies which will be used in OFP to improve the performance of object tracking.

4.1 Mechanism

We have mentioned that the function of OFP is to map an object ID to a node address where the object is supposed to be located. Each node has a local mapping table. If a user wants to send a message to an object, the local mapping table is used to relate the message to its destination node. The local node first checks whether the object is on the local node. If not, it maps the object ID to a hint location and sends the message to that location through RDP. The node which receives the message does the same thing: checking whether the object is local and forwarding the message if necessary. The procedure continues until the message reaches its destination node. This is a dynamic and distributed mapping.

The semantics of OFP is to send a message to an object with a given object ID; however, OFP does not guarantee that messages are received. It can detect a failure on the sending side but cannot detect failures on the receiving side.

There are several reasons for this choice of OFP semantics:

1. Send failures can easily be detected and reported to upper layer users.

2. An OFP message may pass through more than one node. It is difficult and expensive to guarantee that a message is received successfully, because every node on the forwarding address chain would have to return an acknowledgement to its predecessor.

3. Most OFP messages are for object invocation, and invocations always request replies. The upper layer getting a reply implies that the invocation was successful; otherwise, the invocation is considered a failure. Therefore, no stronger message transmission semantics are necessary.

4. Even if a message is received successfully, the receiver may still crash after receiving the message but before sending a reply.
This is equivalent to a receiving failure from the sender's point of view.

In our system we assume that there are two important operations which can be applied to any object: invocation and migration. Object invocation involves sending a message to an object; this is supported by OFP. Object migration involves sending the object itself to a node; this is supported by RDP. Some applications request that an object be moved to the location of another object. In this case, the migration message should be treated as an invocation message and sent through the OFP layer. We assume that object migration always requires a node address.

On each node, there is a local mapping table which is used to keep track of objects that have migrated in and/or out of the node. Whenever an object migrates, the OFP layer has to be notified about the migration by the upper layer user.

4.1.1 Forwarding Address Scheme and Broadcasting

OFP uses forwarding addresses to track moving objects. Whenever an object migrates, a forwarding address is left behind on the original node. Future messages sent to the original node will be forwarded according to the forwarding addresses. A node's internet address is used as a forwarding address.

The forwarding address scheme has the following advantages:

• The location information of an object is distributed among a variety of nodes on the network. No single node has to take responsibility for keeping records of all objects' location information. The system is more fault-tolerant.

• Highly replicated information can accelerate object searching, accelerate object invocation and improve object tracking efficiency.

• An adaptive update policy can be used to optimize the performance of object migration and invocation.

Each forwarding address has a timestamp associated with it. The timestamp indicates the age of the forwarding address. When updating a forwarding address, the timestamp of the update message is compared against the one found in the mapping table.
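A local mapping-table entry combining these pieces (the forwarding-address hint, its timestamp, and the dependent set) might be sketched as follows. The field and type names, and the fixed-size dependent set, are illustrative assumptions rather than the thesis's actual data structures.

```c
#include <stdint.h>

#define MAX_DEPENDENTS 16

typedef uint32_t ObjId;   /* stand-in for a network-wide object ID  */
typedef uint32_t IpHost;  /* stand-in for a node's internet address */

/* One entry of a node's local mapping table. */
struct map_entry {
    ObjId    id;
    IpHost   forward_addr;               /* hint: where the object went   */
    uint32_t timestamp;                  /* hop count carried by the hint */
    IpHost   dependents[MAX_DEPENDENTS]; /* recent invokers of the object */
    unsigned ndeps;
};

/* Apply an update only if its hop-count timestamp is newer than the
 * recorded one; stale or duplicate updates are ignored. */
static int apply_update(struct map_entry *e, IpHost new_addr, uint32_t msg_ts)
{
    if (msg_ts <= e->timestamp)
        return 0;                        /* out of date: no change */
    e->forward_addr = new_addr;
    e->timestamp    = msg_ts;
    return 1;
}
```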
Only when the timestamp of an update message is up-to-date is the update considered effective and executed. The timestamp is useful in avoiding loops in the forwarding address chain.

The number of hops an object has moved through is used as its timestamp. Initially, each object's timestamp is set to zero. When an object migrates, its timestamp is incremented. Using an object's hop count as its timestamp has been proved effective and correct in eliminating old and duplicated update messages and in preventing the forwarding address chain from forming a loop.

The forwarding address mechanism can fail to find an object in only two situations: either the sending node has no location information at all for the object, or a node failure has caused a break in the forwarding address chain. When the forwarding address scheme cannot locate an object, broadcast is used as the next step. Broadcast is only effective on a LAN where broadcast is supported.

When the upper layer user wants to send a message to an object through OFP, it transfers the message and the object ID to the local OFP. OFP checks the local mapping table and then decides whether the message should be sent or not. If the object is not at the local node and there is no hint in the local mapping table, the message is broadcast on the local area network. Otherwise, the message is sent to the location mapped from the local mapping table.

We assume that every node knows of the existence of every other node on the local network. Every node will reply to a received broadcast message. If a node has the object, it replies with an UPDATE message; otherwise, it replies with an UNKNOWN message. The UPDATE message contains the current location of the object, while the UNKNOWN message indicates that the object is not on the node.

After broadcasting a message, the sender counts the received UNKNOWN messages until an UPDATE message is received; after that, it drops the remaining UNKNOWN messages.
If an UPDATE message is received, the sender uses the new object location contained in the UPDATE message to update the local mapping table, provided that the UPDATE message has an effective timestamp. If no UPDATE message is received and all the UNKNOWN messages have been received before a timeout, the local node tells the upper layer that the object does not exist on the local area network. If not all the UNKNOWN messages are received before the timeout, the local node uses point-to-point messages to ensure that the nodes from which no message was received are either down or eventually reply.

OFP adopts non-blocking message transmission semantics. The reason for using non-blocking transmission is that OFP message transmission does not guarantee that messages are received by objects. After a message is transferred to the OFP layer, control returns to the sender immediately after the message transmission is initiated. If broadcasting for an object's location is required, all the messages sent to that object are kept in an object waiting queue, so that message transmission can proceed in parallel with locating the object.

After receiving a DATA message for an object, the receiver first checks whether the object is local. If it is, the receiver sends the message to its upper layer, adds the source node to the object's dependent set, and sends an UPDATE message to the source node if the object's activity is higher than the selected threshold. If the object is not local, the receiver then tries the local mapping table. If it fails to get any hint from the local mapping table, the last node is added to the dependent set and an NDATA message is sent back to the source node to indicate that something is wrong with the object location.
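The reply accounting for the broadcast step described above can be sketched as follows. The outcome names and the state layout are illustrative assumptions; timer management and the point-to-point probes themselves are omitted.

```c
enum bcast_outcome {
    B_PENDING,      /* still waiting for replies                       */
    B_FOUND,        /* an UPDATE reply arrived                         */
    B_NOT_ON_LAN,   /* every node answered UNKNOWN                     */
    B_PROBE_SILENT  /* timeout with silent nodes: fall back to         */
                    /* point-to-point queries of the missing nodes     */
};

struct bcast_state {
    unsigned expected;  /* nodes expected to reply        */
    unsigned unknowns;  /* UNKNOWN replies seen so far    */
    int      found;     /* set once an UPDATE is received */
};

static enum bcast_outcome on_unknown(struct bcast_state *s)
{
    if (s->found)
        return B_FOUND;             /* late UNKNOWNs are dropped */
    s->unknowns++;
    return s->unknowns == s->expected ? B_NOT_ON_LAN : B_PENDING;
}

static enum bcast_outcome on_timeout(const struct bcast_state *s)
{
    if (s->found)
        return B_FOUND;
    return s->unknowns == s->expected ? B_NOT_ON_LAN : B_PROBE_SILENT;
}
```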
If local mapping is successful, the last node of the message is added to the dependent set, the local node is added to the message header as the last node, and the message is forwarded to the next node.

We mentioned that object migration bypasses the OFP layer and goes directly through the RDP layer. When migrating an object into or out of a node, the local OFP must be informed about the migration. OFP will register the migration in its mapping table.

When OFP updates its mapping table, it also checks the object's dependent set. If the dependent set is not empty and the object's activity is larger than the activity threshold, then update messages are sent to all the dependents. Finally, the dependent set is cleared.

After receiving an update message, OFP first checks whether the update is effective by comparing the object timestamp contained in the update message with the timestamp in the mapping table. Then OFP checks whether the dependent set is empty. If it is not, and the object's activity is larger than the threshold, an update message is sent to each of the dependents.

To reduce the transmission of update messages, each update message contains an updated node set. When a node first sends an update message, the sender and the dependent set of the object are added to the updated node set, because a node may be present in several dependent sets for the same object. The dependent set is checked against a received updated node set to eliminate duplicate transmissions.

In the x-kernel, when a user wants to send a message through a lower layer protocol, it first has to open a session of that protocol. Opening a session usually requires some communication, and therefore two participant addresses must normally be given as parameters to the open request. For some server sessions, only the local address is known when the session is opened. When opening an OFP session, only one participant address need be specified.
The object ID of the object to which messages should be delivered must also be specified.

4.1.2 Message Types

There are several message types defined: DATA, UPDATE, UNKNOWN, NDATA and BDATA. They are described in detail as follows:

• DATA message: This is the OFP data message sent to an object from one node to another node. Usually, object invocation requests are sent as DATA messages.

• UPDATE message: This message is used to update an object location. It is sent when a node finds that an object hint is out of date. The message is sent when an object migrates or when an object is invoked. When an object migrates, all its dependents are updated. When an object is invoked, only the source node is updated, and only if the object hint on the source node is out of date.

• UNKNOWN message: This type of message is used to reply to a broadcast message when the location of an object is unknown.

• NDATA message: This type of message is used to reply to a DATA message when the object is unknown and no forwarding address exists for the object.

• BDATA message: This type of message is broadcast to all the nodes on a local network. When local mapping fails, message broadcasting is used. Broadcasting only works on local networks.

If an object is outside the local network, a server is used. Each local network has a server. In principle, any object which migrates into or out of a local network has to be registered with its local server. If the local server has a hint for an object, it replies with the hint to the local node which wants to communicate with the object. If the local server has no hint, or the hint is lost, it contacts the other local servers on the wide area network through end-to-end communication. This is left as future work.
Currently we assume that objects only migrate within a local area network.

All the OFP message types share the same header structure, which is given as follows:

struct header {
    u_short msgtype;
    u_short protocolnum;
    TSTAMP  timestamp;
    OBJID   objectid;
    IPhost  srchost;
    u_short pnodenum;
    u_short ulen;
    IPhost  lastnode;
    IPhost  lastlastnode;
} HDR;

• msgtype: This field specifies the type of the message as described above.

• protocolnum: There may be more than one OFP user. To identify these users and correctly deliver a received message to its upper layer user, each user has an ID or protocol number registered in OFP.

• timestamp: This is the timestamp of an object. It is initialized to zero and records the number of times an object has been moved. It is used only in UPDATE messages.

• objectid: Each object has a unique ID, used to identify which object the message is for. For a DATA message or a BDATA message, the object ID represents the object to which a message is being sent. For an UNKNOWN message or an NDATA message, it represents the object for which no location information is known. For an UPDATE message, it represents the object for which new location information is included in the message.

• srchost: For a DATA, BDATA or UPDATE message, this field indicates the source node of the message. For an UNKNOWN or NDATA message, it indicates the source node of the message to which the UNKNOWN or NDATA is a reply.

• pnodenum: This field counts the number of nodes a message has passed through. It can be used to test whether the last node is the same as the source node: if pnodenum is larger than 1, the source node is different from the last node. This is useful for eliminating unnecessary update messages.

• ulen: This is the total message length, including the body of the message.

• lastnode: This contains the address of the last node through which a message has passed.
In a forwarding address chain, it is the node before the destination node.

• lastlastnode: In a DATA or BDATA message, this field contains the address of the second last node through which a message has passed. In the forwarding address chain, it is the node before the lastnode. In an UPDATE message, this field is used to store the new location of an object.

4.2 Policies

When a system is designed, there are two aspects which need to be taken care of: correctness and efficiency.

In the last section, we discussed the mechanisms used in the OFP protocol. These mechanisms only guarantee that the OFP protocol functions correctly, not necessarily that it functions efficiently. To make the OFP protocol efficient, the updating policies it uses have to be studied. In the following sections, we will study different updating policies which can be used with the forwarding address scheme and try to find the one with the best performance.

4.2.1 Lazy Update Policy vs. Urgent Update Policy

The forwarding addresses can form a chain. The length of the chain depends on the system behavior and the update policy used. If the lazy update policy is used in a system with a lot of object migrations, a long chain can easily be formed. A long address chain makes the cost of invocations higher, although there is almost no update cost for object migration. In such a system, invocation cost dominates while migration cost is less important: every invocation has to pass through a long chain and generates a lot of forwarding messages. On the other hand, if the urgent update policy is used in a system with a lot of active objects and very few object invocations, the result is low invocation cost and high migration cost. In this case, the cost of object migration dominates the cost of invocation.

To reduce the total object migration and invocation cost, different objects should use different update policies. For active objects, migration is the main part of their behavior.
To reduce their behavior cost, we have to reduce their migration cost. On the other hand, for quiet objects with a lot of invocations, invocations are the primary part of their behavior; to reduce their behavior cost, we have to reduce their invocation cost. We use the lazy update policy to reduce object migration cost and the urgent update policy to reduce object invocation cost.

The problem is how to distinguish a quiet object from an active object. We use an activity threshold as our classification standard. An object with an activity larger than the threshold is called an active object; otherwise, it is called a quiet object. The most appropriate threshold value is determined through experiments, which will be discussed in the next chapter.

Using the urgent update policy reduces the object invocation cost but increases the object migration cost. Conversely, using the lazy update policy reduces the object migration cost but increases the object invocation cost. The system performance depends on the total cost of object invocation and object migration. We believe that by adjusting the ratio of migration cost to invocation cost, optimal system performance can be achieved. The adaptive update policy tries to reach optimal system performance by making each individual object reach an optimal performance through adjusting the cost of its migration and invocation based on its activity. This will be discussed in the following section.

The lazy and urgent concepts are not absolute; they are relative to each other. Precisely, our lazy policy is that when an object is moved, no update messages are sent to any other objects. This is an extreme case of the lazy policy: object migration cost is minimal, since no update messages are sent when objects migrate.

There are two parameters which affect the urgent updating policy. One parameter is the size of the dependent set carried to the next node when an object migrates.
The other parameter is the number of hops for which the dependent set is carried. By adjusting these two parameters, many different versions of the urgent update policy can be generated.

We define three versions of our urgent update policy, named versions 1, 2 and 3, in which dependent sets are carried for 0, 1 and 2 hops respectively. Therefore, version 3 is more urgent than version 2, while version 2 is more urgent than version 1. For all of these policies, every time an object migrates, an update message is sent to all its dependents. The larger the dependent set becomes, the more update messages are sent; the further the dependent set is carried, the larger it becomes.

4.2.2 Adaptive Update Policy

One of the important parameters in a forwarding address scheme is the update policy. This includes when to send update messages and to which nodes the update messages should be sent. Two important things should be taken into consideration when an update policy is selected: object behavior and object relationships. Object behavior decides when to send an update message; object relationships determine the nodes to which an update message should be sent.

We use activity to describe the behavior of an object. An object with activity value A has probability A of migrating and probability (1 - A) of being invoked.

The dependent set is used to describe an object's relationships with other objects. Each object has a dependent set. The set can be built dynamically and/or statically. An object only sends update messages to the nodes in its dependent set when it migrates. When an object X invokes an object Y, the node on which object X is located is considered a dependent of object Y and is added to Y's dependent set.

How many dependents should be updated when an object migrates? This is another factor which affects the cost of object migration and object invocation. If all the dependents are updated, the object migration cost is high.
However, the object invocation cost is low. Conversely, invocation cost is high and migration cost is low with the lazy update policy.

Currently the dependent set can only describe whether an object has a relationship with other objects; it cannot describe how strong the relationships are. We could use fuzzy sets to describe the strength of each object relationship precisely. With fuzzy sets, a relationship can be described as a value from 0 to 1: the larger the value, the stronger the relationship.

Our adaptive update policy is based on object activity. It is a combination of the lazy and urgent update policies through an activity threshold. Given a specific lazy update policy and a specific urgent update policy, if a threshold exists that separates where each performs better, then an adaptive update policy exists; it is based on those two policies and that activity threshold. The threshold is determined through experiments on the lazy and the urgent update policies.

4.3 Conclusion

OFP is built on RDP. The semantics of RDP is to send a message to a given node; the semantics of OFP is to send a message to a given object. OFP maps an object ID to the object's current location. The current location of an object changes with time, therefore the mapping is dynamic and distributed.

A forwarding address scheme is used in OFP to track object migration. To reduce the total object migration and invocation cost, it is important to balance the cost of object migration against the cost of object invocation. We believe that an adaptive update policy can be used to optimize this cost.

There are two factors affecting an update policy. The first factor is when to send update messages: updating can be done when objects move or when objects are invoked. The other factor is how many dependents should be updated.
All the dependents, part of the dependents, or no dependents may be updated, depending on the selection standard.

Three different versions of urgent update policies and one lazy update policy have been discussed. By combining an urgent update policy with the lazy update policy, we can obtain an adaptive update policy. There is a decision table for each adaptive update policy. A decision table is generated by comparing the experimental results of an urgent update policy with those of a lazy update policy. The decision table indicates, for a given activity and locality, whether an urgent or a lazy update policy should be used. Based on the decision table, the activity, and the locality, the adaptive update policy chooses either the urgent or the lazy update policy.

OFP has been implemented entirely in the x-kernel environment. In the next chapter, we will describe how we designed and conducted experiments on the lazy and urgent update policies, and how the experimental results are used to generate a decision table for the adaptive update policy.

Chapter 5

Experiments and Analysis

Experiments were conducted in a system which includes an implementation of the OFP and RDP protocols. In OFP, the forwarding address scheme is used to track object location. The update policy used to cut forwarding address chains is quite important for improving OFP performance. Three different update policies were used: the lazy update policy, the urgent update policy and the adaptive update policy. This chapter discusses our experiments, presents their results and discusses the experimental consequences.

5.1 Experiment Design

Each experiment was run using 12 Sparc workstations. Each workstation ran one x-kernel node, for a total of 12 x-kernel machines.

Each x-kernel machine initially registers 10 objects. Before starting, all the machines have information on the location of all objects. In each experiment, every machine independently performs 200 operations.
Each operation is either an object invocation or an object migration. The number of invocations and migrations is controlled by the object activity: larger activity values result in more migrations, smaller activity values in more invocations.

In a real system, there is some amount of locality in the pattern of object invocation. To make our experiment more meaningful, locality is incorporated into our model. Invocation with locality 0 will invoke objects at random; invocation with locality 1 will always invoke the same object. Separate experiments have been completed using different localities. This study does not investigate the actual level of locality in a real system; that can only be done in the context of a real application of object mobility. Different systems may have different locality. Our purpose in using locality in our experiments is to show that locality does affect the performance of OFP and must be taken into account.

Update Message Cost Per Migration
Locality\Activity   0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
0.01                0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
0.20                0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
0.40                0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
0.60                0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
0.80                0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
0.99                0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
Table 5.1: Lazy Update Policy Update Cost Per Migration.

In our experiments, we vary the level of object activity and invocation locality, and collect statistics for all of our update policies. In each experiment, the following information is collected:

1. Total number of forwarding messages; reflects the cost of invocation.
2. Total number of update messages; reflects the cost of object migration.
3. Total object invocations.
4. Total object migrations.

Based on this data, we calculated the cost of each invocation, the cost of each migration and the cost of each operation.
We used the cost of each operation as the standard for comparing the performance of the update policies.

Normally, reply messages are required for invocations. Reply messages can be used to piggyback an object location. In our experiments, we assumed that no reply messages were sent and that all object location information was contained in update messages.

5.2 Experimental Results of the Lazy Update Policy

The following tables show the experimental results for the lazy update policy using various object invocation localities and object activities.

From the results in Table 5.3, we can see that there was no update cost during the experiments. This is consistent with the lazy update policy, which never sends any update messages. The only cost is from message forwarding, which is generated during object invocation. This is shown in Table 5.4. From Table 5.5, we can see that cost becomes higher when the activity level is between 0.2 and 0.6. There are two factors which can affect the amount of forwarding message traffic: the frequency of object migration and the frequency of object invocation.

Forwarding Message Cost Per Invocation
Locality\Activity   0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
0.01                0.10  0.10  0.10  0.09  0.08  0.09  0.07  0.11  0.03  0.15  0.18
0.20                1.28  1.20  1.20  1.10  1.15  1.27  1.29  1.12  1.26  1.01  0.77
0.40                1.63  1.66  1.68  1.76  1.72  1.59  1.60  1.68  1.77  1.65  1.66
0.60                1.89  1.84  1.92  1.74  1.70  1.97  1.95  1.76  1.71  2.10  1.13
0.80                1.99  1.94  1.97  1.86  1.85  1.92  1.85  2.13  2.08  2.19  1.90
0.99                1.74  1.88  2.70  2.26  2.28  1.90  2.47  1.67  1.73  1.56  2.30
Table 5.2: Lazy Update Policy Forwarding Cost Per Invocation.

Update Message Cost Per Operation
Locality\Activity   0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
0.01                0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
0.20                0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
0.40                0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
0.60                0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
0.80                0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
0.99                0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
Table 5.3: Lazy Update Policy Update Cost Per Operation.

Forwarding Message Cost Per Operation
Locality\Activity   0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
0.01                0.10  0.10  0.10  0.09  0.07  0.09  0.07  0.11  0.03  0.15  0.18
0.20                1.01  0.95  0.95  0.87  0.92  1.00  1.03  0.87  0.99  0.79  0.59
0.40                0.97  1.00  1.00  1.04  1.03  0.94  0.96  1.01  1.06  0.98  1.00
0.60                0.72  0.71  0.76  0.66  0.65  0.75  0.74  0.68  0.65  0.82  0.42
0.80                0.36  0.35  0.38  0.33  0.35  0.36  0.37  0.38  0.41  0.44  0.33
0.99                0.01  0.02  0.02  0.02  0.03  0.02  0.02  0.02  0.02  0.02  0.02
Table 5.4: Lazy Update Policy Forwarding Cost Per Operation.

Total Cost With 100% Update Cost
Locality\Activity   0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
0.01                0.10  0.10  0.10  0.09  0.07  0.09  0.07  0.11  0.03  0.15  0.18
0.20                1.01  0.95  0.95  0.87  0.92  1.00  1.03  0.87  0.99  0.79  0.59
0.40                0.97  1.00  1.00  1.04  1.03  0.94  0.96  1.01  1.06  0.98  1.00
0.60                0.72  0.71  0.76  0.66  0.65  0.75  0.74  0.68  0.65  0.82  0.42
0.80                0.36  0.35  0.38  0.33  0.35  0.36  0.37  0.38  0.41  0.44  0.33
0.99                0.01  0.02  0.02  0.02  0.03  0.02  0.02  0.02  0.02  0.02  0.02
Table 5.5: Lazy Update Policy Total Cost.

The corresponding graphs of the above table results are shown in Figure 5.1.

Activity is defined as the ratio of the number of object moves to the sum of object moves and object invocations. When the activity is very low, there is little object movement, so no long forwarding address chains are formed; even if there are many invocations, there are few forwarding messages. When activity is very high, although many long forwarding address chains are formed, the forwarding messages are fewer because there are many fewer invocations. Only when the activity level is moderate (between 0.2 and 0.6) are many forwarding messages generated.

Theoretically, locality should not affect the lazy update policy because there is no update message sending.
Strong locality will increase the utilization of an update message, because the next object invoked will very likely be the same as the previous one. The new object location contained in the last update message can be used by subsequent invocations, and no forwarding address chain will be required.

In reality, there is an implicit form of updating under the lazy update policy. When an object moves to a destination which is located on its forwarding address chain, the chain is cut as if an update message had been received. From the experimental results we can see that increasing locality has almost no effect on system performance.

5.3 Experiment Results with Urgent Update Policies

There are two factors which decide how urgent an urgent update policy is:

1. Object dependent set — the set of machines which have sent invocations to the object. The larger the set, the more update messages an object sends when it migrates. Each invoked object has a dependent set.

2. Dependent set moving hops — the longer the set is maintained by objects as they move, the more update messages an object sends when it migrates.

Generally speaking, the more update messages, the fewer forwarding messages. The question is how large the dependent set should be and how far a dependent set should be maintained to reach an optimum combination of forwarding messages and update messages.

By adjusting the dependent set and the hop number, we try to find an update policy which will optimize the performance of OFP. The following are the experimental results for three different versions of the urgent update policy:

• Urgent update policy version 1: In this policy, the dependent sets are empty. The number of hops that sets are maintained by objects is 0. The experiment results based on this

Figure 5.1: Lazy Update Policy. (a)Updating Cost Per Migration. (b)Front View of (a). (c)Forwarding Cost Per Invocation. (d)Front View of (c). (e)Updating Cost Per Operation. (f)Front View of (e).
(g)Forwarding Cost Per Operation. (h)Front View of (g). (i)Total Cost Per Operation. (j)Front View of (i).

update policy are shown in Table 5.6, Table 5.7, Table 5.8, Table 5.9 and Table 5.10. Its corresponding graphs are shown in Figure 5.2.

Updating Cost Per Migration
Locality\Activity   0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
0.01                5.26  4.81  3.85  3.71  4.22  2.48  2.31  2.13  1.33  0.52  0.12
0.20                1.96  1.89  1.72  1.58  1.36  1.17  0.91  0.78  0.58  0.33  0.09
0.40                1.02  0.98  0.85  0.78  0.69  0.61  0.45  0.39  0.26  0.19  0.08
0.60                0.54  0.51  0.46  0.43  0.35  0.33  0.28  0.23  0.19  0.14  0.06
0.80                0.24  0.22  0.20  0.18  0.18  0.16  0.13  0.12  0.11  0.08  0.06
0.99                0.01  0.01  0.00  0.01  0.01  0.01  0.01  0.01  0.01  0.01  0.01
Table 5.6: Urgent Update Policy Version 1 Update Cost Per Migration.

Forwarding Cost Per Invocation
Locality\Activity   0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
0.01                0.06  0.07  0.09  0.07  0.08  0.05  0.08  0.12  0.11  0.09  0.00
0.20                0.80  0.77  0.84  0.84  0.85  0.98  0.87  0.99  0.90  1.14  0.09
0.40                1.19  1.24  1.21  1.23  1.27  1.18  1.17  1.42  1.15  1.15  0.05
0.60                1.49  1.38  1.50  1.51  1.45  1.62  1.35  1.09  1.16  0.92  0.25
0.80                1.87  1.86  1.56  1.69  1.53  1.66  1.59  1.38  1.15  1.10  0.37
0.99                2.73  1.95  2.00  2.44  1.91  2.05  1.25  1.42  1.95  1.64  1.85
Table 5.7: Urgent Update Policy Version 1 Forwarding Cost Per Invocation.

Updating Cost Per Operation
Locality\Activity   0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
0.01                0.05  0.07  0.05  0.04  0.05  0.03  0.03  0.03  0.01  0.01  0.00
0.20                0.41  0.39  0.36  0.34  0.28  0.26  0.19  0.17  0.12  0.07  0.02
0.40                0.41  0.40  0.35  0.31  0.28  0.24  0.18  0.16  0.11  0.08  0.03
0.60                0.33  0.31  0.29  0.26  0.21  0.20  0.17  0.14  0.12  0.09  0.04
0.80                0.20  0.18  0.16  0.15  0.15  0.13  0.11  0.09  0.09  0.07  0.05
0.99                0.01  0.01  0.01  0.01  0.01  0.01  0.01  0.01  0.01  0.01  0.01
Table 5.8: Urgent Update Policy Version 1 Update Cost Per Operation.

Forwarding Cost Per Operation
Locality\Activity   0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
0.01                0.05  0.07  0.09  0.07  0.08  0.05  0.08  0.12  0.11  0.09  0.00
0.20                0.63  0.61  0.66  0.66  0.67  0.76  0.69  0.78  0.71  0.90  0.07
0.40                0.72  0.74  0.71  0.74  0.76  0.72  0.69  0.84  0.67  0.67  0.03
0.60                0.57  0.55  0.57  0.60  0.58  0.62  0.52  0.42  0.45  0.35  0.09
0.80                0.35  0.33  0.29  0.31  0.28  0.30  0.29  0.26  0.23  0.19  0.07
0.99                0.02  0.02  0.01  0.02  0.02  0.02  0.01  0.02  0.02  0.02  0.02
Table 5.9: Urgent Update Policy Version 1 Forwarding Cost Per Operation.

Total Cost With 100% Update Cost
Locality\Activity   0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
0.01                0.11  0.14  0.14  0.11  0.13  0.08  0.11  0.15  0.13  0.09  0.00
0.20                1.04  1.00  1.02  1.00  0.95  1.02  0.88  0.94  0.84  0.97  0.09
0.40                1.12  1.13  1.07  1.05  1.04  0.96  0.88  1.00  0.78  0.75  0.06
0.60                0.90  0.85  0.86  0.86  0.79  0.82  0.69  0.57  0.57  0.44  0.13
0.80                0.54  0.51  0.45  0.46  0.43  0.44  0.40  0.35  0.31  0.25  0.12
0.99                0.03  0.03  0.02  0.03  0.03  0.03  0.01  0.03  0.03  0.02  0.03
Table 5.10: Urgent Update Policy Version 1 Total Cost.

Total Message Cost With 100% Update Cost
Locality\Activity   0.0   0.2   0.4   0.6   0.8   1.0
0.1                 0.10  0.10  0.09  0.11  0.07  0.00
0.2                 1.20  1.08  1.11  0.94  0.88  0.03
0.4                 1.41  1.27  1.71  1.15  0.80  0.13
0.6                 1.19  1.06  0.97  0.74  0.55  0.07
0.8                 0.71  0.63  0.56  0.39  0.33  0.07
0.99                0.04  0.03  0.04  0.04  0.03  0.03
Table 5.11: Urgent Update Policy Version 2 Total Cost.

Total Message Cost With 100% Update Cost
Locality\Activity   0.0   0.2   0.4   0.6   0.8   1.0
0.1                 0.14  0.13  0.16  0.07  0.12  0.41
0.2                 1.25  1.13  1.18  1.06  0.91  0.06
0.4                 1.51  1.40  1.32  1.17  0.98  0.06
0.6                 1.34  1.25  1.03  0.94  0.71  0.07
0.8                 0.82  0.70  0.57  0.44  0.33  0.08
0.99                0.03  0.05  0.03  0.05  0.04  0.03
Table 5.12: Urgent Update Policy Version 3 Total Cost.

• Urgent update policy version 2: In this policy, the dependent set contains the dependents accumulated on the last node. The number of hops that the sets are maintained by objects is 1. The experiment results based on this policy are shown in Table 5.11.

• Urgent update policy version 3: In this policy, the dependent set contains the dependents accumulated on the last two nodes. The number of hops that sets are maintained by objects is 2.
The experiment results based on this policy are shown in Table 5.12.

By analyzing the above results, we can observe the following phenomena:

1. From Table 5.10, Table 5.11 and Table 5.12 we can see that, in general, the version 1 update policy performs better than the version 2 and version 3 update policies, and version 2 performs better than version 3. This is true especially when the locality is between 0 and 0.8 and the activity is between 0.2 and 0.8.

2. For each individual urgent update policy, performance improves as locality increases, especially for activity from 0.2 to 0.8.

Figure 5.2: Urgent Update Policy. (a)Updating Cost Per Migration. (b)Front View of (a). (c)Forwarding Cost Per Invocation. (d)Front View of (c). (e)Updating Cost Per Operation. (f)Front View of (e). (g)Forwarding Cost Per Operation. (h)Front View of (g). (i)Total Cost Per Operation. (j)Front View of (i).

The above phenomena can be explained as follows:

• For each individual urgent update policy, when the locality increases, the object location information contained in one update message can be used efficiently by subsequent invocations of the same object, so the volume of forwarding messages is reduced. Also, because of higher locality, the dependent set becomes smaller, so the volume of update messages is reduced as well. The result is that OFP performance improves.

• Comparing the version 1 urgent update policy with the version 2 and version 3 policies, we found that with the same locality and activity, the number of forwarding messages in version 1 is almost the same as in versions 2 and 3, but versions 2 and 3 send more update messages than version 1. This is because of the dependent set carried by each moving object: the larger the dependent set and the further it is carried, the more update messages are generated.
The issue is whether these extra update messages can be used efficiently by subsequent invocations, so that forwarding messages are reduced. In practice they are not; it turns out that the version 2 and version 3 update policies are more expensive than the version 1 update policy.

Because, in most cases, the version 1 urgent update policy is better than versions 2 and 3, we decided to use the version 1 urgent update policy as the representative to compare with the lazy update policy and to discuss the new adaptive update policy.

In the above discussion, we assumed update messages are sent in the foreground. In a real system, updating can be done partially or fully in the background. If we assume that 0%, 20%, 50%, or 80% of update message sending is charged as foreground cost, we obtain the following performance results, which are better than those in Table 5.10. These results are calculated from the experimental results which assume update message sending is done 100% in the foreground. The results in Table 5.10 are calculated as follows:

    Totalcost = updatingcost * 100% + forwardingcost

If we assume that 50% of updating is done in the background, then the performance is calculated as follows:

    Totalcost = updatingcost * 50% + forwardingcost

Here, we assume that background updating costs nothing. This assumption is based on the fact that in a real system object invocation is relatively infrequent, so in general background updating is fast enough to update an object location before the next object invocation.
In our experiments, by contrast, object invocations were issued as quickly as possible.

The results discussed above are shown in Tables 5.13 to 5.16, and their corresponding graphs are shown in Figure 5.3.

Total Message Cost With 0% Update Cost
Locality\Activity   0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
0.01                0.05  0.07  0.09  0.07  0.08  0.05  0.08  0.12  0.11  0.09  0.00
0.20                0.63  0.61  0.66  0.66  0.67  0.76  0.69  0.78  0.71  0.90  0.07
0.40                0.72  0.74  0.71  0.74  0.76  0.72  0.69  0.84  0.67  0.67  0.03
0.60                0.57  0.55  0.57  0.60  0.58  0.62  0.52  0.42  0.45  0.35  0.09
0.80                0.35  0.33  0.29  0.31  0.28  0.30  0.29  0.26  0.23  0.19  0.07
0.99                0.02  0.02  0.00  0.02  0.02  0.02  0.01  0.02  0.02  0.02  0.02
Table 5.13: Urgent Update Policy Version 1 Total Cost.

Total Message Cost With 20% Update Cost
Locality\Activity   0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
0.01                0.06  0.08  0.10  0.08  0.09  0.06  0.09  0.13  0.11  0.09  0.00
0.20                0.71  0.69  0.73  0.73  0.73  0.81  0.73  0.81  0.73  0.91  0.07
0.40                0.80  0.82  0.78  0.80  0.82  0.77  0.73  0.87  0.69  0.69  0.04
0.60                0.64  0.61  0.63  0.65  0.62  0.66  0.55  0.45  0.47  0.37  0.10
0.80                0.39  0.37  0.32  0.34  0.31  0.33  0.31  0.28  0.25  0.20  0.08
0.99                0.02  0.02  0.00  0.02  0.02  0.02  0.01  0.02  0.02  0.02  0.02
Table 5.14: Urgent Update Policy Version 1 Total Cost.

Total Message Cost With 50% Update Cost
Locality\Activity   0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
0.01                0.08  0.11  0.11  0.09  0.11  0.07  0.10  0.14  0.12  0.10  0.00
0.20                0.83  0.80  0.84  0.83  0.81  0.89  0.78  0.86  0.77  0.94  0.08
0.40                0.92  0.94  0.89  0.90  0.90  0.84  0.78  0.92  0.73  0.71  0.04
0.60                0.73  0.71  0.71  0.73  0.68  0.72  0.60  0.49  0.51  0.39  0.11
0.80                0.45  0.42  0.37  0.39  0.36  0.36  0.34  0.30  0.28  0.23  0.10
0.99                0.03  0.03  0.00  0.03  0.03  0.03  0.01  0.03  0.03  0.03  0.03
Table 5.15: Urgent Update Policy Version 1 Total Cost.

Total Message Cost With 80% Update Cost
Locality\Activity   0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
0.01                0.09  0.13  0.13  0.10  0.12  0.07  0.10  0.14  0.12  0.10  0.00
0.20                0.96  0.92  0.95  0.93  0.89  0.97  0.84  0.92  0.81  0.96  0.09
0.40                1.05  1.06  0.99  0.99  0.98  0.91  0.83  0.97  0.76  0.73  0.05
0.60                0.83  0.80  0.80  0.81  0.75  0.78  0.66  0.53  0.55  0.42  0.12
0.80                0.51  0.47  0.42  0.43  0.40  0.40  0.38  0.33  0.30  0.25  0.11
0.99                0.03  0.03  0.00  0.03  0.03  0.03  0.02  0.03  0.03  0.03  0.03
Table 5.16: Urgent Update Policy Version 1 Total Cost.

5.4 Adaptive Update Policy

Our results showed that for locality smaller than 0.5, the lazy update policy had results similar to the urgent update policy. For locality larger than 0.5, the urgent version 1 update policy had better results.

At first, we thought that the lazy update policy could easily build a long forwarding address chain which would make subsequent invocations expensive. But in reality, when an object moves back to any node on the forwarding address chain, the chain is shortened. When the activity level is around 0.5, there are enough invocations to help build the forwarding address chains, but there is also enough migration to help shorten them. In other words, the lazy update policy is not really lazy: object migration includes an implicit update for the destination node. For the urgent update policy, when an object moves, the system does not have to send an update message to every node on the local network. By limiting the dependent set size and the number of carrying hops, we got three different versions of urgent update policies, and we found that the laziest one (version 1) had the best performance. The urgent update policy is not really completely urgent.

Based on these two different update policies, we derived a new update policy from the above analysis. The new update policy is called the adaptive update policy; it is a combination of urgent and lazy updating. This policy uses a decision table to decide when to use an urgent update and when to use a lazy update. The decision table is built by directly comparing the results of urgent and lazy updating.

We used the results from lazy update and urgent update with 100% and 50% update cost as examples to generate decision tables for our adaptive update policy.
First we calculate the difference between the urgent policy and the lazy policy (Table 5.17 and Table 5.19). Based on these differences, we get the decision tables shown in Tables 5.18 and 5.20.

Figure 5.3: Urgent Update Policy. (a)Total Cost with 0% of Updating Cost. (b)Front View of (a). (c)Total Cost with 20% of Updating Cost. (d)Front View of (c). (e)Total Cost with 50% of Updating Cost. (f)Front View of (e). (g)Total Cost with 80% of Updating Cost. (h)Front View of (g).

Difference Table With 100% Updating
Locality\Activity    0.0    0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9    1.0
0.01                 0.00  -0.04  -0.04  -0.02  -0.06   0.01  -0.04  -0.04  -0.09   0.05   0.18
0.20                -0.03  -0.05  -0.07  -0.13  -0.03  -0.02   0.15  -0.08   0.16  -0.18   0.50
0.40                -0.16  -0.14  -0.06  -0.01  -0.01  -0.02   0.09   0.01   0.28   0.23   0.94
0.60                -0.18  -0.15  -0.10  -0.20  -0.14  -0.07   0.05   0.12   0.08   0.38   0.29
0.80                -0.19  -0.16  -0.07  -0.13  -0.08  -0.07  -0.03   0.03   0.09   0.18   0.21
0.99                -0.02  -0.01   0.02  -0.01   0.00  -0.01   0.00  -0.01  -0.01  -0.01  -0.01
Table 5.17: The Difference Between Lazy Update Policy Performance and Urgent Update Policy Version 1 Performance.

Decision Table With 100% Updating
Locality\Activity   0.0  0.1  0.2  0.3  0.4  0.5  0.6  0.7  0.8  0.9  1.0
0.01                 -    *    *    -    *    -    *    *    *    +    +
0.20                 -    *    *    *    -    -    +    *    +    *    +
0.40                 *    *    *    -    -    -    +    -    +    +    +
0.60                 *    *    *    *    *    *    +    +    +    +    +
0.80                 *    *    *    *    *    *    -    -    +    +    +
0.99                 -    -    -    -    -    -    -    -    -    -    -
Table 5.18: The Decision Table of the Adaptive Update Policy: * - Lazy, + - Urgent, - - Both.

In the decision tables, + represents the urgent update policy, * represents the lazy update policy, and - represents either policy. The decision tables are calculated using a threshold value of 0.03. Any difference whose absolute value is less than or equal to the threshold is replaced by -. A negative difference whose absolute value is larger than the threshold is replaced by *.
A positive difference whose absolute value is larger than the threshold is replaced by +.

The performance of the adaptive update policy based on the above calculations is shown in Tables 5.21 and 5.22. The corresponding three-dimensional graphs are shown in Figure 5.4. By comparing these results with those of the lazy and urgent update policies, we concluded that the adaptive update policy performed better than both.

Difference Table With 50% Updating
            Activity
Locality    0.0    0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9    1.0
0.01        0.02  -0.01  -0.01   0.00  -0.04   0.02  -0.03  -0.03  -0.09   0.05   0.18
0.20        0.18   0.15   0.11   0.04   0.11   0.11   0.25   0.01   0.22  -0.15   0.51
0.40        0.05   0.06   0.11   0.14   0.13   0.10   0.18   0.09   0.33   0.27   0.96
0.60       -0.01   0.00   0.05  -0.07  -0.03   0.03   0.14   0.19   0.14   0.43   0.31
0.80       -0.09  -0.07   0.01  -0.06  -0.01   0.00   0.03   0.08   0.13   0.21   0.23
0.99       -0.02  -0.01   0.02  -0.01   0.00  -0.01   0.01  -0.01  -0.01  -0.01  -0.01

Table 5.19: The Difference of Lazy Update Policy Performance and Urgent Update Policy Version 1 Performance.

Decision Table With 50% Updating
            Activity
Locality    0.0  0.1  0.2  0.3  0.4  0.5  0.6  0.7  0.8  0.9  1.0
0.01         -    -    -    -    *    -    -    -    *    +    +
0.20         +    +    +    +    +    +    +    -    +    *    +
0.40         +    +    +    +    +    +    +    +    +    +    +
0.60         -    -    +    *    -    -    +    +    +    +    +
0.80         *    *    -    *    -    -    -    +    +    +    +
0.99         -    -    -    -    -    -    -    -    -    -    -

Table 5.20: The Decision Table of Adaptive Update Policy: * - Lazy, + - Urgent, - - Both.

Performance With 100% Updating Cost
            Activity
Locality    0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
0.01        0.10  0.10  0.10  0.09  0.07  0.08  0.07  0.11  0.03  0.10  0.00
0.20        1.01  0.95  0.95  0.87  0.92  1.00  0.88  0.87  0.83  0.79  0.09
0.40        0.97  1.00  1.00  1.04  1.03  0.94  0.87  1.00  0.78  0.75  0.06
0.60        0.72  0.71  0.76  0.66  0.65  0.75  0.69  0.56  0.57  0.44  0.13
0.80        0.36  0.35  0.38  0.33  0.35  0.36  0.37  0.35  0.32  0.26  0.12
0.99        0.01  0.02  0.00  0.02  0.03  0.02  0.02  0.02  0.02  0.02  0.02

Table 5.21: The Adaptive Update Policy Total Cost.

Performance With 50% Updating Cost
            Activity
Locality    0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1.0
0.01        0.08  0.10  0.10  0.09  0.07  0.07  0.07  0.11  0.03  0.10  0.00
0.20        0.83  0.80  0.84  0.83  0.81  0.89  0.78  0.86  0.77  0.79  0.08
0.40        0.92  0.94  0.89  0.90  0.90  0.84  0.78  0.92  0.73  0.71  0.04
0.60        0.72  0.71  0.71  0.66  0.65  0.72  0.60  0.49  0.51  0.39  0.11
0.80        0.36  0.35  0.37  0.33  0.35  0.36  0.34  0.30  0.28  0.23  0.10
0.99        0.01  0.02  0.00  0.02  0.03  0.02  0.01  0.02  0.02  0.02  0.02

Table 5.22: The Adaptive Update Policy Total Cost.

Figure 5.4: Adaptive Update Performance Graphs. (a) Performance with 100% Updating. (b) Front View of (a). (c) Performance with 50% Updating. (d) Front View of (c).

Chapter 6

Conclusions and Future Work

6.1 Contribution

An extensive survey has been done on object and process migration. The survey focuses on the methods used to track mobile objects or processes, and several systems are discussed. Its purpose is to find the limitations of those methods and their potential improvements. Most of the surveyed systems impose ad hoc limitations on migration. In our discussion, we assume that there is no limitation on object migration; Emerald is one such system, in which any object can be migrated without restriction. Under this assumption, we find that the methods used in the surveyed systems no longer behave as expected.

Using forwarding addresses to track mobile objects is not new. The topic is discussed in depth in [Fow85] and [Jul88]. In [Fow85], a more theoretical analysis is given of algorithms for shortening the forwarding address chains formed by forwarding messages. In [Jul88], a simple algorithm is used to track mobile objects: reply messages to an invocation piggyback the new location of the invoked object. The method used to cut forwarding address chains is called the update policy.

None of the algorithms mentioned above takes an individual object's behavior into consideration. One of the contributions of this thesis is to link together the object behavior with the update policy.
This idea comes from the real world. For example, a tourist normally does not send mail frequently to his friends and relatives to report his location: he moves frequently, and doing so would cost him a great deal. The policy used here is the lazy policy. A family, by contrast, sends letters announcing its new address when it moves to a new place; this does not cost much, because a family is more stable than a tourist. The policy used here is the urgent policy.

The concept of activity has been defined to describe object behavior. In a distributed system, we believe objects with different activities should use different update policies to optimize system performance. Different update policies have been compared, and a new update policy has been defined based on results from the lazy update policy and the urgent update policy. This new policy is called the adaptive update policy.

In the thesis, the lazy update policy and the urgent update policy have been studied and tested. The urgent update policy has two variables: the dependent set size and the number of hops over which the dependent set is maintained by a moving object. By adjusting these two variables, three versions of the urgent update policy were produced. Through experiments, we found that the laziest urgent update policy (version 1) has the best performance.

Update messages can be sent partly or wholly in the background, whereas forwarding messages must be sent entirely in the foreground. By assuming that 0%, 20%, 50%, or 80% of the updating is done in the foreground, better performance can be obtained for the urgent update policy. This assumption affects only the urgent update policy, since the lazy update policy does not send update messages.

Locality has been defined and used to model object invocation in our experiments. Locality exists in reality.
One purpose of our experiments was to examine the effects of the locality of object invocations on the update policies. We found that for the urgent update policy, OFP performs better with higher locality.

Another contribution of the thesis is the definition and implementation of two layered protocols for transparent object tracking; the implementation comprises 3500 lines of C source code. RDP supports packet data transmission. It is similar to UDP, but adds a timeout and retransmission mechanism to guarantee reliable packet delivery; its function is to send a data packet to a given machine. Object tracking is the responsibility of the OFP protocol layer, which maintains distributed mapping tables. Through these tables, OFP can map an object id to its machine address. OFP is built on RDP, and its function is to send a message to a given object.

For any update policy in OFP, one should remember that there is always a tradeoff between update message sending and forwarding message sending. More update messages help reduce the number of forwarding messages; conversely, fewer update messages increase the number of forwarding messages. One purpose of our experiments was to find a combination that brings the total cost of updating and forwarding close to an optimum balance. The adaptive update policy is our solution to this problem.

6.2 Future Work

Currently, our system works only on a local area network where broadcast is possible and efficient. To extend the system into an internet environment, we need to provide a local server on each local network. This server can be located on a router. When OFP cannot locate an object on the local area network, it will seek help from its local server. The local server will continue the object search on the internet. To make local servers more efficient, they could
To make local servers more efficient, they couldkeep records on each object moving in or out of the local network.Each local server maintains a mapping table which is the same as the one on each node.If an object location cannot be resolved by its local server, a further searching request will beforwarded to the other remote servers. Each remote server will check its local network andreply with a confirm message.Another interesting direction is to experiment with update policies, especially the adaptiveupdate policy, on objects with different activities and different locality and to further provethat the adaptive update policy can still reach optimum performance.The next interesting issue is to integrate the adaptive update policy into a real system andmeasure the behaviors of real objects.54Bibliography[Aea89]^Yeshayahu Artsy and et al. Designing a process migration facility—the charlotteexperience. IEEE Computer, pages 47-56, Sep 1989.[And91]^Gregory R. Andrews. Paradigms for process interaction in distributed programs.ACM Computing Surveys, 23:49-90, Mar 1991.[BBea91]^R. Baiter, J. Bernadat, and et al. Architecture and implementation of guide, anobject-oriented distributed system. UNIX Computing Systems, 4:31-67, 1991.[BHJ+86] Andrew Black, Norman Hutchinson, Eric. Jul, Henry Levy, and Larry Carter. Dis-tribution and abstarct types in emerald. TR 86-02-04, University of Washington,Jul 1986.[BHJL86] Andrew Black, Norman Hutchinson, Eric Jul, and Henry Levy. Object structurein the emerald system. In OOPSLA '86 Proceedings, page 78, Sep 1986.[BL90]^Joseph Boykin and Alan Langerman. Mach/4.3bsd: A conservative approach.UNIX Computing Systems, 3:69, 1990.[B1a85]^Andrew P. Black. Supporting distributed applications. In Proceedings of the TenthACM Symposium on Operating Systems Principles, pages 181-193. Association forComputing Machinery, 1985.[Boo86]^Grady Booch. Object-oriented development. 
IEEE Transactions on Software Engineering, 12:211-221, Feb 1986.
[BST89] Henri E. Bal, Jennifer G. Steiner, and Andrew S. Tanenbaum. Programming languages for distributed computing systems. ACM Computing Surveys, 21:261-321, Sep 1989.
[Car85] Luca Cardelli. On understanding types, data abstraction, and polymorphism. Computing Surveys, 17:471-522, Dec 1985.
[CC91] Roger S. Chin and Samuel T. Chanson. Distributed object-based programming systems. ACM Computing Surveys, 23:91-124, Mar 1991.
[CD88] George F. Coulouris and Jean Dollimore. Distributed Systems: Concepts and Design. Addison-Wesley Publishing Company, 1988.
[CDEG90] George A. Champine and Daniel E. Geer, Jr. Project Athena as a distributed computer system. IEEE Computer, pages 40-51, Sep 1990.
[Cea90] L. Csaba et al., editors. Computer Networking. North-Holland, 1990.
[CG90] M. Cosnard and C. Girault, editors. Decentralized Systems. North-Holland, 1990.
[Che85] David R. Cheriton. Distributed process groups in the V kernel. ACM Transactions on Computer Systems, 3:77-107, May 1985.
[Che88] David R. Cheriton. The V distributed system. Communications of the ACM, 31:314, Mar 1988.
[Cla89] David D. Clark. An analysis of TCP processing overhead. IEEE Communications Magazine, pages 23-29, Jun 1989.
[CM89] David Cheriton and Timothy P. Mann. Decentralizing a global naming service for improved performance and fault tolerance. ACM Transactions on Computer Systems, 7:147-183, May 1989.
[CW89] David R. Cheriton and Carey L. Williamson. VMTP as the transport layer for high-performance distributed systems. IEEE Communications Magazine, pages 37-44, Jun 1989.
[DCea90] P. Dasgupta, R. C. Chen, et al. The design and implementation of the Clouds distributed operating system. UNIX Computing Systems, 3:11, 1990.
[DO91] Fred Douglis and John Ousterhout. Transparent process migration: Design alternatives and the Sprite implementation. Software—Practice and Experience, 21(8):757-785, Aug 1991.
[DPH91] Peter Druschel, Larry L. Peterson, and Norman C.
Hutchinson. Beyond micro-kernel design: Decoupling modularity and protection in Lipto. TR 91-6A, The University of Arizona, Dec 1991.
[Esk90] M. Rasit Eskicioglu. Process migration in distributed systems: A comparative survey. TR 90-3, University of Alberta, Jan 1990.
[Fid91] Colin Fidge. Logical time in distributed computing systems. IEEE Computer, 24:28-33, Aug 1991.
[Fow85] Robert Joseph Fowler. Decentralized object finding using forwarding addresses. TR 85-12-1, University of Washington, Dec 1985.
[Gos91] A. Goscinski. Distributed Operating Systems: The Logical Design. Addison-Wesley Publishing Company, 1991.
[GS90] Bezalel Gavish and Olivia R. Liu Sheng. Dynamic file migration in distributed computer systems. Communications of the ACM, 33:2, 1990.
[HMPT89] Norman C. Hutchinson, Shivakant Mishra, Larry L. Peterson, and Vicraj T. Thomas. Tools for implementing network protocols. Software—Practice and Experience, 19(9):895-916, Sep 1989.
[Hoa85] C. A. R. Hoare. Communicating Sequential Processes. Prentice Hall, 1985.
[HP91] Norman C. Hutchinson and Larry L. Peterson. The x-kernel: An architecture for implementing network protocols. IEEE Transactions on Software Engineering, 17:64-76, Jan 1991.
[Hut87] Norman C. Hutchinson. Emerald: An object-based language for distributed programming. TR 87-01-01, University of Washington, Jan 1987.
[Jai91] Raj Jain. The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modeling. John Wiley & Sons, Inc., 1991.
[JLHB88] Eric Jul, Henry Levy, Norman Hutchinson, and Andrew Black. Fine-grained mobility in the Emerald system. ACM Transactions on Computer Systems, 6:109-133, Feb 1988.
[Jul88] Eric Jul. Object mobility in a distributed object-oriented system. TR 88-12-06, University of Washington, Dec 1988.
[KP91] Phil Karn and Craig Partridge. Improving round-trip time estimates in reliable transport protocols. ACM Transactions on Computer Systems, 9:364-373, Nov 1991.
[KW89] Stephen G.
Kochan and Patrick H. Wood, editors. UNIX Networking. Hayden Books, 1989.
[Lis88] Barbara Liskov. Distributed programming in Argus. Communications of the ACM, 31:300, Mar 1988.
[Mey88] Bertrand Meyer. Object-oriented Software Construction. Prentice Hall International, 1988.
[MvRT+88] Sape J. Mullender, Guido van Rossum, Andrew S. Tanenbaum, Robbert van Renesse, and Hans van Staveren. Amoeba: A distributed operating system for the 1990s. Communications of the ACM, 31, Mar 1988.
[NBL+88] David Notkin, Andrew P. Black, Edward D. Lazowska, Henry M. Levy, Jan Sanislo, and John Zahorjan. Interconnecting heterogeneous computer systems. Communications of the ACM, 31:258, Mar 1988.
[Nic87] David A. Nichols. Using idle workstations in a shared computing environment. In Proceedings of the 11th ACM Symposium on Operating System Principles, Nov 1987.
[NWO88] Michael N. Nelson, Brent B. Welch, and John K. Ousterhout. Caching in the Sprite network file system. ACM Transactions on Computer Systems, 6:134-154, Feb 1988.
[OD83] Derek C. Oppen and Yogen K. Dalal. The Clearinghouse: A decentralized agent for locating named objects in a distributed environment. ACM Transactions on Office Information Systems, 1:230-253, Jul 1983.
[PHOR90] Larry Peterson, Norman Hutchinson, Sean O'Malley, and Herman Rao. The x-kernel: A platform for accessing internet resources. IEEE Computer, 23:23-33, May 1990.
[PM83] M. L. Powell and B. P. Miller. Process migration in DEMOS/MP. In Proceedings of the Ninth ACM Symposium on Operating Systems Principles, 1983.
[PW85] Gerald Popek and Bruce J. Walker, editors. The LOCUS Distributed System Architecture. The MIT Press, 1985.
[RC86] K. Ravindran and Samuel T. Chanson. Process alias-based structuring techniques for distributed computing systems. In IEEE Sixth International Conference on Distributed Computing Systems. IEEE, 1986.
[RTL+91] Rajendra K. Raj, Ewan Tempero, Henry M. Levy, Andrew P. Black, Norman C. Hutchinson, and Eric Jul.
Emerald: A general-purpose programming language. Software—Practice and Experience, 21(1):91-118, Jan 1991.
[Rus89] D. Russell. The Principles of Computer Networking. Cambridge University Press, 1989.
[Smi88] Jonathan M. Smith. A survey of process migration mechanisms. ACM Operating Systems Review, 22(3):28-40, Jul 1988.
[Sta84] William Stallings. Local networks. Computing Surveys, 16:3-41, Mar 1984.
[Str86] Rob Strom. A comparison of the object-oriented and process paradigms. SIGPLAN Notices, 21:88-97, Oct 1986.
[Str88] Bjarne Stroustrup. What is object-oriented programming? IEEE Software, pages 10-20, May 1988.
[SW87] Sol M. Shatz and Jia-Ping Wang. Introduction to distributed-software engineering. IEEE Computer, pages 23-31, Oct 1987.
[Tan89] Andrew S. Tanenbaum. Computer Networks. Prentice Hall, 1989.
[Tic85] Walter F. Tichy. RCS—a system for version control. Software—Practice and Experience, 15(7):637-654, Jul 1985.
[TLC85] Marvin M. Theimer, Keith A. Lantz, and David R. Cheriton. Preemptable remote execution facilities for the V-System. In Proceedings of the Tenth ACM Symposium on Operating System Principles. ACM, 1985.
[TR85] Andrew S. Tanenbaum and Robbert van Renesse. Distributed operating systems. Computing Surveys, 17:419-468, Dec 1985.
[Wat90] David A. Watt. Programming Language Concepts and Paradigms. Prentice Hall International, 1990.
[WPM90] Anthony I. Wasserman, Peter A. Pircher, and Robert J. Muller. The object-oriented structured design notation for software design representation. IEEE Computer, pages 50-63, Mar 1990.
[Zay87] Edward R. Zayas. Attacking the process migration bottleneck. In Proceedings of the 11th ACM Symposium on Operating System Principles. ACM, Dec 1987.