Performance Improvements for bFS. Carton, Ross W., 2001.

Full Text

Performance Improvements for bFS

by

Ross W. Carton
B.Comp.Sc., Concordia University, 1997

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in THE FACULTY OF GRADUATE STUDIES (Department of Computer Science)

We accept this thesis as conforming to the required standard

The University of British Columbia
October 2001
© Ross W. Carton, 2001

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Computer Science
The University of British Columbia
Vancouver, Canada

Abstract

The traditional file system was designed on the premise that it would be deployed within a trusted and restricted administrative domain, i.e. the domain was restricted to a set of users managed by a trusted system administrator. The Internet is neither trusted nor restricted, so a file system to be deployed on the Internet challenges the traditional file system model. bFS [8] is a file system model and implementation that addresses the challenges raised by deploying a file system on the Internet. These challenges are very real, as Internet-based storage providers become more prevalent. The original bFS implementation was a completely functional prototype; however, it suffered from very poor performance. This thesis describes an improved implementation, and shows that its performance now compares favourably to plain unencrypted NFS.

Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgements
Dedication
1 Introduction
2 Design
  2.1 Certificates
  2.2 Client
  2.3 Agent
    2.3.1 MetaData
    2.3.2 Access Control and Sharing
    2.3.3 bFS Protocol
    2.3.4 Encryption
  2.4 Storage Provider
3 Implementation
  3.1 Improved Implementation
    3.1.1 Client Implementation
    3.1.2 Agent Implementation
  3.2 Implementation Differences
    3.2.1 Co-Location of Client and Agent
    3.2.2 Attribute Cache
    3.2.3 File Modification Time
    3.2.4 JVM
    3.2.5 Storage Provider
    3.2.6 Kernel NFS Client
    3.2.7 Bug Fixes
4 Performance
  4.1 Andrew Benchmark
  4.2 Micro Benchmarks
5 Summary
Bibliography

List of Tables

4.1 Andrew Benchmark: bFS vs. NFS in Seconds
4.2 Andrew Benchmark: bFS vs. NFS in Seconds (Original Implementation)
4.3 bFS Micro-Benchmarks in Milliseconds

List of Figures

2.1 bFS Architectural Overview [8]
2.2 Sharing Using Public Read-Only Access [8]

Acknowledgements

I must begin by thanking Jacob (Kobi) Ofir. bFS was conceived by Kobi, and his elegant design and readable code made the original implementation very easy to work with. He was also quick to respond and very helpful whenever I had questions.

Mike Feeley has been my advisor and supervisor since I started as a student in the department. He challenged me to challenge existing systems. He was amazingly patient with me, which allowed me to gain further understanding of the systems that I studied, and indeed, of myself. He also pushed me at the right times, which ensured that I was able to finish.

I must also thank Norm Hutchinson, the second reader of my thesis, and Joyce Poon, the Graduate Program Coordinator. I could not have finished without their help!

Before I became a student in the department, I was a member of technical staff. Without the support of Raymond Ng, Jack Snoeyink, and Bob Woodham, I may never have entered into the M.Sc. program.

In a previous career, Mike Arczynski and Alar Poldma helped me immeasurably. They allowed me to explore my germinating interest in computer science, which ultimately led me to my present position.

Finally, I'd like to thank the numerous friends that I have made during my time at UBC. Hopefully, you know who you are!

Ross W. Carton
The University of British Columbia
October 2001

Dedicated to Peter, Meredith, and Debbie.
Chapter 1
Introduction

The traditional file system was designed on the premise that it would be deployed within a trusted and restricted administrative domain. The domain was trusted, so communication between client and server need not be encrypted, and data could be stored in plaintext on the file server. Furthermore, the server was trusted to perform all authentication and access control tasks. The domain was restricted to a set of users managed by the server, and these users could only access the server from within the domain.

The Internet is neither trusted nor restricted, so a file system to be deployed on the Internet challenges the traditional file system model. Since the Internet is not trusted, all communication between client and server must be encrypted. The server may not be trusted, so data should be stored in ciphertext rather than plaintext. Since the Internet is not restricted, global file sharing is possible, and thus server-side authentication and access control could become unmanageable.

bFS [8] is a file system model and implementation that addresses the challenges raised by deploying a file system on the Internet. These challenges are very real, as Internet-based storage providers become more prevalent. Rather than trusting the storage provider to perform authentication and access control tasks, a trusted agent is vested with these duties. The agent also performs encryption and decryption so that files may be stored in ciphertext on the storage provider.

The original bFS implementation suffered from very poor performance. This thesis presents an improved implementation which addresses the performance problems.

The thesis is arranged as follows. The next chapter discusses the design of bFS, which remains largely unchanged from the original implementation. The following chapter discusses the improved implementation and describes the differences between the improved and original implementations. The next chapter examines the performance of the improved bFS implementation and compares it with the original performance. The thesis concludes with a brief summary, including further possible performance improvements.

Chapter 2
Design

Unlike traditional file systems, the design of bFS was motivated by the premise that it would not be deployed within a trusted or restricted administrative domain. Instead, it is assumed that an untrusted storage provider may be used, and that global file sharing should be possible. Additionally, bFS should be transparent to the storage provider, so that it may run without requiring modifications to the storage provider's file system. Indeed, storage providers need not even be aware that bFS is in use.

bFS is implemented as a set of client-side agents that act as intermediaries in the client-server model. Clients communicate with their agents using the bFS protocol, and agents communicate with the storage provider (server) using the protocol defined by the server. bFS certificates are used to authenticate users and agents. Figure 2.1 illustrates an architectural overview of bFS.

This chapter begins with a discussion of bFS certificates, and continues with sections describing each of the three functional components of bFS: namely, the client, the agent, and the storage provider.

Figure 2.1: bFS Architectural Overview [8]

2.1 Certificates

The bFS certificate is used to authenticate users and agents. To facilitate decentralization, certificates may be generated by anyone and shared in any manner. The certificate includes the following information:

• the location and protocol of the storage provider
• the user's storage provider authentication information
• the user's public key, in X.509 [11] format, PGP [15] format, or any other agreed-upon format
• the location of the user's agents
• bootstrap information, such as encryption and file naming techniques

A certificate is read by the client to determine the location of the agent, then passed to the agent. Since a certificate contains the user's public key, the agent can authenticate the user using standard public key authentication techniques. The agent can then determine the storage provider location and protocol, and authenticate with the storage provider.
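The certificate format is not prescribed beyond the fields listed above. Purely as an illustration, the following Java sketch groups those fields into a single class; the class name, field names, and types (BfsCertificate, storageProviderCredentials, and so on) are assumptions made here and do not come from the bFS source.

    // Hypothetical sketch of the information carried by a bFS certificate.
    // Field names and types are illustrative assumptions, not bFS code.
    import java.security.PublicKey;

    public class BfsCertificate {
        // Location and protocol of the storage provider.
        String storageProviderLocation;
        String storageProviderProtocol;

        // Authentication information for the storage provider account.
        String storageProviderCredentials;

        // The user's public key (X.509, PGP, or another agreed-upon format).
        PublicKey userPublicKey;

        // Locations of the user's agents.
        String[] agentLocations;

        // Bootstrap information, such as encryption and file naming techniques.
        String encryptionScheme;
        String fileNamingScheme;
    }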
2.2 Client

As with other distributed file systems, users access bFS through a secure and trusted client layer. In a UNIX system, this may be at kernel level or at user level. In a Microsoft Windows system, for example, a new file system may be introduced using the Installable File Systems Kit [4].

Once bFS has been mounted on the local file system, the client provides transparent access through the agent to the storage provider. The client communicates with the agent using the bFS protocol, which is discussed in the following section.

2.3 Agent

Most bFS functionality is contained in the agent. The agent has three main responsibilities:

• assume access control and authentication duties, which supports global file sharing
• act as an intermediary between the client and storage provider
• perform encryption and decryption, which allows the user to use untrusted storage providers

To meet these responsibilities, the agent must maintain metadata and access control lists, support the necessary storage provider protocols, expose the bFS protocol to clients, and support the required encryption and naming schemes. The following subsections elaborate on these concepts.

2.3.1 MetaData

bFS must maintain its own set of metadata. The metadata is stored at the storage provider along with the data that it represents. The following metadata is maintained by bFS:

• User database: holds a mapping of User ID (UID) to certificate for all sharing partners.
• Directory map: a mapping of directory Object ID (OID) to remote name and parent OID.
• File system key: the symmetric key used to encrypt the user database and the directory map.
• File metadata: contains the file's key, real and remote names, OID, and ACL.
• Directory metadata: same as file metadata, plus a map of OID to file metadata.
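To make the relationships among these metadata items concrete, the sketch below models them as plain Java classes. All class and field names (FileMetadata, DirectoryMetadata, FileSystemMetadata, and so on) are hypothetical; the actual bFS classes are not reproduced in this thesis.

    // Hypothetical sketch of the bFS metadata described above.
    // Class and field names are illustrative assumptions, not the bFS source.
    import java.util.Hashtable;
    import javax.crypto.SecretKey;

    class FileMetadata {
        SecretKey fileKey;       // per-object symmetric key
        String realName;         // name seen by the user
        String remoteName;       // name stored at the storage provider
        long oid;                // Object ID
        Hashtable acl;           // maps UID -> access rights
    }

    class DirectoryMetadata extends FileMetadata {
        Hashtable entries;       // maps child OID -> FileMetadata
    }

    class FileSystemMetadata {
        SecretKey fileSystemKey; // encrypts the user database and directory map
        Hashtable userDatabase;  // maps UID -> certificate of each sharing partner
        Hashtable directoryMap;  // maps directory OID -> remote name and parent OID
    }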
2.3.2 Access Control and Sharing

bFS uses access control lists (ACLs) to manage file sharing permissions. bFS sharing policies are borrowed from Multics [9]; that is, a new entry inherits its parent's ACL. Once the new entry has been created, changes to its ACL or that of its parent do not affect one another.

In the ACL, bFS maintains a list of the UIDs of every user who currently holds the key to the object represented by the ACL. When permission to an object is revoked from a user, the object's contents must be encrypted with a new key if the user ever held the old key.

bFS file sharing is optimized when the storage provider supports public read-only access. Public read-only access allows the client to bypass the remote user's agent by communicating directly with that user's storage provider. Otherwise, file sharing would suffer from double-encryption, as all data would be separately encrypted between the agent and storage provider, and between the client and agent. Figure 2.2 illustrates how Alice can access Bob's storage provider using public read-only access.

Figure 2.2: Sharing Using Public Read-Only Access [8]

The decentralized nature of the key management system promotes a mutual sharing relationship among sharing parties. Bob and Alice must have copies of each other's certificates in order to facilitate file sharing. If Bob's certificate is acquired by a user unknown to him, this user would not be able to access Bob's files.

2.3.3 bFS Protocol

The bFS protocol contains three sets of operations, which may operate on a file name, file handle, or UID:

• File access operations: LOGON, GUEST, LOOKUP, READ, READRAW, WRITE, WRITERAW, CREATE, MKDIR, REMOVE, REMDIR, RENAME, GETATTR, SETSIZE, READDIR.
• Cryptographic operations: REKEY, FSREKEY, GETREMOTEINFO.
• Sharing operations: GETUSERS, GETUSER, ADDUSER, REMOVEUSER, UPDATEUSER, SETRIGHTS, GETUSERRIGHTS, GETALLRIGHTS, RESETACL.

Communication between a client and agent may be over a trusted or untrusted network. If the network is not trusted, the channel should be encrypted, which incurs double-encryption as discussed in section 2.3.2. The channel encryption parameters are negotiated during the client-agent handshake.

2.3.4 Encryption

bFS uses a symmetric encryption scheme to encrypt files, since asymmetric encryption is too expensive. Symmetric encryption can use stream ciphers or block ciphers. Stream ciphers process data one bit at a time, and each operation depends on all the preceding bits; thus encryption time varies with the length of a file. Since uniform encryption time is more desirable, a block cipher is used for bFS encryption. bFS uses a unique key for each object in the file system, to avoid the overhead involved in mapping key groups to ACLs.
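As a rough illustration of per-object block-cipher encryption in Java, the snippet below generates a fresh symmetric key for an object and encrypts a buffer with it through the standard javax.crypto API. Blowfish with a 128-bit key mirrors the cipher used in the experiments of Chapter 4, but the class name, the structure, and the fixed all-zero IV are assumptions made only to keep this sketch short; it is not the bFS encryption code.

    // Minimal sketch of per-object symmetric encryption, assuming Blowfish in
    // CBC mode via the standard javax.crypto API. Illustrative only.
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.IvParameterSpec;

    public class ObjectCrypto {
        public static void main(String[] args) throws Exception {
            // Each bFS object gets its own key, so revoking one user's access
            // never forces re-keying of unrelated objects.
            KeyGenerator kg = KeyGenerator.getInstance("Blowfish");
            kg.init(128);                      // 128-bit key, as in Chapter 4
            SecretKey objectKey = kg.generateKey();

            byte[] plaintext = "example file block".getBytes("UTF-8");
            byte[] iv = new byte[8];           // Blowfish has a 64-bit block/IV;
                                               // a zero IV only to keep the sketch short

            Cipher cipher = Cipher.getInstance("Blowfish/CBC/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, objectKey, new IvParameterSpec(iv));
            byte[] ciphertext = cipher.doFinal(plaintext);

            cipher.init(Cipher.DECRYPT_MODE, objectKey, new IvParameterSpec(iv));
            byte[] recovered = cipher.doFinal(ciphertext);
            System.out.println(new String(recovered, "UTF-8"));
        }
    }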
2.4 Storage Provider

The storage provider is the functional component of bFS which provides remote file storage facilities. The storage provider defines the communication protocol, and the agent is responsible for detecting and supporting this protocol. The protocol may be NFS [1, 12, 13], CIFS [2, 10], a recent network storage provider protocol (X:drive [14], netdrive [7], or driveway [3]), or any other protocol with a published specification.

Since only encrypted data is stored at the storage provider, the administrator need not be trusted to respect the privacy of a user's data. Since the agent performs all access control and user authentication, and is transparent to the storage provider, no software installation or modification is necessary at the storage provider's end.

Chapter 3
Implementation

This chapter is divided into two sections. The first section discusses the improved bFS implementation. The second section describes the differences between the improved implementation and the original implementation, and how these differences improve the performance of bFS.

3.1 Improved Implementation

Recall from Chapter 2 that the storage provider, client, and agent are the three functional components of the bFS file system. In the improved implementation, the client and agent are co-located and run on Linux. The implementation supports only one type of storage provider: a standard NFS server, which requires no further explanation.

3.1.1 Client Implementation

Client-side access to bFS is provided by a user-level NFS server running on Linux. This server receives and interprets incoming NFS requests, translates them to the bFS protocol, and passes them on to the bFS agent. The server communicates with the kernel's NFS client and with the bFS agent over TCP/IP. It was originally developed by implementing the server stubs generated by the RPC compiler. The original implementation was ported to Linux and modified as described in section 3.2.

When the bFS client starts, it does two things. First, it reads the bFS certificate passed on the command line and opens a connection to the agent specified therein. Next, it registers itself with the kernel as an NFS RPC service. The socket associated with this service is used as an argument in the mount system call, as the bFS client mounts itself at the /bfs/home and /bfs/friends directories, where the main user's and his sharing parties' files are accessible. The /bfs/friends directory contains a subdirectory for each of the user's sharing parties.

When the kernel NFS client detects a file access in one of the mounted subdirectories, it sends the RPC request to the bFS client. The bFS client contains a simple file attribute cache, so many of these requests may be satisfied without contacting the agent. For a cache miss, or a request which requires more than file attributes, the agent is contacted. The response from the agent is passed back to the kernel NFS client, and all file attributes are cached.

3.1.2 Agent Implementation

The agent is implemented in Java, for portability and rapid development purposes. The Java implementation is the Sun Microsystems Java 2 SDK, Standard Edition, Version 1.3.1 for Linux. This release of Java uses the Java HotSpot [5] server VM in lieu of a JIT compiler for improved performance.

The remote storage facade and the agent core are the two main functional components of the bFS agent. The remote storage facade is an abstract Java class that must be implemented for each network file system protocol that is supported by the agent. The current implementation provides a WebNFS [6] facade. WebNFS supports URL-style filenames, so remote storage can be an NFS server, FTP server, or local file system. As discussed previously, an NFS file server is used in this implementation. To support a new network file system protocol, only the facade must be modified.

Facades are stackable, achieving a more modular design and richer functionality. In this implementation the WebNFS facade sits below a caching facade. The caching facade uses the WebNFS facade when it cannot fulfill a request. Different caching facades may be plugged in to support multiple caching strategies without requiring modification to the file system facade backend.
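The facade stacking described above can be pictured with a short Java sketch: an abstract storage facade, a WebNFS-backed implementation, and a caching facade that falls through to its backend on a miss. The interface is reduced to a single read operation, and every name here (RemoteStorageFacade, WebNfsFacade, CachingFacade) is hypothetical; the real bFS facade interface is richer than this.

    // Hypothetical sketch of stackable storage facades, assuming a single
    // read operation for brevity. Not the actual bFS facade interface.
    import java.io.IOException;
    import java.util.Hashtable;

    abstract class RemoteStorageFacade {
        // Read the whole remote object identified by 'remoteName'.
        abstract byte[] read(String remoteName) throws IOException;
    }

    class WebNfsFacade extends RemoteStorageFacade {
        byte[] read(String remoteName) throws IOException {
            // A real implementation would fetch the object through the
            // WebNFS client library; omitted in this sketch.
            throw new IOException("not implemented in this sketch");
        }
    }

    class CachingFacade extends RemoteStorageFacade {
        private final RemoteStorageFacade backend;   // e.g. a WebNfsFacade
        private final Hashtable cache = new Hashtable();

        CachingFacade(RemoteStorageFacade backend) { this.backend = backend; }

        byte[] read(String remoteName) throws IOException {
            byte[] data = (byte[]) cache.get(remoteName);
            if (data == null) {                      // miss: fall through to backend
                data = backend.read(remoteName);
                cache.put(remoteName, data);
            }
            return data;
        }
    }

Stacking is then just construction order, for example new CachingFacade(new WebNfsFacade()); swapping in a different caching policy touches only the outer facade and leaves the file system facade backend untouched.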
The agent core communicates with the chosen facade to perform all I/O on the storage provider. It handles authentication, manages keys and metadata, exposes the bFS protocol to the bFS client, and determines what needs to be read or written. The functionality of the agent core was discussed in section 2.3.

The sharing privileges supported by the agent are read, write, and administer. The owner holds non-revocable administration privileges, and only those with administrative privileges may modify ACLs or update the user database.

3.2 Implementation Differences

As discussed in Chapter 1, the original bFS implementation suffered from very poor performance. This section lists the differences between the original implementation (OI) and the improved implementation (II), and other sources of performance improvements.

3.2.1 Co-Location of Client and Agent

In the II, the client and agent are co-located and run on a Linux system. In the OI, the client runs on FreeBSD and the agent runs on Windows 2000, which incurs an additional and unnecessary network hop. The Intel Pentium's 'read time stamp counter' (rdtsc) instruction revealed that network latency accounted for a significant amount of waiting time for the client during a synchronous operation. To reduce this latency, the client and agent would have to be co-located. Since a free and very fast JVM is available for Linux, it was the chosen platform. The agent was implemented in Java, so it was relatively easy to port. The client proved to be more problematic, since it was not POSIX-compliant.

3.2.2 Attribute Cache

In the OI, the bFS client did not cache file attributes. Every time a file is accessed, the kernel NFS client issues a GETATTR, which requires the bFS client to communicate with the agent. The II contains a simple attribute cache in the bFS client that greatly reduces the number of client/agent requests.

When the client was instrumented to print out the name of each operation prior to processing, the additional GETATTRs were revealed. Although the attribute cache is relatively simple and was fairly easy to implement, it contributed significantly to the performance improvements.
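For illustration only, an attribute cache of this kind can be as small as a table keyed by file handle with a short time-to-live. The Java sketch below shows that shape under assumed names and an assumed three-second lifetime; the real bFS client is a user-level NFS server and does not use this code.

    // Minimal sketch of a file attribute cache with a fixed time-to-live.
    // Types, names, and the 3-second TTL are assumptions for illustration.
    import java.util.Hashtable;

    public class AttrCache {
        private static final long TTL_MS = 3000;   // assumed lifetime
        private final Hashtable entries = new Hashtable();

        private static class Entry {
            Object attributes;      // stand-in for an NFS file attribute structure
            long insertedAt;
        }

        public synchronized Object lookup(Object fileHandle) {
            Entry e = (Entry) entries.get(fileHandle);
            if (e == null) return null;                         // miss
            if (System.currentTimeMillis() - e.insertedAt > TTL_MS) {
                entries.remove(fileHandle);                     // expired
                return null;
            }
            return e.attributes;                                // hit: no agent call
        }

        public synchronized void insert(Object fileHandle, Object attributes) {
            Entry e = new Entry();
            e.attributes = attributes;
            e.insertedAt = System.currentTimeMillis();
            entries.put(fileHandle, e);
        }
    }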
3.2.3 File Modification Time

The OI contained a bug in the agent code that caused the agent to change the file modification time after a file had been created. This caused unnecessary traffic between the client and agent as they were forced to synchronize.

The unnecessary traffic was revealed when the client was instrumented to print out the name of each operation; however, the cause of this traffic was very difficult to isolate. By reading and adding print statements to the kernel NFS client code, it was determined that the problem was due to unsynchronized file modification times. The code for both client and agent was carefully inspected until the bug in the agent was revealed. This bug only manifested itself during file creation, but it imposed a heavy penalty at that time.

3.2.4 JVM

The OI used the Microsoft VM to run the agent. Performance deteriorated rapidly as the file system grew, and preliminary investigations [8] suggested that the Microsoft VM and its garbage collector were responsible. The II runs the agent on the Sun Microsystems Java HotSpot server VM, which is optimized for server-based applications and includes enhanced garbage collection and thread handling.

It is difficult to quantify how much improvement may be attributed to the JVM itself and how much is due to the switch in operating systems from Windows to Linux. The performance deterioration phenomenon was alleviated by one of these changes.

3.2.5 Storage Provider

The OI used a Samba server as a storage provider. The Samba server exported a file system which was itself imported from an NFS server, so the actual storage provider was located two network hops away. The II uses an NFS server as a storage provider, located just one network hop away. This problem only existed during synchronous operations and cache misses, but the performance penalty during these times was heavy. To correct this problem, the Samba facade was replaced by a WebNFS facade. A freely available WebNFS library was used, which greatly reduced the implementation time.

3.2.6 Kernel NFS Client

Casual observation of the requests between the kernel NFS client and the bFS client suggests that the Linux NFS client implementation performs more aggressive file caching than the FreeBSD NFS client implementation. This has not been proven, but is noted here for completeness.

3.2.7 Bug Fixes

Miscellaneous bugs were fixed which marginally improved the performance of bFS.

Chapter 4
Performance

The goal for the performance of bFS is that it compare favourably to plain unencrypted NFS. It is expected that bFS will be slightly slower than NFS because of encryption costs and metadata management, especially since the encryption is performed in Java and contains no native calls. However, it is hoped that these impacts will be minimal.

The bFS test environment consisted of two machines:

• Linux: Pentium III 550MHz with 512MB RAM running Red Hat Linux 7.1.
• Solaris: Sun Ultra 10 300MHz with 192MB RAM running Solaris 2.6.

4.1 Andrew Benchmark

The Modified Andrew Benchmark (MAB) is a five-phase file system stress test. The benchmark operates as follows:

• Phase I: Create several subdirectories.
• Phase II: Copy source code files into the subdirectories created in Phase I.
• Phase III: stat every file contained in the subdirectories.
• Phase IV: grep every file contained in the subdirectories.
• Phase V: Compile source files within the subdirectories.

Table 4.1 compares the performance of bFS and NFS, listing the elapsed time of each phase of the MAB in seconds. The benchmark was run five times on each file system and the mean is reported. During the NFS test, the kernel NFS client on Linux communicates directly with the NFS server on Solaris. During the bFS test, the kernel NFS client, bFS client, and bFS agent all run on Linux; the NFS client communicates with the bFS client, which communicates with the bFS agent, which communicates with the NFS server on Solaris. The bFS block size used was 4KB, and data and names were encrypted using 128-bit Blowfish. The NFS read and write sizes were set to 8KB.

    Phase     bFS     NFS
    I         0.9     0.3
    II        1.7     0.7
    III       1.0     0.8
    IV        1.1     0.9
    V         8.8     6.5
    Total    13.5     9.2

    Table 4.1: Andrew Benchmark: bFS vs. NFS in Seconds

bFS meets its performance goals. The total running time of all phases of the MAB is 13.5 seconds on bFS and 9.2 seconds on NFS, a factor of 1.47. The individual phases of the MAB range from a factor of 1.22 (phase IV) to a factor of 3.0 (phase I).

Table 4.2 shows how the performance of the original bFS implementation compared to NFS. Since the hardware differed between the original implementation and the improved implementation, the NFS performance numbers are different. The total running time of all phases of the MAB is 77 seconds on bFS and 4.4 seconds on NFS, a factor of 17.5, compared to a factor of 1.47 in the improved implementation.

    Phase     bFS     NFS
    I           1     0.4
    II         14     0.8
    III         5     0.4
    IV          6     1.6
    V          77     4.4

    Table 4.2: Andrew Benchmark: bFS vs. NFS in Seconds (Original Implementation)

4.2 Micro Benchmarks

Micro benchmarks give an indication of the cost of the individual file system operations performed during the MAB. Using the Intel Pentium's 'read time stamp counter' (rdtsc) instruction, a time stamp was recorded by the bFS client at four points in time for an individual NFS request:

• t0: Receipt of the NFS RPC request from the kernel NFS client.
• t1: After translation of the NFS request to a bFS request and before forwarding the bFS request to the agent.
• t2: After the response from the agent.
• t3: After translation of the bFS response to an NFS response and before sending the response to the kernel NFS client.

t3-t0 represents the total time to service a bFS request, and t2-t1 represents the time spent in the agent.
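The actual measurement is taken inside the bFS client with the rdtsc instruction. Purely to illustrate the four-point breakdown, the Java sketch below records analogous timestamps around a request, substituting System.currentTimeMillis() for rdtsc; the handleRequest, translateToBfs, callAgent, and translateToNfs names are placeholders assumed for this sketch only.

    // Illustrative sketch of the four-timestamp breakdown used for the
    // micro-benchmarks. The real measurement uses rdtsc in the bFS client;
    // the method names here are placeholders, not bFS code.
    public class RequestTimer {
        long totalTime;   // t3 - t0: total time to service the request
        long agentTime;   // t2 - t1: time spent in the agent

        void handleRequest(Object nfsRequest) {
            long t0 = System.currentTimeMillis();  // NFS RPC request received

            Object bfsRequest = translateToBfs(nfsRequest);
            long t1 = System.currentTimeMillis();  // about to forward to agent

            Object bfsResponse = callAgent(bfsRequest);
            long t2 = System.currentTimeMillis();  // response received from agent

            Object nfsResponse = translateToNfs(bfsResponse);
            long t3 = System.currentTimeMillis();  // about to reply to kernel NFS client

            totalTime = t3 - t0;
            agentTime = t2 - t1;
            // Client-side cost is (t3 - t0) - (t2 - t1), as reported in Table 4.3.
        }

        // Placeholders standing in for the real translation and agent calls.
        Object translateToBfs(Object nfsRequest)  { return nfsRequest; }
        Object callAgent(Object bfsRequest)       { return bfsRequest; }
        Object translateToNfs(Object bfsResponse) { return bfsResponse; }
    }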
A simple test was used to determine the cost of the file system operations. A directory was created, a 16KB file was copied into it, and then the file was read. The bFS client was stopped and re-started, and the file re-read, to determine the cost of a Read during a cache miss; this ensured that the cache would be flushed.

Table 4.3 summarizes the end-to-end performance of the chosen file system operations. As with the MAB, the test was run five times and the mean is reported. The first column of the table describes the file system operation. The second column is the total time of the operation in milliseconds, i.e. t3-t0. The third column is the time spent in the agent, i.e. t2-t1. The last column is the time spent in the client, i.e. (t3-t0) - (t2-t1).

    Operation                Total time    Agent    Client
    Read 16K (cache miss)         149.9    149.6       0.3
    Read 16K (cache hit)            0.0      0.0       0.0
    Write 16K                       7.9      7.5       0.4
    Create                         23.8     23.6       0.2
    MkDir                          46.1     45.8       0.3
    Lookup                          0.8      0.6       0.2

    Table 4.3: bFS Micro-Benchmarks in Milliseconds

The Create and MkDir operations are relatively expensive because they are synchronous to the storage provider and require metadata updates. The Read cache miss requires synchronous access by the agent to the storage provider and data decryption in the agent, which is a costly operation. The Read cache hit does not invoke any bFS operation, because it is the kernel NFS client cache that satisfies the request, not the bFS agent cache. The Write is relatively inexpensive, because it is asynchronous.
Chapter 5
Summary

The Internet is challenging the traditional model of file system access by enabling global file access, cross-domain sharing, and the use of untrusted Internet-based storage services. Some of these interests have been addressed in the past, but never together. File systems exist that store encrypted data on the server, thus allowing the user to use untrusted storage providers. Other file systems provide cross-domain sharing, but must use a centralized user authentication system.

bFS is a unique file system that promotes the interests of server trust and cross-domain sharing by allowing the user to place his trust in a software agent. This agent permits the user to use untrusted storage providers, and it affords the user complete control over when, how, and with whom his files are shared. Moreover, the use of bFS is transparent to the storage provider.

The original bFS implementation was a completely functional prototype; however, it suffered from very poor performance. This thesis describes an improved implementation, and shows that its performance now compares favourably to plain unencrypted NFS.

The source code to bFS will be made publicly available under an open-source license. Since the agent is implemented in Java, it is highly portable and will allow anyone to get flexible, protected sharing from Internet storage.

Bibliography

[1] B. Callaghan, B. Pawlowski, and P. Staubach. NFS version 3 protocol specification. RFC 1813, Network Working Group, June 1995.

[2] Microsoft Corporation. Microsoft Networks SMB File Sharing Protocol (Document Version 6.0p). Redmond, Washington.

[3] Driveway. http://www.driveway.com.

[4] Microsoft. Windows 2000 IFS Kit. http://www.microsoft.com/HWDEV/ntifskit/.

[5] Sun Microsystems. The Java HotSpot Virtual Machine. http://java.sun.com/products/hotspot/.

[6] Sun Microsystems. WebNFS. http://www.sun.com/software/webnfs/.

[7] NetDrive. http://www.netdrive.com.

[8] Jacob Ofir. Sharing and privacy using untrusted storage. Master's thesis, University of British Columbia, Vancouver, Canada, August 2000.

[9] Jerome H. Saltzer. Protection and the control of information sharing in Multics. Communications of the ACM, 17(7):388-402, July 1974.

[10] Samba. http://www.samba.org.

[11] Bruce Schneier. Applied Cryptography. John Wiley & Sons, Inc., second edition, 1996.

[12] R. Srinivasan. RPC: Remote procedure call protocol specification version 2. RFC 1831, Network Working Group, August 1995.

[13] R. Srinivasan. XDR: External data representation standard. RFC 1832, Network Working Group, August 1995.

[14] X:drive. http://www.xdrive.com.

[15] Phil Zimmermann. Pretty Good Privacy. http://www.pgp.com.
