UBC Theses and Dissertations

Fast secure virtualization for the ARM platform Ferstay, Daniel R. 2006

Fast Secure Virtualization for the ARM Platform

by

Daniel R. Ferstay
B.Sc., University of British Columbia, 2001

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in THE FACULTY OF GRADUATE STUDIES (Computer Science)

The University of British Columbia
March 2006

© Daniel R. Ferstay, 2006

Abstract

In recent years, powerful handheld computing devices such as personal digital assistants and mobile smart phones have become ubiquitous in home and office environments. Advancements in handheld device hardware have driven the development of the software that runs on them. As these devices become more powerful and increasingly connected, and the tasks performed by their operating systems more complex, there is a need for virtual machine monitors. Virtual machine monitors (VMMs) such as the Xen hypervisor developed at the University of Cambridge could bring an increased level of security to handheld devices by using resource isolation to protect hosted operating systems. VMMs could also be used to place constraints on the resource utilization of hosted operating systems and their applications.

Xen is closely tied to the x86 computer architecture and is optimized to work well on desktop personal computers. One of its design goals was to provide virtualization on x86 computers similar to that which was previously found only on IBM mainframes designed to support virtualization. We aim to provide this same style of virtualization on mobile devices, the majority of which are powered by the ARM computer architecture. The ARM architecture differs considerably from the x86 architecture. ARM was designed with high performance, small die size, low power consumption, and tight code density in mind. By migrating Xen to the ARM architecture, we are interested in gaining insight into the capacity of ARM-powered devices to support virtual machines.
Furthermore, we want to know which of the StrongARM's architectural features help or hinder the support of a Xen-style paravirtualization interface, and whether guest operating systems will be able to run without modification on top of a StrongARM-based hypervisor. In this thesis, we describe the design and implementation issues encountered while porting the Xen hypervisor to the StrongARM architecture. The implementation of a prototype has been carried out for the SA-110 StrongARM processor and is based on the hypervisor in Xen version 1.2.

Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgements
Dedication

1 Introduction
  1.1 Overview
  1.2 Motivation
  1.3 Methodology
  1.4 Synopsis

2 Related Work
  2.1 Virtual Machine Monitor Architectures
    2.1.1 Nano-kernel Systems
    2.1.2 Full Virtualization Systems
    2.1.3 Paravirtualization Systems
    2.1.4 Software Hosted Virtualization Systems
  2.2 Taxonomy of Virtualization Techniques

3 Discussion: Intel x86 vs. StrongARM
  3.1 Support for a Secure VMM
    3.1.1 x86
    3.1.2 StrongARM
  3.2 Paravirtualized x86
    3.2.1 CPU
    3.2.2 Memory Management
    3.2.3 Device I/O
  3.3 Paravirtualized StrongARM
    3.3.1 CPU
    3.3.2 Memory Management
    3.3.3 Device I/O
  3.4 Summary

4 Design and Implementation
  4.1 Differences in architecture specific code
    4.1.1 Hypervisor boot code
    4.1.2 Hypervisor direct mapped region
    4.1.3 Idle task
    4.1.4 Frame table
    4.1.5 Hypervisor address space layout
    4.1.6 Trap handling
    4.1.7 Real-time clock
    4.1.8 Scheduler
    4.1.9 Task management code
    4.1.10 Domain execution context definition
    4.1.11 Context switch code
    4.1.12 Domain building and loading code
    4.1.13 Hypervisor entry code
  4.2 Differences in paravirtualizing the architecture
    4.2.1 Protecting the hypervisor from guest OSes
    4.2.2 Protecting the guest OSes from applications
  4.3 Summary of Implemented Functionality

5 Performance
  5.1 Minimal OS
  5.2 Experimental Setup
    5.2.1 Micro-benchmarks
  5.3 Soft Evaluation of Memory Consumption
  5.4 Summary

6 Conclusion and Future Work
  6.1 Conclusion
  6.2 Future Work

Bibliography

List of Tables

2.1 A taxonomy of virtual machine monitor architectures
3.1 The ARM instruction set
5.1 Micro-benchmark results for the ARM hypervisor prototype
5.2 Comparison of hypervisor memory consumption on ARM and x86

List of Figures

2.1 The organization of a system that uses a hardware hosted virtual machine monitor to support a single domain
2.2 The organization of a system that uses a software hosted virtual machine monitor to support a single domain
4.1 Memory mappings for the direct mapped region of the hypervisor
4.2 Hypervisor's virtual memory layout in the upper 64MB region
4.3 Memory layout of the hypervisor's stack
4.4 Assembly language routines for managing the hypervisor's stack
4.5 Execution context definition for tasks managed by the hypervisor
4.6 Pseudo code for context switching in the hypervisor
4.7 Hypervisor protects itself from guest OSes using page level protection
4.8 Guest OSes have access to application memory as well as their own
4.9 Applications have access to their own memory
6.1 Modes of operation for processors supporting the ARM TrustZone hardware extensions
6.2 Organization of a system using the ARM TrustZone API

Acknowledgements

A tremendous thank you to Norm for making this work possible as my supervisor and mentor. Thank you to Mike for his time and input into this thesis. Thanks also to Andy Warfield for his thoughts on possible applications of the Xen hypervisor in a mobile environment, and to the members of the DSG lab for their thoughts and for listening when I had something on my mind.
Finally, thank you to my family for all of their love and support.

Daniel R. Ferstay
The University of British Columbia
March 2006

In loving memory of my grandfather.

Chapter 1

Introduction

In recent years, powerful handheld personal computing devices have become ubiquitous in home and office environments. Personal digital assistants and mobile smart phones are equipped with as much processing power as mainframes had fifteen years ago. They have also become increasingly connected, with support for wireless networking protocols such as Bluetooth and 802.11b built in. The advancements in handheld device hardware have driven the development of the software that runs on them. Users now enjoy applications such as streaming audio and video, audio and video capture, still image photography, and networked games. Such a variety of applications would not be possible without a powerful general purpose operating system to manage the hardware and provide a standardized set of services to applications. A variety of operating systems are currently being used by different hardware vendors.[1]

As handheld devices become more powerful and the tasks to be performed by their operating systems more complex, there are opportunities to make use of Virtual Machine Monitors. VMMs could bring an increased level of security to handheld devices by using resource isolation to protect hosted operating systems. VMMs could also control the resource utilization of hosted operating systems and applications. VMMs are extremely useful for operating systems development and debugging, where working with a virtual machine gives the systems developer many more development tools and more control over the execution of the system than is provided by bare hardware. For example, developing new drivers for the Linux operating system is easy using the Xen VMM.

[1] Sharp makes PDAs that run Linux, HP makes PDAs that run WindowsCE, and Nokia smart phones run the Symbian OS.
In Xen, a module can be isolated into its own virtual machine [FHN+04]. If the driver crashes or hangs, its state can be immediately inspected by debugging the virtual machine. To restart testing, only the virtual machine that hosts the driver needs to be restarted instead of rebooting the entire machine, saving precious time.

In this thesis we intend to provide a virtualization infrastructure for handheld devices. The ARM [Sea00] architecture is the most widely used in the embedded design space. In addition, ARM processors offer a mix of performance and low power consumption that makes them attractive as the basis for our virtualization infrastructure. Thus, we ported the Xen hypervisor to the ARM computer architecture. Such a port is interesting for several reasons. Most importantly, it determines the capacity of the ARM architecture to support virtual machines, and which of its architectural features help or hinder the support of a Xen-style paravirtualization interface. We also hope to gain some insight into how portable the Xen hypervisor is by examining the issues that arise while attempting the port. Finally, we would like to know whether operating systems hosted on the ARM port of the Xen hypervisor will be able to run unmodified (as with the PowerPC and IA-64 versions of Xen) or if they require further porting (as with the x86 version of Xen). These are the interesting issues that we plan to address during the course of this thesis.

1.1 Overview

Xen was designed specifically for the x86 family of processors [Int05a] that power most desktop personal computers today. One of the goals behind Xen's development is to provide virtualization on x86 hardware that is similar to what was previously only found on IBM mainframe computers specifically designed to support virtualization, such as the z-series [Fra03]. By porting the Xen hypervisor to ARM processors we hope to achieve a similar style of virtualization on small handheld devices.
1.2 Motivation

It is the employment of general purpose operating systems on handheld devices that allows intruders to use the same attacks that have proven successful against desktop and server systems. Moreover, as the power and connectivity of mobile devices increase and their use becomes more widespread, the opportunities and motivation for the creation of viruses, worms, and other malware will also increase. At the time of this writing, a variety of worms have been created that affect the Symbian OS. For example, the Cabir.a, Mabir.a [Wro05], MetalGear.a [Gar04], and CommWarrior.a [Sun05] worms are all capable of propagating themselves via Bluetooth and/or Multimedia Message Service (MMS) connectivity available on Symbian-powered smart phones. Others, such as the Cardtrap.a worm [Kaw05], can spread from smart phones to personal computers via a memory card when a synchronization operation between PC and mobile device is attempted. Moreover, the volume of mobile-device malicious software is rising rapidly. As of September 2005, 83 different viruses had emerged within a 14-month period [Kaw05]. This clearly illustrates the vulnerability of mobile devices and the need for a host based security system that can harden the device from attack. Some companies such as SimWorks [Sim05] have attempted to achieve this by producing anti-virus application software for mobile phones. However, these applications depend upon the integrity of the operating system. In other words, if a system is compromised, the output generated by anti-virus software running on the system can no longer be trusted. For example, the payload carried by worms such as MetalGear.a simply disables any anti-virus software.

There has been a lot of previous work on host based security for general purpose computers.
The most common security systems are applications that run on top of the host operating system and monitor the state of certain sensitive data on the system. The Tripwire intrusion detection system [Tri05] monitors the file system of the host for any suspicious activity and writes its observations to a log file protected by the host operating system kernel. Another such application is checkps [che05], which checks the validity of the output of the ps program on Unix-like operating systems. This is effective since it is common for intruders to hide their activities by providing fake ps output. While these tools are useful, they do have their limitations. For instance, there are pieces of the system that cannot be inspected from the application level. For this reason security tools have been built into the kernel of various operating systems. One such tool is KSTAT (Kernel Security Therapy Anti-Trolls) [s0f05]. It lives inside of a Linux kernel module, and as such it has all of the privileges afforded to other parts of the kernel. It can also perform sanity checks on kernel data structures to check for things such as system call modifications and Trojan kernel modules.
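The kind of sanity check performed by tools like KSTAT can be illustrated with a small user-space sketch. This is only an analogy, not kernel code: the dictionary standing in for the system call table and the handler addresses are invented for illustration. A real monitor would compare the entries of the kernel's actual syscall table against known-good values recorded at a trusted point in time.

```python
import hashlib

def snapshot(table):
    """Compute a baseline fingerprint of a (simulated) syscall table."""
    blob = repr(sorted(table.items())).encode()
    return hashlib.sha256(blob).hexdigest()

def table_intact(table, baseline):
    """Return True if the table still matches the trusted baseline."""
    return snapshot(table) == baseline

# A toy "system call table": names mapped to made-up handler addresses.
syscall_table = {"read": 0x1000, "write": 0x1004, "open": 0x1008}
baseline = snapshot(syscall_table)

assert table_intact(syscall_table, baseline)      # untampered table passes
syscall_table["write"] = 0xDEAD                   # a rootkit hooks sys_write
assert not table_intact(syscall_table, baseline)  # modification is detected
```

The same snapshot-and-compare idea extends to other kernel data structures, such as the loaded-module list, which is one way Trojan kernel modules can be flagged.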
A kernel integrity monitor watches the operating system as it operates in an attempt to identify and correct any deviant behavior. One such system for general purpose computers is Copi lo t [ J F M A 0 4 ] . In Copi lo t , the monitor program runs not on the C P U and memory of the main system, but on a separate C P U and memory located on a P C I expansion card. T h e monitor program watches the kernel operate and is able to detect various forms of attack on the kernel. However, the key benefit to the Copi lo t system is that it is isolated from the host operating system. Thus , it is able to continue correctly monitor ing the system and detect intrusions even after the host operating system has been compromised. Unfortunately, most current handheld devices do not support expansion cards that can contain their own C P U and memory. Mos t of the expansion slots on handheld devices are for increasing the memory or persistent storage capabilities of the device. In fact, even if handhelds d id support more heavy weight expansion cards, there would s t i l l be drawbacks to using a system similar to Copi lo t . The main drawback being that using the extra C P U on the expansion card would consume energy. W h i l e most general purpose 5 computer systems are plugged into a wall outlet and do not have to worry about running out of batteries, handheld devices do not have such a luxury. It seems that if there was a software solution to the problem of monitoring an operating system kernel for intrusion detection that had the properties of the Copilot kernel monitor, then the result could be used with some success on a handheld device. Virtual machines provide a software abstraction of a real machine that op-erating systems software may be run on top of. In this way, the virtual machine isolates the operating system from the real machine. A virtual machine monitor is a thin layer of software that manages one or more Virtual Machines. 
The V M M is isolated from the Virtual Machines that run on top of it, can determine the inner workings of a hosted virtual machine (introspection), and can intercept information entering and leaving the virtual machine (interposition). Essentially, a V M M can provide in software what the Copilot kernel integrity monitor provides in hardware. Of course, there are tradeoffs between a hardware based system and a system that is software based. The hardware based monitor is less flexible but might have a smaller negative impact on the performance of the system. It is accepted that both hardware and software approaches to machine monitoring are useful for different application domains and that both are worthy of future research. This paper fo-cuses on the use of software based machine monitor systems because it is my belief that they are better suited to hand held devices and that their flexibility makes for easier adoption by vendors of said devices. It is the properties of isolation, introspection, and interposition that have lead researchers to examine the use of VMMs as the basis for kernel integrity monitors, intrusion detection systems, or secure services. Not only this, but a V M M is superior to a general purpose OS for trusted security applications because a V M M is much 6 simpler in terms of code size and complexity. The Disco V M M was implemented in about 13000 lines of C code. VMMs also present a small, simple interface to the guest operating systems that run on top of them: that of the hardware they are running on or something very similar. Compare this to the size and complexity of a Unix system call API and you will understand that a VMM's behavior is cleanly defined by the hardware/software interface whereas an OS system call API leaves many opportunities for corner cases and undefined behavior to be exposed. On the A R M processor there are 32 well defined instructions that can be executed- by the machine. 
In comparison, the Linux kernel contains a system call API that is made up of roughly 1100 different system calls. What follows are some examples of recent research that has used VMMs for host based security purposes. In ReVirt [DKC+02], a secure logging facility was built into the VMM for intrusion analysis. By placing the logging facility inside of the VMM, the integrity of the logs and the logging system is not compromised even if the hosted operating system has been infected by a virus, cracked by an intruder, or otherwise rendered untrustworthy. In Livewire [GR03], an intrusion detection system was created out of a variety of services built at the VMM level. The usefulness of Livewire depended on its isolation from the hosted OS kernel, but it also used the property of introspection to examine the state of the OS as it ran. There are many resources that intruders commonly acquire for the purpose of breaking into the system, and Livewire simply watched for these types of resource acquisitions. For example, raw sockets are commonly used for two purposes: network diagnostics, and low level network access with the intent to break and enter a computer system. Livewire simply watched for and flagged any access to raw sockets. In addition to watching for resource acquisitions, the Livewire system could also interpose itself between the OS and potentially harmful applications. This type of system-call interposition was useful for providing sanity checks on the arguments of system calls to avoid buffer overflow attacks.

1.3 Methodology

Processors based on the StrongARM computer architecture are 32-bit RISC style chips that offer the following features:

• High performance.
• Small die size.
• Low power consumption.
• Tight code density.

These features have led to ARM being an attractive option for use in embedded systems.
Indeed, many manufacturers use ARM in devices such as hard drives, routers, calculators, children's toys, mobile phones, and PDAs. Today over 75% of all 32-bit embedded CPUs are based on the ARM design. Therefore, we chose ARM as our target architecture to explore the use of VMMs in a mobile computing environment. More specifically, we ported the Xen hypervisor to the SA-110 implementation of the StrongARM architecture [Int98a]. It should be noted that the SA-110 core is not widely used in industry, with companies opting to use the SA-1100 core in its stead. However, the SA-110 core is identical to the SA-1100 processor sans integrated controllers. In effect, the SA-1100 can be thought of as an SA-110 processor surrounded by integrated controllers for memory, interrupts, DMA, serial, and real-time clock [Int98b].[2] While writing the port, a great effort was made to modify only the architecture dependent portions of the Xen source code whenever possible.

1.4 Synopsis

In the following chapters we present the issues related to porting the Xen hypervisor to the ARM architecture and explain the design decisions that we made. Chapter 2 presents related work and gives a background in resource virtualization. Chapter 3 presents a comparison of the Intel x86 and StrongARM computer architectures with respect to virtualization. Chapter 4 explains the issues encountered during the implementation of the Xen hypervisor port to ARM. Chapter 5 evaluates the performance of our Xen hypervisor on a StrongARM based platform. We present conclusions in Chapter 6 along with suggestions for future work.

[2] We targeted the SA-110 CPU because we did not have access to hardware powered by the SA-1100 processor.

Chapter 2

Related Work

In this chapter we present a variety of virtual machine monitor architectures and make a case for using paravirtualization on small mobile handheld devices.
2.1 Virtual Machine Monitor Architectures

This section presents a few different VMM architectures. However, before they are discussed in detail, some basic terminology should be understood. The VMM is a software layer that manages one or more virtual machines. A virtual machine is a software abstraction of the real machine that is isolated from the VMM and is also referred to as a domain. Domains can contain any type of software application, but in the context of this thesis the application will be an operating system. The operating system running in a domain is referred to as the guest operating system. The applications running on top of the guest operating system are called the guest applications. The VMM itself can be run on top of hardware or software; we name both the host architecture. Figure 2.1 shows the basic organization of a system that uses a VMM; the hardware hosts the VMM while the guest OS and applications make up a domain. Next, we present a few of the different VMM architectures being used today.

[Figure 2.1: The organization of a system that uses a hardware hosted virtual machine monitor to support a single domain.]

2.1.1 Nano-kernel Systems

The basic idea behind nano-kernels is that they are a separate, small bit of core functionality that resides within the OS to do some specialized processing (e.g., real-time interrupt processing and scheduling). Nano-kernels can be considered a lightweight version of a VMM because typically they do not virtualize very many resources. Examples of nano-kernel systems are Jaluna [Jal05] and RTLinux [FSM05]. In both of these systems, the virtualized resources are interrupt handling and scheduling. In RTLinux, there is a small real-time core that is loaded as a Linux kernel module and runs alongside the regular Linux kernel.
When the RTLinux core is loaded, it takes over all of the interrupt processing and scheduling for the machine by interposing itself at the interrupt handler routine entry points. The RTLinux core then handles all interrupt processing and schedules the actions of the system (including the Linux kernel and its scheduler) according to hard real-time requirements dictated by the system administrator.

It is possible to extend nano-kernel systems with monitoring capabilities for security purposes. For example, at a recent visit to UBC, Victor Yodaiken (CEO of FSMLabs, the creators of RTLinux) stated that they were attempting to modify the RTLinux core so that it could monitor the Linux kernel and perform integrity checks of the system for security purposes. Essentially, the proposed scheme had the RTLinux core take a snapshot of the running Linux kernel. Then, between every real-time scheduling decision (1ms), the RTLinux core could perform a quick integrity check on the kernel snapshot, possibly in the form of a hash computation. There are two problems with this scheme. First, an intruder may be able to break into the system and cause damage before it is noticed. In other words, the intruder's actions could happen within the bounds of a 1ms scheduling window. This problem becomes exacerbated as the speed of CPUs increases, since applications will be able to execute more instructions in the 1ms scheduling window. Thus, the scheduling period must be changed depending on the speed of the machine. Second, if an intruder does break into the system and is able to compromise the Linux kernel, it can also compromise the RTLinux kernel based monitor system. This is because the RTLinux kernel is not isolated from the Linux kernel in any way. Both kernels run alongside each other in the same address space (kernel space) with the same privileges (kernel mode execution). For the most part, nano-kernel architectures have been dubbed cooperative virtualization environments.
This is because the VMM is not isolated from the domains that it hosts, which makes the exchange of information between the VMM and any domain very efficient. However, it is this same cooperation that leaves nano-kernel based systems vulnerable to attack. Therefore, a nano-kernel based monitor system may not be the best choice for a trusted VMM based security system.

2.1.2 Full Virtualization Systems

Full virtualization systems virtualize every resource on a computer system. Guest operating systems do not have access to physical devices, physical memory, or physical CPUs. Instead, the guest operating systems are presented with virtual devices, virtual memory, and virtual CPUs. In a full virtualization system, the virtualized interfaces presented to the guest operating system look and feel exactly like the interfaces of the real machine, and therefore the guest OS and applications may run on the virtual hardware exactly as they would on the original hardware. One such system is VMware [SVL01, Wal02, VMw05]. Full virtualization systems have another benefit besides allowing guest operating systems to run unmodified on top of the VMM. The virtualized resources provide a layer of isolation that protects the VMM from the actions of guest operating systems. The virtual memory implementation in the VMM protects it from the guest OS in the same way that the virtual memory implementation in the guest OS protects it from the guest applications [20]. In a similar way, the virtual CPU implementation in the VMM uses the processor's operating modes in the same way that the guest OS uses the CPU operating modes to protect itself from guest applications [SS72]. In fact, VMMs built in the full virtualization style are considered the most secure of all VMM architectures because of the strong isolation they provide. The Livewire project is the first attempt to build a security service into a full virtualization VMM; they used VMware as a base.
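The trap-and-emulate mechanism underlying full virtualization can be sketched as a toy simulation. All instruction names here are invented for illustration; on real hardware the processor raises the trap when a privileged instruction executes outside privileged mode, and the VMM's handler applies the effect to the guest's virtual state rather than the real machine's.

```python
PRIVILEGED = {"load_page_table", "disable_interrupts"}  # invented names

class Trap(Exception):
    """Raised when a guest attempts a privileged instruction."""

class VirtualCPU:
    """Per-guest virtual state; the real machine state is never touched."""
    def __init__(self):
        self.page_table = None
        self.interrupts_enabled = True

def execute(insn):
    """The 'hardware': unprivileged instructions run directly,
    privileged ones trap into the VMM."""
    if insn[0] in PRIVILEGED:
        raise Trap(insn)

def vmm_run(vcpu, program):
    for insn in program:
        try:
            execute(insn)
        except Trap:
            op, arg = insn
            # The VMM emulates the instruction against virtual state.
            if op == "load_page_table":
                vcpu.page_table = arg
            elif op == "disable_interrupts":
                vcpu.interrupts_enabled = False
    return vcpu

vcpu = vmm_run(VirtualCPU(),
               [("add", 1),
                ("load_page_table", 0x8000),
                ("disable_interrupts", None)])
assert vcpu.page_table == 0x8000
assert vcpu.interrupts_enabled is False
```

Every privileged operation pays for a fault plus a software handler in this scheme, which is the source of the per-operation cost discussed below.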
The major negative characteristic of full virtualization systems is degraded performance. This is obvious, as every privileged instruction, every interrupt, every device access, and every memory access has to be virtualized by the VMM. The performance measurements carried out during the evaluation of the Xen VMM [DFH+03] confirmed that a guest OS running on top of VMware can be as much as one hundred times slower than the same OS running directly on hardware.

2.1.3 Paravirtualization Systems

The architecture of paravirtualization systems has very much the same look and feel as that of a full virtualization system, but with major differences in design decisions. Paravirtualization was born out of the observation that full system virtualization is too slow and complex on today's commodity hardware. For the most part, this is because the VMM must intervene whenever a domain attempts to execute a privileged operation. Paravirtualization aims to retain the protection and isolation found in the full system virtualization approach, but without the implementation complexity and associated performance penalty. The main idea behind paravirtualization is to make the VMM simpler and faster by relaxing the constraint that guest operating systems must run on the VMM without being modified. In a paravirtualization system, the guest operating system code is modified to access the VMM directly for privileged access, instead of going to the virtual resources first and having the VMM intervene. One VMM implementation that uses a paravirtualization approach is the Xen hypervisor [DFH+03]. Xen runs on x86 computer systems and supports commodity operating systems (Linux, NetBSD, WinXP (in development)) once they have been ported to the Xen-x86 hardware architecture.
The Xen-x86 architecture is very similar to x86, with extra function calls into the VMM needed for virtual memory page table management and other features of the x86 that are difficult or slow to fully virtualize. Operating systems ported to Xen-x86 must also port their device drivers to use Xen's lightweight event notification system instead of using interrupts for communications. In addition, Xen is very lightweight at roughly 42,000 lines of code [BDF+03].

Another VMM that uses a paravirtualization approach is Denali [WSG02]. Denali has the same basic goals as Xen, although Denali makes no attempt to support commodity operating systems. For this reason, Denali can give up certain features that are difficult to fully virtualize or paravirtualize, such as virtual memory. In Denali, a domain is meant to support a single application (or OS). If you need two applications to be isolated from one another then you must run them in separate domains (virtual machines). Denali forfeits features needed for operating system support in order to gain simplicity and security.

The main drawback to paravirtualization systems is that they cannot host commodity operating systems without first porting them to run on the VMM. Porting an OS to a new architecture takes time, but work on the Xen VMM shows that if the target architecture is very similar to the original architecture and most of the major changes to the OS are architecture independent, then the cost of porting the OS is outweighed by the rewards of paravirtualization. Another worry for VMMs built using a paravirtualization design is that the direct calls into the VMM must remain secure. By providing direct calls into the VMM for guest operating systems to use, instead of transferring control indirectly through hardware traps, it is possible to expose a security hole. This also adds complexity to the VMM's thin interface to guest operating systems.
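The shape of such a direct-call ("hypercall") interface, and the need to validate guest-supplied arguments at that boundary, can be sketched as follows. The hypercall numbers, names, and the page-alignment check are all invented for illustration and do not correspond to Xen's actual interface.

```python
PAGE_SIZE = 4096
HYPERCALLS = {}

def hypercall(number):
    """Register a handler under a (made-up) hypercall number."""
    def register(fn):
        HYPERCALLS[number] = fn
        return fn
    return register

@hypercall(1)
def update_page_table(vaddr, paddr):
    # Sanity-check guest-supplied arguments before touching VMM state.
    if vaddr % PAGE_SIZE or paddr % PAGE_SIZE:
        raise ValueError("addresses must be page aligned")
    return ("mapped", vaddr, paddr)

def do_hypercall(number, *args):
    """The single guarded entry point from a guest into the VMM."""
    fn = HYPERCALLS.get(number)
    if fn is None:
        raise ValueError("unknown hypercall %d" % number)
    return fn(*args)

assert do_hypercall(1, 0x1000, 0x2000) == ("mapped", 0x1000, 0x2000)
```

Funneling every guest request through one dispatcher keeps the interface small and gives the VMM a single place to reject malformed arguments, which is exactly the property that must be preserved for the direct-call path to remain secure.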
It should be noted that some computer architectures have hardware support for virtualization, which makes it possible for a VMM that utilizes paravirtualization to run hosted operating systems without modification. Processors that support Intel Virtualization Technology [Int05c, Int05b] or AMD Pacifica Technology [Adv05] are two examples of such architectures. XenSource is currently working on migrating Xen to Intel VT. Hardware support for virtualization has also been injected into the IBM PowerPC architecture in the form of IBM's Enterprise Hypervisor Binary Interface. IBM is working on migrating Xen to the PowerPC 970 processor, which supports the hypervisor binary interface [BX06].

2.1.4 Software Hosted Virtualization Systems

While the Denali and Xen VMMs run directly on top of the hardware, there is another architecture in which the VMM runs on top of a general purpose operating system. Depicted in Figure 2.2, software hosted VMMs leverage the services of the general purpose operating system (the host) to simplify the process of providing virtual hardware abstractions to guest operating systems. The hosted VMM can use any virtualization technique to host guest operating systems (e.g., full virtualization or paravirtualization). UMLinux [KDC03] is an example of a software hosted VMM. In UMLinux, the guest OS and guest applications run in a single process, the guest-machine process. The guest-machine process communicates with the VMM process via shared memory and IPC. As mentioned earlier, the key benefit to using a hosted VMM is that you get to use the abstractions and services of the host OS to provide virtualization to the guest OS. For example, in UMLinux, the guest-machine process serves as the virtual CPU; host files serve as virtual I/O devices; host signals serve as virtual interrupts; etc. The one area where hosted VMMs fall short is performance.
There is extra overhead associated with using the host operating system's services and abstractions instead of working directly with the hardware [CDD+04]. Also, hosted VMMs do not make very much sense in a security setting, as they rely on the services of the host OS to maintain their integrity in order to provide correct virtualization.

[Figure 2.2: The organization of a system that uses a software hosted virtual machine monitor to support a single domain.]

2.2 Taxonomy of Virtualization Techniques

We have divided the VMMs into a taxonomy based on their features and requirements. Table 2.1 shows this taxonomy, where the type is one of: NK (Nano-kernel), FV (Full virtualization), or PV (Paravirtualization). Important aspects of VMM systems include the type of virtualization employed, how the VMM is hosted, whether it isolates the CPU and memory from hosted operating systems, whether virtualized devices are present, and how the VMM performs. Finally, it is important that we identify which VMMs are open source implementations and which are not. Choosing a VMM implementation to use as the basis for VMM-level research requires access to existing source code.

VMM      Type  Host                CPU & Memory Isolation  Device Virtualization  Performance  Open Source
RTLinux  NK    Linux Kernel        No                      No                     Very Good    Yes
VMware   FV    General purpose OS  Yes                     Yes                    Poor         No
UMLinux  PV    General purpose OS  Yes                     Yes                    Mediocre     Yes
Xen      PV    Hardware            Yes                     Yes                    Good         Yes
Denali   PV    Hardware            Yes                     Yes                    Good         No

Table 2.1: A taxonomy of virtual machine monitor architectures.

The performance penalty imposed by certain VMM architectures cannot be ignored. Even as processor speeds increase, the performance penalties will still be noticeable. This is because the performance of a system running a VMM is greatly dependent on the performance of the memory subsystem.
Even in a high performance paravirtualization environment such as Xen, memory performance becomes an issue because of the instruction cache, data cache, and TLB flushes that occur as a result of switching context from one domain to another. Additional overhead is even more important for handheld devices, where every operation is paid for with battery power. A paravirtualization environment allows us to make more efficient use of the processor and memory subsystem on a handheld device, saving its most important resource: its battery. Therefore, an open source paravirtualization architecture would be better than full virtualization alternatives on a mobile device. It would provide all of the isolation that is needed for a secure VMM while incurring minimal performance penalties and making more efficient use of battery power. Currently, there are no hardware hosted paravirtualization-style VMMs available for the ARM architecture. A port of Xen from x86 to the ARM architecture would fill this void. Such a port would also be a good starting point for creating secure VMM-level security services.

Chapter 3

Discussion: Intel x86 vs. StrongARM

This chapter provides an explanation of the features provided by the x86 and StrongARM CPUs and how they relate to the paravirtualized interface presented by the Xen hypervisor. The StrongARM is a RISC-style CPU [PD80] that originates from the embedded systems design space. As such, it was designed with small die size, low power consumption, and compact code density in mind. The origins of the x86 CPU design can be traced back to the 8086 processor introduced by Intel in 1978 and its descendant the 8088, introduced in 1979 and used in the first personal computers by IBM¹. All future members of the x86 family are backwards compatible with the 8086.
The evolution of the x86 family of CPUs has been dominated by this insistence on backwards compatibility and features that make it more attractive as the basis for a multi-tasking personal computer system.

¹ Although the 8088 was designed later, it used an 8-bit wide data bus instead of the 16-bit bus used in the 8086 as a way of reducing cost.

It is obvious that these two processors were developed with different goals in mind, and this is reflected in the feature sets provided by each. However, there are also quite a few similarities between how systems based on these two CPUs can be paravirtualized.

3.1 Support for a secure VMM

First we examine the ability of the x86 and StrongARM CPUs to support a secure VMM. The key architectural features for supporting a VMM were outlined by Goldberg in [Gol72] as:

• two processor modes of operation.
• a method for non-privileged programs to call privileged system routines.
• a memory relocation or protection mechanism such as segmentation or paging.
• asynchronous interrupts to allow the I/O system to communicate with the CPU.

3.1.1 x86

The Pentium CPU has features that match each of the above requirements. It has four modes of operation, known as rings, that also represent privilege levels, ring 0 being the most privileged and ring 3 the least privileged. The Pentium uses the call gate to control transfer of execution between privilege levels. It uses both paging and segmentation to implement protection. Finally, the Pentium uses both interrupts and exceptions to transfer control between the I/O system and the CPU. Despite these features, it was shown by Robin and Irvine [RI00] that the Pentium instruction set contains sensitive, unprivileged instructions. Such instructions can be executed in unprivileged mode without generating an interrupt or exception.
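One such problem can be demonstrated from ordinary user code. The sketch below (GCC inline assembly; assumes an x86-64 build) reads the processor's flags register from an unprivileged program without causing any trap, which is exactly the behaviour that defeats classical trap-and-emulate virtualization:

```c
#include <assert.h>
#include <stdio.h>

/* Read the flags register from unprivileged code. On a classically
 * virtualizable machine, a guest's attempt to read processor state
 * like this would trap to the VMM; on x86 it silently succeeds. */
static unsigned long read_flags(void)
{
    unsigned long flags;
    __asm__ volatile("stc\n\t"      /* set the carry flag (bit 0)   */
                     "pushfq\n\t"   /* push RFLAGS onto the stack   */
                     "popq %0"      /* pop it into a register       */
                     : "=r"(flags) : : "cc");
    return flags;
}
```

The carry bit set by `stc` is visible in the value read back, confirming that real processor state leaked to user mode with no exception raised.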
This opens the door for software running at a lower privilege level (such as a hosted virtual machine) to undermine software running at a higher privilege level (such as a VMM) by reading or writing sensitive information. These instructions were placed into two categories:

• sensitive register instructions.
• protection system references.

Sensitive register instructions are those that read or modify sensitive registers and/or memory locations. By executing these instructions it would be possible for hosted virtual machines to undermine the VMM controlling the system. For example, the PUSHF and POPF instructions push and pop the lower 16 bits of the EFLAGS register to and from the stack. Pushing these values onto the stack effectively allows the EFLAGS register to be read; popping these values off of the stack allows the EFLAGS register to be written. This prevents virtualization because the bits in the EFLAGS register control the operating mode and state of the processor. Instructions that reference the storage protection system, memory, or address relocation system are also sensitive instructions. For example, the POP and PUSH instructions can be used to pop and push a general purpose register or a segment register to and from the stack. Pushing a segment register onto the stack effectively allows the segment register to be read; popping a value off of the stack and into a segment register allows the segment register to be written. This prevents virtualization because the segment registers contain bits that control access to different memory locations. There are also instructions that fail silently when executed in unprivileged mode. This prevents virtualization because the semantics of the instructions differ depending on the privilege level that the machine is executing in.

3.1.2 StrongARM

The StrongARM CPU has features that match the requirements needed to support a VMM as outlined above.
It has two modes of operation: user mode and supervisor mode. The StrongARM controls transfer of execution from user mode to supervisor mode by using software interrupts. It uses paging to implement protection. Finally, the StrongARM uses both interrupts and exceptions to transfer control between the I/O system and the CPU. In addition, the instruction set implemented by the StrongARM is extremely simple when compared to that of the Pentium. The StrongARM supports 32 instructions, whereas the Pentium supports approximately 250 instructions. More importantly, none of the 32 StrongARM instructions can be classified as sensitive and unprivileged. Table 3.1 shows the ARM instruction set.

Mnemonic  Instruction                        Action
ADC       Add with carry                     Rd := Rn + Op2 + Carry
ADD       Add                                Rd := Rn + Op2
AND       AND                                Rd := Rn AND Op2
B         Branch                             R15 := address
BIC       Bit clear                          Rd := Rn AND NOT Op2
BL        Branch with link                   R14 := R15, R15 := address
CMN       Compare negative                   CPSR flags := Rn + Op2
CMP       Compare                            CPSR flags := Rn - Op2
EOR       Exclusive OR                       Rd := (Rn AND NOT Op2) OR (Op2 AND NOT Rn)
LDM       Load multiple registers            Stack manipulation (Pop)
LDR       Load register from memory          Rd := [address]
MLA       Multiply accumulate                Rd := (Rm * Rs) + Rn
MOV       Move register or constant          Rd := Op2
MRS       Move PSR status/flags to register  Rd := PSR
MSR       Move register to PSR status/flags  PSR := Rn
MUL       Multiply                           Rd := Rm * Rs
MVN       Move negated register              Rd := NOT Op2
ORR       OR                                 Rd := Rn OR Op2
RSB       Reverse subtract                   Rd := Op2 - Rn
RSC       Reverse subtract with carry        Rd := Op2 - Rn - 1 + Carry
SBC       Subtract with carry                Rd := Rn - Op2 - 1 + Carry
STM       Store multiple registers           Stack manipulation (Push)
STR       Store register to memory           [address] := Rd
SUB       Subtract                           Rd := Rn - Op2
SWI       Software interrupt                 OS call
SWP       Swap register with memory          Rd := [Rn], [Rn] := Rm
TEQ       Test bitwise equality              CPSR flags := Rn EOR Op2
TST       Test bits                          CPSR flags := Rn AND Op2
CDP       Coproc data operations             CRn := (result of op)
MRC       Move from coproc to ARM reg        Rd := CRn
MCR       Move from ARM reg to coproc        CRm := Rd
LDC       Load to coproc                     CRn := [Rm]
STC       Store from coproc                  [Rm] := CRn

Table 3.1: The ARM instruction set.

All instructions that read or modify sensitive registers can only be executed in supervisor mode. For example, the MRS and MSR instructions that are used to read and modify the PSR (ARM Processor Status Register) can only be executed in supervisor mode. If either is attempted in user mode, an exception occurs. The PSR encodes the processor's operating mode, interrupt status, and state of the condition code flags. Other instructions that modify sensitive registers are the coprocessor access instructions such as MRC and MCR. These instructions move data to and from the ARM coprocessors, which are used to control properties such as the location of pagetables and the current access control privileges of the processor. Again, these instructions can only be executed in supervisor mode. It should also be noted that instructions such as CMP which modify the CPSR (Current Processor Status Register) are allowed to execute in both user and supervisor modes. This is safe because they only modify the condition code bits in the CPSR and not the mode or interrupt bits. Finally, there are no instructions in the StrongARM instruction set that reference the storage protection system, memory, or address relocation system. Taking these things into consideration, it is clear that the ARM instruction set architecture is easier to virtualize than its x86 counterpart and a much better fit for use in a secure VMM.

3.2 Paravirtualized x86

The Xen hypervisor provides a secure VMM on the Pentium architecture despite the presence of the sensitive, unprivileged instructions discussed above. Xen achieves this by providing a paravirtualized x86 interface to hosted virtual machines.
This interface is similar to the regular x86 interface, but differs in that it works around the instructions and features in the architecture that cause problems for virtualization. Guest operating systems must be modified to run on top of Xen as a result of these workarounds.

3.2.1 CPU Protection

CPU protection is virtualized by placing the hypervisor at the highest privilege level, while guest OSes and their applications run at lower levels. On x86 this is simple because there are four separate execution rings: Xen runs at ring 0, guest OSes run at ring 1, and applications run at ring 3. Guest OSes must be modified to run at ring 1, because they will no longer have access to all of the privileged instructions and will no longer be able to access sensitive, unprivileged instructions safely. In order to access privileged functionality a guest OS must ask the hypervisor to perform the instruction on its behalf via a hypercall.

Exceptions

Exceptions are virtualized by requiring guest OSes to register a descriptor table of exception handlers with the hypervisor. For the most part, handlers may remain identical to their non-virtualized counterparts, since Xen copies the exception stack frame onto the guest OS stack before passing it control. The only handler that must be modified is the one that services page faults, since it reads the faulting address from the CR2 register. The CR2 register is privileged and as such the guest OS will not be able to access it from ring 1. The solution used by Xen was to extend the stack frame to accommodate the contents of the CR2 register. The hypervisor reads the address from CR2 and pushes it on the stack; the modified page fault handler pops the address off of the stack. Exception safety is guaranteed by the use of two techniques. First, the handlers are validated by the hypervisor at registration time. This validation checks that the handler's code segment does not specify an execution privilege reserved by the hypervisor.
Second, the hypervisor performs checks during exception propagation to ensure that faults originate from outside of the exception virtualization code by inspecting the program counter on a subsequent fault. If the program counter contains an address inside the virtualization code, then the guest OS will be killed.

System Calls

System calls are typically implemented on x86 OSes using software exceptions. Xen improves their performance by allowing each guest OS to register a set of fast exception handlers. Fast handlers are executed directly by the processor without indirection through ring 0 and are validated by the hypervisor at registration time.

Interrupts

Hardware interrupts are replaced by a lightweight event delivery system, which allows asynchronous notifications to be delivered from Xen to a guest OS. The hypervisor checks a per-domain bitmask to see whether there are pending events for a domain. If there are events pending, the bitmask is updated and a domain-specific event callback can be executed. Events can be disabled by using a mask specified by the guest OS. This is similar to the way that interrupts can be disabled on a CPU.

Time

Each OS exposes a timer interface and is aware of 'real' and 'virtual' time. Time is available in granularities all the way down to the cycle count on the Pentium architecture.

3.2.2 Memory Management

Paging

Translation lookaside buffer (TLB) misses are serviced by the x86 processor by walking the pagetable structure in hardware. The TLB is not tagged with address-space identifiers (ASIDs) to associate mappings with the current addressing context; therefore, the TLB must be flushed on every context switch. To avoid context switches when entering the hypervisor, Xen exists in the top 64MB segment of every address space. However, this mapping is not accessible by the guest OS because the segment is marked as accessible only by ring 0 (discussed above).
A similar technique is used to avoid context switches for system calls in a standard OS without virtualization. Also, guest OSes allocate and manage the hardware page tables as usual, with the restriction that all pagetable writes are validated by the hypervisor. Xen amortizes the cost of this validation by allowing guest OSes to batch pagetable updates. Validation requires that guest OSes only map pages that they own and do not allow writable mappings of pagetables.

Segmentation

Segmentation is virtualized in a similar way, by validating updates to the segment descriptor tables. Validation requires that the updates have lower privilege than the hypervisor and that they do not allow access to the top 64MB of the address space where Xen resides.

3.2.3 Device I/O

In order to support asynchronous device I/O, Xen makes use of the lightweight events described above to notify guest OSes of device status. In addition, data is transferred via shared memory in asynchronous buffer rings. This abstraction makes it possible to efficiently move data from the hypervisor to guest OSes. It also provides a level of security, because it allows the hypervisor to validate certain properties of the data transfer for safety.

3.3 Paravirtualized StrongARM

When porting the Xen hypervisor to the StrongARM architecture it was beneficial to adopt a scheme similar to that used in Xen x86. We found that this was easiest in many cases because of the similarities of certain features in the StrongARM and Pentium architectures. Sometimes paravirtualization was needed to enforce safety. In other cases, paravirtualization was not needed for safety, but we opted to paravirtualize anyway as a way of improving the performance of the system.

3.3.1 CPU Protection

CPU protection is virtualized by placing the hypervisor at the highest privilege level, while guest OSes and their applications run at the lower level.
On ARM, CPU protection is not simple because there are only two execution modes: user and supervisor. Therefore, Xen runs in supervisor mode while guest OSes and their applications both run in user mode. Not only does this require that guest OSes be modified to run at a lower privilege level (as in Xen x86), but it also requires that control be passed from guest OS to applications indirectly through the hypervisor, because the guest OS must protect itself from applications by living in a separate address space. The hypervisor is then responsible for switching address spaces and maintaining the virtual privilege level of the executing entity, be it guest OS or application. The hypervisor protects itself from guest OSes in a similar way on the x86/64 architecture; the lack of segment limit support on x86/64 makes it necessary to protect the hypervisor using page-level protection.

Exceptions

Exceptions can be paravirtualized in exactly the same way they are in Xen on the x86 processor. In addition, none of the exception handlers in the guest OS need to be modified, because they do not access any sensitive registers to gain fault status information. This includes page faults. Exception safety is maintained during exception propagation in a manner identical to the fixups performed by Xen on the x86 processor. In addition, there is no need to validate exception handlers at registration time, because a handler cannot specify execution at a more privileged level without causing an exception.

System Calls

System calls cannot be paravirtualized in exactly the same way they are in Xen on the x86 processor because of the way protection is implemented at the page level (discussed in the memory management section below). More specifically, because application pagetables do not contain mappings for the guest OS, the application must call into the guest OS indirectly via the hypervisor. The hypervisor then performs a context switch to change the virtual address space.
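The indirect system-call path just described can be modelled in a few lines. This is a conceptual sketch only (plain C standing in for trap handling; all names are hypothetical), showing the hypervisor switching the live pagetable and tracking a virtual privilege level for the single user mode:

```c
#include <assert.h>

/* Conceptual model of the indirect system-call path (not the actual
 * implementation). The hypervisor tracks which pagetable is live and a
 * *virtual* privilege level for whichever entity is running in the
 * ARM's single user mode. */

enum vpl { VPL_APPLICATION, VPL_GUEST_KERNEL };

struct domain {
    unsigned long app_pagetable;    /* maps application memory only   */
    unsigned long guest_pagetable;  /* co-maps guest OS + application */
    unsigned long current_pagetable;
    enum vpl vpl;
};

/* Reached via SWI from the application: the hypervisor switches the
 * address space (a TLB flush is implied on StrongARM) and hands
 * control to the guest kernel's registered syscall entry point. */
static void hyp_forward_syscall(struct domain *d)
{
    d->current_pagetable = d->guest_pagetable;
    d->vpl = VPL_GUEST_KERNEL;
    /* ...jump to the guest OS syscall handler, still in user mode... */
}

/* Return path: restore the application's restricted address space. */
static void hyp_return_to_app(struct domain *d)
{
    d->current_pagetable = d->app_pagetable;
    d->vpl = VPL_APPLICATION;
}
```

The model makes the cost visible: every application-to-guest-OS transition changes `current_pagetable`, which on real hardware is the pagetable switch and flush discussed below.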
Again, fast application system calls are implemented in a similar way on Xen for the x86/64 architecture. On both ARM and x86/64 the context switch requires a TLB flush. However, it should be noted that TLB flushes on x86/64 can be avoided using the AMD64's TLB flush filtering capabilities [KMAC03]. Avoiding the TLB flush improves the performance of system calls dramatically. On StrongARM CPUs there is a feature of the architecture that can save flushing the TLB on every context switch between applications and guest OSes. This feature is called Domain Access Control, and it is discussed later in this chapter.

Interrupts

Interrupts can be paravirtualized in exactly the same way they are in Xen on the x86 processor, by replacing them with the lightweight event notification system.

Time

Time can be paravirtualized in the same way that it is in Xen on the x86 processor. The major difference is in the granularity of timing available. For example, some StrongARM systems do not provide a cycle counter. One of these is the SA-110 based system known as the DNARD Shark [Dig97], which only provides a real-time clock. As a result, OSes that rely on access to a cycle counter for important computation will encounter problems. One such OS is the Linux kernel, which makes use of the cycle counter on the Pentium architecture to maintain the jiffies used in the scheduler and other places as a timing unit. One solution is to simulate a cycle counter in the hypervisor on systems that don't support one in hardware. However, more recent versions of the StrongARM CPU have become increasingly integrated and support a wider variety of features. An example of such a newer revision is the Intel 80200 [Int03], which supports a cycle counter.

3.3.2 Memory Management

Paging

The ARM architecture features virtually-addressed L1 instruction and data caches². Unfortunately, entries in the ARM's TLB are not tagged with an ASID.
To compound matters, TLB misses are serviced by walking the pagetable structure in hardware, similar to what is done on the x86. As a result, the TLB must be flushed on a context switch. In addition, data in the caches may be stale after a context switch. As a result, unless the kernel is certain that no stale data exists in the caches, they must be flushed on context switch as well³.

² L1 caches on the ARM are virtually-indexed and virtually-tagged, but do not make use of ASIDs to differentiate between memory from different address spaces.

The ARM CPU's paging is so similar to that of the x86 that we opted to use the same paravirtualization techniques in the ARM port. Namely, the hypervisor exists in the top 64MB of every address space. However, because the ARM does not support segmentation, we needed to use page-level protection to isolate the different memory regions in the system. There are two different cases for mapping the different memory regions in the system. The first memory mapping strategy handles mapping the hypervisor region. Here, the top 64MB of every address space is mapped with read and write access in supervisor mode only. The fact that the hypervisor is co-mapped with guest OSes means that switching between a guest OS and the hypervisor can occur without a page table switch. The result is low latency performance for common operations such as the delivery of events and the execution of hypercalls. The second memory mapping strategy handles mapping guest operating systems and their applications. A guest OS and its applications both share user mode; however, the OS must remain isolated from the applications it hosts. To achieve this, the guest OS's pagetables map memory belonging to the guest OS and its applications with read and write access in user mode. The applications own their own set of pagetables that map the memory that they own with read and write access in user mode.
However, the application pagetables do not co-map the guest OS memory region. With this organization, the guest OS can access and modify the memory of the applications that it hosts, but the hosted applications cannot see or access the guest OS. As a result, a context switch must occur when execution transfers from guest OS to application or vice versa. The context switch has a negative impact on the performance of instructions that cause applications to trap into the guest OS, because they can only enter the guest OS indirectly after the hypervisor has performed the appropriate page table switch. Despite the performance issues, this approach is similar to what has been adopted for Xen on the x86/64 architecture. Pagetable writes done by a guest OS must be validated by the hypervisor for the same reasons as on x86. In addition, guest OSes may only map pages that they own and may not create writable mappings of pagetables.

³ The ARM Linux kernel flushes the TLB and the I and D caches whenever a context switch occurs.

ARM Domains

As described above, the ARM uses a two-level hardware-walked TLB. Each entry is tagged with a four-bit domain ID. The domain access control register (DACR) can modify the access rights specified in TLB entries to have either:

• access as specified in the TLB entry.
• no access at all.
• full page access (regardless of what the TLB protection bits specify).

It was shown in [WH00] that it is possible to use ARM domains as a way of allowing mappings from different address spaces to co-exist in the TLB and caches as long as the mapped address spaces do not overlap. If two address spaces overlap, then after a context switch an access might be attempted to a page that is mapped with a domain ID that belongs to another process. Such an access will generate a fault, which is then handled by the kernel by flushing the TLB and caches before updating the fast address space switching (FASS) paging data structure and continuing execution.
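The precondition that mapped address spaces must not overlap reduces to a per-region interval check; a minimal sketch:

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal sketch of the FASS precondition: two mappings may share the
 * TLB and caches only if their virtual regions do not overlap. */
struct region { unsigned long start, end; };  /* half-open: [start, end) */

static bool regions_overlap(struct region a, struct region b)
{
    return a.start < b.end && b.start < a.end;
}
```

A kernel using this scheme would run such a check when installing a mapping; any overlap with a region tagged under a different domain ID forces the flush described above.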
After the flushes, only address spaces that do not overlap will co-exist in the TLB and caches until another conflict occurs. In the FASS scheme, one of the 16 domain IDs is allocated to each process, in effect becoming an ASID. On a context switch the TLB and caches are not flushed. Instead, only the DACR needs to be modified to grant access to only the newly executing process's pages and deny access to all others. This is done by reloading the DACR with a mask that allows access only to pages tagged with the domain ID associated with the newly running process. Further work on FASS for the StrongARM platform [WTUH03] showed that the above technique creates what is almost the equivalent of a software-managed TLB for the ARM processor. However, the scheme is limited because a domain ID is more restricted in the values it can take when compared to a classical ASID. This also leads to boundary conditions in the implementation where domain IDs must be reclaimed from mappings belonging to non-executing processes; domain IDs must be recycled. Despite these drawbacks, it was determined that the TLB and caches would only need to be flushed when:

• there exist mappings for two different address spaces that overlap.
• the system runs out of domain IDs and must recycle one that was previously used.

A similar FASS technique would not be useful at the hypervisor level because of the requirement for non-overlapping address spaces. Switching between the hypervisor and a guest OS already avoids a context switch due to the fact that Xen is mapped into the top 64MB of every address space. Switching between two guest OSes will require a TLB flush even when using the FASS technique unless their address spaces do not overlap. Non-overlapping address spaces are not likely when executing different versions of the same OS or even different OSes of the same family⁴.
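The DACR reload at the heart of the FASS scheme is cheap because the register simply packs one two-bit access field per domain ID. A sketch of building such a mask, using the ARM encodings (00 = no access, 01 = client, where TLB permission bits are checked, 11 = manager, where all accesses are permitted):

```c
#include <assert.h>

/* Sketch of building a DACR value for a context switch. The DACR
 * holds sixteen 2-bit fields, one per four-bit domain ID. */
#define DACR_NO_ACCESS 0u   /* 0b00: all accesses fault            */
#define DACR_CLIENT    1u   /* 0b01: TLB permission bits enforced  */
#define DACR_MANAGER   3u   /* 0b11: full access, permissions off  */

static unsigned long dacr_for(unsigned int domain_id, unsigned int access)
{
    return (unsigned long)access << (2 * domain_id);
}

/* On a FASS-style context switch, grant client access only to the
 * incoming process's domain; every other field stays 0b00, so stale
 * TLB entries tagged with other domain IDs simply fault instead of
 * requiring an up-front flush. */
static unsigned long dacr_switch_to(unsigned int domain_id)
{
    return dacr_for(domain_id, DACR_CLIENT);
}
```

Reloading one register this way replaces the pagetable switch and flush on the common path, which is exactly why the technique approximates a software-managed TLB.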
As a result, it is required that a pagetable switch and the associated TLB flush take place whenever the hypervisor switches between domains. However, the work on FASS shows that making use of ARM domains is useful when context switching between an OS and its applications. We could make use of the same technique in Xen to avoid a full pagetable switch when moving from an application to a guest OS and vice versa. Instead, when such a context switch occurs, we would reload the DACR with a mask that permits access to the appropriate set of pages.

⁴ Linux, FreeBSD, and NetBSD are all Unix-like, and would likely produce many address-space collisions.

StrongARM PID Relocation

The StrongARM architecture provides a way for small address spaces (those containing addresses less than 32MB) to be relocated to another 32MB partition in a transparent manner. Which partition they are relocated to is determined by the PID register; there are 64 partitions available. The use of PID relocation in combination with ARM domains was used in FASS to reduce the number of address space collisions. This was effective because whenever an address space collision occurred, the TLB had to be flushed. However, a similar PID relocation technique would not be useful in the hypervisor across domains, because many of the virtual addresses mapped for the guest OSes are too large to be relocatable; they fall outside of the lowest 32MB region. However, we could apply the FASS work to relocate the applications hosted by guest OSes to reduce the number of address space collisions and associated TLB flushes. Applying the FASS techniques that utilize ARM domains and PID relocation to manage a domain's pagetables could potentially be a big win for performance, given the page-level protection needed. However, we did not implement such a scheme, and it could be considered useful future work.
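The relocation itself is simple address arithmetic: a virtual address below 32MB has the PID substituted into its top bits, while larger addresses pass through unchanged. A sketch of the effective-address computation, assuming the usual StrongARM behaviour:

```c
#include <assert.h>

/* Sketch of StrongARM PID relocation: virtual addresses below 32MB
 * are transparently moved into one of 64 32MB partitions selected by
 * the PID register; all other addresses are left untouched. */
#define PARTITION_SIZE (32UL << 20)   /* 32MB */

static unsigned long relocate(unsigned long vaddr, unsigned int pid)
{
    if (vaddr < PARTITION_SIZE)   /* only small address spaces move */
        return vaddr | ((unsigned long)pid * PARTITION_SIZE);
    return vaddr;   /* e.g. guest OS mappings: too large to relocate */
}
```

Because the transformation only touches addresses under 32MB, it explains why the technique suits hosted applications (small address spaces) but not guest OS images mapped at high addresses.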
3.3.3 Device I/O

Devices would best be paravirtualized in exactly the same way they are in Xen on the x86 processor.

3.4 Summary

Systems powered by the ARM architecture lend themselves well to virtualization. When compared to the x86 ISA, the ARM ISA is much easier to virtualize. In addition, the features of the ARM architecture are a nice fit for Xen-style paravirtualization. There are a few key differences in the x86 and ARM feature sets that require changes to the paravirtualization interface of the hypervisor. For instance, protecting the hypervisor from guest OSes must be done using page-level protection, because the ARM CPU does not support segmentation. Protecting guest OSes from the applications they host must also be done using paging, but is complicated by the fact that there are only two modes of operation on ARM CPUs compared to the four that are available on x86 hardware. There are also similarities between the two architectures that allow reuse of the x86 paravirtualization interface with little or no change. Paging is one such feature, because the TLBs on both the ARM and x86 are walked by hardware. For this reason we adopt the technique of validating guest OS pagetable updates in the hypervisor. Other parts of the paravirtualization interface that can be reused with little or no change are interrupt virtualization via lightweight events and device virtualization using events for notifications and asynchronous data rings for the transfer of data.

Chapter 4

Design and Implementation

Porting the Xen hypervisor from the x86 to the StrongARM architecture was difficult for a few reasons. Since Xen is a VMM that sits directly on top of the hardware, there is some architecture specific code for bootstrapping, hardware interrupt handling, device management, and low level domain handling that had to be ported to the StrongARM.
There is also some architecture independent code that contained assumptions about the underlying architecture that required changes. We have produced a version of the Xen hypervisor capable of loading a test operating system as Domain 0, servicing hypercalls from the domain, and delivering events to the domain. This chapter details the key issues and implementation de-cisions that were made in porting the Xen hypervisor to the StrongARM platform. Our prototype implementation is based on the Xen 1.2 codebase and it runs on the Digital Network Appliance Reference Design ( D N A R D ) 1 . The D N A R D makes for a good reference platform because of its simplicity and the fact that its C P U and memory specifications are comparable to that found in many low-power handheld devices. It utilizes the first generation StrongARM 1The DNARD units are also commonly referred to as "Sharks". 39 processor, the SA-110 C P U running at 233 MHz and has access to 32 M B of S D R A M . Our choice to use the Xen 1.2 codebase was born out of necessity. Since Xen 1.2 was released in 2003, activity surrounding its development has been intense. The current stable releases are Xen 2.0.7 and Xen 3.0. While it would have been desirable to work with the current Xen ,3.0 codebase this was not possible due to compiler compatibility restrictions. The Xen team is moving away from G C C versions 2.95.x; Xen 2.0 and 3.0 build under G C C versions 3.3.x, 3.4.x, 4.0 only. However, the A R M cross compiler of choice in the A R M Linux [arm05b] kernel community is G C C version 2.95.3. This is because the A R M code generation in the G C C 3.x series of compilers is less than perfect. In fact, we verified this by building various versions of the G C C toolchain for the A R M platform. To automate the task we used the crosstool toolchain generation script [Keg05]. Using the generated toolchains we attempted to build Linux for the D N A R D Shark [sha05]. 
We confirmed that only GCC version 2.95.3 built the Shark Linux kernel without issue. More recent GCC versions failed in nefarious ways. GCC 3.4.3 came closest to building a complete kernel but generated a segmentation fault during the last stage of the build, while compiling a portion of the Shark Linux boot code that interfaces with the DNARD firmware. Since we planned to use this code to bootstrap our prototype hypervisor, the decision was made to use the version of GCC which could correctly compile it: GCC 2.95.3. Thus, we were restricted to the only version of Xen that builds under GCC 2.95.3: Xen 1.2.

Despite the fact that we are basing our port on a version of the hypervisor that is nearly three years old, our work is still relevant, since it should not require major changes to pull into the Xen 2.0/3.0 world. This is because the changes to Xen from 1.2 to 2.0.x focus mainly on a redesign of the I/O architecture, which will not have a large effect on our work. Similarly, the changes from Xen 2.0 to 3.0 focus on support for other architectures in the hypervisor (i.e., x86/64 and IA64), SMP-capable guest OSes, and running unmodified guest OSes via the Intel VT-x and AMD Pacifica extensions [PFH+05]. The feature which has the potential to affect our work the most is support for SMP-capable guest OSes.

The following sections describe the details of our port. Section 4.1 details the changes necessary to architecture-specific portions of the code. Section 4.2 details the changes necessary to the paravirtualization architecture.

4.1 Differences in architecture specific code

There were many bits of architecture-specific code that needed to be ported in order to have a functioning hypervisor. Much of this was written in x86 assembly and needed to be ported to the equivalent in ARM assembly.

4.1.1 Hypervisor boot code

The Xen boot code is quite simple for x86. It is 266 lines of assembly (LOC) in xen/arch/i386/boot/boot.S.
It requires the GRUB boot loader [gru05] to load the hypervisor image file into the upper 64MB segment of virtual memory and additionally load any modules specified as arguments. Once loaded, the hypervisor boot code (written in x86 assembly) is free to perform CPU type checks, perform CPU initialization, set up the initial pagetable entries, start paging, initialize the BSS, copy all of the modules to a well-known address, and then jump to the cmain entry point. The only wrinkle is that the hypervisor's image file must be loaded into the upper 64MB region of virtual memory; a problem because the boot loader cannot access the high addresses directly. To work around this, the image file is modified after it is compiled by a tool named elf-reloc. This tool rewrites the segment offsets in the code to allow the load to succeed.

On the DNARD, things are slightly more complicated, as GRUB is not available and the ARM architecture does not support segmentation. Therefore, to load the hypervisor into the upper 64MB region of memory we needed to adopt a technique used in the Linux kernel. We wrap the compressed kernel image with a low memory bootstrapper that can relocate the kernel image to the upper 64MB region of memory. At boot time, the following steps are taken:

1. The DNARD uses TFTP to grab the image file.
2. Open firmware loads the image into memory and jumps to the start of the low memory boot loader.
3. The low memory boot loader unzips the compressed kernel image to the upper 64MB region and jumps to the start of the relocated image.
4. The hypervisor boot code takes control and initializes the hypervisor.

The hypervisor boot code on the Shark performs initialization similar to that on the x86, with the exception of processing kernel modules. Loading modules is important because the first module specified will be the guest OS to run on top of Xen as domain 0.
In Xen/x86, modules are loaded into memory by GRUB and copied by the Xen boot code to a well-known address where they can then be accessed by the hypervisor. On the DNARD, the boot loader does not have the ability to load modules, hence the hypervisor boot code would have to be modified to do so. To complicate matters, the Sharks have no fixed storage, so modules have to be obtained from network-accessible storage. We had the following choices for how to gain access to a module on the DNARD:

• Obtain modules the same way that diskless systems do when running Linux
• Link the modules directly into the Xen binary image

Diskless systems that run Linux obtain modules by mounting a network-accessible device as root using NFS. We could do something similar in Xen, but we would have to import all of the NFS functionality into the hypervisor. To simplify the implementation we chose to link the modules into the Xen kernel image file as data. Thus, once booted, the hypervisor reads the module information out of the image file without needing to go back to the network.

The hypervisor boot code for the DNARD spans two files. The low memory boot loader is found in xen/arch/arm/boot/compressed/head-shark.S and is 243 LOC. It was taken almost directly from the Linux kernel implementation with small modifications. The main boot code for the hypervisor is found in xen/arch/arm/boot/head-armv.S and is 603 LOC. A majority of the bulk comes from the hard-coded page table setup of the direct mapped region, which is discussed in the following section.

4.1.2 Hypervisor direct mapped region

Once the hypervisor boot code has initialized the processor, it creates initial mappings for the hypervisor's direct mapped region in the level 1 pagetable. After the mappings are created, the boot code can turn on paging and jump to the cmain entry point to continue start-of-day processing.
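The hard-coded level 1 entries mentioned above are ARM "section" descriptors, each mapping a 1MB region. As a hedged sketch (the macro name is ours, not the thesis code; the field layout is the ARMv4-era architecture's: bits [31:20] hold the section base, [11:10] the access permissions, [8:5] the domain, bit 4 is set, and bits [1:0] are 0b10 for a section), one such descriptor could be assembled like this:

```c
#include <assert.h>

/* Hypothetical helper: build an ARMv4 L1 "section" descriptor that
 * maps one 1MB region, uncached/unbuffered (C and B bits clear). */
#define SECTION_DESC(phys, ap, domain) \
    ( ((phys) & 0xFFF00000UL)   /* 1MB-aligned section base address */ \
    | ((ap) << 10)              /* access permission bits            */ \
    | ((domain) << 5)           /* protection domain                 */ \
    | 0x12UL )                  /* bit 4 set + 0b10 section marker   */
```

A boot-time loop could then fill one such entry per megabyte of the direct mapped region.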
Each of the areas in the direct mapped region is mapped into physical memory as shown in Figure 4.1. Note that the DNARD's physical memory is divided into four memory banks, each 8MB in size. Also note that the physical addresses of the banks are non-contiguous: Bank 0 starts at 0x08000000, Bank 1 at 0x0A000000, Bank 2 at 0x0C000000, and Bank 3 at 0x0E000000.

[Figure 4.1: Memory mappings for the direct mapped region of the hypervisor. The monitor occupies Bank 0; the read-write machine-to-physical mapping table and the frame table occupy the two halves of Bank 1; the remaining banks are left for domain memory.]

The virtual addresses for the different regions are shown in Figure 4.2. The direct mapped region consumes 16MB of physical memory. The first part of the region is an 8MB slice that is consumed by the monitor's code, data, and dynamic data structures. This is followed by 4MB for the read-write machine-to-physical mapping table. The final part of the direct mapped region is 4MB dedicated to the frame table. It should be noted that all of these mappings are mapped with read-write access in supervisor mode and no-access in user mode, providing protection from user space processes.

This memory layout leaves 16MB of free physical memory on the device for use by domains. However, it should be possible to optimize this memory layout even further. For example, the frame table is used to track the allocation of each machine page being used by VMs in the system. For mobile devices supporting small amounts of physical memory, the number of machine pages available will be small. Thus, we could reduce the size of the frame table accordingly (see Chapter 5 for an evaluation of how much smaller we could make the frame table on the Shark).

There were some restrictions on how we chose these memory mappings. For example, the hypervisor's page allocator claims some of the physical memory allocated to the bottom of the monitor region. The page allocator assumes that the physical pages it manages are contiguous in memory.
Thus, to be safe, we make sure the monitor region does not span multiple memory banks. In fact, the assumption that physical memory addresses are contiguous is present in more than one place in the hypervisor code.

4.1.3 Idle task

After the hypervisor has initialized the direct mapped memory region, turned on paging, and jumped to the cmain entry point, it initializes its data structures under the guise of the idle_task. The hypervisor uses a task to represent each of the domains that it can execute. It also keeps the idle_task as something that it can run when no other domain is capable of running. Each task is associated with its own pagetables and its own stack, among other things. Thus, a task has its own address space and its own thread of execution.

4.1.4 Frame table

The first data structure that the hypervisor initializes is the frame table. It is used to track the allocation and usage of each machine page being used by VMs in the system. The discontiguous physical memory of the DNARD (described above) caused the x86 frame table initialization and management code to break. Essentially, the frame table is a linked list of pfn_info nodes. Each node in the list corresponds to a physical page of memory and contains various information about how the physical page is being used by a domain. In Xen/x86 it is assumed that the frame table follows directly behind the monitor and machine-to-physical mapping table sections in physical memory. In addition, it is assumed that the physical memory region following the frame table -that to which the frame table nodes correspond- is also contiguous. Thus, the issue is that the list of frames can only track the use of a contiguous range of physical memory. We patched this up by hard coding the frame table to allocate memory from physical memory bank 2. Thus, whenever a physical page is allocated from the table we can determine the physical address by performing the following calculation.
phys_addr = frame_num * page_size + phys_offset

Where 'page_size' is 4096 (the size of one page) and 'phys_offset' is 0x0C000000 (the starting address of physical memory bank 2). The downside of this workaround is that the last 8MB of memory is inaccessible by the system (guest OSes included). However, 8MB is more than enough for us to load a simple test OS as a proof of concept. A description of how the frame table could be modified to support discontiguous physical memory can be found in the discussion of future work in Section 6.2. Our modifications to the frame table amount to 5 LOC being added to xen/common/domain.c.

4.1.5 Hypervisor address space layout

During hypervisor initialization, the memory mappings for the rest of the hypervisor's virtual address space are added to the hypervisor's page table in the paging_init function. The address space needed to be modified to make use of the limited memory available on the DNARD. The layout implemented is shown in Figure 4.2. The read-only machine-to-physical mapping table occupies the first 4MB of virtual space. It is mapped with supervisor read-write and user read-only permissions to the same chunk of physical memory as the read-write machine-to-physical mapping table. The direct mapped region follows and is as we specified earlier. The next part of the address space is used for the linear page table mapping. It is nothing more than an alias from which to access the hypervisor's L1 pagetable. The next 1MB section is used for per-domain mappings and is only used by domains, thus the hypervisor leaves the mapping for this region empty. The penultimate 1MB is used for temporarily mapping a physical page allocated to a domain into the hypervisor's address space and is dubbed the map cache. The memory that this region maps to is allocated by the hypervisor's page allocator. The final 1MB of the address space is reserved for ioremap().
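The frame-number calculation above can be sketched directly in C (constants named as in the text; 0x0C000000 is the start of physical memory bank 2 per Figure 4.1):

```c
#include <assert.h>

#define PAGE_SIZE   4096UL        /* the size of one page */
#define PHYS_OFFSET 0x0C000000UL  /* start of physical memory bank 2 */

/* Translate a frame table index into the physical address of the page,
 * valid only while frames are hard coded to come from bank 2. */
static unsigned long frame_to_phys(unsigned long frame_num)
{
    return frame_num * PAGE_SIZE + PHYS_OFFSET;
}
```

With 8MB in the bank, frame numbers 0 through 2047 are valid; frame 2048 would fall off the end of bank 2, which is why the last bank is unreachable under this workaround.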
Again, the memory that this region maps to is allocated by the hypervisor's page allocator. As such it resides somewhere inside the range of physical addresses in the monitor direct mapped region managed by the page allocator.

[Figure 4.2: Hypervisor's virtual memory layout in the upper 64MB region, starting at 0xFC000000: the read-only machine-to-physical mapping table (4MB, from 0xFC000000), the monitor (8MB, from 0xFC400000), the read-write machine-to-physical mapping table (4MB), the frame table (4MB), the linear page table mapping, the per-domain mappings (1MB), the map cache (1MB), and the ioremap region (1MB).]

4.1.6 Trap handling

After finalizing its address space, the hypervisor begins initializing other data structures. The first of these is the hardware trap table. The physical memory for the table is allocated by the hypervisor's page allocator and mapped to virtual address 0x0. On the SA-110, the trap table must be located at 0x0 and is not relocatable².

²More recent versions of the StrongARM support trap table relocation; the trap table is not restricted to living at virtual address 0x0.

The trap table contains entries for handlers for undefined instructions, software interrupts (used for hypercalls), hardware interrupts (IRQs), fast hardware interrupts (FIQs), prefetch aborts, data aborts, and address exceptions. We use the ARM Linux code to create the trap table but modify the entries to point to our own handlers. The trap table entry for software interrupts points to the code that handles hypercalls. The entry that handles prefetch aborts deals with faults when fetching code, while the entry that handles data aborts deals with faults when accessing data. The undefined instruction abort is executed when the processor attempts to execute part of memory that does not contain a valid ARM instruction.
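The handler entries described above live at the architecture's fixed vector slots. As a reference sketch (these offsets are the standard ARM exception vector layout, not code from the thesis), the table at 0x0 looks like this:

```c
#include <assert.h>

/* Standard ARM exception vector offsets from the table base (0x0 on
 * the SA-110).  Each slot holds a single instruction, typically a
 * load or branch to the real handler. */
enum arm_vector_offset {
    VEC_RESET        = 0x00,
    VEC_UNDEF_INSN   = 0x04,  /* undefined instruction abort      */
    VEC_SWI          = 0x08,  /* software interrupt -> hypercalls */
    VEC_PREFETCH_ABT = 0x0C,  /* fault while fetching code        */
    VEC_DATA_ABT     = 0x10,  /* fault while accessing data       */
    VEC_ADDR_EXCEPT  = 0x14,  /* 26-bit-mode remnant, unused here */
    VEC_IRQ          = 0x18,  /* hardware interrupts              */
    VEC_FIQ          = 0x1C,  /* fast interrupts, no handler used */
};
```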
In each of the handlers we can determine whether the fault occurred in the hypervisor or in a domain by checking the mode bits in the saved CPSR (current processor status register) and the address in the PC (program counter)³. Once this is done we can take the appropriate action: forwarding the fault to the offending guest OS in the form of an event, terminating the guest OS, or executing a panic in the hypervisor.

³Xen/x86 determines whether the fault occurred inside the hypervisor by checking the CS (code segment) register.

The trap table entries that do not get used are the ones for FIQs and address exceptions. The hypervisor does not install any FIQ handlers, and address exceptions never occur in 32-bit operating mode; they are a remnant of the old 26-bit operating mode.

The x86 implementation of trap handling and initialization is 822 LOC in xen/arch/i386/traps.c, which is quite large due to a fair amount of inline assembly code. The ARM implementation of trap handling is 361 LOC in xen/arch/arm/traps.c, with all of the assembly pushed into xen/arch/arm/entry-armv.S, which is 712 LOC.

4.1.7 Real time clock

The real time clock is the DNARD's only method of measuring the passing of time and the only device that our prototype makes use of on the system. We use code from the ARM Linux RTC driver, along with the code needed to install the interrupt handler for the device on IRQ 8, to initialize the device. After initialization, we program the real time clock to tick at a frequency of 128Hz, or once every 7.8ms. The RTC tick handler does the following:

• reads the RTC's interrupt status register to acknowledge the interrupt.
• updates the hypervisor's notion of wall-clock time.
• updates the system time (number of milliseconds since boot).
• raises a softirq corresponding to the accurate timers.

We discuss why raising the softirq is necessary in the following section. The clock initialization and clock tick handler are found in xen/arch/arm/time
which is 441 LOC. Time management code on x86 can be found in xen/arch/i386/time and is 394 LOC.

4.1.8 Scheduler

The functionality of the hypervisor scheduler depends on the use of accurate timers, which in turn depend upon the use of the APIC timer on x86 systems. However, the DNARD does not have an APIC timer. The solution we implemented was to periodically fire the softirq corresponding to the accurate timers in an effort to poll for its expiration time. As mentioned above, we use the real time clock hardware interrupt handler to post the softirq for the timer.

One wrinkle with raising the softirq from the interrupt handler is that it uses the architecture-dependent and atomic set_bit() routine to set the corresponding flag in the hypervisor's softirq status word. On x86, set_bit() uses the BTSL instruction to atomically set the bit. There is no corresponding instruction on ARM. On ARM, set_bit() must load the required word from memory into a register, set the bit in the register, then store the result back to memory. To make this routine atomic, interrupts would need to be disabled at the beginning and re-enabled at the end of the routine. The code which disables interrupts assumes that the processor is in supervisor mode. This assumption is false in the real time clock tick handler because the processor is in IRQ mode when handling an interrupt. Thus, when interrupts are re-enabled at the end of set_bit(), the processor will no longer be in the correct mode. Things get worse when the interrupt handler exits and the system attempts to switch back to where it was interrupted, assuming that it is currently in IRQ mode. Chaos ensues.

To solve the problems that using the architecture-specific set_bit() routine creates, we simply set the bit with regular C code. This does not introduce race conditions since IRQs are already disabled when executing the handler.
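The plain-C replacement for set_bit() can be sketched as follows (the function name is ours; the point is that an ordinary load-modify-store is sufficient because IRQs are already disabled in the tick handler, so nothing can race on the status word):

```c
#include <assert.h>

/* Non-atomic bit set: safe in the RTC tick handler only because
 * interrupts are already disabled while it runs. */
static void set_bit_plain(int nr, unsigned long *word)
{
    *word |= 1UL << nr;   /* load, modify, store -- not atomic */
}
```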
As a rule of thumb, interrupt handlers should not want or need to play with the CPSR to change the mode of the processor. The only thing that can preempt an IRQ on an ARM system is an FIQ (fast interrupt).

4.1.9 Task management code

Once traps, timers, and the scheduler are initialized, the hypervisor prepares to load a guest OS into domain 0. Before we can describe how this is done, we must first understand how the hypervisor manages its tasks. In Xen, a task corresponds to a domain. The task management code in the hypervisor is used to manage the state of the currently executing task. This state is contained on the hypervisor's stack and is accessed by the scheduler during context switches and by the hypervisor entry code during hypercalls and event delivery. Both of these pieces of code are performance critical and must be as efficient as possible. Thus, the task management routines are written in assembly language and had to be ported to run on the ARM.

Figure 4.3 shows the layout of the hypervisor's stack. The address of the structure describing the state of the current task is stored at the top. Below this, the state of the currently executing task is saved. As the hypervisor executes, the stack grows down.

[Figure 4.3: Memory layout of the hypervisor's stack. The current task_struct's address sits in the top word of the stack page (0xFC427FFC, just below the page top at 0xFC428000); the current task's saved execution context lies below it, down to 0xFC427FB4; the stack grows down toward 0xFC427000.]

The five routines that manage this information are written in ARM assembly and are shown in Figure 4.4. Converting these from their x86 equivalents was straightforward. The only difference was that the execution context for a task in Xen/ARM contained one register more than in Xen/x86. Thus, the size of the execution context grew by four bytes to account for the extra word of information. It should also be noted that these routines are presented in a simplified form.
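The stack-top trick these routines rely on can be rendered in C for clarity (a sketch under the assumption, stated in the text, that the hypervisor stack is exactly one 4KB page with the current task pointer stored in its last word):

```c
#include <assert.h>
#include <stdint.h>

#define STACK_SIZE 4096UL   /* hypervisor stack is one page */

/* Round any stack pointer within the page up to the last word of that
 * page, which holds the pointer to the current task_struct.  This is
 * what the get_current/set_current assembly computes with
 * orr/and on the stack pointer. */
static uintptr_t stack_top(uintptr_t sp)
{
    return (sp | (STACK_SIZE - 1)) & ~(uintptr_t)3;
}
```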
For example, the schedule_tail routine has the capability of continuing either a non-idle task (i.e., a domain), or the idle_task if there is no domain that can be executed. The task management routines for ARM are found in include/asm-arm/current.h which is 89 LOC. The corresponding routines for x86 are found in include/asm-i386/current. which is 46 LOC.

4.1.10 Domain execution context definition

The execution context definition is architecture dependent because it depends upon the register set available on the system being paravirtualized. The execution context for ARM is shown in Figure 4.5. The field _unused is used internally by the hypervisor during the handling of hypercalls; there is an equivalent field in the x86 execution context definition. As stated earlier, the execution context for the ARM processor contains one more register than the execution context of the x86 processor. The execution context definition is found in include/hypervisor-ifs/hypervisor-if where roughly 20 LOC were modified.

    get_stack_top:
        mov   r0, #4092
        orr   r0, sp, r0
        and   r0, r0, #~3

    get_current:
        mov   r0, #4092        @ hyp stack is one page
        orr   r0, sp, r0       @ top of stack
        and   r0, r0, #~3      @ round to word boundary
        ldr   r0, [r0]         @ addr of cur_task <- stack_top

    set_current:
        mov   r0, #4092
        orr   r0, sp, r0
        and   r0, r0, #~3
        str   r1, [r0]         @ addr of cur_task -> stack_top

    get_execution_context:
        mov   r1, #~4095
        mov   r2, #4096-76
        and   r1, sp, r1
        add   r1, r1, r2
        mov   r0, r1           @ r0 <- addr of exec ctxt

    schedule_tail:
        adr   r1, continue_nonidle_task
        mov   r0, #~4095
        mov   r2, #4096-76
        and   r0, sp, r0
        add   r0, r0, r2
        mov   sp, r0           @ setup stack pointer
        mov   pc, r1           @ jump to continue_nonidle_task

Figure 4.4: Assembly language routines for managing the hypervisor's stack.

    typedef struct {
        unsigned long r0;      /* working regs */
        unsigned long r1;
        unsigned long r2;
        unsigned long r3;
        unsigned long r4;
        unsigned long r5;
        unsigned long r6;
        unsigned long r7;
        unsigned long r8;
        unsigned long r9;
        unsigned long r10;
        unsigned long fp;      /* r11, frame ptr */
        unsigned long ip;      /* r12, intra-proc call scratch */
        unsigned long sp;      /* r13, stack ptr */
        unsigned long lr;      /* r14, link reg */
        unsigned long pc;      /* r15, prog cntr */
        unsigned long psr;     /* status reg (mode bits/flags) */
        unsigned long _unused;
    } execution_context_t;

Figure 4.5: Execution context definition for tasks managed by the hypervisor.

    void switch_to( task_struct *prev, task_struct *next )
    {
        /* get current task's context */
        execution_context_t *ec = get_execution_context();

        /* save prev state to task_struct */
        memcpy( prev->ec, ec, sizeof(*ec) );

        /* restore next state to hyp stack */
        memcpy( ec, next->ec, sizeof(*ec) );

        /* flush I cache and TLB; clean D cache;
         * switch page tables */
        write_pgdbase( next->mm.pagetable );

        /* hyp stack top contains ref to cur task */
        set_current( next );
    }

Figure 4.6: Pseudo code for context switching in the hypervisor.

4.1.11 Context switch code

The hypervisor makes use of the task management code and the execution context definition when switching between tasks. The task switching happens in the switch_to() method, pseudo code for which is shown in Figure 4.6. First the execution context of the current task is retrieved from the hypervisor's stack and saved to the task struct representing it.
The execution context for the next task to be scheduled is then placed on the hypervisor's stack, and the pagetables for the newly scheduled task are installed. When this operation takes place, the instruction cache and TLB on the SA-110 processor are flushed; its data cache is cleaned as well. Finally, the address of the task struct being scheduled for execution is placed on the hypervisor's stack using set_current().

The context switch code lives in xen/arch/arm/process.c, which is 312 LOC, compared to the corresponding code for x86, which is 296 LOC.

4.1.12 Domain building and loading code

The domain loading code handles the details of loading a domain into its own, isolated paravirtualized environment. The code for building domain 0 is found in the do_createdomain() function. The steps taken are outlined below and remain similar to those taken in the x86 version of Xen:

    task_struct* do_createdomain( int dom_id )
    {
        allocate a task structure and fill in its domain id;
        allocate a page for the shared_info portion of the task;
        mark the shared_info page as shared between the hypervisor
            and the domain in the frame table;
        allocate a page for the domain's per domain page tables;
        add the task_struct to the scheduler's list of runnables;
        initialize the domain's list of allocated physical pages to 0;
        map dom_id to the task_struct ptr in the task_hash map;
        return the task_struct ptr;
    }

This code is quite straightforward. The one thing that might not be entirely obvious is the use of the task_hash map. It is used so that the hypervisor can find the task structure representing a domain in constant time given its domain id. This is useful for performing operations on domains, such as killing them. It is also useful for finding the domain for which data on a pseudo network device is destined.
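The constant-time lookup that the task_hash map provides can be illustrated with a small chained hash table (an illustrative sketch; the structure layout, bucket count, and function names here are our assumptions, not Xen's actual code):

```c
#include <assert.h>
#include <stddef.h>

#define TASK_HASH_SIZE 32

struct task_struct {
    int domain_id;
    struct task_struct *next;  /* chain for colliding domain ids */
};

static struct task_struct *task_hash[TASK_HASH_SIZE];

/* Record a newly built domain so it can be found by id later. */
static void task_hash_insert(struct task_struct *tsk)
{
    unsigned b = (unsigned)tsk->domain_id % TASK_HASH_SIZE;
    tsk->next = task_hash[b];
    task_hash[b] = tsk;
}

/* O(1) expected lookup used when killing a domain or routing data
 * arriving on a pseudo device. */
static struct task_struct *find_domain_by_id(int dom_id)
{
    struct task_struct *t = task_hash[(unsigned)dom_id % TASK_HASH_SIZE];
    while (t && t->domain_id != dom_id)
        t = t->next;
    return t;                  /* NULL if no such domain */
}
```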
The code for loading domain 0 is found in the setup_guestos() function. The steps taken are outlined below and remain similar to those taken in the x86 version of Xen:

    int setup_guestos( task_struct* tsk )
    {
        if ( specified domain to be loaded is not 0 )
            return error;
        if ( specified task_struct already in 'constructed' state )
            return error;
        if ( first 8 bytes of guestOS image != 'XenoGues' )
            return error;

        /* allocate pages in the frame table for the domain */
        if ( !(tsk->phys_page_list = alloc_new_domain_mem()) )
            return error;  /* not enough free mem */

        /* build the page tables for the VM */
        allocate a phys page from tsk->phys_page_list to be the domain's L1 pt;
        map the phys page into hypervisor's address space using the map cache;
        create entries in the L1 pt for the perdomain_pt and linear_pt regions;
        copy the hypervisor's entries into the guestOS's L1 pt,
            make them read-only;
        zero out all domain entries ( all entries below upper 64MB );
        for ( each of the pages allocated to the domain ) {
            create L2 pt entries starting at virt_load_start (0xC0000000);
            this involves allocating pages for the L2 tables;
            update page's usage flags and ref counts in the frame table;
            mark the page number down in the machine_to_phys mapping table;
        }

        /* now mark the pages used as page tables read-only */
        for ( all pages used as L2 page tables ) {
            modify pgtbl entry's permissions to be read-only by the VM;
            update page's usage flags in the frame table to be
                of type 'pagetable';
        }

        setup shared_info area for the domain, resetting virtual time to 0;
        unmap the domain's L1 pagetable from the map cache;
        install the guestOS pagetables;
        copy guest OS image to virt_load_start (0xC0000000);
        setup start_info area with number of pages, the address of the
            shared_info struct, the page table base address, the domain id
            and its flags;
        reinstate hypervisor's page tables;
        mark the task_struct associated with the domain as 'constructed';
        create a new thread for the task_struct;
        return success;
    }

One thing that we removed was the creation of virtual network interfaces and their addition to the start_info structure. The network VIFs have not been ported to the ARM due to time constraints, and as such we cannot include them in the domain loading. For the same reason we have not given Domain 0 access to any of the real block devices via the virtual block device interface.

Another major difference in the domain loader code is that the flag values for page protection used in the page tables are different on ARM than they are on x86. In addition, the L1 and L2 pagetables are different in structure on ARM compared to x86. For example, the x86 L1 pagetable contains 1024 entries, each mapping a 4MB region, whereas the ARM L1 pagetable contains 4096 entries, each mapping a 1MB region. The x86 L2 pagetables contain 1024 entries, each mapping a 4KB region, whereas the ARM L2 pagetables contain 256 entries, each mapping a 4KB region. We had to be aware of these details because the physical pages were allocated to the domain using the frame table and are each 4KB in size. Thus, we needed to use the L2 pagetables to access memory at the correct granularity. Other things the x86 version does, such as storing the code segment selector for the event handler in the domain's task_struct, are not needed.
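The pagetable geometry figures above can be sanity-checked arithmetically: each L1 table must cover the full 4GB address space, and each L2 table covers exactly one L1 slot. A sketch using only the entry counts stated in the text:

```c
#include <assert.h>

#define KB 1024ULL
#define MB (1024ULL * 1024ULL)

#define X86_L1_ENTRIES 1024ULL  /* each maps 4MB */
#define ARM_L1_ENTRIES 4096ULL  /* each maps 1MB */
#define X86_L2_ENTRIES 1024ULL  /* each maps 4KB */
#define ARM_L2_ENTRIES  256ULL  /* each maps 4KB */
```

Both L1 layouts multiply out to 4GB, and an ARM L2 table (256 x 4KB = 1MB) fills one ARM L1 slot, just as an x86 L2 table (1024 x 4KB = 4MB) fills one x86 L1 slot.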
Finally, in the x86 version the guestOS image needed to be mapped into the hypervisor's address space using the map cache before the verification of the guest OS image could take place. In our version, the guest OS is linked into the hypervisor image and is already mapped as part of the monitor region at boot time, thus we don't have to do any extra mapping via the map cache to gain access to it.

Currently, additional domains are not supported. Further porting is required before loading additional domains is possible. First, the user space domain builder would need to be built into the guest OS in domain 0. Also, the final_setup_guestos() function needs to be ported. It is responsible for loading domains built by the domain builder into their own paravirtualized environment.

The code for domain loading resides in xen/common/domain.c, where we modified 34 LOC to build pagetables specific to the ARM processor. Roughly 30 LOC were removed that had to do with initializing virtual block devices for the newly created domain.

4.1.13 Hypervisor entry code

The hypervisor entry code deals with low-level details that occur when moving between the hypervisor and a domain. This includes dispatching event callbacks and handling hypercalls. In Xen/x86 it is written in roughly 700 lines of x86 assembly language and needed to be ported to its ARM equivalent. The following is a pseudo code summary of the hypervisor entry code.

    continue_nonidle_task:
        tsk = get_current_task();
        goto test_all_events;

    /* execute a list of 'nr_calls' hypercalls pointed at by 'call_list' */
    do_multicall:
        for (i=0; i < nr_calls; i++) {
            hypervisor_call_table[ call_list[i].op ]( call_list[i].args[0],
                                                      call_list[i].args[1],
                                                      ... );
        }
        goto ret_from_hypervisor_call;

    restore_all_guest:
        /* pop the task's frozen state off the hypervisor stack.
        ** restore program counter and flags in one instruction
        ** to jump back to user mode. */
        movs pc, lr    /* similar to iret in x86 assembly */

    test_all_events:
        /* test for pending softirqs for this CPU */
        cpuid = tsk.processor;
        if ( softirq_pending( irq_stat[cpuid] ) ) {
            do_softirqs();
            goto test_all_events;
        }
        /* test for pending hypervisor events for this task */
        if ( hypevent_pending( tsk.hyp_events ) ) {
            do_hyp_events();
            goto test_all_events;
        }
        /* test for pending guest events for this task */
        if ( !guestevent_pending( tsk.shared_info.events ) )
            goto restore_all_guest;
        /* process the guest events */
        guest_trap_bounce[cpuid].pc = tsk.event_callback_addr;
        create_bounce_frame();
        goto restore_all_guest;

    create_bounce_frame:
        /* construct a complete stack frame that is a copy of the
        ** most recent saved task frame and call it new_stack_frame. */
        hyp_stack[sp] = new_stack_frame;
        hyp_stack[pc] = guest_trap_bounce[cpuid].pc;
        return;

    hypervisor_call:
        /* get the hypercall number */
        hcno = get_hypercall_no();
        /* get the hypercall arguments 'arg1', 'arg2', etc.
        ** then call the function through the table of function pointers. */
        hypervisor_call_table[hcno]( arg1, arg2, ... );

    ret_from_hypervisor_call:
        goto test_all_events;

There are different ways of entering the hypervisor entry code. The first is through the hypervisor scheduler. Whenever a task is scheduled to be executed, the Xen scheduler calls the schedule_tail function discussed above. This function drops into the top of the entry code at continue_nonidle_task. The second way to gain entry is through a guest OS making a hypervisor call. Hypervisor calls are made through software interrupts initiated via the SWI instruction.
The portion of the hypervisor that handles software interrupts does architecture specific processing such as saving the caller's state on the stack and zeroing out the frame pointer register before jumping to the hypervisor_call label. Returning from the hypercall happens at the restore_all_guest label, where the saved state of the guest OS is popped off of the hypervisor's stack and user mode execution continues.

The mechanics of delivering an event are unique as well. Events are the method by which interrupts are paravirtualized; thus, an event can interrupt the execution of the guest OS at any time. In order to achieve this, the hypervisor is clever about how it constructs exception frames before delivering the event. First, a clone of the guest OS's saved state is created in the create_bounce_frame function. This cloned frame contains the state of the guest OS at the time it was interrupted, including the working registers, stack pointer, and program counter. Next, the hypervisor ties the new frame to the current stack frame by modifying the current frame's stack pointer to point to the new frame. In addition, the hypervisor modifies the current frame's program counter to be the address of the event callback handler in the guest OS. Once this is done, the event can be delivered and execution can be transferred to the guest OS via restore_all_guest, which pops the current frame off of the stack. When this occurs, execution is transferred to the guest OS when the program counter is loaded with the address of its event handler. In addition, the stack pointer is loaded with the address of the cloned frame that was created in create_bounce_frame. Next, the event handler executes. When execution is complete, the event handler pops the cloned frame off of the stack and jumps directly to the spot where the guest OS was interrupted at the time the event occurred.
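The frame manipulation just described can be sketched in plain C. This is an illustrative model under a simplified saved-frame layout (the real frame holds the ARM working registers, banked sp, and pc, and lives on the hypervisor stack); the function name follows the create_bounce_frame label from the pseudo code.

```c
#include <stdint.h>

/* Simplified saved-task frame: working registers r0-r12 plus the
 * stack pointer and program counter (layout is illustrative only). */
typedef struct frame {
    uint32_t  regs[13];
    uintptr_t sp;
    uintptr_t pc;
} frame_t;

/* Clone the interrupted guest's frame, then redirect the current frame
 * so that popping it (restore_all_guest) enters the guest's event
 * callback with sp pointing at the clone. Popping the clone later
 * resumes the guest exactly where it was interrupted. */
static void create_bounce_frame(frame_t *current, frame_t *clone,
                                uintptr_t callback_addr)
{
    *clone = *current;              /* snapshot of interrupted state */
    current->pc = callback_addr;    /* resume at the event handler   */
    current->sp = (uintptr_t)clone; /* handler pops clone to return  */
}
```

The key design point is that no hypercall is needed on the return path: the handler's final frame pop carries execution straight back to the interrupted guest code.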
The main benefit of using the bounce frame to deliver events is that it cuts down on context switches between user and supervisor modes. Using the bounce frame, execution switches from the hypervisor to the guest OS event handling code and then directly to the spot where the guest OS was interrupted when the event occurred. An alternative approach would be to have the hypervisor create an event handler frame but not tie it to the current frame. Then, after the event is handled, the guest OS would have to issue a software interrupt (perhaps in the form of a special hypercall) to return to the hypervisor, which would then pop the current frame off the stack to return to the place where the guest OS was interrupted.

The do_multicall entry is actually the body of the hypervisor_multicall hypercall. It provides a way for a guest OS to execute a batch of hypercalls to reduce the overhead associated with entering and leaving the hypervisor on each call.

The hypervisor entry code resides in xen/arch/i386/entry.S for x86 and is 703 LOC in size. The entry code for the ARM hypervisor is in xen/arch/arm/entry-common and is 753 LOC in size.

4.2 Differences in paravirtualizing the architecture

There are a few major differences in the paravirtualized interface presented by the ARM compared to that of the x86, which we discussed in Chapter 3. In this section we give more details on the implementation.

4.2.1 Protecting the hypervisor from guest OSes

Guest OSes run in user mode while the hypervisor runs in supervisor mode. In order to make moving between the guest OS and the hypervisor possible without switching pagetables and flushing the TLB, the guest OS pagetables map the hypervisor into the top 64MB of the address space. This is important for making frequently executed operations such as hypercalls and event delivery efficient.
However, the hypervisor mappings must not be accessible from the guest OS, so we protect them with page-level protection as shown in Figure 4.7. Note that the hypervisor mappings are only accessible in supervisor mode while the guest OS mappings are accessible in both user and supervisor mode. Hypercalls use software interrupts to enter supervisor mode before jumping into the hypervisor. No page table switch is necessary since the hypervisor's memory mappings are included in the guest OS's pagetables.

4.2.2 Protecting the guest OSes from applications

This functionality is not currently implemented as we do not support applications. However, we outline a possible implementation here because of the importance of application support.

[Figure: Xen mapped svc[rw] usr[na]; Kernel mapped svc[rw] usr[rw]]
Figure 4.7: Hypervisor protects itself from guest OSes using page level protection.

As stated in Chapter 3, both guest OSes and applications run in user mode. For this reason we cannot use the access permission bits in their page mappings to protect a guest OS from its applications. Thus, each must have its own set of page tables, and a pagetable switch along with accompanying TLB and cache flushes must occur when switching context between the two. The guest OS pagetables will contain mappings for itself and its applications as shown in Figure 4.8. Application pagetables hosted by guest OSes contain mappings for themselves and not for guest OSes as shown in Figure 4.9. This prevents applications from accessing guest OS memory. As shown above, all pagetables contain mappings for the hypervisor that are only accessible through supervisor mode.

[Figure: Xen mapped svc[rw] usr[na]; Kernel mapped svc[rw] usr[rw]; Application mapped svc[rw] usr[rw]]
Figure 4.8: Guest OSes have access to application memory as well as their own.

[Figure: Xen mapped svc[rw] usr[na]; Application mapped svc[rw] usr[rw]]
Figure 4.9: Applications have access to their own memory.

System calls

To make a system call, program control must be transferred from the application to the guest OS. Along with the transfer of control, the addressing context will have to change from the application's context to the guest OS's context. Thus, the system call must enter the guest OS indirectly through the hypervisor, where the page table mappings can be switched.

One possible implementation could be similar to that used in the x86/64 implementation of Xen. Two extra hypercalls could be added to the hypervisor, named syscall and sysreturn. The syscall hypercall could take a call number and an array of arguments as parameters. Applications would use this hypercall to specify what system call they would like to execute. When handling syscall, the hypervisor could switch context from the application's pagetables to the guest OS's pagetables before bouncing off of a trampoline into the guest OS at the specified system call entry point with the specified arguments. The trampoline could be built at guest OS initialization time, with the domain registering the addresses of system call handlers with the hypervisor. Once the execution of the system call is complete the guest OS has to return control to the hypervisor. This could be done using another hypercall, sysreturn. When handling sysreturn, the hypervisor would switch context from the guest OS's pagetables back to the application's pagetables before returning control back to the application.

The problem with the above implementation is that applications would have to be modified to use the new hypercall before they could run on the hypervisor. Another solution is that the hypervisor could keep track of what is executing in the domain, OS or application. Then an application would be able to make system calls using the regular system call numbers, and the hypervisor could check upon receiving a software interrupt whether it is a system call or a hypercall by first determining what issued it. If the software interrupt was issued by an application then the hypervisor could translate it into the appropriate system call and deliver it to the guest OS via a trampoline as discussed earlier. If the software interrupt was issued by a guest OS then the hypervisor could execute the corresponding hypercall as it normally would.

All of the context switch overhead that occurs in the above schemes during a system call is costly. As we mentioned in Chapter 3, the cost of context switching could be reduced from a pagetable switch and TLB flush to a simple reload of the Domain Access Control Register if we use the techniques for Fast Address Space Switching on the ARM. Another way to reduce the overhead would be to allow applications to batch system calls. The hypervisor already provides the functionality to batch hypercalls using the hypervisor_multicall hypercall. Applications could execute hypervisor_multicall with a list of syscall operations as arguments. This is a simple way of amortizing the cost of system calls, but the applications would have to be modified to make use of the multicall functionality.

4.3 Summary of Implemented Functionality

In this section we present a brief summary of the features that are implemented in the hypervisor, as well as a listing of the features that are not supported.

Supported Features

The features that the hypervisor supports are:

• hypervisor boot on the DNARD.
• output via the serial console.
• real time clock hardware initialization.
• software interrupts and accurate timers.
• notion of virtual and wallclock time.
• hypervisor events used to asynchronously pass messages to the hypervisor.
• guest events used to virtualize interrupts.
• hosting a single guest OS.
• isolation of the hypervisor from the hosted guest OS using page level protection in conjunction with the two physical execution modes on the ARM CPU.

Unsupported Features

The features that the hypervisor does not support are:

• frame table management of discontiguous physical memory.
• hypervisor heap allocator management of discontiguous physical memory.
• isolation of guest OSes from their hosted applications using page level protection in conjunction with the two physical execution modes on the ARM CPU plus a virtual execution mode in the hypervisor.
• hosting more than one guest OS.
• ethernet hardware initialization.
• virtual block devices.
• virtual network interfaces.
• transferring data between the hypervisor and guest OSes using Xen's asynchronous I/O rings.

Chapter 5

Performance

This chapter presents performance measurements of our port of the Xen hypervisor for the ARM platform. We evaluate the performance of our system by micro-benchmarking four of its operations: handling hypercalls, handling batched hypercalls, delivering events, and loading a domain. We also compare the memory consumption of our prototype to the memory consumption of the Xen 1.2 hypervisor for x86.

5.1 Minimal OS

In order to measure the performance of our hypervisor we needed a guest operating system to exercise the operations it supports. The Xen 1.2 codebase contains such a test operating system named Mini-OS. We ported Mini-OS to our prototype hypervisor for the ARM; it demonstrates how a guest OS performs the following actions:

• parsing the start_info struct at boot time.
• registering virtual interrupt handlers for timer interrupts.
• handling asynchronous events.
• enabling and disabling asynchronous events.

It also includes a simple page and memory allocator as well as minimal libc support. The port was straightforward. For the most part, once the code compiled it ran correctly.

The biggest challenge was converting the architecture dependent boot and entry code for the Mini-OS from x86 assembly to ARM assembly. The entry code was the most complicated as it handles asynchronous event callbacks from the hypervisor.

5.2 Experimental Setup

The experiments were run on a single DNARD Shark device with a 233MHz SA-110 StrongARM processor and 32MB of memory [Dig97]. For each of the micro-benchmarks the Mini-OS was loaded as Domain 0 on top of the hypervisor.

5.2.1 Micro-benchmarks

For each of the operations below we modified the hypervisor so that the cost of entering the scheduler would not affect our results. To achieve this, we modified the RTC interrupt handler in the hypervisor to only update its notion of the system time and the timing information in the shared_info struct. The RTC interrupt handler does not post the softirq that triggers the scheduler. Once the modifications were made, we used the hypervisor's notion of the system time to measure how long a particular operation would take. However, because the RTC interrupt only fires at a frequency of 128Hz as dictated by the real time clock on the DNARD, we could only measure time at a roughly 7ms granularity. Thus we could not accurately measure the time it took to perform a single operation. Instead, we measured the time it took to perform 10^6 repeated operations and calculated the average time per operation.

Hypercalls

In order to measure the time consumed while making a hypercall we needed to issue hypercalls that did not spend time doing any hypercall-specific processing. Fortunately there is already such a hypercall, dubbed the no-op syscall; its body is shown below:

asmlinkage long sys_ni_syscall(void)
{
    return -ENOSYS;
}

We modified the Mini-OS code to call the no-op syscall repeatedly to calculate the average time it took to perform a single hypercall.
The code to do this is shown below:

static const int nr_calls = 1000000;
int i = 0;

HYPERVISOR_print_systime();
for (i = 0; i < nr_calls; i++) {
    HYPERVISOR_noop();
}
HYPERVISOR_print_systime();

We added the HYPERVISOR_print_systime hypercall to the hypervisor and use it to display the system time before and after a batch of operations takes place. (The hypervisor stores system time as an unsigned 64-bit integer that contains the number of nanoseconds that have passed since the hypervisor was booted.) Its body is shown below:

asmlinkage long do_print_systime(void)
{
    s_time_t t = get_s_time();
    printk("!!! system_time=%llu", t);
    return 0;
}

The get_s_time function is only accessible from inside the hypervisor. It returns the hypervisor's notion of the system time.

Batched Hypercalls

We took a similar approach to measuring the performance of hypercall batching. The code to perform a large number of hypercalls in a batch and measure the time it took is shown below:

static multicall_entry_t calls[1];

calls[0].op = __HYPERVISOR_noop;

HYPERVISOR_print_systime();
HYPERVISOR_multicall(calls, nr_calls);
HYPERVISOR_print_systime();

It should be noted that in order to perform such a large number of batched hypercalls we needed to hack the way HYPERVISOR_multicall works. Notice that the number of multicall_entry structs declared is 1, where it would usually equal the number of hypercalls to be batched. On the DNARD, it is not possible for us to declare an array of 10^6 multicall structs because there is not enough physical memory. As a result, the implementation of do_multicall in the hypervisor was modified to repeat the first operation in the multicall_entry array the specified number of times for the purpose of evaluation.
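The arithmetic behind these measurements is simple. The helper below is ours, for illustration only (it is not part of the hypervisor); it derives the average per-operation cost from two 64-bit nanosecond timestamps of the kind get_s_time returns:

```c
#include <stdint.h>

/* Average cost per operation, in nanoseconds, given two system-time
 * samples taken around nr_calls repetitions. System time is a 64-bit
 * count of nanoseconds since the hypervisor booted. */
static uint64_t avg_ns_per_op(uint64_t before, uint64_t after,
                              uint64_t nr_calls)
{
    return (after - before) / nr_calls;
}
```

For example, a 504ms total for 10^6 hypercalls works out to 504ns (0.504µs) per call, which is how the averages in Table 5.1 were obtained.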
For reference, each multicall_entry_t requires 32 bytes; an array of 10^6 multicall_entry structs would therefore require 31250KB (just over 30.5MB) of memory.

Delivering Events

We took a similar approach to measuring the performance of event delivery and processing. We added a hypercall to the hypervisor that caused the notification of an event to the Mini-OS. We then used this new hypercall to repeatedly send events to the Mini-OS. The code to perform a large number of event deliveries and measure the time it took is shown below:

add_ev_action(EV_TIMER, &timer_handler);
enable_ev_action(EV_TIMER);
enable_hypervisor_event(EV_TIMER);

HYPERVISOR_print_systime();
for (i = 0; i < nr_calls; i++) {
    HYPERVISOR_set_timer_event();
}
HYPERVISOR_print_systime();

The add_ev_action function associates an event handler with an event number in the Mini-OS. The enable_ev_action function sets the 'enable' bit in the local mask associated with the timer event; this tells the Mini-OS to allow the timer callback function to be called when an event occurs. The enable_hypervisor_event function sets the bit corresponding to the timer event in the 'events_mask' field of the Mini-OS's shared_info struct, which effectively tells the hypervisor to enable timer event delivery for the Mini-OS. The timer callback function that we used for our measurements does not perform any event-specific processing. Its body is shown below:

static void timer_handler(int ev, ...)
{
    return;
}

Finally, we needed to add functionality to the hypervisor that caused the notification of timer events. The required functionality took the form of the HYPERVISOR_set_timer_event hypercall. (Note that our measurement of the time taken to deliver an event includes the time to execute the hypercall which triggers the event.) Its body is shown below:

asmlinkage long do_set_timer_event(void)
{
    unsigned long flags;
    struct task_struct *p;

    /* send virtual timer interrupt */
    read_lock_irqsave(&tasklist_lock, flags);
    p = &idle0_task_union.task;
    do {
        if ( is_idle_task(p) )
            continue;
        test_and_set_bit(_EVENT_TIMER, &p->shared_info->events);
    } while ( (p = p->next_task) != &idle0_task_union.task );
    read_unlock_irqrestore(&tasklist_lock, flags);

    return 0;
}

This function iterates through the task list and sets the bit corresponding to the timer event for each domain. As soon as the hypercall's processing is complete, the hypervisor tests for events that need to be delivered (as shown in Chapter 4's discussion of the hypervisor entry code). When the hypervisor discovers that there is a timer event to deliver and that timer events are enabled in the Mini-OS's events_mask, it creates a bounce frame on the Mini-OS's stack and delivers the event to the Mini-OS. The Mini-OS performs event specific processing by calling the timer event callback function. Once the event has been delivered the Mini-OS pops the bounce frame off of its stack to return back to where the HYPERVISOR_set_timer_event function was called.

Domain Loading

Since we only support loading a single domain we can only measure loading domain 0. There are two routines in the hypervisor that build and load domain 0; we simply measure the time it takes to execute them using the following code:

before = get_s_time();
new_dom = do_createdomain( 0, 0 );
setup_guestos( new_dom, ... );
after = get_s_time();

Table 5.1 shows the results of our micro-benchmarks.
Operation            Time per 10^6 operations (ms)   Average Time per operation (µs)
Hypercall            504                             0.504
Batched Hypercall    98                              0.098
Event Delivery       1960                            1.960
Domain Loading       n/a                             49.0

Table 5.1: Micro-benchmark results for the ARM hypervisor prototype.

Note that loading a domain requires the most time at 49µs. This is mostly due to the fact that the domain loader must switch to the guest OS's addressing context before copying the guest OS image to the correct location in memory, and switch back to its own addressing context after the copy is complete. Switching addressing contexts is expensive because it reloads the pagetables to be used, requiring TLB and cache flushes. Copying memory from one location to another is inherently expensive.

Delivering events is the next most expensive operation and takes almost 2µs per event. This is due to the creation and use of the bounce frame and to the extra processing done in the event handling code in the Mini-OS. Creating the bounce frame is not bad on its own; it only requires that 72 bytes be pushed onto the guest OS's stack. Popping an exception frame is potentially expensive as it requires jumping to a completely different region of code, possibly thwarting cache locality. The extra processing done in the event handling code is a result of demultiplexing the event to the correct handler.

Hypercalls are the next most expensive operation as they take just over 0.5µs. This is due to issuing the software interrupt and returning to the guest OS by popping the exception frame off of the stack. With our measurements it is possible to see how issuing hypercalls in batches saves execution time, since we do not have to enter and leave the hypervisor repeatedly. A hypercall executed in a batch is over 0.4µs faster than a standalone hypercall on average.

5.3 Soft Evaluation of Memory Consumption

It is interesting to compare the memory consumption of our prototype to the hypervisor in Xen 1.2 for x86.
Table 5.2 shows this comparison with a breakdown of the direct mapped hypervisor regions.

Region (MB)                          ARM   x86
Monitor                              8     16
Machine-to-physical Mapping Table    4     4
Frame Table                          4     20
Total Memory Consumed                16    40

Table 5.2: Comparison of hypervisor memory consumption on ARM and x86.

The memory consumption of the monitor region was reduced by one-half. It contains code, static data, and space for dynamic memory allocation for the hypervisor. Since the implementation of our hypervisor is very stripped down in terms of features and we currently only support a single domain without applications, we could reduce the monitor region safely. However, if our implementation were feature complete and could host the same number of operating systems as is possible on the x86 implementation, then it would need to use 16MB of memory. The reason for this increased memory usage is not increased code size, but the space required by the data structures internal to the hypervisor when supporting multiple domains.

The memory consumption of the machine-to-physical mapping table was not reduced at all because we do not change the implementation of the mapping table from its x86 counterpart. The mapping table is used by the hypervisor to produce the illusion of contiguous physical memory to guest operating systems. It maps a machine page address to a physical page number. A page is the same size on both the x86 and ARM implementations at 4KB. Given that the maximum amount of physical memory on an ARM device is 4GB, we use the following calculation to determine the size of the mapping table:

mpt_size = ( max_phys_addr / page_size ) * entry_size

Given that an entry in the table is an integer (the page number), the entry size is 4 bytes and the machine-to-physical mapping table must be 4MB in size. This calculation assumes that a simple linear mapping from machine page address to physical page number is used.
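The calculation above can be checked mechanically. The small helper below is ours, for illustration; it computes the table size from the maximum physical address, the page size, and the per-entry size:

```c
#include <stdint.h>

/* Machine-to-physical mapping table size for a linear mapping:
 * one entry per page of the maximum physical address space. */
static uint64_t mpt_size(uint64_t max_phys_addr, uint64_t page_size,
                         uint64_t entry_size)
{
    return (max_phys_addr / page_size) * entry_size;
}
```

With a 4GB maximum physical address space, 4KB pages, and 4-byte entries this yields 4MB, matching the figure in Table 5.2.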
We could have modified the machine-to-physical mapping table implementation to take the physical memory present on the machine into account instead of using the maximum theoretical physical address. There was no need to change the implementation since we can easily afford the required 4MB. It is also worth noting that the current implementation provides for efficient page number lookups. Retrieving a physical page number given a machine page address is done in the following way:

phys_page_no = mpt[ phys_page_addr >> PAGE_SHIFT ]

where PAGE_SHIFT is 12; this gives us the entry in the mapping table for a 4KB page (NB: 1 << 12 = 4096). It is important that the above lookup operation can be performed quickly since it becomes an extra level of address translation that is performed by the hypervisor when presenting the illusion of contiguous physical memory.

The memory consumption of the frame table region was reduced by a factor of five, from 20MB to 4MB. This was possible because the frame table is used to track the allocation and use of physical pages by domains in the system. As such, its size could be reduced given the amount of physical memory available on a device. The frame table uses the pfn_info structure to maintain information about a physical page. This structure requires 20 bytes of storage. In our implementation, we modified the frame table to maintain allocation information for physical memory bank #2, which is 8MB in size. Therefore, the size of the frame table needed to maintain this information can be calculated as:

ft_size = ( phys_mem_size / page_size ) * pfn_info_size

where phys_mem_size is 8MB, page_size is 4KB, and pfn_info_size is 20 bytes. As a result we find that a frame table of 40KB in size is sufficient to track the use of 8MB of memory. If we were to modify the frame table to track the allocation of physical memory from both bank #2 and bank #3 then we would still only require 80KB.
Clearly the 4MB of space allocated to the region is more than enough and could be optimized further should other parts of the system require more memory as it evolves. It should also be noted that by changing the size of the frame table region we do not affect the efficiency of lookups in the table, since the underlying linked list implementation is left untouched. (By comparison, the x86 frame table size of 20MB is enough to track the use of 4GB of physical memory.)

5.4 Summary

Overall the results of our measurements are encouraging. The micro-benchmarks of the operations supported by the hypervisor were held back by the accuracy of the timing hardware on our test devices. However, we were able to repeatedly execute operations a high enough number of times to obtain a reasonably accurate measure of the average time per operation. In addition, the evaluation of the memory consumed by the hypervisor confirms the lightweight implementation used by the Xen team and shows that Xen could be used on a handheld device supporting a limited amount of memory.

Chapter 6

Conclusion and Future Work

6.1 Conclusion

As a result of this thesis, we have an implementation of the Xen 1.2 hypervisor ported to the first generation StrongARM processor, the SA-110. The hypervisor is capable of loading guest operating systems, servicing hypercalls, and delivering events. The hypervisor uses page-level protection to isolate itself from guest operating systems but does not support applications. The hypervisor paravirtualizes the CPU and memory but does not support device paravirtualization.

Porting the hypervisor to a different architecture was difficult for a few reasons. As a hardware hosted virtual machine monitor, the hypervisor sits directly on bare metal. This restricted our options for debugging when things went wrong. In fact, the debugging environment consisted of sending characters to the serial console. Debugging virtual memory problems constituted dumping pagetable entries out to the serial console. Similarly, debugging hypercalls and event delivery required dumping the stack(s) to the serial console.

The final reason that porting the hypervisor was a challenge is that its implementation is x86 specific. The implementation of features for this project followed a three step process:

1. Figure out what the x86 hypervisor is doing.
2. Figure out why the x86 hypervisor is doing what it's doing.
3. Write the corresponding feature for the ARM version of the hypervisor.

In some cases step 2 could be more time consuming than the other steps due to the eclectic nature of the x86 architecture. Other times steps 1 and 3 were the most time consuming due to the fact that the code was written in assembly language.

We have supported the implementation of the Xen hypervisor for StrongARM with a port of the Mini-OS found in the Xen 1.2 codebase. In addition, we present some hard performance numbers for execution time as well as a soft evaluation of the hypervisor's memory usage. Of particular interest is that the StrongARM version of the hypervisor consumes less than half of the memory that the x86 version does. This was possible because of the clever way the Xen team designed the hypervisor and the ability to resize the regions of memory managed by the hypervisor depending on the amount of physical memory available. Our results suggest that our hypervisor could perform comfortably on a StrongARM powered handheld device supporting a limited amount of memory and processing power.

This thesis has presented a case for the use of small and fast virtual machine monitors such as Xen on mobile devices powered by the StrongARM architecture. The implementation of a prototype hypervisor and an evaluation of its performance show that the use of such a paravirtualization architecture is attractive and feasible on the StrongARM. Finally, we have shown that while Xen style paravirtualization of the ARM architecture requires guest operating systems to be ported to run on top of the hypervisor, the ARM instruction set architecture is far more accommodating than its x86 counterpart.

6.2 Future Work

The prototype implemented in this thesis supports only the base functionality to load a test operating system. There are many features needed to support a fully fledged general purpose OS that would need to be implemented, and as such the opportunities for future work are great.

The frame table can currently only manage a contiguous range of memory. This led to the final 8MB bank of physical memory on our DNARD test devices being unusable by our prototype. There are two possible solutions to this problem. We could maintain one free frame list in the frame table for each region of physical memory to be managed. Under this scheme, clients of the frame table could adjust frame addresses by an offset corresponding to the free list (i.e., region) that the frame belongs to. The other solution is to maintain a single free frame list and store a physical offset as part of each node in the list. In either case, the frame table management code needs to associate a physical offset with each frame being managed instead of assuming that all of memory is contiguous and keeping the offset constant.

Once the hypervisor is ready it would be interesting to try embedding a few simple security services in it, such as the secure logging facility used in ReVirt. Once a secure logging service is built it will be worthwhile to think about what security services could be most useful in a mobile environment, where there is no hard disk to store log files and the limited amount of physical memory makes certain types of processing difficult. Perhaps a system where mobile devices running secure logging services stream their logs to a central server for processing could be explored.
There may be interesting ways that the log files from different devices could be cross referenced in the spirit of intrusion detection systems to backtrack network worm propagation to its source. In a similar vein, capabilities from the Snort network intrusion detection system [sno05] were recently embedded in the Xen hypervisor and demonstrated by XenSource [Xen05]. Other services that could be useful in a mobile environment could be explored as well. The quality-of-service tools in Xen 3.0 could be used as a basis for providing even more control over the resource utilization of hosted virtual machines. Such resource control would be useful in a .mobile environment for applications such as file sharing. For example, users might be more inclined to run a file sharing daemon on their handheld device if they could place accurate restrictions on the amount of network bandwidth, C P U time, and, probably most importantly, battery power being consumed. Moving our implementation into the Xen 3.0 codebase would make such a project easier as this would make the high-level QoS tools available to us. In order to move to the Xen 3.0 codebase a version of the G C C 3.4 or 4.0 cross compiler for A R M would need to become more stable. . At the very least, the bug that caused compilation of the D N A R D boot code to fail with a segmentation fault would need to be fixed. Once an acceptable version of the compiler is available it is predicted that the biggest changes to our implementation of the hypervisor would be due to the support for multi-threaded guest operating systems that was introduced in Xen 3.0. A port of a general purpose operating system such as Linux to the A R M port of the hypervisor would also be worthwhile. It would require the implementation 89 l—secure P r i v i l e g e d AL No'ri-s'ecu're User P r i v i l e g e d Secure-User Figure 6.1: Modes of operation for processors supporting the A R M TrustZone hard-ware extensions. 
of features that we could not complete due to time constraints, such as support for guest OS applications. Once these features are implemented, the performance of switching between the context of guest operating systems and their applications could be measured. In addition, it would be interesting to implement a fast address space switching scheme similar to [WH00] in the hypervisor and investigate the speedup it provides when switching between guest OSes and their applications.

One other direction of future work could be to make use of the TrustZone hardware extensions that were recently added to the ARM architecture [AF04]. Using these extensions, software can be run in one of the four execution modes shown in Figure 6.1: unsecure user, unsecure supervisor, secure user, and secure supervisor. There are a few possibilities for how these extensions could be used.

Figure 6.2: Organization of a system using the ARM TrustZone API.

We could use the TrustZone software provided by ARM to have the hypervisor make use of the TrustZone API for building secure services [ARM05a]. Using the TrustZone software, the system would be organized similar to Figure 6.2. The monitor, secure kernel, and secure services operate in secure mode, with the monitor and secure kernel software supplied by ARM. Using the API, the hypervisor would execute in unsecure supervisor mode with applications operating in unsecure user mode; any secure services we write would execute in secure user mode. This would give us flexibility in deciding where we would like to place the code for the services. For example, we could embed a service's code inside the hypervisor or place the code inside of a guest OS, and the code would still execute in secure mode on the processor. Alternatively, we could eschew the provided TrustZone API altogether and work directly with the hardware extensions themselves.
Doing this would make it possible to organize the system in many different ways, with different organizations providing parts of the system with varying levels of control and performance. For example, it would be possible to have the hypervisor operate in secure supervisor mode with the guest OSes running in unsecure supervisor mode and their applications running in unsecure user mode. Any security services could then operate in secure user mode. Under this organization, a system call could be issued by a guest OS executing an SMI (secure mode interrupt) instruction. It would be interesting to see how the performance of the system is affected by the different organizations. For example, switching context to and from secure mode may have adverse effects on performance. Switching to and from secure mode will necessitate TLB and cache flushes unless processors supporting the TrustZone extensions provide tagged TLBs and caches.

Bibliography

[Adv05] Advanced Micro Devices. AMD64 Virtualization Codenamed "Pacifica" Technology: Secure Virtual Machine Architecture Reference Manual, May 2005.

[AF04] T. Alves and D. Felton. TrustZone: Integrated hardware and software security, July 2004.

[ARM05a] ARM. TrustZone Software API Specification, July 2005.

[arm05b] ARM Linux project website, 2005. http://www.arm.linux.org.uk/.

[BDF+03] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, E. Kotsovinos, A. Madhavapeddy, R. Neugebauer, I. Pratt, and A. Warfield. Xen 2002. Technical Report 553, University of Cambridge, Computer Laboratory, January 2003.

[BX06] H. Blanchard and J. Xenidis. Xen on PowerPC. In linux.conf.au, 2006.

[CDD+04] Bryan Clark, Todd Deshane, Eli Dow, Stephen Evanchik, Matthew Finlayson, Jason Herne, and Jeanna Neefe Matthews. Xen and the art of repeated research. In USENIX Annual Technical Conference, FREENIX Track, pages 135-144, 2004.

[che05] checkps project website, 2005. http://sourceforge.net/projects/checkps/.

[DFH+03] B.
Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, I. Pratt, A. Warfield, P. Barham, and R. Neugebauer. Xen and the art of virtualization. In Proceedings of the ACM Symposium on Operating Systems Principles, October 2003.

[Dig97] Digital Equipment Corporation. DIGITAL Network Appliance Reference Design: User's Guide, November 1997.

[DKC+02] George W. Dunlap, Samuel T. King, Sukru Cinar, Murtaza A. Basrai, and Peter M. Chen. ReVirt: enabling intrusion analysis through virtual-machine logging and replay. SIGOPS Oper. Syst. Rev., 36(SI):211-224, 2002.

[FHN+04] K. Fraser, S. Hand, R. Neugebauer, I. Pratt, A. Warfield, and M. Williamson. Safe hardware access with the Xen virtual machine monitor. In OASIS ASPLOS Workshop, 2004.

[Fra03] K. Fraser. Post to Linux kernel mailing list, October 2003. http://www.ussg.iu.edu/hypermail/linux/kernel/0310.0/0550.html.

[FSM05] FSMLabs. RTLinux website, 2005. http://www.fsmlabs.com/.

[Gar04] Larry Garfield. 'Metal Gear' Symbian OS trojan disables anti-virus software. Infosync World, December 2004. http://www.infosyncworld.com/news/n/5654.html.

[Gol72] R. Goldberg. Architectural Principles for Virtual Computer Systems. PhD thesis, Harvard University, Cambridge, MA, 1972.

[GR03] Tal Garfinkel and Mendel Rosenblum. A virtual machine introspection based architecture for intrusion detection. In Proc. Network and Distributed Systems Security Symposium, February 2003.

[gru05] GNU GRUB project website, 2005. http://www.gnu.org/software/grub/.

[Int98a] Intel. SA-110 Microprocessor: Technical Reference Manual, September 1998.

[Int98b] Intel. SA-1100 Microprocessor: Technical Reference Manual, September 1998.

[Int03] Intel. Intel 80200 Processor based on Intel XScale Microarchitecture: Developer's Manual, March 2003.

[Int05a] Intel. IA-32 Intel Architecture Software Developer's Manual, Vol. 1: Basic Architecture, September 2005.

[Int05b] Intel. Intel Virtualization Technology for the Intel Itanium Architecture, April 2005.

[Int05c] Intel.
Intel Virtualization Technology Specification for the IA-32 Intel Architecture, April 2005.

[Jal05] Jaluna. Jaluna website, 2005. http://www.jaluna.com/.

[JFMA04] Nick L. Petroni Jr., Timothy Fraser, Jesus Molina, and William A. Arbaugh. Copilot - a coprocessor-based kernel runtime integrity monitor. In USENIX Security Symposium, pages 179-194, 2004.

[Kaw05] Dawn Kawamoto. Cell phone virus tries leaping to PCs. ZDNet Asia, September 2005. http://www.zdnetasia.com/news/security/0,39044215,39257506,00.htm.

[KDC03] Samuel T. King, George W. Dunlap, and Peter M. Chen. Operating system support for virtual machines. In USENIX Annual Technical Conference, General Track, pages 71-84, 2003.

[Keg05] D. Kegel. Crosstool project website, 2005. http://kegel.com/crosstool/.

[KMAC03] Chetana N. Keltcher, Kevin J. McGrath, Ardsher Ahmed, and Pat Conway. The AMD Opteron processor for multiprocessor servers. IEEE Micro, 23(2):66-76, March/April 2003.

[PD80] David A. Patterson and David R. Ditzel. The case for the reduced instruction set computer. SIGARCH Comput. Archit. News, 8(6):25-33, 1980.

[PFH+05] I. Pratt, K. Fraser, S. Hand, C. Limpach, A. Warfield, D. Magenheimer, J. Nakajima, and A. Mallick. Xen 3.0 and the art of virtualization, 2005. Ottawa Linux Symposium 2005 presentation.

[RI00] J. Robin and C. Irvine. Analysis of the Intel Pentium's ability to support a secure virtual machine monitor, 2000.

[s0f05] s0ftpr0ject. Kstat website, 2005. http://www.s0ftpj.org/en/site.html.

[Sea00] David Seal. ARM Architecture Reference Manual. Addison-Wesley, 2nd edition, December 2000.

[sha05] Shark Linux project website, 2005. http://www.shark-linux.de/shark.html.

[Sim05] SimWorks. SimWorks anti-virus website, 2005. http://www.simworks.biz/sav/Antivirus.php?id=home.

[sno05] Snort project website, 2005. http://www.snort.org/.

[SS72] Michael D. Schroeder and Jerome H. Saltzer. A hardware architecture for implementing protection rings. Commun. ACM, 15(3):157-170, 1972.
[Sun05] Jorgen Sundgot. First Symbian OS virus to replicate over MMS appears. Infosync World, March 2005. http://www.infosyncworld.com/news/n/5835.html.

[SVL01] Jeremy Sugerman, Ganesh Venkitachalam, and Beng-Hong Lim. Virtualizing I/O devices on VMware Workstation's hosted virtual machine monitor. In Proceedings of the General Track: 2002 USENIX Annual Technical Conference, pages 1-14, Berkeley, CA, USA, 2001. USENIX Association.

[Tri05] Tripwire. Tripwire website, 2005. http://www.tripwire.org/.

[VMw05] VMware. VMware website, 2005. http://www.vmware.com/.

[Wal02] Carl A. Waldspurger. Memory resource management in VMware ESX Server. SIGOPS Oper. Syst. Rev., 36(SI):181-194, 2002.

[WH00] A. Wiggins and G. Heiser. Fast address-space switching on the StrongARM SA-1100 processor, 2000.

[Wro05] Jay Wrolstad. Mabir smartphone virus targets Symbian-based mobile phones. Contact Center Today, April 2005. http://www.contact-center-today.com/ccttechbrief/story.xhtml?story_id=32327.

[WSG02] Andrew Whitaker, Marianne Shaw, and Steven D. Gribble. Denali: A scalable isolation kernel. In Proceedings of the Tenth ACM SIGOPS European Workshop, St. Emilion, France, September 2002.

[WTUH03] A. Wiggins, H. Tuch, V. Uhlig, and G. Heiser. Implementation of fast address-space switching and TLB sharing on the StrongARM processor, 2003.

[Xen05] XenSource. XenSource showcases secure Xen hypervisor, August 2005. http://www.xensource.com/news/pr082505.html.