Open Collections

UBC Undergraduate Research

Defenses for main memory systems using memory controllers Wang, Bolin 2019-12-03

Defenses for Main Memory Systems using Memory Controllers

by

Bolin Wang

Under the guidance of Prof. Prashant J. Nair

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF BACHELOR OF APPLIED SCIENCE in The Faculty of Applied Science (in ECE)

ELEC 499

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

December 3, 2019

© 2019

Table of Contents

1 Table of Contents
2 Abstract
2.1 INTRODUCTION
2.2 BACKGROUND AND MOTIVATION
2.2.1 Main Memory System: Organization
2.2.2 Threat Model
2.2.3 Scheduling Policies: Secure vs Not Secure
2.2.4 Motivation: Potential Row-Buffer Hit-Rates
2.3 DESIGN: THE PLUMBER SCHEDULING POLICIES
2.3.1 Plumber-R: A scheduler that tackles the Drip-R vulnerability
2.3.2 Plumber-R: Security Analysis
2.3.3 Plumber-R: Row-Buffer Hit-Rate
2.3.4 Plumber-R: Queue-Epoch Size vs Read-Queue Occupancy
2.3.5 Plumber-Q: A scheduler that tackles the Drip-Q vulnerability
2.3.6 Plumber-Q: Security Analysis
2.3.7 Plumber-Q: Row-Buffer Hit-Rate
2.4 EXPERIMENTAL METHODOLOGY
2.5 RESULTS
2.5.1 Performance Impact on Plumber-R
2.5.2 Performance Impact on Plumber-Q
2.5.3 Power Consumption
2.5.4 Sensitivity to Channels
2.5.5 Sensitivity to the Number of Cores
2.6 RELATED WORK
2.6.1 Row Buffer Conflict Vulnerabilities
2.6.2 Read Queue Contention Vulnerabilities
2.6.3 General Timing Channel Mitigation
2.7 SUMMARY
Bibliography

Abstract

Main memories are a key shared resource within modern computing systems. This thesis shows that memory controllers are prone to side/covert-channel vulnerabilities. The first vulnerability, called Drip-R, exploits the fact that row-buffer hits and misses incur different latencies at the memory controller. The second vulnerability, called Drip-Q, leverages read-queue contention within the memory controller to fabricate differential latency. These differential latencies act as side/covert-channels and can be used to leak or receive data from other processes.

To overcome these vulnerabilities, this thesis proposes two secure and high-performance scheduling policies, called Plumber-R and Plumber-Q, respectively. These policies build on the insight that request scheduling can be split into isolated epochs. We show that epochs can prevent the creation of side-channels within the memory controller and prohibit attacker processes from leaking or receiving data using side-channels. Furthermore, within each isolated epoch, memory requests can take advantage of row-buffer hits and improve performance. Our experiments show that, on average, Plumber-R and Plumber-Q provide 29% and 41% speedup over the prior state-of-the-art scheduling policies, Close Page and Fixed Service.

2.1 INTRODUCTION

Modern multi-core systems enable users to concurrently use the same machine by sharing its resources. Unfortunately, sharing vital resources can introduce security vulnerabilities [1]. For instance, the memory access behaviours of one user can influence the memory latency experienced by other users.
These differences in latency can be used as "side-channels" and "covert-channels" to leak data [2, 3]. Recently, attacks like Spectre [4], Meltdown [5], and Foreshadow [6] have exploited side-channels within caches. Furthermore, researchers from academia and industry have suggested hardware-software mitigations [7, 8, 9, 10, 11]. However, even if we mitigate the side-channels within caches, other levels of the memory hierarchy can provide avenues for creating side/covert-channels.¹ This thesis aims to develop secure and high-performance memory scheduling algorithms that mitigate these side-channels.

¹ This thesis targets both side-channels and covert-channels in its threat model. For simplicity, we only use the term side-channel to describe one of these threats in detail. The ideas in this thesis are broadly applicable to mitigating both side-channels and covert-channels.

Main memory systems typically consist of modules that use high-capacity Dynamic Random Access Memories (DRAM). Each module is managed by a memory controller. Unfortunately, while DRAM systems provide very large capacities, they also have high access latencies [12]. For instance, an access to an arbitrary Address-A (RDA) can take nearly 40ns [13]. Memory modules try to leverage the spatial locality of workloads to reduce access latency. On an access, memory modules typically prefetch a row of DRAM cells (4KB-16KB long) into a row-buffer [14]. If the workload has high spatial locality, only the first memory access, RDA, would encounter a latency of 40ns. Subsequent accesses to neighboring addresses, like RDA+1, would only encounter a latency of nearly 13ns (a row-buffer hit). To ensure a high row-buffer hit-rate, the memory controller maintains read and write queues to capture accesses to neighboring addresses.

Unfortunately, row-buffers and read-queues may not always reduce the access latency. For example, an access to an address that is not present within a row-buffer will require that the old DRAM row within this row-buffer be closed and a new DRAM row be opened (a row-buffer miss). Therefore, this access would encounter an additional latency of 40ns. Furthermore, each pending request in the read-queue would also experience an additional queuing delay. An attacker can exploit the difference in latencies between row-buffer hits and misses. Similarly, an attacker can also exploit the difference in latencies due to queuing delays. These latency differences can be used to create side-channels.

Figure 2.1(a) shows two side-channel vulnerabilities, Drip-R and Drip-Q, that exploit differences in row-buffer hit/miss latencies and read-queue contention, respectively. To prevent the Drip-R vulnerability, prior work proposes the "Close-Page" (CP) policy, which closes row-buffers after every access. To prevent the Drip-Q vulnerability, prior work recommends maintaining a constant memory access rate by actively inserting fake requests, called the "Fixed-Service" (FS) policy. Figure 2.1(b) shows the normalized slowdown of these policies with respect to a "not-secure" first-ready first-come first-serve (FR-FCFS) policy. On average, these policies show slowdowns of 34% and 60%, respectively. To address these performance concerns while providing security, this thesis proposes the Plumber-R and Plumber-Q scheduling policies.

Figure 2.1: (a) Shows the differential latency due to row-conflict (Drip-R) and read-queue contention (Drip-Q) vulnerabilities. (b) Shows two secure prior works: Close Page protects against Drip-R, and Fixed Service protects against Drip-Q. As compared to an unsecured baseline FR-FCFS scheduling policy, the secure schedulers are 35%-60% slower.
The goal of this thesis is to propose a secure, high-performance scheduling policy.

The Plumber-R and Plumber-Q policies maximize row-buffer hit-rates while protecting against Drip-R and Drip-Q.

A) Drip-R Vulnerability: The Drip-R vulnerability tries to cause row-buffer conflicts and transmit information via side-channels. This can be orchestrated even if the malicious process that creates the side-channel and the attacker do not share their address space. The attacker would simply keep a row-buffer open and wait for the malicious process to either close the opened row-buffer or keep it open. After some time, the attacker would probe another location from the opened row-buffer and measure the access latency. If the row-buffer is open, the attacker would see a lower latency; otherwise, the attacker would see a higher latency.

B) Plumber-R Scheduling Policy: DRAM memory modules have high access latencies. As such, the read and write queues in the memory controller are usually filled at a rate faster than they can be drained. Therefore it is likely that, even before the completion of the first memory request within the read queue, the read queue fills up with several pending memory requests. Plumber-R avoids Drip-R by creating time-epochs within the read-queue. A time-epoch begins when the first access is issued. The time-epoch takes a snapshot of the read-queue as soon as it begins. This snapshot captures the total number of requests present in the read-queue when the first access was issued. Plumber-R issues requests only within its current time-epoch and ignores all requests that lie outside it (possibly pending in the read-queue). The time-epoch ends only when the last request in the snapshot has completed accessing the memory.

For instance, if at time t0 the read queue has only one request, then the epoch begins at time t0 and ends when this request is serviced (say at time t39).
Let us assume that the read queue has filled with 10 pending requests when the time-epoch ends (at t39). Plumber-R waits for the current epoch to end at t39. It then closes all opened row-buffers and creates a new epoch that starts at time t40. This new time-epoch ends only when all 10 pending requests are serviced. All requests within the same time-epoch are rescheduled by Plumber-R to maximize row-buffer hit-rates and improve performance.

The attacker must issue two memory requests in order to use the Drip-R vulnerability. In the first request, the attacker opens a row-buffer. In the second request, the attacker checks whether the opened row-buffer is still open. Under the Plumber-R policy, the attacker's two memory requests must not be issued in two different time-epochs, because all row-buffers are closed while switching between time-epochs. Therefore, the attacker is forced to issue all its memory requests in the same time-epoch. Unfortunately for the attacker, in this scenario the two requests will be issued back-to-back by the Plumber-R scheduler (to maximize row-buffer hit-rates). Due to this, the attacker will always see a low latency if both its requests lie in the same time-epoch.

C) Drip-Q Vulnerability: An attacker can also create a side-channel using read-queue contention. For instance, a naive user (victim) can execute a malicious process that overwhelms the read queue with multiple read requests. This contention can be probed by the attacker by simply issuing one additional read request to the read queue and measuring the queuing delay. We call this side-channel vulnerability Drip-Q. Like Drip-R, Drip-Q does not require the attacker and the victim to share address space.

D) Plumber-Q Scheduling Policy: Plumber-Q is a high-performance scheduling policy that is designed to prevent the Drip-Q vulnerability.
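The contention probe that Drip-Q relies on, described in (C) above, can be illustrated with a toy queuing model. This is only a sketch: the per-request service time, the decision threshold, and the function names are illustrative assumptions, not values from this thesis.

```python
# Toy model of the Drip-Q probe: the receiver infers one bit from the
# queuing delay its single probing read experiences. SERVICE_NS is an
# assumed per-request service time (the thesis cites ~40ns for a miss).
SERVICE_NS = 40

def probe_latency(pending_requests: int) -> int:
    """Queuing delay seen by one probing read behind `pending_requests`."""
    return (pending_requests + 1) * SERVICE_NS

def received_bit(pending_requests: int, threshold_ns: int = 200) -> int:
    """Sender floods the read queue to transmit 1, stays idle to transmit 0."""
    return 1 if probe_latency(pending_requests) > threshold_ns else 0

# Sender idle (empty queue) vs. sender flooding (e.g. 10 pending reads):
assert received_bit(0) == 0
assert received_bit(10) == 1
```

The point of the sketch is that the attacker never reads the victim's data directly; the number of pending requests alone modulates the latency the probe observes.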
As Drip-Q relies on the number of pending memory requests in the read-queue, Plumber-Q redefines the time-epoch of Plumber-R to be independent of the number of requests. Therefore, the time-epoch for Plumber-Q is chosen as an arbitrary slot of time. The length of the time-slots is dynamically tuned to maximize the row-buffer hit-rate while maintaining a high IPC.

Plumber-Q allots the same amount of time to each process in the read-queue. Like Plumber-R, all requests within the same time-epoch of Plumber-Q are rescheduled to maximize row-buffer hit-rates and improve performance. If all requests within the time-epoch are completed before the time-slot ends, then the current process (in charge of this time-slot) simply closes all opened row-buffers and remains idle. Once the time-epoch ends, the memory controller moves to the next process in a round-robin manner. Therefore, all processes experience the same queuing delay irrespective of the number of per-process requests in the read queue. This prevents the creation of side/covert-channels using Drip-Q. Furthermore, as Plumber-Q fundamentally uses time-epochs, it also protects against side/covert-channels that use Drip-R.

The Plumber-R and Plumber-Q scheduling policies provide an average Normalized Weighted Speedup of 29% and 41% over prior state-of-the-art policies. These schedulers improve security by preventing the creation of side/covert-channels.

2.2 BACKGROUND AND MOTIVATION

We provide a brief background on the organization of main memory systems and the threat model.

Main Memory System: Organization

Main memory systems are composed of DRAM modules that operate on independent buses called memory channels. Each channel is managed by a memory controller. Typically, a memory channel consists of multiple ranks.
Each rank consists of several banks, and each bank consists of multiple 4KB to 16KB sized rows of DRAM cells. Each bank also contains a 4KB to 16KB sized row-buffer that stores the most recently accessed row of DRAM cells [15]. Each memory request typically addresses only 64 Bytes of data. Therefore, a large row-buffer can hold several consecutive 64-Byte data elements (64 to 256 data elements). Accessing addresses within a row-buffer is typically 4x-5x faster than accessing addresses that are not within the row-buffer [14, 16]. A memory controller issues read and write requests and helps maintain the timing constraints. Typically, memory controllers maintain separate read and write queues. To reduce access latency, the memory controller tries to maximize the row-buffer hit-rate by issuing requests to open row-buffers.

Threat Model

We focus on threat scenarios in which processes from distrustful security domains are simultaneously executed on a hardware system with a trusted Operating System or Hypervisor [17]. Our threat model assumes that, except for the memory controller, the hardware system is fully trusted and secure. These threat environments can occur in shared data-centers, workstations, or the cloud. The users in these environments can be executing on a bare-metal machine or simply using co-located Virtual Machines (VMs). Adversarial processes can actively try to leak sensitive data from a victim user to an attacking user using a side-channel. A malignant process, under the pretext of performing useful tasks, can also leak sensitive data to the attacking user; this type of channel is called a covert-channel. While this thesis discusses side-channel mitigation, our threat model and our proposal also mitigate covert-channels.
This is because our aim is to relieve the user from classifying the trustworthiness of applications (including legacy applications); therefore, our threat model categorizes all processes with the same threat perception.

The differential latency between a row-buffer hit and a row-buffer miss within a DRAM system can be used by a malignant process to potentially leak sensitive data. Prior work, DRAMA [18], uses hardware and software probing to determine the address mapping (bank and row information) and leak data using row-buffer conflicts on targeted banks. To the best of our knowledge, there is no prior work that showcases this side/covert-channel vulnerability without somehow determining the address mapping a priori.

Alternately, rather than relying on row-buffer conflicts, a malignant process can also cause read-queue contention to leak data. To the best of our knowledge, no prior work has showcased this hypothetical side/covert-channel vulnerability on a memory controller. To orchestrate this, a victim user executing such a malignant process should be able to overwhelm the read queues of the memory controllers based on the value of the data it wants to leak. The attacking user can then issue a single read request to probe the read queue. A filled read queue would offer high contention and thereby showcase a much longer read latency than an empty read queue.

Scheduling Policies: Secure vs Not Secure

Prior work such as the "First-Ready First-Come First-Serve" (FR-FCFS) scheduler scans the read and write queues for requests to open row-buffers [19]. The FR-FCFS scheduler issues these requests one after another. While this reduces the latency of memory requests, it does not mitigate side/covert-channels that can be created using row-buffer conflicts.

A secure scheduling policy can be designed to close the row-buffer as soon as it opens. This is called a "Close-Page" (CP) scheduling policy and prevents the creation of row-buffer conflict side/covert-channels [20].
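The contrast between the two policies just described can be sketched with a toy single-bank model. This is only an illustration: the function names, the use of bare integer row identifiers as requests, and the return conventions are assumptions for this sketch, not the thesis's implementation.

```python
def frfcfs_pick(queue, open_row):
    """FR-FCFS sketch: prefer the oldest request that hits the currently
    open row ("first-ready"); otherwise fall back to the oldest request."""
    for i, row in enumerate(queue):
        if row == open_row:
            return i          # index of the request the scheduler issues
    return 0 if queue else None

def close_page_issue(queue):
    """Close-Page sketch: issue strictly in arrival order, then close the
    row immediately, so every access pays the activation latency but the
    row-buffer state can never be probed by a later request."""
    req = queue.pop(0)
    open_row = None           # row-buffer is always closed after an access
    return req, open_row

# Rows of queued requests (oldest first); row 7 is currently open.
q = [3, 7, 7, 5]
assert frfcfs_pick(q, open_row=7) == 1   # reorders to exploit the open row
assert close_page_issue(q) == (3, None)  # strict order, row always closed
```

The reordering in `frfcfs_pick` is exactly what makes FR-FCFS fast and, at the same time, what leaves row-buffer state observable; Close-Page trades that locality away for security.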
To prevent potential read-queue contention side-channels, prior work has proposed a "Fixed Service" (FS) scheduling policy [17]. In the FS policy, requests from each core are scheduled in a round-robin fashion at the memory controller. If a core has no request to issue, the memory controller issues fake requests. This ensures that all cores experience the same contention. Unfortunately, the FS policy eliminates row-buffer locality while also consuming additional bandwidth for fake requests. Thus, prior work has primarily focused on policies that sacrifice row-buffer hit-rates to provide security against the creation of side-channels.

Figure 2.2 shows the read-queue delay in processor cycles. Similar to prior work, we simulate a system with 8 cores and one main memory channel. We evaluate a total of 43 SPEC2006 and MIX workloads. On average, the read-queue delay of the Fixed Service scheduling policy is 4600 cycles. On the other hand, Close Page reduces the average read-queue latency to 1300 cycles. The "not secure" FR-FCFS policy outperforms these secure schedulers by showcasing a read-queue delay of 500 cycles. Therefore, secure schedulers within memory controllers usually pay significant performance costs.

Figure 2.2: The effective read queuing delay for the Fixed Service, Close Page, and FR-FCFS schedulers. As we move from "not secure" schedulers to secure schedulers, the queuing delay increases. On average, the read queuing delay increases from 500 cycles to 4700 cycles.

Motivation: Potential Row-Buffer Hit-Rates

Figure 2.3 shows the potential row-buffer hit-rates obtained by simulating a single-core system with a single channel executing a single copy of each SPEC2006 benchmark.
On average, if we can design a secure scheduling policy that captures row-buffer locality, then we can potentially improve the hit-rate to 62%.

Figure 2.3: The effective row-buffer hit-rate of SPEC2006 benchmarks when they are run on a single channel in isolation using the FR-FCFS policy. On average, we observe a relatively high row-buffer hit-rate of 62%, and we can potentially design a secure scheduler that improves the row-buffer hit-rate.

2.3 DESIGN: THE PLUMBER SCHEDULING POLICIES

This section describes the insights behind the design of the Plumber-R and Plumber-Q scheduling policies.

Plumber-R: A scheduler that tackles the Drip-R vulnerability

The Plumber-R scheduler tackles the Drip-R vulnerability by ensuring that row-buffer conflict information is not transmitted. To this end, Plumber-R uses queue-epochs and a "marker" to identify the end of a queue-epoch.

Begin a Queue-Epoch using a Marker

The Marker is a hardware register implemented within the memory controller to track the last request of the current queue-epoch. Figure 2.4(a) shows the implementation of the marker within the memory controller. By default, the head of the read/write queue is the first request within the queue-epoch (e.g. Addr-A for queue-epoch:0).

Figure 2.4: (a) A memory controller implementing the queue-epoch scheduler along with a marker to identify the last request of the queue-epoch. (b) A timeline of the working of a queue-epoch. Within the epoch window, the queue-epoch tries to maximize hit-rates and improve performance. Across epoch windows, the queue-epoch closes all row-buffers and provides security.

To implement a queue-epoch marker, say for a 96-entry read queue and a 96-entry write queue, the memory controller requires two 7-bit registers, as 7 bits can address up to 128 potential marker locations within these queues. The first 7-bit register tracks the location of the queue-epoch marker request within the read queue. The second 7-bit register tracks the location of the queue-epoch marker within the write queue. Thereafter, the memory controller will try to issue the first permissible request between the head of the queue and the request signifying the queue-epoch marker.

High-Performance Scheduling within a Queue-Epoch

Figure 2.4(b) shows how Plumber-R improves performance. After issuing the first request, the memory controller switches its scheduling mode to the FR-FCFS policy. For instance, after queue-epoch:1 begins, the memory controller has multiple choices of requests. Before issuing each request, the memory controller consults the Marker to verify that the current queue-epoch has not ended.
If the queue-epoch has not ended, the memory controller prioritizes requests to open row-buffers and tries to maximize row-buffer hits to reduce the average access latency. For instance, in queue-epoch:1, Addr-B and Addr-(B+1) are scheduled one after another. If the memory controller is unable to find any requests to an open row-buffer, it then issues requests that would potentially have to close old DRAM rows and open new DRAM rows into the row-buffers (e.g. Addr-C).

Securely Ending a Queue-Epoch

Eventually, the Marker notifies the memory controller that the present request denotes the end of the current queue-epoch. The memory controller then issues this request and thereafter closes all open row-buffers in the channel. Once all row-buffers are successfully closed, the Plumber-R scheduler notifies the Marker that a new queue-epoch will now begin. As all row-buffers are closed before the new queue-epoch, requests in the new queue-epoch are unaware of the previous state of the row-buffers (open or closed).

Plumber-R: Security Analysis

The flowchart in Figure 2.5 explains the security features of the Plumber-R policy. If a malicious user (attacker) wants to create side/covert-channels using Drip-R, the attacker has three possible options. In the first option, the attacker inserts a request into the read-queue in some queue-epoch. Thereafter, in the same queue-epoch, the malignant process in the victim's userspace inserts another request into the read queue to close the opened row-buffer. Subsequently, in the same queue-epoch, the attacker inserts another read request into the read queue to check whether the older row-buffer is open. Unfortunately for the attacker, as all three requests are in the same queue-epoch, the two requests to the same row-buffer by the attacker are issued one after another by Plumber-R.
In this scenario, the second request of the attacker always experiences a row-buffer hit. As the attacker's request was the first to enter the read-queue, it lies at the beginning of the read queue. Hence, the pending request from the victim user is issued only after the pending requests by the attacker are complete.

In the second option, the attacker inserts a request into the read-queue in one queue-epoch. In the same queue-epoch, a malignant process in the victim's userspace inserts another read request to close the row-buffer that was opened by the attacker. In the next queue-epoch, the attacker inserts a probing read request into the read queue to check if the row-buffer is still open. Fortunately, as all the previous requests by the attacker and the victim user were in the previous queue-epoch, the second request by the attacker will experience a row-buffer miss latency. This is because the Plumber-R policy guarantees that it will close all open row-buffers between epochs.

In the third option, the attacker inserts a request into the read-queue in some queue-epoch. Thereafter, in the subsequent queue-epoch, the malignant process in the victim's userspace inserts another request into the read queue. As the queue-epochs have changed, the row-buffer that was opened by the attacker is now closed. Therefore, the row-buffer closures ensure that the side/covert-channel is destroyed even though the attacker has not yet issued its second request. This is, again, because Plumber-R closes all open row-buffers between epochs.

Plumber-R: Row-Buffer Hit-Rate

Figure 2.6 shows the row-buffer hit-rate of the Plumber-R scheduling policy. To find the hit-rate for each workload, we use a single-core configuration that connects to the memory system via a single channel. This ensures that the accessing core has the complete memory bandwidth.
On average, in spite of providing security against side-channels that target row-conflicts, Plumber-R still shows a row-buffer hit-rate of 23%.

Figure 2.5: A flowchart of the mitigations of Plumber-R against Drip-R vulnerabilities. Plumber-R prevents the side-channels from being formed and thereby protects the victim user from leaking data via the memory controller.

Figure 2.6: The effective row-buffer hit-rate of SPEC2006 benchmarks when they are run on a single channel in isolation using the Plumber-R policy. As compared to other secure scheduling policies that tend to show a 0% hit-rate, we observe an average row-buffer hit-rate of 23%.

Plumber-R: Queue-Epoch Size vs Read-Queue Occupancy

Figure 2.7 shows the read-queue occupancy vs the queue-epoch size for the mcf workload. The workload mcf has a high memory intensity, and its memory intensity varies rapidly over time. We observe that the number of requests in the queue-epoch closely tracks the read-queue occupancy. Figure 2.7 helps provide an intuition as to why the row-buffer hit-rate is high while using the Plumber-R scheduling policy.
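The queue-epoch behaviour described in this section can be condensed into a small behavioral model. This is a sketch under simplifying assumptions, not the hardware design: requests are bare row numbers, each list in the input holds the requests that arrive while the previous epoch drains, and sorting stands in for FR-FCFS grouping of same-row requests within an epoch.

```python
def plumber_r_epochs(arrival_batches):
    """Behavioral sketch of Plumber-R queue-epochs. The marker snapshots
    the read queue when an epoch starts, so requests arriving mid-epoch
    must wait for the next epoch. Within an epoch, same-row requests are
    grouped (sorted here) to maximize row-buffer hits; all row-buffers
    are closed between consecutive epochs."""
    epochs, pending = [], []
    for batch in arrival_batches:
        if pending:                       # snapshot fences off a new epoch
            epochs.append(sorted(pending))
            pending = []
        pending.extend(batch)             # these wait for the next epoch
    if pending:
        epochs.append(sorted(pending))
    return epochs

# An attacker's open (row 7) and probe (row 7) requests landing in the
# same epoch are grouped back-to-back, so the probe always sees a hit:
assert plumber_r_epochs([[7, 3, 7]]) == [[3, 7, 7]]
```

The same model also shows why occupancy tracks epoch size: each epoch's length is exactly the number of requests that accumulated during the previous epoch's drain.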
Figure 2.7: (a) Shows the read-queue occupancy for the mcf benchmark from the SPEC2006 suite. Even over an interval of 2 million processor cycles, the occupancy of the read-queue varies between 50 entries and 10 entries. (b) Shows the trend of the queue-epoch size for the mcf benchmark. Over an interval of 2 million processor cycles, the queue-epoch size closely tracks and follows the read-queue occupancy. Therefore, if the benchmark has a high row-buffer hit-rate, the queue-epoch sizes will ensure that the hit-rate is maximized.

Figure 2.8: (a) A memory controller implementing the time-epoch scheduler along with a timer to identify the completion time of a time-epoch. (b) A timeline of the working of a time-epoch. Within the epoch window, the time-epoch tries to maximize hit-rates and improve performance at a process level. Across epoch windows, the time-epoch closes all row-buffers and provides isolation between processes.

Plumber-Q: A scheduler that tackles the Drip-Q vulnerability

The Plumber-Q scheduler tackles the Drip-Q vulnerability by ensuring that read-queue contention information is not transmitted between processes.
To this end, Plumber-Q uses time-epochs of arbitrary lengths of time and an "epoch timer" to identify the end of the epoch.

Evolving a Queue-Epoch into a Time-Epoch

Unlike Drip-R, which uses row-buffer conflicts to leak sensitive data, Drip-Q uses read-queue contention. Therefore, Plumber-Q should try to maintain the same queuing delay for all requests across processes so as to leak no data to any probing process. Plumber-Q enables this by redefining the epoch into arbitrary units of time called time-epochs. The key point is that the lengths of the time-epochs do not reflect the number of requests in the read queue. However, the time-epoch slots are varied dynamically to improve performance.

As shown in Figure 2.8(b), Plumber-Q isolates processes by maintaining separate virtual read and write queues for each process. These read and write queues are accessed in a round-robin fashion such that each process gets its fair share of memory bandwidth. The length of time for which a virtual queue has exclusive rights to the memory bandwidth is determined by the time-epoch. After the time-epoch expires, the current process must relinquish the memory bandwidth and close all open row-buffers. A new process then gets exclusive rights to the memory bandwidth for the duration of a time-epoch. The key invariant for Plumber-Q is that processes can use the memory bandwidth only during their epoch-slots.

The time duration of the epoch slots is varied as per the row-buffer hit-rates. However, if the memory controller wants to update the value of the time-epoch, it must wait until the round-robin across all cores is complete. This ensures that no single process can see a drop in contention or a higher priority. Therefore, time-epoch changes are only reflected when the processes on the first core regain exclusive access to the memory bandwidth for their virtual queues. At this stage, the length of time for the time-epoch is set based on the row-buffer hit-rate.
In our studies, we empirically determined that we can increase the length of time for the time-epoch by 'x' nanoseconds if the row-buffer hit-rate also increases by x%. Therefore, the maximum length of time for the time-epoch slots can be 200ns.

Begin a Time-Epoch in Plumber-Q: Epoch Timer

As shown in Figure 2.8(a), the epoch timer is implemented within the memory controller and tracks the ending time of the current time-epoch for the virtual queue of a process. By default, the head of the virtual read/write queue at the beginning of the epoch slot is the first request of this time-epoch. To keep track of the time left, each process per core is equipped with an epoch timer. When the process is idle, this timer may be stored in its virtual memory and can be paged into the memory controller by the Operating System when the process starts executing. The number of bits in the timer depends on the maximum length of the epoch slot. For our studies, the initial epoch slot is 100ns, and it can extend to a maximum of 200ns. Therefore, for each virtual read and write queue, the memory controller requires an 8-bit register for each process.

High-Performance Scheduling within a Time-Epoch

After issuing the first request, the memory controller switches its scheduling mode to the FR-FCFS policy. Before issuing each request, the memory controller consults the epoch timer to verify that the current epoch slot has not ended. If the time-epoch has not ended, for subsequent requests, the memory controller prioritizes requests to open row-buffers from within the process's own virtual queues. If the memory controller is unable to find any requests to an open row-buffer, it then issues other requests from the virtual queue.

Plumber-Q: Security Analysis

If a malicious user (attacker) wants to create side/covert-channels using Drip-Q, then the attacker has only one option.
The attacker must establish a side-channel or a covert-channel with a malignant process that runs in the victim's userspace, and this channel can only operate during that process's turn in the Plumber-Q round-robin schedule. The attacker can design the malignant process to "flood" the memory controller with memory requests if it wants to transmit a binary "1". Thereafter, the attacker will try to insert a probing request at the memory controller and determine the read-queue latency.

Fortunately, such an action would be thwarted by the Plumber-Q scheduler. Even during the time-epoch of the malignant process, the Plumber-Q scheduler requires that the process finish executing its memory requests before the "epoch timer" expires. Therefore, once the malignant process issues its allotted set of requests, the memory controller would simply stall this process. Plumber-Q would not allow additional requests to be issued until the previous requests have returned from memory. Furthermore, at the end of the epoch, the Plumber-Q scheduler closes all open row-buffers.

Plumber-Q: Row-Buffer Hit-Rate

Figure 2.9 shows the row-buffer hit-rate of the Plumber-Q scheduling policy. To find the hit-rate for each workload, we use a single-core configuration that connects via a single channel to the memory system. This ensures that the accessing core has the complete memory bandwidth. On average, in spite of providing security against side-channels that target read-queue contentions, Plumber-Q shows a row-buffer hit-rate of 32%.

Figure 2.9: The effective row-buffer hit-rate of the SPEC2006 benchmarks when they are run on a single channel in isolation using the Plumber-Q policy.
As compared to other secure scheduling policies, which tend to show a 0% hit-rate, we observe an average row-buffer hit-rate of 32%.

2.4 EXPERIMENTAL METHODOLOGY

To evaluate the performance benefits of the Plumber-R and Plumber-Q scheduling policies, we develop a trace-based simulator based on USIMM [21]. USIMM provides a detailed memory system model and was used for the Memory Scheduling Championship. We extended USIMM to model the processor core and construct a detailed cache hierarchy. Our processor model supports out-of-order (OoO) execution with a 4-wide issue width. The baseline system configuration is described in Table 2.1. We also implement the Close-Page and Fixed Service secure schedulers [17]. We compute the weighted IPC for each workload. To do this, we compute the IPC of the workload on a single-core system while running FR-FCFS and using all of its resources. For power consumption, we incorporate the internal power calculator of USIMM [21].

Table 2.1: Baseline System Configuration

Number of cores (OoO): 8
Processor clock speed: 3.2GHz
Issue width: 4
Last Level Cache (Shared): 4MB, 8-way, 64B lines
LLC Tag Access Latency: 35 cycles
LLC Data Access Latency: 5 cycles
Memory: DDR3-800
Memory channels: 1
Ranks per channel: 2
Banks: 8
Rows per bank: 64K
Columns (cache lines) per row: 128
DRAM Access Timings (TRCD-TRP-TCAS): 22-22-22
DRAM Refresh Timing (TRFC): 350ns

We chose all benchmarks from the SPEC CPU2006 suite. We warm up the caches for 2 billion instructions and execute 2 billion instructions. To ensure adequate representation of different phases and regions, the 2 billion instructions are collected by sampling 200 million instructions per 1 billion instructions over a 20-billion-instruction window. We execute all benchmarks in rate mode, in which all cores execute the same benchmark.
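The weighted-IPC normalization described above can be sketched as follows. This is a minimal sketch under our own naming; the thesis does not spell out the exact formula, so we use the standard weighted-speedup form (per-core IPC in the shared run divided by the IPC of the same benchmark running alone under FR-FCFS), and the `normalized_slowdown` helper is our assumption:

```python
def weighted_speedup(shared_ipcs, alone_ipcs):
    """Standard weighted-speedup metric for a multi-programmed workload.

    shared_ipcs: IPC of each core when the full workload runs together
                 under the scheduler being evaluated.
    alone_ipcs:  IPC of the same benchmark on a single-core system
                 running FR-FCFS with all resources to itself.
    """
    assert len(shared_ipcs) == len(alone_ipcs)
    return sum(s / a for s, a in zip(shared_ipcs, alone_ipcs))


def normalized_slowdown(shared_ipcs, alone_ipcs):
    # 1.0 means no slowdown relative to the contention-free baseline;
    # larger values mean a larger weighted slowdown.
    n = len(shared_ipcs)
    return n / weighted_speedup(shared_ipcs, alone_ipcs)
```

For example, two cores each running at half their stand-alone IPC give a weighted speedup of 1.0 and a normalized slowdown of 2.0.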
As shown in Table 2.2, we also create fifteen 8-threaded mixed workloads. We divide the SPEC2006 benchmarks into three categories (low MPKI, medium MPKI, and high MPKI) and randomly pick benchmarks from each category to form the mixed workloads. We perform timing simulation until all the benchmarks in the workload finish execution.

Table 2.2: Workload Mixes

mix1: gamess, tonto, perlbench, calculix, povray, hmmer, namd, h264
mix2: gromacs, sjeng, povray, calculix, astar, perlbench, hmmer, namd
mix3: tonto, sjeng, gobmk, namd, povray, astar, calculix, perlbench
mix4: astar, calculix, gamess, gromacs, povray, namd, h264, hmmer
mix5: sjeng, gobmk, povray, hmmer, calculix, perlbench, gromacs, h264
mix6: sphinx, mcf, xalancbmk, bzip2, gcc, omnetpp, GemsFDTD, dealII
mix7: milc, cactusADM, zeusmp, xalancbmk, omnetpp, soplex, dealII, GemsFDTD
mix8: soplex, lbm, bwaves, omnetpp, sphinx, gcc, xalancbmk, cactusADM
mix9: bwaves, cactusADM, omnetpp, milc, xalancbmk, wrf, leslie3d, mcf
mix10: leslie3d, bzip2, libquantum, soplex, GemsFDTD, sphinx, zeusmp, lbm
mix11: gamess, sphinx, mcf, tonto, xalancbmk, bzip2, perlbench, gcc
mix12: omnetpp, GemsFDTD, bzip2, dealII, milc, cactusADM, zeusmp, calculix
mix13: xalancbmk, milc, calculix, povray, omnetpp, soplex, hmmer, namd
mix14: dealII, soplex, GemsFDTD, h264, gromacs, lbm, sjeng, povray
mix15: bwaves, omnetpp, calculix, astar, sphinx, perlbench, gcc, povray

2.5 RESULTS

In this section, we showcase the performance, power, and sensitivity results.

Performance Impact on Plumber-R

Figure 2.10 shows the performance of Plumber-R when compared to a baseline FR-FCFS system that is not secure against the Drip-R vulnerability. On average, Plumber-R has a slowdown of only 15%. The Close-Page policy, which prevents row-conflict side-channels, has a slowdown of 36%. Our analysis shows that mcf and libquantum benefit the most from the Plumber-R performance optimizations. Among low-MPKI benchmarks, povray and tonto benefit the most from using Plumber-R.
Furthermore, none of our benchmarks underperform as compared to the Close-Page policy. The memory-intensive benchmarks reap most of the benefits from employing an open row-buffer.

Figure 2.10: The performance of the Plumber-R policy as compared to a baseline system that uses the FR-FCFS scheduling policy. On average, Plumber-R has a slowdown of only 15% while providing mitigation against Drip-R vulnerabilities and retaining row-buffer hit-rates. On the other hand, Close-Page encounters a slowdown of 36%.

Performance Impact on Plumber-Q

Figure 2.11 shows the performance of Plumber-Q when compared to a baseline FR-FCFS system that is not secure against the Drip-Q vulnerability. On average, Plumber-Q has a slowdown of only 45%. The Fixed Service policy, which prevents read-contention side-channels, has a slowdown of 60%. Our analysis shows that lbm and libquantum benefit the most from the Plumber-Q performance optimizations. Among low-MPKI benchmarks, povray and dealII benefit the most from using Plumber-Q. Furthermore, Plumber-Q underperforms on some of the workloads, such as mix14. This is because Plumber-Q may sometimes converge on an inefficiently large or small time-epoch length.

Figure 2.11: The performance of the Plumber-Q policy as compared to a baseline system that uses the FR-FCFS scheduling policy. On average, Plumber-Q has a slowdown of only 45% while providing mitigation against Drip-Q vulnerabilities and retaining row-buffer hit-rates. Fixed Service encounters a slowdown of 60% as it does not allow any row-buffer hits.

Power Consumption

Figure 2.12 shows the normalized power consumption of memory systems that use Plumber-R and Plumber-Q as compared to a system that uses FR-FCFS. On average, Plumber-Q consumes the lowest amount of power, as it does not issue fake requests the way Fixed Service does. Overall, Plumber-Q consumes 50% lower power as compared to FR-FCFS. Furthermore, Close-Page consumes 30% lower power.
As Plumber-R performs well, it also expends power to service multiple reads and writes while keeping the row-buffer open. Therefore, Plumber-R consumes almost 90% of the power of FR-FCFS.

Figure 2.12: The effective memory power consumption. On average, Plumber-R and Plumber-Q consume 10% and 50% lower power as compared to FR-FCFS.

Sensitivity to Channels

Figure 2.13 shows the sensitivity of performance to the number of channels. As the number of channels increases, the performance of the machine increases. This is because a greater number of channels exposes more bandwidth, and this bandwidth can be wasted with much smaller performance penalties. For instance, Close-Page and Plumber-R both show an increase in performance as bandwidth improves. However, Fixed Service and Plumber-Q show a reduction in relative performance. This is because FR-FCFS improves as the number of channels increases, which causes the relative (normalized) performance of Fixed Service and Plumber-Q to reduce.

Figure 2.13: The performance of the scheduling policies as the number of channels increases. On average, Close-Page, Plumber-R, and FR-FCFS benefit from increasing bandwidth. However, Fixed Service and Plumber-Q reduce their relative performance as bandwidth increases. The performance of Fixed Service reduces the most, by nearly 75%.

Sensitivity to the Number of Cores

Figure 2.14 shows the sensitivity of the different scheduling policies as the number of cores is increased. All policies benefit from having a lower number of cores, as this reduces the pressure on the memory system. Furthermore, FR-FCFS and Plumber-R consistently perform well for 4-core and 8-core systems.
The performance of Close-Page, Plumber-Q, and Fixed Service deteriorates as the number of cores increases. This is because, while Fixed Service and Plumber-Q try to reduce memory contention, an increased number of cores only increases the contention. Similarly, the Close-Page policy ends up frequently closing most pages in the 8-core system.

Figure 2.14: The effect of the number of cores on the scheduling policies. As the number of cores increases, policies that help mitigate contention side-channels, such as Fixed Service and Plumber-Q, perform poorly. However, policies like Plumber-R and FR-FCFS that help maximize row-buffer hits perform well. The performance of the Fixed Service policy reduces by the largest margin, nearly 60%.

2.6 RELATED WORK

We highlight some of the prior work in the areas of memory security, covert-channels, and side-channels.

Row Buffer Conflict Vulnerabilities

Some prior works have proposed solutions to mitigate the row-buffer conflict vulnerability. For instance, Wang et al. design a memory scheduler with temporal partitioning [22]. This scheduler introduces dead time slots between requests. Furthermore, to increase isolation, the temporal partitioning scheduler implements a per-security-domain queuing structure within the memory controller. In a similar vein, prior work has proposed bank triple alternation (BTA) to utilize bank-level parallelism. BTA ensures that two memory requests to the same bank from different consecutive domains must wait for a predetermined worst-case period.
This helps the BTA scheduler remove dead time, as it can keep all its banks busy [17]. Unlike these prior works, our Plumber-R policy does not require security domains. Furthermore, to utilize row-buffer hit-rates, we also introduced an epoch-based design. When compared to prior work, our approaches in Plumber-R provide a dramatic performance improvement.

Read Queue Contention Vulnerabilities

Similar to row-conflicts, our work also shows that read-queue contentions can create side/covert channels. Prior work like Fixed Service [17] has solved this by shaping the memory access pattern, resulting in a high performance cost. As we can see from Figure 2.1(b), the read queuing delay of the Fixed Service scheduler is much longer than that of a "not secure" FR-FCFS scheduling policy. In contrast to these prior works, our Plumber-Q policy reduces the read queuing delay by using epochs across processes and taking advantage of row-buffer hits. We show that the concept of epochs can be broadly applied to both row-conflict and read-queue contention mitigation.

General Timing Channel Mitigation

General timing-channel mitigation across related disciplines (such as caches and other shared resources) has also been examined in prior work. Martin et al. suggest restricting the user's ability to take fine-grained timing measurements (such as rdtscp) [23]. The detection of timing channels via their architectural interference has also been proposed [24]. We believe that some of our ideas in epoch creation and locality can be re-used to improve the performance of caches while also providing security. Several authors have proposed attacks like Flush+Reload that aggressively thrash the cache and evict blocks [25]. These prior works use the evicted-block information to relay data.

2.7 SUMMARY

As machine resources are being shared for efficiency, they also present avenues for side-channels and covert-channels. Today, several researchers in academia and industry are tackling side-channels in caches.
This thesis proposes Drip-R and Drip-Q, which target the main memory system. Thereafter, it proposes two high-performance scheduling policies called Plumber-R and Plumber-Q to mitigate these vulnerabilities. Overall, this thesis makes the following contributions:

• This thesis suggests strategies that help launch these vulnerabilities without having prior knowledge of the address mapping of the target machine.

• We propose two high-performance schedulers called Plumber-R and Plumber-Q that mitigate the Drip-R and Drip-Q vulnerabilities.

• To improve performance, we use the concept of epochs in time and queues. Using epochs, we try to schedule requests to the same row-buffer and improve the row-buffer hit-rate. This helps improve performance.

• The Plumber-R and Plumber-Q schedulers provide 29% and 45% higher speedup than prior state-of-the-art scheduling policies like Fixed Service and Close-Page.

• Furthermore, the Plumber-R and Plumber-Q schedulers also reduce memory power consumption by 10% and 50% respectively when compared to an FR-FCFS scheduler.

As systems with large memory capacities and new memory technologies are being developed, we believe that this thesis provides key design choices to improve their security while maintaining performance.

Bibliography

[1] T. Zhang, Y. Zhang, and R. B. Lee, "DoS attacks on your memory in cloud," in Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, ser. ASIA CCS '17. New York, NY, USA: ACM, 2017.

[2] S. Wang, P. Wang, X. Liu, D. Zhang, and D. Wu, "CacheD: Identifying cache-based timing channels in production software," in Proceedings of the 26th USENIX Conference on Security Symposium, ser. SEC'17. Berkeley, CA, USA: USENIX Association, 2017.

[3] L. Lin, M. Kasper, T. Güneysu, C. Paar, and W.
Burleson, "Trojan side-channels: Lightweight hardware trojans through side-channel engineering," in Proceedings of the 11th International Workshop on Cryptographic Hardware and Embedded Systems, ser. CHES '09. Berlin, Heidelberg: Springer-Verlag, 2009.

[4] P. Kocher, J. Horn, A. Fogh, D. Genkin, D. Gruss, W. Haas, M. Hamburg, M. Lipp, S. Mangard, T. Prescher, M. Schwarz, and Y. Yarom, "Spectre attacks: Exploiting speculative execution," in 2019 IEEE Symposium on Security and Privacy (SP), May 2019.

[5] M. Lipp, M. Schwarz, D. Gruss, T. Prescher, W. Haas, A. Fogh, J. Horn, S. Mangard, P. Kocher, D. Genkin, Y. Yarom, and M. Hamburg, "Meltdown: Reading kernel memory from user space," in Proceedings of the 27th USENIX Conference on Security Symposium, ser. SEC'18. Berkeley, CA, USA: USENIX Association, 2018.

[6] J. Van Bulck, M. Minkin, O. Weisse, D. Genkin, B. Kasikci, F. Piessens, M. Silberstein, T. F. Wenisch, Y. Yarom, and R. Strackx, "Foreshadow: Extracting the keys to the Intel SGX kingdom with transient out-of-order execution," in Proceedings of the 27th USENIX Conference on Security Symposium, ser. SEC'18. Berkeley, CA, USA: USENIX Association, 2018.

[7] M. Yan, J. Choi, D. Skarlatos, A. Morrison, C. Fletcher, and J. Torrellas, "InvisiSpec: Making speculative execution invisible in the cache hierarchy," in 2018 51st Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), Oct 2018.

[8] K. N. Khasawneh, E. M. Koruyeh, C. Song, D. Evtyushkin, D. Ponomarev, and N. Abu-Ghazaleh, "SafeSpec: Banishing the spectre of a meltdown with leakage-free speculation," in 2019 56th ACM/IEEE Design Automation Conference (DAC), June 2019.

[9] G. Saileshwar and M. K. Qureshi, "CleanupSpec: An "undo" approach to safe speculation," in Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, ser. MICRO '52. New York, NY, USA: ACM, 2019.
[10] "Engineering New Protections Into Hardware," accessed: 2019-11-23.

[11] "AMD Product Security," accessed: 2019-11-23.

[12] J.-S. Kim, C. S. Oh, H. Lee, D. Lee, H.-R. Hwang, S. Hwang, B. Na, J. Moon, J.-G. Kim, H. Park, J.-W. Ryu, K. Park, S.-K. Kang, S.-Y. Kim, H. Kim, J.-M. Bang, H. Cho, M. Jang, C. Han, J.-B. Lee, K. Kyung, J.-S. Choi, and Y.-H. Jun, "A 1.2V 12.8GB/s 2Gb mobile wide-I/O DRAM with 4x128 I/Os using TSV-based stacking," in Solid-State Circuits Conference Digest of Technical Papers (ISSCC), 2011 IEEE International, Feb 2011.

[13] JEDEC Standard, "DDR3 Standard," in JESD79-3E, 2015.

[14] ddr3 8gb 1.5v twindie x4x8.pdf - Rev. C 4/13 EN, Micron, 2011.

[15] K.-N. Lim, W.-J. Jang, H.-S. Won, K.-Y. Lee, H. Kim, D.-W. Kim, M.-H. Cho, S.-L. Kim, J.-H. Kang, K.-W. Park, and B.-T. Jeong, "A 1.2V 23nm 6F2 4Gb DDR3 SDRAM with local-bitline sense amplifier, hybrid LIO sense amplifier and dummy-less array architecture," in Solid-State Circuits Conference Digest of Technical Papers (ISSCC), 2012 IEEE International, Feb 2012.

[16] JEDEC Standard, "DDR4 Standard," in JESD79-4, 2015.

[17] A. Shafiee, A. Gundu, M. Shevgoor, R. Balasubramonian, and M. Tiwari, "Avoiding information leakage in the memory controller with fixed service policies," in 2015 48th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), Dec 2015.

[18] P. Pessl, D. Gruss, C. Maurice, M. Schwarz, and S. Mangard, "DRAMA: Exploiting DRAM addressing for cross-CPU attacks," in 25th USENIX Security Symposium (USENIX Security 16), 2016.

[19] S. Rixner, W. J. Dally, U. J. Kapasi, P. Mattson, and J. D. Owens, "Memory access scheduling," in Proceedings of the 27th Annual International Symposium on Computer Architecture, ser. ISCA '00. New York, NY, USA: ACM, 2000.

[20] M. Blackmore, "A quantitative analysis of memory controller page policies," 2013.

[21] N. Chatterjee, R. Balasubramonian, M. Shevgoor, S. H. Pugsley, A. N. Udipi, A. Shafiee, K. Sudan, M.
Awasthi, and Z. Chishti, "USIMM: the Utah SImulated Memory Module, a simulation infrastructure for the JWAC Memory Scheduling Championship," 2012.

[22] Z. Wang and R. B. Lee, "Covert and side channels due to processor architecture," in 2006 22nd Annual Computer Security Applications Conference (ACSAC'06), Dec 2006.

[23] R. Martin, J. Demme, and S. Sethumadhavan, "TimeWarp: Rethinking timekeeping and performance monitoring mechanisms to mitigate side-channel attacks," in 2012 39th Annual International Symposium on Computer Architecture (ISCA), June 2012.

[24] C. Hunger, M. Kazdagli, A. Rawat, A. Dimakis, S. Vishwanath, and M. Tiwari, "Understanding contention-based channels and using them for defense," in 2015 IEEE 21st International Symposium on High Performance Computer Architecture (HPCA), Feb 2015.

[25] Y. Yarom and K. Falkner, "Flush+Reload: A high resolution, low noise, L3 cache side-channel attack," in 23rd USENIX Security Symposium (USENIX Security 14). San Diego, CA: USENIX Association, Aug. 2014.

