UBC Theses and Dissertations
Computer-aided design of atomic silicon quantum dots and computational applications
Ng, Samuel Sze Hang
Abstract
The ability of dangling bonds (DBs) to encode bit information and perform logic computation has been demonstrated in recent works. This was made possible by the ability to fabricate DBs with atomic precision and to observe and control their charge states. An observed tendency for charge reconfiguration to occur among an array of DBs as the system relaxes to the ground state was exploited to create logic wires and gates. Building on these advances, this thesis explores a breadth of topics surrounding the DB computation platform. A computer-aided design tool, SiQAD, was developed as part of a broader research effort to enable the rapid design and simulation of DB layouts. Among the simulation tools included in SiQAD, this work contributed most extensively to the development of multiple ground-state charge configuration finders, enabling the exploration of prospective DB logic gate and circuit designs. Similarities have been drawn between the scaling properties of DB circuits and those of existing field-coupled nanocomputing (FCN) research, justifying the use of FCN architectures as blueprints for DB logic research. As such, this thesis proposes and analyzes DB logic implementations from the gate level to the application level. This work identifies hardware acceleration of machine learning inference as a novel prospective application of the DB computational platform. Matrix multiplication is a common operation in the inference stage of many neural network implementations and presents a bottleneck to inference performance. Recent works have proposed, or even made commercially available, various hardware inference acceleration frameworks. Among them, the matrix multiply unit (MXU) in Google’s Tensor Processing Unit (TPU) has been identified as architecturally favorable for implementation on the DB platform. This work proposes a DB adaptation of the MXU with logic layouts and clocking configurations optimized for the platform. Comparing the DB MXU to Google’s MXU, this work estimates an improvement of one order of magnitude in area efficiency and up to seven orders of magnitude in power efficiency when pegged to the same clock rate.
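To illustrate the kind of problem the ground-state charge configuration finders mentioned above solve, the following is a minimal, hypothetical sketch (not SiQAD's actual implementation): each DB is assigned a charge of -1 (occupied) or 0 (neutral), and a brute-force search finds the assignment minimizing the total pairwise Coulomb repulsion under a fixed electron count. All names, the interaction constant `K`, and the simplified energy model are illustrative assumptions.

```python
# Hypothetical sketch of an exhaustive ground-state charge finder for a
# small dangling-bond (DB) layout. Simplified model: only pairwise
# Coulomb repulsion between occupied sites, no screening or external field.
from itertools import product
from math import dist

K = 1.0  # illustrative interaction constant (physical units absorbed)

def coulomb_energy(sites, charges):
    """Total pairwise electrostatic energy of one charge configuration."""
    energy = 0.0
    for i in range(len(sites)):
        for j in range(i + 1, len(sites)):
            energy += K * charges[i] * charges[j] / dist(sites[i], sites[j])
    return energy

def ground_state(sites, n_electrons):
    """Exhaustively search charge assignments with a fixed electron count."""
    best = None
    for charges in product((-1, 0), repeat=len(sites)):
        if sum(c == -1 for c in charges) != n_electrons:
            continue
        e = coulomb_energy(sites, charges)
        if best is None or e < best[0]:
            best = (e, charges)
    return best

# Four DBs in a line with two electrons: the electrons settle at the two
# outer sites, maximizing their separation to minimize mutual repulsion.
sites = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
energy, charges = ground_state(sites, n_electrons=2)
# charges == (-1, 0, 0, -1)
```

Exhaustive search scales exponentially with the number of DBs, which is why practical tools pair it with heuristic finders (e.g. simulated annealing) for larger layouts.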
Item Metadata

Title | Computer-aided design of atomic silicon quantum dots and computational applications
Creator | Ng, Samuel Sze Hang
Publisher | University of British Columbia
Date Issued | 2020
Language | eng
Date Available | 2020-08-24
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0392909
Degree Grantor | University of British Columbia
Graduation Date | 2020-11
Scholarly Level | Graduate
Aggregated Source Repository | DSpace