TY - ELEC
AU - Suchard, Marc
PY - 2014
TI - When multi-core statistical computing fails for massive sample sizes
LA - eng
M3 - Moving Image
AB - Much of statistical computing is memory-bandwidth limited, not floating-point operation throughput limited as commonly assumed. This often restricts the utility of multi-core computing techniques for improving statistical estimation run-time. I explore this conundrum in inference tools for a massive Bayesian model of sea-surface temperatures across the globe. I describe approaches for computing the data likelihood that exploit fine-scale parallelization for potential scalability to real-time satellite surveillance data. These simple algorithmic changes open the door to using advancing computing technology involving many-core architectures. These architectures provide significantly higher memory bandwidth and inexpensively afford order-of-magnitude run-time speed-ups.
N2 - Much of statistical computing is memory-bandwidth limited, not floating-point operation throughput limited as commonly assumed. This often restricts the utility of multi-core computing techniques for improving statistical estimation run-time. I explore this conundrum in inference tools for a massive Bayesian model of sea-surface temperatures across the globe. I describe approaches for computing the data likelihood that exploit fine-scale parallelization for potential scalability to real-time satellite surveillance data. These simple algorithmic changes open the door to using advancing computing technology involving many-core architectures. These architectures provide significantly higher memory bandwidth and inexpensively afford order-of-magnitude run-time speed-ups.
UR - https://open.library.ubc.ca/collections/48630/items/1.0043879
ER - 