The Case for Higher Computational Density in the Memory-Bound FDTD Method within Multicore Environments

Bibliographic Details
Main Author: Mohammed F. Hadi
Format: Article
Language:English
Published: Wiley 2012-01-01
Series:International Journal of Antennas and Propagation
Online Access:http://dx.doi.org/10.1155/2012/280359
Description
Summary:It is argued here that more accurate, though more compute-intensive, alternatives to certain computational methods, long deemed too inefficient and wasteful in serial codes, can be more efficient and cost-effective when implemented in parallel codes designed to run on today's multicore and many-core environments. This argument is most germane to methods that involve large data sets with relatively limited computational density, that is, algorithms with small ratios of floating-point operations to memory accesses. The examples chosen here to support this argument represent a variety of high-order finite-difference time-domain (FDTD) algorithms. It will be demonstrated that a three- to eightfold increase in floating-point operations due to higher-order finite differences translates to only a two- to threefold increase in actual run times on either today's graphical or central processing units. It is hoped that this argument will convince researchers to revisit certain numerical techniques that have long been shelved and to reevaluate them for multicore usability.
ISSN: 1687-5869
1687-5877
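The abstract's central claim, that higher-order finite differences raise FLOP counts much faster than they raise memory traffic, can be illustrated with a simple cost model. The sketch below is a hypothetical illustration, not code from the article: it assumes an idealized 1D FDTD E-field update in which, with perfect cache reuse, each grid point costs one E load, one E store, and one H load regardless of stencil order, while the FLOP count grows linearly with the stencil half-width.

```python
# Hypothetical cost model (not from the article): arithmetic intensity
# (FLOPs per byte) of a 1D FDTD E-field update as stencil order grows.

def fdtd_update_cost(half_width, bytes_per_value=4):
    """Model the cost of one E-point update with a 2*half_width-order stencil.

    FLOPs: half_width coefficient multiplies, half_width H-pair
    subtractions, half_width - 1 accumulations, plus one scale by the
    update constant and one add into E, i.e. 3*half_width + 1.
    Bytes: assuming ideal cache reuse, per-point traffic is one E load,
    one E store, and one new H load, independent of stencil order.
    """
    flops = 3 * half_width + 1
    bytes_moved = 3 * bytes_per_value
    return flops, bytes_moved

for half_width in (1, 2, 4):  # 2nd-, 4th-, and 8th-order differences
    flops, bytes_moved = fdtd_update_cost(half_width)
    print(f"order {2 * half_width}: {flops} FLOPs, {bytes_moved} B, "
          f"intensity {flops / bytes_moved:.2f} FLOP/B")
```

Under these assumptions, moving from a second- to an eighth-order stencil roughly triples the FLOP count while the modeled memory traffic stays fixed, which is consistent with the abstract's observation that a three- to eightfold increase in floating-point work costs only a two- to threefold increase in run time on memory-bound hardware.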