Fast distributed large-pixel-count hologram computation using a GPU cluster

Title:
Fast distributed large-pixel-count hologram computation using a GPU cluster
Journal Title:
Applied Optics
Keywords:
Publication Date:
09 September 2013
Citation:
Yuechao Pan, Xuewu Xu, and Xinan Liang, "Fast distributed large-pixel-count hologram computation using a GPU cluster," Appl. Opt. 52, 6562-6571 (2013)
Abstract:
Large-pixel-count holograms are an essential component of large-size holographic 3D displays, but generating such holograms is computationally demanding. To address this issue, we have built a GPU cluster with 32.5 Tflop/s of computing power and implemented distributed hologram computation on it, using speed-improvement techniques such as shared memory on the GPU, GPU-level adaptive load balancing, and node-level load distribution. With these techniques on the GPU cluster, we have achieved a 71.4-fold increase in computation speed for 186M-pixel holograms. Furthermore, we have used diffraction limits and subdivision of holograms to overcome the GPU memory limit in computing large-pixel-count holograms. 745M-pixel and 1.80G-pixel holograms were computed in 343 and 3326 seconds, respectively, for more than 2 million object points with RGB colors. Color 3D objects with 591k points were successfully reconstructed from a 186M-pixel hologram computed in 9.05 seconds using all three of the above speed-improvement techniques. These results show that distributed hologram computation on a GPU cluster is a promising approach to increasing the computation speed of large-pixel-count holograms for large-size holographic displays.
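To illustrate the shared-memory technique named in the abstract, the following is a minimal CUDA sketch of point-source fringe accumulation that stages object points through on-chip shared memory. It is an assumed implementation, not code from the paper: the names (ObjPoint, cghKernel, TILE) and the single-channel, real-valued cosine fringe model are illustrative choices.

// Hypothetical sketch (not the paper's code): each thread computes one
// hologram pixel, and each thread block cooperatively caches object
// points in shared memory so every point is read from fast on-chip
// storage rather than repeatedly from global memory.
#include <cuda_runtime.h>
#include <math.h>

struct ObjPoint { float x, y, z, amp; };  // one object point (single color channel shown)

#define TILE 256  // object points cached per shared-memory pass

// Launch with 16x16 thread blocks so the 256 threads per block
// cooperatively fill each shared-memory tile.
__global__ void cghKernel(const ObjPoint *pts, int nPts,
                          float *fringe, int width, int height,
                          float pitch, float k /* wavenumber 2*pi/lambda */)
{
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    bool inside = (px < width) && (py < height);  // guard, but keep thread alive for syncs

    // Hologram-plane coordinates of this pixel, centered at the origin.
    float hx = (px - 0.5f * width)  * pitch;
    float hy = (py - 0.5f * height) * pitch;

    __shared__ ObjPoint tile[TILE];
    float acc = 0.0f;
    int tid = threadIdx.y * blockDim.x + threadIdx.x;

    for (int base = 0; base < nPts; base += TILE) {
        if (tid < TILE && base + tid < nPts)
            tile[tid] = pts[base + tid];      // one coalesced global read per point
        __syncthreads();

        int n = min(TILE, nPts - base);
        if (inside) {
            for (int i = 0; i < n; ++i) {     // reuse each cached point across the block
                float dx = hx - tile[i].x, dy = hy - tile[i].y;
                float r  = sqrtf(dx * dx + dy * dy + tile[i].z * tile[i].z);
                acc += tile[i].amp * cosf(k * r);  // real-valued fringe contribution
            }
        }
        __syncthreads();
    }
    if (inside)
        fringe[py * width + px] = acc;
}

In a cluster setting of the kind the abstract describes, one plausible arrangement is for each node to compute a disjoint horizontal strip of the fringe buffer (node-level load distribution), with per-GPU work sizes adjusted at runtime (GPU-level adaptive load balancing); the exact scheme used in the paper is not detailed here.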
License type:
Publisher Copyright
Funding Info:
Description:
© 2013 Optical Society of America. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modifications of the content of this paper are prohibited.
ISSN:
1559-128X
Files uploaded:

fy13-2262.pdf (537.10 KB, PDF)