Show simple item record

dc.contributor.advisor  Asaduzzaman, Abu
dc.contributor.author  Gummadi, Deepthi
dc.date.accessioned  2014-11-17T16:22:39Z
dc.date.available  2014-11-17T16:22:39Z
dc.date.issued  2014-05
dc.identifier.other  t14013
dc.identifier.uri  http://hdl.handle.net/10057/10959
dc.description  Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical Engineering and Computer Science
dc.description.abstract  For fast, effective analysis of large complex systems, high-performance computing is essential. The NVIDIA Compute Unified Device Architecture (CUDA)-assisted central processing unit (CPU) / graphics processing unit (GPU) computing platform has proven its potential for high-performance computing. In CPU/GPU computing, original data and instructions are copied from CPU main memory to GPU global memory. Inside the GPU, it is beneficial to keep data in shared memory (shared only by the threads of that block) rather than in global memory (shared by all threads). However, shared memory is much smaller than global memory (on the Fermi Tesla C2075, total shared memory per block is 48 KB and total global memory is 6 GB). In this paper, we introduce a CPU-memory to GPU-global-memory mapping technique that improves GPU and overall system performance by increasing the effectiveness of GPU shared memory. We use NVIDIA 448-core Fermi and 2496-core Kepler GPU cards in this study. Experimental results from solving Laplace's equation for 512x512 matrices on a Fermi GPU card show that the proposed CPU-to-GPU memory mapping technique helps decrease the overall execution time by more than 75%.
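The abstract contrasts per-block shared memory with global memory for the Laplace's-equation workload. A minimal CUDA sketch of the standard tiling pattern (staging a tile of the global array into shared memory before computing a Jacobi relaxation step) illustrates the mechanism the thesis builds on; the kernel, tile size, and launch configuration below are illustrative assumptions, not the thesis's actual mapping technique.

```cuda
#define TILE 16

// Illustrative Jacobi relaxation step for Laplace's equation.
// Each block stages a (TILE+2)x(TILE+2) tile (interior plus halo) of the
// n x n global array into fast per-block shared memory, then each thread
// updates one point from its four staged neighbors. Assumes n is a
// multiple of TILE (true for the 512x512 case in the abstract).
__global__ void jacobiStep(const float *in, float *out, int n)
{
    __shared__ float tile[TILE + 2][TILE + 2];

    int gx = blockIdx.x * TILE + threadIdx.x;  // global column
    int gy = blockIdx.y * TILE + threadIdx.y;  // global row
    int lx = threadIdx.x + 1;                  // local column (skip halo)
    int ly = threadIdx.y + 1;                  // local row (skip halo)

    // Stage this thread's point and, on the block edges, the halo cells.
    tile[ly][lx] = in[gy * n + gx];
    if (threadIdx.x == 0 && gx > 0)
        tile[ly][0] = in[gy * n + gx - 1];
    if (threadIdx.x == TILE - 1 && gx < n - 1)
        tile[ly][TILE + 1] = in[gy * n + gx + 1];
    if (threadIdx.y == 0 && gy > 0)
        tile[0][lx] = in[(gy - 1) * n + gx];
    if (threadIdx.y == TILE - 1 && gy < n - 1)
        tile[TILE + 1][lx] = in[(gy + 1) * n + gx];
    __syncthreads();

    // Update interior points only; the physical boundary stays fixed.
    if (gx > 0 && gx < n - 1 && gy > 0 && gy < n - 1)
        out[gy * n + gx] = 0.25f * (tile[ly][lx - 1] + tile[ly][lx + 1] +
                                    tile[ly - 1][lx] + tile[ly + 1][lx]);
}
```

A hypothetical launch for the 512x512 case would be `jacobiStep<<<dim3(512 / TILE, 512 / TILE), dim3(TILE, TILE)>>>(d_in, d_out, 512);`, iterated until convergence. Each global-memory value is loaded once per block instead of once per neighboring thread, which is the benefit shared memory provides here.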
dc.format.extent  xii, 62 p.
dc.language.iso  en_US
dc.publisher  Wichita State University
dc.rights  Copyright 2014 Deepthi Gummadi
dc.subject.lcsh  Electronic dissertations
dc.title  Improving GPU performance by regrouping CPU-memory data
dc.type  Thesis

