Professional CUDA C Programming
Code download: http://www.wrox.com/WileyCDA/WroxTitle/Professional-CUDA-C-Programming.productCd-1118739329,descCd-DOWNLOAD.html
Memory Management
Memory management in CUDA programming resembles ordinary C programming, with the addition that the programmer is explicitly responsible for device memory management and for moving data between the host and the device:
➤ Allocating and deallocating device memory
➤ Transferring data between the host and the device
Memory Allocation and Deallocation
// allocate device memory:
cudaError_t cudaMalloc(void **devPtr, size_t count);
// initialize device memory to a value:
cudaError_t cudaMemset(void *devPtr, int value, size_t count);
// free device memory:
cudaError_t cudaFree(void *devPtr);
Device memory allocation and deallocation are expensive operations, so device memory should be reused as much as possible rather than repeatedly allocated and freed.
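As a minimal sketch of that reuse pattern (the iteration count nIters, the host buffers h_data[i], and the kernel launch are hypothetical placeholders, not from the book's code):
// Hypothetical sketch: allocate the device buffer once and reuse it every
// iteration, instead of calling cudaMalloc/cudaFree inside the loop.
float *d_buf = NULL;
cudaMalloc((void **)&d_buf, nbytes);              // single allocation up front
for (int iter = 0; iter < nIters; iter++)         // nIters: assumed iteration count
{
    cudaMemcpy(d_buf, h_data[iter], nbytes, cudaMemcpyHostToDevice);
    // ... launch kernels that read/write d_buf here ...
    cudaMemcpy(h_data[iter], d_buf, nbytes, cudaMemcpyDeviceToHost);
}
cudaFree(d_buf);                                  // single deallocation at the end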
Memory Transfer
cudaError_t cudaMemcpy(void *dst, const void *src, size_t count,
                       enum cudaMemcpyKind kind);
// cudaMemcpy is synchronous in most cases: the host blocks until the copy completes.
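For comparison, a minimal sketch of the asynchronous variant, cudaMemcpyAsync, which returns control to the host immediately when the source is pinned host memory (pinned memory is covered in a later section; the names d_a, h_pinned, and nbytes are assumed):
// Hypothetical sketch: asynchronous copy on a user-created stream.
cudaStream_t stream;
cudaStreamCreate(&stream);
cudaMemcpyAsync(d_a, h_pinned, nbytes, cudaMemcpyHostToDevice, stream);
// ... independent host work can overlap with the copy here ...
cudaStreamSynchronize(stream);   // block until the copy has finished
cudaStreamDestroy(stream);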
Example:
#include "../common/common.h"
#include <cuda_runtime.h>
#include <stdio.h>
/*
* An example of using CUDA's memory copy API to transfer data to and from the
* device. In this case, cudaMalloc is used to allocate memory on the GPU and
* cudaMemcpy is used to transfer the contents of host memory to an array
* allocated using cudaMalloc.
*/
int main(int argc, char **argv)
{
    // set up device
    int dev = 0;
    CHECK(cudaSetDevice(dev));

    // memory size
    unsigned int isize = 1 << 22;
    unsigned int nbytes = isize * sizeof(float);

    // get device information
    cudaDeviceProp deviceProp;
    CHECK(cudaGetDeviceProperties(&deviceProp, dev));
    printf("%s starting at ", argv[0]);
    printf("device %d: %s memory size %d nbyte %5.2fMB\n", dev,
           deviceProp.name, isize, nbytes / (1024.0f * 1024.0f));

    // allocate the host memory
    float *h_a = (float *)malloc(nbytes);

    // allocate the device memory
    float *d_a;
    CHECK(cudaMalloc((float **)&d_a, nbytes));

    // initialize the host memory
    for (unsigned int i = 0; i < isize; i++) h_a[i] = 0.5f;

    // transfer data from the host to the device
    CHECK(cudaMemcpy(d_a, h_a, nbytes, cudaMemcpyHostToDevice));

    // transfer data from the device to the host
    CHECK(cudaMemcpy(h_a, d_a, nbytes, cudaMemcpyDeviceToHost));

    // free memory
    CHECK(cudaFree(d_a));
    free(h_a);

    // reset device
    CHECK(cudaDeviceReset());
    return EXIT_SUCCESS;
}
Compile and run:
hym@hym-ThinkPad-Edge-E440:~/CodeSamples/chapter04$ nvcc -O3 memTransfer.cu -o memTransfer
hym@hym-ThinkPad-Edge-E440:~/CodeSamples/chapter04$ nvprof ./memTransfer
==8038== NVPROF is profiling process 8038, command: ./memTransfer
./memTransfer starting at device 0: GeForce GT 740M memory size 4194304 nbyte 16.00MB
==8038== Profiling application: ./memTransfer
==8038== Profiling result:
Time(%) Time Calls Avg Min Max Name
50.53% 2.7607ms 1 2.7607ms 2.7607ms 2.7607ms [CUDA memcpy HtoD]
49.47% 2.7030ms 1 2.7030ms 2.7030ms 2.7030ms [CUDA memcpy DtoH]
The CPU-GPU link (PCIe) offers relatively low bandwidth (about 8 GB/s for PCIe Gen2), whereas the bandwidth between the GPU and its on-board memory is far higher. For programming this means the amount of data transferred between the CPU and the GPU should be kept to a minimum.
Pinned Memory
Why is a virtual memory address space needed?
Suppose a process needs 4 MB of memory while the machine has only 1 MB of physical memory. If the process addressed physical memory directly, it simply could not run for lack of memory. With virtual memory, the process only needs the data it is currently working on to be resident, so it can still execute by paging data in and out as it runs.
Host memory is managed in pages, with a mapping between virtual and physical addresses. To copy a host variable to the device, the physical address of that variable must be known, but the virtual-to-physical mapping on the host can change over time (pages may be moved or swapped out), so the device cannot safely access pageable host memory. Consequently, when pageable host data is transferred to the device, the CUDA driver first allocates a temporary page-locked (pinned) host buffer and copies the host data into it; the GPU then reads the data from that pinned buffer.
Pinned memory can also be allocated explicitly and used directly:
cudaError_t cudaMallocHost(void **devPtr, size_t count);
Because the device can access pinned memory directly (the transfer still crosses PCIe; what is saved is the extra copy from pageable memory into a pinned staging buffer), it offers considerably higher transfer bandwidth than pageable memory. The trade-off is that pinning memory reduces the amount of pageable memory available to the operating system, which can degrade overall virtual-memory performance.
cudaError_t status = cudaMallocHost((void **)&h_aPinned, bytes);
if (status != cudaSuccess) {
    fprintf(stderr, "Error returned from pinned host memory allocation\n");
    exit(1);
}
// free pinned memory:
cudaError_t cudaFreeHost(void *ptr);
Example:
hym@hym-ThinkPad-Edge-E440:~/CodeSamples/chapter04$ nvcc -O3 pinMemTransfer.cu -o pinMemTransfer
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
hym@hym-ThinkPad-Edge-E440:~/CodeSamples/chapter04$ nvprof ./pinMemTransfer
==9488== NVPROF is profiling process 9488, command: ./pinMemTransfer
./pinMemTransfer starting at device 0: GeForce GT 740M memory size 4194304 nbyte 16.00MB canMap 1
==9488== Profiling application: ./pinMemTransfer
==9488== Profiling result:
Time(%) Time Calls Avg Min Max Name
50.71% 2.5983ms 1 2.5983ms 2.5983ms 2.5983ms [CUDA memcpy HtoD]
49.29% 2.5255ms 1 2.5255ms 2.5255ms 2.5255ms [CUDA memcpy DtoH]
Pinned memory is more expensive to allocate and free than pageable memory, but it performs much better for large transfers. How much pinned memory helps also depends on the device's compute capability. Batching many small transfers into a single large transfer through pinned memory greatly reduces transfer overhead, and on some GPUs data transfers can be overlapped with kernel execution.
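A minimal sketch of that batching idea, assuming a set of small host arrays h_small[i] of chunk bytes each (these names and sizes are made up for illustration):
// Hypothetical sketch: pack many small arrays into one pinned staging buffer
// and move them with a single large cudaMemcpy instead of many small ones.
const int    nArrays = 8;            // assumed number of small arrays
const size_t chunk   = 64 * 1024;    // assumed size of each array in bytes
char *h_staging;                     // pinned staging buffer on the host
cudaMallocHost((void **)&h_staging, nArrays * chunk);
for (int i = 0; i < nArrays; i++)
    memcpy(h_staging + i * chunk, h_small[i], chunk);   // gather on the host
char *d_packed;
cudaMalloc((void **)&d_packed, nArrays * chunk);
cudaMemcpy(d_packed, h_staging, nArrays * chunk, cudaMemcpyHostToDevice);  // one transfer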
Zero-Copy Memory
Normally the host cannot directly access device variables and the device cannot directly access host variables. Zero-copy memory is the exception: both the host and the device can access it.
The advantages of using zero-copy memory are:
➤ Leveraging host memory when device memory is insufficient
➤ Avoiding explicit data transfers between the host and the device
➤ Improving PCIe transfer rates
One caveat: memory accesses to zero-copy memory must be synchronized between the host and the device. If both sides modify the same zero-copy data at the same time, the result is undefined.
Zero-copy memory is, in essence, pinned memory that has been mapped into the device address space.
cudaError_t cudaHostAlloc(void **pHost, size_t count, unsigned int flags);
With cudaHostAllocDefault, cudaHostAlloc behaves identically to cudaMallocHost. cudaHostAllocPortable makes the pinned allocation usable by all CUDA contexts, not only the context (or thread) that performed the allocation. cudaHostAllocWriteCombined is for special system configurations: write-combined pinned memory transfers faster over PCIe, but reading it from the host is inefficient, so it is best suited to buffers that the host writes and the device reads. The most commonly used flag is cudaHostAllocMapped, which yields standard zero-copy memory. The device-side address of mapped memory is obtained with the following API:
cudaError_t cudaHostGetDevicePointer(void **pDevice, void *pHost, unsigned int flags);
// note: flags must currently be set to 0
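A minimal sketch of the two flag combinations discussed above (h_mapped, h_wc, d_mapped, and nBytes are illustrative names):
// Hypothetical sketch: standard zero-copy memory, mapped into the device address space.
float *h_mapped, *d_mapped;
cudaHostAlloc((void **)&h_mapped, nBytes, cudaHostAllocMapped);
cudaHostGetDevicePointer((void **)&d_mapped, (void *)h_mapped, 0);   // flags must be 0
// Hypothetical sketch: write-combined pinned memory, best when the host only
// writes the buffer and the device only reads it.
float *h_wc;
cudaHostAlloc((void **)&h_wc, nBytes, cudaHostAllocWriteCombined);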
Using zero-copy memory for data that the device reads and writes frequently is unwise. The root cause is again the low bandwidth between the GPU and the CPU; under frequent access, zero-copy memory performs noticeably worse than global memory.
The following code compares the performance of zero-copy memory with device (global) memory:
#include "../common/common.h"
#include <cuda_runtime.h>
#include <stdio.h>
#include <math.h>
/*
* This example demonstrates the use of zero-copy memory to remove the need to
* explicitly issue a memcpy operation between the host and device. By mapping
* host, page-locked memory into the device's address space, the address can
* directly reference a host array and transfer its contents over the PCIe bus.
*
* This example compares performing a vector addition with and without zero-copy
* memory.
*/
void checkResult(float *hostRef, float *gpuRef, const int N)
{
    double epsilon = 1.0E-8;

    for (int i = 0; i < N; i++)
    {
        // use fabs for floating-point differences (abs would truncate to int)
        if (fabs(hostRef[i] - gpuRef[i]) > epsilon)
        {
            printf("Arrays do not match!\n");
            printf("host %5.2f gpu %5.2f at current %d\n", hostRef[i],
                   gpuRef[i], i);
            break;
        }
    }

    return;
}

void initialData(float *ip, int size)
{
    int i;

    for (i = 0; i < size; i++)
    {
        ip[i] = (float)(rand() & 0xFF) / 10.0f;
    }

    return;
}

void sumArraysOnHost(float *A, float *B, float *C, const int N)
{
    for (int idx = 0; idx < N; idx++)
    {
        C[idx] = A[idx] + B[idx];
    }
}

__global__ void sumArrays(float *A, float *B, float *C, const int N)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    if (i < N) C[i] = A[i] + B[i];
}

__global__ void sumArraysZeroCopy(float *A, float *B, float *C, const int N)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    if (i < N) C[i] = A[i] + B[i];
}
int main(int argc, char **argv)
{
    // set up device
    int dev = 0;
    CHECK(cudaSetDevice(dev));

    // get device properties
    cudaDeviceProp deviceProp;
    CHECK(cudaGetDeviceProperties(&deviceProp, dev));

    // check if support mapped memory
    if (!deviceProp.canMapHostMemory)
    {
        printf("Device %d does not support mapping CPU host memory!\n", dev);
        CHECK(cudaDeviceReset());
        exit(EXIT_SUCCESS);
    }

    printf("Using Device %d: %s ", dev, deviceProp.name);

    // set up data size of vectors
    int ipower = 10;

    if (argc > 1) ipower = atoi(argv[1]);

    int nElem = 1 << ipower;
    size_t nBytes = nElem * sizeof(float);

    if (ipower < 18)
    {
        printf("Vector size %d power %d nbytes %3.0f KB\n", nElem, ipower,
               (float)nBytes / (1024.0f));
    }
    else
    {
        printf("Vector size %d power %d nbytes %3.0f MB\n", nElem, ipower,
               (float)nBytes / (1024.0f * 1024.0f));
    }

    // part 1: using device memory
    // malloc host memory
    float *h_A, *h_B, *hostRef, *gpuRef;
    h_A     = (float *)malloc(nBytes);
    h_B     = (float *)malloc(nBytes);
    hostRef = (float *)malloc(nBytes);
    gpuRef  = (float *)malloc(nBytes);

    // initialize data at host side
    initialData(h_A, nElem);
    initialData(h_B, nElem);
    memset(hostRef, 0, nBytes);
    memset(gpuRef,  0, nBytes);

    // add vector at host side for result checks
    sumArraysOnHost(h_A, h_B, hostRef, nElem);

    // malloc device global memory
    float *d_A, *d_B, *d_C;
    CHECK(cudaMalloc((float**)&d_A, nBytes));
    CHECK(cudaMalloc((float**)&d_B, nBytes));
    CHECK(cudaMalloc((float**)&d_C, nBytes));

    // transfer data from host to device
    CHECK(cudaMemcpy(d_A, h_A, nBytes, cudaMemcpyHostToDevice));
    CHECK(cudaMemcpy(d_B, h_B, nBytes, cudaMemcpyHostToDevice));

    // set up execution configuration
    int iLen = 512;
    dim3 block (iLen);
    dim3 grid  ((nElem + block.x - 1) / block.x);

    sumArrays<<<grid, block>>>(d_A, d_B, d_C, nElem);

    // copy kernel result back to host side
    CHECK(cudaMemcpy(gpuRef, d_C, nBytes, cudaMemcpyDeviceToHost));

    // check device results
    checkResult(hostRef, gpuRef, nElem);

    // free device global memory
    CHECK(cudaFree(d_A));
    CHECK(cudaFree(d_B));

    // free host memory
    free(h_A);
    free(h_B);

    // part 2: using zerocopy memory for array A and B
    // allocate zerocpy memory
    CHECK(cudaHostAlloc((void **)&h_A, nBytes, cudaHostAllocMapped));
    CHECK(cudaHostAlloc((void **)&h_B, nBytes, cudaHostAllocMapped));

    // initialize data at host side
    initialData(h_A, nElem);
    initialData(h_B, nElem);
    memset(hostRef, 0, nBytes);
    memset(gpuRef,  0, nBytes);

    // pass the pointer to device
    CHECK(cudaHostGetDevicePointer((void **)&d_A, (void *)h_A, 0));
    CHECK(cudaHostGetDevicePointer((void **)&d_B, (void *)h_B, 0));

    // add at host side for result checks
    sumArraysOnHost(h_A, h_B, hostRef, nElem);

    // execute kernel with zero copy memory
    sumArraysZeroCopy<<<grid, block>>>(d_A, d_B, d_C, nElem);

    // copy kernel result back to host side
    CHECK(cudaMemcpy(gpuRef, d_C, nBytes, cudaMemcpyDeviceToHost));

    // check device results
    checkResult(hostRef, gpuRef, nElem);

    // free memory
    CHECK(cudaFree(d_C));
    CHECK(cudaFreeHost(h_A));
    CHECK(cudaFreeHost(h_B));

    free(hostRef);
    free(gpuRef);

    // reset device
    CHECK(cudaDeviceReset());
    return EXIT_SUCCESS;
}
Compile and run:
hym@hym-ThinkPad-Edge-E440:~/CodeSamples/chapter04$ nvcc -O3 sumArrayZerocpy.cu -o sumZerocpy
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
hym@hym-ThinkPad-Edge-E440:~/CodeSamples/chapter04$ nvprof ./sumZerocpy
==11871== NVPROF is profiling process 11871, command: ./sumZerocpy
Using Device 0: GeForce GT 740M Vector size 1024 power 10 nbytes 4 KB
==11871== Profiling application: ./sumZerocpy
==11871== Profiling result:
Time(%) Time Calls Avg Min Max Name
36.62% 5.3440us 2 2.6720us 2.6560us 2.6880us [CUDA memcpy DtoH]
32.46% 4.7360us 1 4.7360us 4.7360us 4.7360us sumArraysZeroCopy(float*, float*, float*, int)
17.76% 2.5920us 2 1.2960us 1.2800us 1.3120us [CUDA memcpy HtoD]
13.16% 1.9200us 1 1.9200us 1.9200us 1.9200us sumArrays(float*, float*, float*, int)
Usage (an optional command-line argument sets the base-2 exponent of the vector size):
$ ./sumZerocpy <size-log-2>
Therefore, zero-copy memory is a good choice for sharing a small amount of data between the host and the device; it simplifies programming while offering reasonable performance.
Heterogeneous architectures come in two forms: integrated and discrete. In an integrated architecture the CPU and GPU are on the same die and share physical memory; there, zero-copy memory is a natural fit. In a discrete architecture the CPU and GPU sit on separate chips connected by the PCIe bus, and zero-copy is advantageous only in specific scenarios. In any case, do not overuse zero-copy memory, because device threads read it very slowly.
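As a sketch, the integrated/discrete distinction and mapped-memory support can both be queried at runtime from cudaDeviceProp (a hedged illustration, not from the book's code; the CHECK macro from common.h is assumed):
// Hypothetical sketch: only favor zero-copy when the GPU is integrated
// (shares physical memory with the CPU) and can map host memory.
cudaDeviceProp prop;
CHECK(cudaGetDeviceProperties(&prop, 0));
if (prop.integrated && prop.canMapHostMemory)
{
    // integrated GPU: zero-copy memory is attractive for shared data
}
else
{
    // discrete GPU behind PCIe: use zero-copy sparingly, for small or
    // rarely accessed data only
}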
Unified Virtual Addressing
Devices of compute capability 2.0 and higher support Unified Virtual Addressing (UVA), a feature first introduced in CUDA 4.0 and supported on 64-bit Linux systems. With UVA, the CPU and the GPU share one contiguous virtual address space.
Before UVA, pointers to host memory and pointers to device memory had to be managed separately. With UVA, which physical memory a pointer actually refers to is transparent to the application; everything appears as one contiguous address space.
As a result, the address of pinned memory allocated with cudaHostAlloc is the same for the device and the host, and can be used inside a kernel directly. Recall the zero-copy workflow used earlier:
1. Allocate pinned memory mapped into the device address space.
2. Obtain the device-side mapped address from the host address.
3. Use that mapped address in the kernel.
With UVA, step 2 becomes unnecessary; the same pointer can be used directly, as the following code shows:
// allocate zero-copy memory at the host side
cudaHostAlloc((void **)&h_A, nBytes, cudaHostAllocMapped);
cudaHostAlloc((void **)&h_B, nBytes, cudaHostAllocMapped);
// initialize data at the host side
initialData(h_A, nElem);
initialData(h_B, nElem);
// invoke the kernel with zero-copy memory
sumArraysZeroCopy<<<grid, block>>>(h_A, h_B, d_C, nElem);
Compile and run:
hym@hym-ThinkPad-Edge-E440:~/CodeSamples/Solutions/chapter04$ nvcc -O3 sumArrayZerocpyUVA.cu -o sumArrayZerocpyUVA
nvcc warning : The 'compute_20', 'sm_20', and 'sm_21' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
hym@hym-ThinkPad-Edge-E440:~/CodeSamples/Solutions/chapter04$ nvprof ./sumArrayZerocpyUVA
==16987== NVPROF is profiling process 16987, command: ./sumArrayZerocpyUVA
Using Device 0: GeForce GT 740M Vector size 16777216 power 24 nbytes 64 MB
sumArrays, elapsed = 0.015717 s
sumArraysZeroCopy, elapsed = 0.020800 s
sumArraysZeroCopy w/ UVA, elapsed = 0.020872 s
==16987== Profiling application: ./sumArrayZerocpyUVA
==16987== Profiling result:
Time(%) Time Calls Avg Min Max Name
29.82% 33.489ms 3 11.163ms 11.107ms 11.256ms [CUDA memcpy DtoH]
19.39% 21.775ms 2 10.887ms 10.847ms 10.927ms [CUDA memcpy HtoD]
18.50% 20.778ms 1 20.778ms 20.778ms 20.778ms sumArraysZeroCopyWithUVA(float*, float*, float*, int)
18.46% 20.733ms 1 20.733ms 20.733ms 20.733ms sumArraysZeroCopy(float*, float*, float*, int)
13.84% 15.545ms 1 15.545ms 15.545ms 15.545ms sumArrays(float*, float*, float*, int)
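Whether UVA is in effect can also be checked at runtime before passing a cudaHostAlloc pointer straight to a kernel; a minimal sketch (assuming the CHECK macro from common.h and the dev variable from the examples above):
// Hypothetical sketch: verify that the device reports unified addressing.
cudaDeviceProp prop;
CHECK(cudaGetDeviceProperties(&prop, dev));
if (!prop.unifiedAddressing)
{
    printf("Device %d does not support UVA\n", dev);
    // fall back to cudaHostGetDevicePointer as in the zero-copy example above
}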
Unified Memory
CUDA 6.0 introduced Unified Memory, a feature designed to simplify memory management in CUDA.
Unified Memory relies on UVA, but they are entirely different technologies. UVA provides a single virtual address space for all CPUs and GPUs in the system, but it never migrates data from one physical location to another; automatic migration is exactly what Unified Memory adds.
Unified Memory offers a "single-pointer-to-data" model that is conceptually similar to zero-copy memory. However, zero-copy allocations live in host memory, so kernels that access them generally suffer from the high latency of reaching across the PCIe bus. Unified Memory instead decouples the memory space from the execution space: data is migrated transparently, on demand, to whichever processor (host or device) is accessing it, so accesses are served from local memory and both locality and performance improve.
The original CUDA program:
#include <stdio.h>
#include <stdlib.h>

__global__ void AplusB(int *ret, int a, int b)
{
    ret[threadIdx.x] = a + b + threadIdx.x;
}

int main()
{
    int *ret;

    //**************************************
    cudaMalloc(&ret, 1000 * sizeof(int));
    AplusB<<<1, 1000>>>(ret, 10, 100);
    //**************************************

    int *host_ret = (int *)malloc(1000 * sizeof(int));
    cudaMemcpy(host_ret, ret, 1000 * sizeof(int), cudaMemcpyDefault);

    for (int i = 0; i < 1000; i++)
        printf("%d: A + B = %d\n", i, host_ret[i]);

    free(host_ret);
    cudaFree(ret);
    return 0;
}
Using Unified Memory:
#include "../common/common.h"
#include <cuda_runtime.h>
#include <stdio.h>
__global__ void AplusB(int *ret, int a, int b)
{
ret[threadIdx.x] = a + b + threadIdx.x;
}
int main()
{
int *ret;
//***********************************************
CHECK(cudaMallocManaged(&ret, 1000 * sizeof(int)));
AplusB<<<1, 1000>>>(ret, 10, 100);
//***********************************************
CHECK(cudaDeviceSynchronize());
for(int i = 0; i < 1000; i++)
printf("%d: A + B = %d\n", i, ret[i]);
cudaFree(ret);
return 0;
}
Comparing the two versions, the Unified Memory code is clearly more concise: cudaMallocManaged() allocates a single block of memory that can be used both inside the kernel and in main, which is exactly the goal of unified addressing.
Note that main calls a synchronization function after launching the kernel. Because the same memory is accessible to both the kernel and the host code, the synchronization avoids conflicting accesses: main reads the values only after the kernel has finished modifying them.
Note: Unified Memory requires compute capability 3.0 or higher and a 64-bit operating system.
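Support for managed memory can likewise be queried at runtime before calling cudaMallocManaged; a minimal sketch (assuming the CHECK macro from common.h):
// Hypothetical sketch: check managed-memory support before allocating.
cudaDeviceProp prop;
CHECK(cudaGetDeviceProperties(&prop, 0));
if (!prop.managedMemory)
{
    printf("Device does not support Unified Memory (managed memory)\n");
    exit(EXIT_FAILURE);
}

int *ret;
CHECK(cudaMallocManaged(&ret, 1000 * sizeof(int)));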
http://blog.csdn.net/tom1027/article/details/44856875