Problem description:

OpenCL is of course designed to abstract away the details of hardware implementation, so going down too much of a rabbit hole with respect to worrying about how the hardware is configured is probably a bad idea.

Having said that, I am wondering how much local memory is efficient to use for any particular kernel. For example if I have a work group which contains 64 work items then presumably more than one of these may simultaneously run within a compute unit. However it seems that the local memory size as returned by CL_DEVICE_LOCAL_MEM_SIZE queries is applicable to the whole compute unit, whereas it would be more useful if this information was for the work group. Is there a way to know how many work groups will need to share this same memory pool if they coexist on the same compute unit?

I had thought that making sure that my work group memory usage was below one quarter of total local memory size was a good idea. Is this too conservative? Is tuning by hand the only way to go? To me that means that you are only tuning for one GPU model.

Lastly, I would like to know if the whole local memory size is available for user allocation, or if there are system overheads that reduce the amount actually available. I hear that if you allocate too much, data is just placed in global memory. Is there a way of determining whether this is the case?

Answer:

Is there a way to know how many work groups will need to share this same memory pool if they coexist on the same compute unit?

Not in one step, but you can compute it. First you need to know how much local memory a work group will need. To do so you can use clGetKernelWorkGroupInfo with the parameter CL_KERNEL_LOCAL_MEM_SIZE (strictly speaking, it is the local memory required by one instance of the kernel, i.e. one work group). Since you know how much local memory there is per compute unit, you can then compute the maximum number of work groups that can coexist on one compute unit.
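As a minimal sketch of that calculation (assuming a kernel and device already exist, and considering only local memory, not registers or the per-compute-unit thread limit):

```c
#include <stdio.h>
#include <CL/cl.h>

/* Rough estimate of how many work groups of "kernel" could coexist on one
 * compute unit of "device", considering only local memory.
 * Error checking omitted for brevity. */
static void report_local_mem_fit(cl_kernel kernel, cl_device_id device)
{
    cl_ulong per_cu = 0;  /* local memory available per compute unit       */
    cl_ulong per_wg = 0;  /* local memory used by one work group of kernel */

    clGetDeviceInfo(device, CL_DEVICE_LOCAL_MEM_SIZE,
                    sizeof(per_cu), &per_cu, NULL);
    clGetKernelWorkGroupInfo(kernel, device, CL_KERNEL_LOCAL_MEM_SIZE,
                             sizeof(per_wg), &per_wg, NULL);

    if (per_wg > 0)
        printf("Local memory alone allows %lu work groups per compute unit\n",
               (unsigned long)(per_cu / per_wg));
    else
        printf("Kernel declares no statically sized local memory\n");
}
```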

Actually it is not that simple. You have to take other parameters into consideration, such as the maximum number of threads that can reside on one compute unit.
This is a problem of occupancy (which you should try to maximize). Unfortunately, occupancy varies depending on the underlying architecture.

AMD publishes an article on how to compute occupancy for its different architectures here.
NVIDIA provides an xls sheet that computes the occupancy for its different architectures.
Not all of the information needed for the calculation can be queried through OpenCL (if I recall correctly), but nothing stops you from storing information about different architectures in your application.

I had thought that making sure that my work group memory usage was below one quarter of total local memory size was a good idea. Is this too conservative?

It is quite rigid, and with clGetKernelWorkGroupInfo you don't need to do that. However, there is something about CL_KERNEL_LOCAL_MEM_SIZE that needs to be taken into account:

If the local memory size, for any pointer argument to the kernel declared with the __local address qualifier, is not specified, its size is assumed to be 0.

Since you might need to compute the size of the necessary local memory per work group dynamically, here is a workaround based on the fact that kernels are compiled just in time (JIT).

You can define a constant in your kernel file and then use the -D option to set its value (previously computed) when calling clBuildProgram.
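For example, a sketch of that workaround (the macro name TILE_SIZE, the kernel source, and the helper function are illustrative, not part of the original answer; context and device are assumed to exist):

```c
#include <stdio.h>
#include <CL/cl.h>

/* Kernel source: the local array size comes from a macro that is NOT
 * defined in the source itself; it is supplied at build time. */
static const char *source =
    "__kernel void blur(__global const float *in, __global float *out) {\n"
    "    __local float tile[TILE_SIZE];\n"
    "    /* ... fill and use tile ... */\n"
    "}\n";

/* Build the program with the local-memory size computed at run time.
 * Error checking omitted for brevity. */
static cl_program build_with_tile_size(cl_context context, cl_device_id device,
                                       size_t tile_size)
{
    char options[64];
    snprintf(options, sizeof(options), "-D TILE_SIZE=%zu", tile_size);

    cl_program program =
        clCreateProgramWithSource(context, 1, &source, NULL, NULL);
    clBuildProgram(program, 1, &device, options, NULL, NULL);
    return program;
}
```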

I would like to know if the whole local memory size is available for user allocation for local memory, or if there are other system overheads that make it less?

Again, CL_KERNEL_LOCAL_MEM_SIZE is the answer. The standard states:

This includes local memory that may be needed by an implementation to execute the kernel...

Answer:

If your work is fairly independent and doesn't re-use input data you can safely ignore everything about work groups and shared local memory. However, if your work items can share any input data (classic example is a 3x3 or 5x5 convolution that re-reads input data) then the optimal implementation will need shared local memory. Non-independent work can also benefit. One way to think of shared local memory is programmer-managed cache.
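As an illustration of the programmer-managed cache idea, here is a sketch of a 1D 3-tap filter kernel in which each work group stages its input window (plus one halo element on each side) into local memory, so each global element is read roughly once per group instead of up to three times. The kernel name, argument list, and tile layout are invented for this example.

```c
/* 3-tap averaging filter using local memory as a programmer-managed cache.
 * "tile" must be sized to local_size + 2 floats by the host. */
__kernel void filter3(__global const float *in,
                      __global float *out,
                      const int n,
                      __local float *tile)
{
    const int gid = get_global_id(0);
    const int lid = get_local_id(0);
    const int lsz = get_local_size(0);

    /* Each work item stages its own element, clamped at the array edges. */
    tile[lid + 1] = in[min(max(gid, 0), n - 1)];

    /* The first and last work items also stage the halo elements. */
    if (lid == 0)
        tile[0] = in[max(gid - 1, 0)];
    if (lid == lsz - 1)
        tile[lsz + 1] = in[min(gid + 1, n - 1)];

    barrier(CLK_LOCAL_MEM_FENCE);

    if (gid < n)
        out[gid] = (tile[lid] + tile[lid + 1] + tile[lid + 2]) / 3.0f;
}
```

The host would size the local argument with something like clSetKernelArg(kernel, 3, (local_size + 2) * sizeof(float), NULL), which is the standard way to allocate dynamically sized local memory for a kernel argument.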
