opencl: add set_rows for f16 and f32 #14547
Conversation
ggml/src/ggml-opencl/ggml-opencl.cpp (Outdated)
    int nth0 = 256;
    size_t global_work_size[] = {(size_t)ne01*nth0, (size_t)ne02, (size_t)ne03};
    size_t local_work_size[]  = {(size_t)nth0, 1, 1};
Note that implementing it like this won't be very efficient. This dedicates 256 threads to each row of data, so for small rows with fewer than 256 elements, resources are wasted. For example, when FA is disabled, ggml_set_rows() is used with rows of 1 element (due to the V cache being transposed), so 255 out of the 256 local threads will be idle.
That's why in the Metal implementation I did "threadgroup batching" so that the local threads can work on multiple rows. Might want to consider implementing it here too for improved performance.
Thanks a lot for the suggestion - it makes a good point. I'm looking into this.
@ggerganov Any chance of getting an OpenCL runner on ggml-ci?
The easiest way is if there are suitable machines in the Azure cloud, because we have a grant for these. The other option is for someone to donate a dedicated machine (like how the SYCL node is donated by Menlo AI).
Yeah, I was thinking of adding some X-Elite based runners but didn't get a chance to look into it.
The N-series might work: https://learn.microsoft.com/en-us/azure-stack/user/gpu-vms-about?view=azs-2501
Even the MI25?
Oh, I didn't notice the MI25 in there, only saw the NV flavors somehow :). @ggerganov if you could get one of those VMs hooked up as a runner with some new tag, we could see which tests we could run on it. It would be useful for the HIP and maybe Vulkan backends as well.
Currently it won't run on AMD. I did try to enable it on AMD but never really finished; I will take a look at it again. Intel should just work if integrated Intel GPUs can be used. Nvidia's OpenCL implementation never got OpenCL 2.0 features like subgroups (unfortunately subgroups are not mandatory for OpenCL 3.0, so you can claim OpenCL 3.0 support without subgroups).
Ok, if you confirm support with any of the Azure hosts, let me know and I'll add a node.
I'm going to merge it now. We can iterate further if needed. |
Following a70c8a0, this PR adds set_rows for f16 and f32.