Intro to HIP Programming Model #3739
base: docs/develop
Conversation
Co-authored-by: Leo Paoletti <[email protected]>
fix cooperative groups
In the figure docs/data/understand/programming_model/cpu-gpu-comparison.svg, it would be better to add the text "CPU core" in one of the CPU core blocks, and "CU" in one of the GPU compute unit blocks.
In the figure docs/data/understand/programming_model/host-device-flow.svg, the text for "Execute Kernel" should be "Execute Kernel(s)".
1. Initialize the HIP runtime and select the GPU: As described in :ref:`initialization`, this refers to identifying and selecting a target GPU, and setting up a context to let the CPU interact with the GPU.
2. Data preparation: As discussed in :ref:`memory_management`, this includes allocating the required memory on the host and device, preparing input data, and transferring it from the host to the device. The data is both transferred to the device and passed as an input parameter when launching the kernel.
3. Configure and launch the kernel on the GPU: As described in :ref:`device_program`, define and load the kernel or kernels to be run, launch kernels using the triple chevron syntax or appropriate API call (for example ``hipLaunchKernelGGL``), and pass parameters as needed. On the GPU, kernels run on streams, or a queue of operations. Within the same stream, operations run in the order they were issued, but different streams are independent and can execute concurrently. In the HIP runtime, kernels run on the default stream when one is not specified, but specifying a stream for the kernel lets you increase concurrency in task scheduling and resource utilization, and launch and manage multiple kernels from the host program.
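The three steps above could be illustrated with a minimal vector-add sketch. This is an assumption-laden example, not text from the PR: it uses the standard HIP runtime API (``hipSetDevice``, ``hipMalloc``, ``hipMemcpy``, ``hipStreamCreate``) and the triple chevron launch syntax, and it requires a HIP toolchain (``hipcc``) and a GPU to actually run.

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

// Step 3 (definition): a simple kernel; each thread adds one pair of elements.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    // Step 1: select the first available GPU. The runtime itself is
    // initialized implicitly on the first HIP API call.
    hipSetDevice(0);

    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float h_a[1024], h_b[1024], h_c[1024];
    for (int i = 0; i < n; ++i) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

    // Step 2: allocate device memory and copy the input data over.
    float *d_a, *d_b, *d_c;
    hipMalloc(&d_a, bytes);
    hipMalloc(&d_b, bytes);
    hipMalloc(&d_c, bytes);
    hipMemcpy(d_a, h_a, bytes, hipMemcpyHostToDevice);
    hipMemcpy(d_b, h_b, bytes, hipMemcpyHostToDevice);

    // Step 3 (launch): run on an explicit stream using the triple chevron.
    // Passing 0 as the stream argument would use the default stream instead.
    hipStream_t stream;
    hipStreamCreate(&stream);
    dim3 blocks((n + 255) / 256), threads(256);
    vector_add<<<blocks, threads, 0, stream>>>(d_a, d_b, d_c, n);
    hipStreamSynchronize(stream);

    // Copy the result back to the host and clean up.
    hipMemcpy(h_c, d_c, bytes, hipMemcpyDeviceToHost);
    printf("c[1] = %f\n", h_c[1]);  // h_a[1] + h_b[1] = 1 + 2 = 3
    hipStreamDestroy(stream);
    hipFree(d_a); hipFree(d_b); hipFree(d_c);
    return 0;
}
```

Creating a dedicated stream, as here, is what lets a host program later overlap independent kernels or copies; with the default stream, all operations serialize.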
"define and load the kernel or kernels to be run, launch kernels using the triple chevron syntax or appropriate API call (for example hipLaunchKernelGGL
), and pass parameters as needed. On the GPU, kernels run on streams, or a queue of operations."
Should be
"defines kernel configurations and arguments, launches kernel to excute on the GPU device using the triple chevron syntax or appropriate API call (for example hipLaunchKernelGGL). On the GPU, multiple kernels can run on streams, with a queue of operations. "
rewrite Programming Model content to provide an introduction to programming with HIP for GPU applications.