Shard several of the most costly targets. #2266
Conversation
Introduces a filter_tuple_by_modulo metafunction to break up tuples. Drops the build time of the target from 21 minutes to under 14 minutes with 64 build processes, or 11 minutes with 128 build processes:
time ninja -j 64 device_grouped_conv3d_fwd_instance
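A minimal sketch of what such a tuple-filtering metafunction could look like, written against std::tuple (illustrative only; CK has its own tuple type, and this is not the actual implementation):

```cpp
#include <cstddef>
#include <tuple>
#include <type_traits>
#include <utility>

// Sketch: keep every element of Tuple whose index I satisfies I % Stride == Offset.
template <typename Tuple, std::size_t Stride, std::size_t Offset>
struct filter_tuple_by_modulo;

template <typename... Ts, std::size_t Stride, std::size_t Offset>
struct filter_tuple_by_modulo<std::tuple<Ts...>, Stride, Offset>
{
    // For each index, contribute a one-element tuple if it matches, else an
    // empty tuple; tuple_cat flattens the result. Used only in decltype.
    template <std::size_t... Is>
    static auto select(std::index_sequence<Is...>) -> decltype(std::tuple_cat(
        std::declval<std::conditional_t<Is % Stride == Offset,
                                        std::tuple<std::tuple_element_t<Is, std::tuple<Ts...>>>,
                                        std::tuple<>>>()...));

    using type = decltype(select(std::index_sequence_for<Ts...>{}));
};

template <typename Tuple, std::size_t Stride, std::size_t Offset>
using filter_tuple_by_modulo_t = typename filter_tuple_by_modulo<Tuple, Stride, Offset>::type;

// Example: split six kernel types into two shards with stride 2.
using AllKernels = std::tuple<char, short, int, long, float, double>; // stand-ins
static_assert(std::is_same_v<filter_tuple_by_modulo_t<AllKernels, 2, 0>,
                             std::tuple<char, int, float>>);
static_assert(std::is_same_v<filter_tuple_by_modulo_t<AllKernels, 2, 1>,
                             std::tuple<short, long, double>>);
```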
Doesn't look like there's anything for docs.
I wasn't sure how to test the header-only instantiation code on my initial commit. From the Jenkins CI test results, I see that there is a test target that depends on these headers: ninja -j 128 test_grouped_convnd_fwd. This allowed me to test the build locally. I found three mistakes I made, mostly related to early experiments I tried on the code. These were hard to find earlier because this PR is really too large. I also discovered that there are five 2D convolution targets that now dominate the compilation time. I will likely address those in a later PR, rather than adding even more changes to this one.
Our pattern for instantiating MIOpen templates uses duplicate declarations (instead of headers). This is fragile, and I didn't notice that my last commit had a bunch of link errors. I fixed these mistakes, and the bin/test_grouped_conv_fwd test binary now links correctly.
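For illustration only, here is the shape of that fragility with hypothetical names (not the actual CK symbols): the caller repeats the declaration by hand, so any drift between declaration and definition surfaces only at link time.

```cpp
#include <vector>

// caller.cpp (conceptually): the declaration is duplicated by hand instead of
// coming from a shared header.
void add_conv3d_instances(std::vector<int>& out); // hypothetical name

// instance_shard.cpp (conceptually): the real definition. If this signature
// drifts from the hand-copied declaration above, the caller still compiles,
// but the build fails at link time with an undefined-symbol error.
void add_conv3d_instances(std::vector<int>& out) { out.push_back(0); }

int main()
{
    std::vector<int> instances;
    add_conv3d_instances(instances);
    return instances.empty() ? 1 : 0;
}
```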
Hi @shumway, great job. Is it possible to generate these C++ files using some Python script? Right now it is impossible to check/review whether each instance has been added correctly. It would be much easier to handle such a Python generator instead of 150 files. It would also be very useful to port such a Python script to generate C++ files for bwd data and bwd weight.
@bartekxk and I chatted, and I'll look at existing codegen inside the CK codebase to see how to set this up so that it matches what's already here.
@bartekxk's suggestion to use codegen looks like a good idea. I have started to implement this as a CMake function, using …
Use a CMake function with template files to generate the source files for instantiating the kernels and to generate the calling function.
This code is much simpler now, down to 35 files changed. I followed @bartekxk's suggestion and converted to code generation.
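As a rough sketch of the structure the generated sources and the generated calling function could take (all names and types below are hypothetical stand-ins, not the actual output of the CMake templates):

```cpp
#include <tuple>
#include <vector>

// Stand-ins: in CK these would be the real kernel instance types and the
// tuple of all kernels for the operation.
struct K0 {}; struct K1 {}; struct K2 {}; struct K3 {};
using AllKernels = std::tuple<K0, K1, K2, K3>;
using InstanceList = std::vector<int>;

template <typename KernelTuple>
void add_device_instances(InstanceList& out)
{
    // Stand-in: record how many kernel instances this shard contributed.
    out.push_back(static_cast<int>(std::tuple_size<KernelTuple>::value));
}

// What two generated shard sources could look like; each would really use
// filter_tuple_by_modulo_t<AllKernels, 2, 0> and <AllKernels, 2, 1>.
void add_instances_shard0(InstanceList& out)
{
    add_device_instances<std::tuple<K0, K2>>(out);
}
void add_instances_shard1(InstanceList& out)
{
    add_device_instances<std::tuple<K1, K3>>(out);
}

// The generated calling function aggregates every shard.
void add_all_instances(InstanceList& out)
{
    add_instances_shard0(out);
    add_instances_shard1(out);
}

int main()
{
    InstanceList out;
    add_all_instances(out); // both shards contribute their instances
    return out.size() == 2 ? 0 : 1;
}
```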
Now that we have automated the shard instantiation, we can shard the 2D convolution targets that take the longest to build. The target test_grouped_conv2d_fwd now compiles in 15 minutes.
lgtm
Could you just do a sanity check and verify that test_grouped_convnd_fwd generates the same output on develop and on your branch?
I used CMAKE_SOURCE_DIR to refer to the top-level source directory in the ShardInstantiation.cmake file, but this can cause issues with git submodules. Instead, we should use PROJECT_SOURCE_DIR to ensure compatibility when this project is used as a submodule in another project.
This reverts commit 3a0cb27.
Proposed changes
We want to reduce build time for the CK library that is consumed by MIOpen.
Several of the build units (source files) take ~20 minutes to compile. To reduce compilation time, we split those files
out into multiple copies (build shards). There are three complications:
The solution we propose is to create a utility metafunction filter_tuple_by_modulo_t that splits up the kernel-type tuples using a stride and offset. We use extern template declarations to make sure we don't reinstantiate these templates from the header code. We partially automated this process with code generation, by writing a CMake function generate_sharded_instantiations that generates the source files for the sharded instantiation functions and the calling function directly from CMake.
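A minimal, self-contained sketch of the extern template mechanism (names are illustrative; in CK the declarations would cover the sharded instantiation functions):

```cpp
#include <vector>

// A template that every shard's header would expose (hypothetical name).
template <typename KernelTuple>
void add_device_instances(std::vector<int>& out)
{
    out.push_back(0); // stand-in for appending kernel instances
}

// Explicit instantiation *declaration*: tells every translation unit that
// includes this that the instantiation exists elsewhere, so the compiler
// must not reinstantiate it here.
extern template void add_device_instances<int>(std::vector<int>&);

// Explicit instantiation *definition*: exactly one shard .cpp provides it.
template void add_device_instances<int>(std::vector<int>&);

int main()
{
    std::vector<int> out;
    add_device_instances<int>(out);
    return 0;
}
```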
Checklist
Please put an x into the boxes that apply. You can also fill these out after creating the PR. If you're not sure, please don't hesitate to ask.
- clang-format on all changed files

Discussion
I went with this solution because the strategy of breaking tuples down into smaller tuples wasn't going to get us all the individual kernels. This was the best compromise I could find between verbosity and build performance. Now we have about 165 build units for this target, which parallelize well across 128 build processes:
time ninja device_grouped_conv3d_fwd_instance
This reduces the build time for this target from about 22 minutes to 11 minutes.