macho: faster and more memory efficient linker #13260
Conversation
kubkon/zld gitrev 5733ed87abe2f07e1330c3232a252e9defec638a
1. If an object file was not compiled with `MH_SUBSECTIONS_VIA_SYMBOLS` (such as hand-written ASM on x86_64), treat the entire object file as not suitable for dead code stripping, i.e., a GC root.
2. If there are non-extern relocs within a section, treat the entire section as a root, at least temporarily until we work out the exact conditions for marking the atoms live.
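For illustration, the two rules above might be sketched like this (Python pseudocode over a hypothetical data model; the real linker is written in Zig, and only the `MH_SUBSECTIONS_VIA_SYMBOLS` flag value is taken from Mach-O's `loader.h`):

```python
MH_SUBSECTIONS_VIA_SYMBOLS = 0x2000  # real Mach-O header flag value

def collect_gc_roots(objects):
    """Seed the dead-code-stripping worklist using the two
    conservative rules (hypothetical object/section/reloc shapes)."""
    roots = []
    for obj in objects:
        if not (obj["header_flags"] & MH_SUBSECTIONS_VIA_SYMBOLS):
            # Rule 1: object not compiled with subsections-via-symbols
            # (e.g. hand-written x86_64 asm): the whole file is a root.
            for sect in obj["sections"]:
                roots.extend(sect["atoms"])
            continue
        for sect in obj["sections"]:
            if any(not r["is_extern"] for r in sect["relocs"]):
                # Rule 2: a non-extern reloc anywhere in the section
                # keeps the entire section alive (temporarily, until the
                # exact liveness conditions are worked out).
                roots.extend(sect["atoms"])
    return roots
```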
You can count the drone CI as a success. It passed all the tests and simply ran out of CI time when it was creating the tarball at the end. Furthermore, this is affecting the master branch, and 10b8c4d should reduce the failure rate. Edit: oh, also, you already had a full successful run and only added docs after that :)
What a beauty.
btw you can give 3 commands at the same time to hyperfine
Oh shit, did not know that! I'll keep that in mind for the future, thanks!
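For reference, hyperfine accepts multiple commands positionally and benchmarks them in one run; the linker invocations below are purely illustrative:

```shell
# Compare three linkers in a single hyperfine run (example commands).
hyperfine --warmup 3 \
  'zld -o out a.o b.o' \
  'ld64 -o out a.o b.o' \
  'ld.lld -o out a.o b.o'
```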
Closes #9764
This PR is the culmination of a general rewrite of our MachO linkers (traditional and incremental) that I set out to do just before SYCL, and after I finished the first batch of work on the COFF linker. If I were to summarise the major achievement of this PR, it is the rewriting of the majority of the linker in the spirit of data-oriented design, which led to a faster and more memory efficient linker - you can find some numbers below.

Some benches

`zld` refers to our linker as a standalone binary, `lld` to LLVM's linker, and `ld64` to Apple's linker. All in all, I should point out that we are still missing a number of optimisations in the linker, such as cstring deduplication, compression of the dynamic linker's relocations, and synthesising of the unwind info section, so the difference between us and the other linkers will most likely shrink a little. Also note that for linking larger files we are currently slowed down by our `sha256` implementation, but once that is optimised, I think we should be able to reach/beat `lld` (which is currently the fastest of `zld`, `lld` and `ld64`). I haven't compared with `mold` yet, mainly because I used to experience breakage, but that was a while back and perhaps `mold`'s stability on macOS has improved since.

redis-server

(benchmark table not preserved)

stage3

(benchmark table not preserved)
Motivation
Firstly, some motivation for the rewrite. While working on the COFF linker, I realised that conflating the traditional and incremental states is a bad idea, since then we optimise for neither: the traditional context allows for optimisations that are otherwise not achievable, or awkward, in the incremental context. A perfect example is preallocating output sections, which in the traditional context happens towards the end of the linking process to simplify the whole thing, while it is required upfront in the incremental context for obvious reasons.
No relocs/code pre-parsing per Atom
Prior to this rewrite, we would pre-parse the code and relocs for each `Atom` (aka a subsection of an input section of a relocatable object file) and store the results on the heap. This is not only slow but also completely unnecessary: we can delay the work until we actually need it. This approach is now followed throughout.
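A rough sketch of the idea, with a hypothetical `Atom` in Python (the real implementation is in Zig):

```python
class Atom:
    """Hypothetical atom that records where its bytes live in the input
    file instead of owning a heap copy; the code is sliced on demand."""

    def __init__(self, file_data, offset, size):
        self.file_data = file_data  # entire object file contents
        self.offset = offset        # where this subsection starts
        self.size = size

    def code(self):
        # No stored copy: slice the input only when the bytes are needed.
        return self.file_data[self.offset:self.offset + self.size]
```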
Linker now follows standard stages

Like `lld`, `mold` and `ld64`, we now implement linking in stages: first comes symbol resolution, then we parse input sections into atoms, then we do dead code stripping (if desired), then we create synthetic atoms such as GOT cells, then we create thunks if required, etc. This significantly simplified the entire linker, as we do very specialised work per stage and no more.
We do not store any code or relocs per synthetic atom

Instead of generating the code and relocs per synthetic atom (GOT, stubs, etc.), we only track their counts, VM addresses and targets, and we generate the code and apply relocations when writing the final image. In fact, we do not even need to track the addresses beyond the start and size of each synthetic section; I will refactor this in the future too.
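A minimal sketch of the idea, assuming 8-byte GOT cells and hypothetical helper names: each cell's address is derived from the section start and its index, and the payloads are only emitted at write time.

```python
GOT_ENTRY_SIZE = 8  # one 64-bit pointer per GOT cell

def got_entry_address(got_section_addr, index):
    # The index-th cell's address is derived, not stored per-atom.
    return got_section_addr + index * GOT_ENTRY_SIZE

def write_got(targets):
    # At write time, emit each cell's payload (its target's address)
    # directly into the output buffer; nothing was cached beforehand.
    out = bytearray()
    for target_addr in targets:
        out += target_addr.to_bytes(GOT_ENTRY_SIZE, "little")
    return bytes(out)
```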
Thunks
While at it, I also went ahead and implemented range-extending thunks, which means we can now link larger programs on arm64 without erroring out in the linker. For more info, see #9764. One word of explanation: contrary to what the issue suggested, we extend the jump range via thunks rather than branch islands. For those unfamiliar, both methods extend the range of a jump for the given RISC ISA; however, a thunk uses a scratch register and a load to materialise the unreachable target's address in the scratch register, then branches via the register. As such, a thunk is 12 bytes on arm64. Branch islands, on the other hand, are 4 bytes, as they are simple `bl #next_label` instructions. Branch islands are thus short-range extenders: in order to jump further in the file, we chain the jumps by jumping between islands until reaching the actual target.
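For illustration, here is one common 12-byte arm64 thunk shape (materialise the target's address in a scratch register such as `x16`, then branch through it); the exact sequence the linker emits may differ:

```python
def thunk_asm(target_label, scratch="x16"):
    """Return the assembly for an illustrative arm64 range-extending
    thunk: address materialisation plus a register branch."""
    return [
        f"adrp {scratch}, {target_label}@PAGE",                # target's page
        f"add  {scratch}, {scratch}, {target_label}@PAGEOFF",  # page offset
        f"br   {scratch}",                                     # branch via register
    ]

# Every arm64 instruction is 4 bytes, so this thunk occupies 12 bytes,
# versus the 4 bytes of a single branch-island instruction.
```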
Future work

If you browse the changes, you will notice that I have introduced quite a bit of duplicated code. This is intentional but only temporary, and I will be deduping the common bits in-tree. In general, however, `zld.zig` will contain the main entry point and state tracking for the traditional linker, while `MachO.zig` will contain the incremental state tracking.