Move Deferred helpers to Microsoft.Toolkit #3762

Conversation
Thanks michael-hawker for opening a Pull Request! The reviewers will test the PR and highlight if there is any conflict or changes required. If the PR is approved we will proceed to merge the pull request 🙌
/// <summary>
/// <see cref="EventArgs"/> which can retrieve a <see cref="EventDeferral"/> in order to process data asynchronously before an <see cref="EventHandler"/> completes and returns to the calling control.
/// </summary>
public class DeferredEventArgs : EventArgs, IDisposable
I tried to maintain history here, but git is so annoying about moving files and updating them... I even did them as two separate commits.
@pedrolamas you didn't have any unit tests we should port over, eh?
Left a small note, still approving because this was the behavior before the refactoring too, so it's not an issue with this PR per se even if we did want to address that 🙂
lock (_eventDeferralLock)
{
    return _eventDeferral ?? (_eventDeferral = new EventDeferral());
}
I'm a bit confused as to how this would work when more than one subscriber to the event asks for a deferral: wouldn't the object that raised the event just wait until the first subscriber that completes the (same) deferral signals completion?

As in, imagine this scenario:

- An object `sender` raises an event with some `DeferredEventArgs` argument.
- A subscriber `sub1` is the first in the invocation list, so it starts executing its handler. `sub1` calls `GetDeferral`, then starts some asynchronous operations, yielding back.
- The multicast delegate keeps going and gets to the second target in the invocation list. `sub2` calls `GetDeferral`, then starts some asynchronous operation and yields back.
- ...`sub2`'s operation completes first. It gets control back and completes the deferral. `sender` keeps going as the deferral is complete.
- ...Meanwhile `sub1` is still awaiting, but nobody is waiting for it 😢

If this is by design and we only want to support at most one receiver with a deferral, I think we should document it, and possibly detect cases where this happens and throw, or indicate the intended behavior somehow 🤔
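A minimal sketch of that scenario, purely illustrative: the handler names and delays are made up, the using directive for `DeferredEventArgs`/`EventDeferral` is assumed to be whatever namespace this PR moves them into, and the point is only what would happen if both handlers ended up sharing a single deferral.

```csharp
using System;
using System.Threading.Tasks;
// using <namespace the PR moves DeferredEventArgs/EventDeferral into>;

public static class DeferralRaceSketch
{
    // sub1: asks for a deferral, then runs long asynchronous work.
    public static async void Sub1(object sender, DeferredEventArgs e)
    {
        EventDeferral deferral = e.GetDeferral();
        await Task.Delay(2000); // long-running work
        deferral.Complete();
    }

    // sub2: asks for a deferral too, but finishes almost immediately.
    // If both handlers shared the same deferral instance, this Complete()
    // would release the sender while sub1 is still awaiting.
    public static async void Sub2(object sender, DeferredEventArgs e)
    {
        EventDeferral deferral = e.GetDeferral();
        await Task.Delay(10);
        deferral.Complete();
    }
}
```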
Good call out @Sergio0694, we should probably file a new issue for the future. This was the same in the original library as well.

Though, I think in most cases we've seen at a UI layer there's usually only one subscriber, but there's no guarantee of that based on C# conventions.

That could be something we clean up as an internal implementation detail in the future, eh? I don't think we need to modify anything later. I think the main thing is whether the base usage of the event raiser would know what to do with multiple subscribers, as here we just expect the event args to be modified.
Please bear with me as I wrote that code quite a few years ago, but I believe at the time I made sure that you don't call sub2 before sub1 is complete!
I'll check this better tomorrow, it's now 0h here in the UK...
I've double-checked and the code is correct: after each handler invocation, we call `eventArgs.GetCurrentDeferralAndReset()` so that we get the current `TaskCompletionSource` instance from that invocation and reset it before we move on to the next handler, so each handler has its own chance to call for a deferral.

So between each handler's call to `GetDeferral()` and the next handler, we call `GetCurrentDeferralAndReset()`, which ensures that each one has a unique awaitable instance.

Now what this means is that we can have multiple handlers executing at the same time, but we will still wait for all of them to finish before moving ahead (that's where the `Task.WhenAll` comes along).
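For reference, here is a rough paraphrase of what that extension does based on the description above, not the actual Toolkit source; `WaitForCompletion` is an assumed name standing in for whatever the deferral exposes to the raiser, and `DeferredEventArgs`/`EventDeferral` are assumed to be in scope.

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class DeferredEventHandlerExtensions
{
    public static Task InvokeAsync<T>(this EventHandler<T> eventHandler, object sender, T eventArgs)
        where T : DeferredEventArgs
    {
        if (eventHandler == null)
        {
            return Task.CompletedTask;
        }

        Task[] tasks = eventHandler.GetInvocationList()
            .OfType<EventHandler<T>>()
            .Select(handler =>
            {
                // Run this handler's synchronous part.
                handler(sender, eventArgs);

                // Capture the deferral this handler may have requested and reset it,
                // so the next handler gets its own awaitable instance.
                EventDeferral deferral = eventArgs.GetCurrentDeferralAndReset();

                return deferral?.WaitForCompletion(CancellationToken.None) ?? Task.CompletedTask;
            })
            .ToArray();

        // The handlers' deferred work runs concurrently; the raiser continues once all of it is done.
        return Task.WhenAll(tasks);
    }
}
```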
Ah, sorry, I assumed you would just invoke the handler normally and simply await on the args afterwards. I see from the code that @michael-hawker linked that you're using a custom extension to sequentially run the registered handlers instead, yeah that'll work fine for now. That approach does have the limitation that handlers will not be able to run sequentially though, so e.g. if one takes a long time it'll also delay all the others, but this is consistent with the previous behavior anyway, so we might as well address this in another PR in the future if it becomes a problem or just leave it as is, which is fine for most scenarios anyway 🙂
Indeed you are correct, it will run the event handlers in parallel and not wait for the completion of each before running the next!

However, if we do want to do that, it is quite trivial to add an optional boolean parameter to the `InvokeAsync` method with an alternative execution (so if that parameter is set to true, we would await each deferral immediately instead of using the `Task.WhenAll` at the end)!
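Continuing the sketch above (same assumed types), that optional sequential mode might look roughly like this; the method name and shape are hypothetical.

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class DeferredEventHandlerSequentialExtensions
{
    public static async Task InvokeSequentiallyAsync<T>(this EventHandler<T> eventHandler, object sender, T eventArgs)
        where T : DeferredEventArgs
    {
        if (eventHandler == null)
        {
            return;
        }

        foreach (EventHandler<T> handler in eventHandler.GetInvocationList().OfType<EventHandler<T>>())
        {
            handler(sender, eventArgs);

            EventDeferral deferral = eventArgs.GetCurrentDeferralAndReset();

            if (deferral != null)
            {
                // Wait for this handler's deferred work before invoking the next handler,
                // instead of collecting every deferral and awaiting them together at the end.
                await deferral.WaitForCompletion(CancellationToken.None);
            }
        }
    }
}
```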
protected virtual void Dispose(bool disposing)
{
    if (_disposed)
    {
        return;
    }

    _eventDeferral?.Dispose();

    _disposed = true;
}

/// <inheritdoc/>
public void Dispose()
{
    // Dispose of unmanaged resources.
    Dispose(true);

    // Suppress finalization.
    GC.SuppressFinalize(this);
}
Technically this is suggested for types that also have unmanaged resources, to guarantee they're properly disposed no matter what. In this case it's still correct but possibly a bit overkill; we might want to simplify this in the future. Just a note, the code itself is not wrong 🙂
Yeah, I didn't want to do this stuff, but they're all raised as warnings/suggestions. So we'd have to turn those off or ignore them I guess... thoughts?
The dispose method here is less about safe memory release and more about ease of use, as one can just use a `using` block to release the deferral!

Please check the end remarks in this article: https://www.pedrolamas.com/2017/04/04/await-your-event-handlers-completion-with-deferred-events/
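For illustration, consumer-side usage along those lines could look like the sketch below; the event, type, and `LoadDataAsync` names are placeholders, and disposing the deferral is assumed to complete it.

```csharp
using System.Threading.Tasks;

public class DataConsumer
{
    // Handler for a hypothetical event that uses DeferredEventArgs.
    public async void OnDataLoading(object sender, DeferredEventArgs e)
    {
        using (e.GetDeferral())
        {
            await LoadDataAsync(); // the handler's real async work goes here
        } // leaving the using block disposes, and thereby completes, the deferral
    }

    private static Task LoadDataAsync() => Task.Delay(500); // placeholder
}
```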
How do we want consumers to use this type though? Because if we want them to always just use a `using` block for the deferral, we might as well also remove the destructor completely, which also reduces the overhead on the runtime/GC. It's true that when doing so, people could potentially cause bugs if they forget to dispose a deferral, but then again that should never happen, so I'm not sure providing a safe fallback in that case is even really needed 🤔
That's a very fair point! When I wrote this initially, I wanted to keep the same signature usage as one can find in UWP apps (so the whole `GetDeferral()` with `Complete()`), but that does require way more coding than just using an `IDisposable` inside a `using`.

The destructor is most likely overkill; I initially added it to maintain the full `IDisposable` pattern, but I do not expect it to actually be called at any time!
@michael-hawker following up on this, I do agree with @Sergio0694: removing the destructor and the `GC.SuppressFinalize()` call makes sense, as we actually do not care about them anyway for the `using` pattern (all we actually need is the `IDisposable` implementation and nothing else!)
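If the destructor and `GC.SuppressFinalize` are dropped, the args type could end up roughly like the sketch below. This is just an illustration of the simplification being discussed, not necessarily what gets merged.

```csharp
using System;

public class DeferredEventArgs : EventArgs, IDisposable
{
    private readonly object _eventDeferralLock = new object();

    private EventDeferral _eventDeferral;

    public EventDeferral GetDeferral()
    {
        lock (_eventDeferralLock)
        {
            return _eventDeferral ?? (_eventDeferral = new EventDeferral());
        }
    }

    /// <inheritdoc/>
    public void Dispose()
    {
        lock (_eventDeferralLock)
        {
            // No finalizer, so no GC.SuppressFinalize(this) call is needed.
            _eventDeferral?.Dispose();
            _eventDeferral = null;
        }
    }
}
```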
I did write some tests; the thing is I didn't mock anything and instead used delays to prove that it works as expected, but I wouldn't call that ideal... let me see if I can write something a bit better than those!
@michael-hawker I just added some unit tests to the project here: https://github.com/pedrolamas/DeferredEvents/blob/master/src/DeferredEventsTests/UnitTests.cs

The last of those tests serves to show that the situation @Sergio0694 commented on here does not actually happen and that it works as expected!
I'll import the tests and clean this up in the morning.
I've added a PR to your PR with the tests here: michael-hawker#1

I see no point in having a full `IDisposable` pattern implementation here.
I have to admit, I'm a bit troubled with the changes in visibility of some methods from internal to public.

From what I could ascertain, the reason for that is that Microsoft.Toolkit is strongly signed, yet the Microsoft.Toolkit.Uwp one isn't, so one can't use `InternalsVisibleTo`.

If all assemblies (even the test ones!) were strongly signed, this would not be a problem at all... am I missing something here?
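For context on why the strong-name mismatch matters: a strong-named assembly can only name strong-named friends, and its `InternalsVisibleTo` declarations must carry the friend's full public key. The assembly names below are placeholders and the key is truncated for illustration.

```csharp
using System.Runtime.CompilerServices;

// Allowed from a strong-named assembly: the friend is identified by name plus its full public key.
[assembly: InternalsVisibleTo("Microsoft.Toolkit.Uwp, PublicKey=0024000004800000940000000602000000240000525341310004000001000100...")] // key truncated

// The following would be rejected by the compiler (error CS1726) when the declaring assembly
// is strong-named, because the friend is referenced by simple name only:
// [assembly: InternalsVisibleTo("SomeUnsignedTestAssembly")]
```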
Adds Unit Tests and reverts IDisposable pattern
Thanks @pedrolamas for setting that up.
@pedrolamas UWP doesn't support strongly named assemblies. ☹ We'll be able to fix this when we make the shift to WinUI 3. Since we may ship parallel packages for that transition, we can fix it in our WinUI 3 branch soon and make them internal again.
Not sure why the build failed; going to kick it off again as it seems transient.
@azchohfi not sure what happened, but it seems like the smoke test started looking for different bits than what was built in the build step??? Going to re-kick the whole thing, but not sure if this is a concern in general at a broader level...
It's been a while since I looked into this, but my understanding is that a UWP assembly is still an assembly, so one can still use strong names; UWP will just ignore those completely... I did a proof of concept of just that here: pedrolamas@22d1c3e

Basically, I allowed the UWP assemblies to have a strong name and then ensured that all assemblies were strongly signed.
Interesting (FYI @azchohfi). I guess we've come around again from where we were before. Thoughts @clairernovotny? Should we just do this now to help with the transition for WinUI in the future when everything's back in .NET land (as we're already signing our .NET assemblies)?
Adding a strong name to an assembly is a breaking change as it changes the assembly identity. The guidance is always to strong-name assemblies when they may run on .NET Framework (such as .NET Standard libraries). That would ripple to anything that needs internal access.
Thanks for your input @clairernovotny!

Sorry if I'm going sideways on the topic (don't want to start any "tabs vs. spaces" war of sorts here), but I think this is pertinent not only for the current matter but for future similar ones.

Indeed, adding a strong name will cause a break in assembly identity, but is that significant for UWP assemblies? Please correct me if I'm wrong here, but a UWP assembly can only be used as a reference in another UWP-based assembly, and my understanding is that the UWP compiler mostly ignores the strong naming of assemblies, so adding one to the UWP toolkit assemblies should be a safe thing to do and would allow for proper internals visibility from .NET Standard assemblies to UWP ones.
For that specific case I'm not completely sure; @terrajobst would know better.
The UWP runtime will ignore them, but I don't believe MSBuild or the compilers will, so during compilation you'll likely run into unification issues. So yeah, adding a strong name will break a bunch of stuff.
Well, given @terrajobst's comments, I guess there is no way to avoid changing the .NET Core library methods' visibility from internal to public!
Also, we've pretty much banned IVT in our code base because it's hard to reason about breaking changes.
That's very interesting @terrajobst, what is the pattern you are using to avoid it? Just make the methods public?
Multiple ways:

1. Reference assemblies
2. Public APIs that are documented as logically internal/unsupported
3. Regular public APIs

(1) is quite heavyweight. Basically you have two assemblies: a public reference assembly that omits the logically internal APIs, and an implementation assembly with both the public and the logically internal APIs, as public.

(2) was tried by many teams, such as ASP.NET. It doesn't work -- when an API is public, customers will use it and in the end you're going to be forced to not break them. However, now you have a public API you didn't design properly. Seems like the worst of all worlds.

(3) You just bite the bullet and design and support them as regular public APIs.

My personal opinion is that you're almost always better off just doing (3).
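Just to make option (2) concrete with one common way it shows up in practice (my own example; the type and member names are placeholders and this isn't necessarily what @terrajobst had in mind): the member is public so other assemblies can call it, but it's hidden from IntelliSense and documented as unsupported.

```csharp
using System.ComponentModel;

public class EventDeferralInternals
{
    /// <summary>
    /// Infrastructure only. This member is public for cross-assembly access,
    /// is not intended to be called from application code, and may change
    /// or be removed without notice.
    /// </summary>
    [EditorBrowsable(EditorBrowsableState.Never)]
    public void CompleteAndReset()
    {
        // logically internal behavior exposed as public
    }
}
```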
Thanks everyone for the info in this conversation! @pedrolamas let's leave this as is for now then, as we're unclear on the impact of strong naming all the UWP assemblies (we're a breaking-change release, but it sounds like there could be some unknown MSBuild issues). In our WinUI 3 branch we'll probably be signing everything anyway, and we can use IVT there (as this is a pretty specific use case tied to a framework-specific implementation we control) or we can come up with a public API version of this.
Indeed, given the above I think the current state is probably the best for now!
We are already signing everything on the WinUI3 branch. We still need to ask the Xaml Behaviors team to strongly sign their assemblies for WinUI3 as well, since we have a dependency on them (so right now we are ignoring one warning), but most of our preview NuGets are signed already.
Hello @michael-hawker!

p.s. you can customize the way I help with merging this pull request, such as holding this pull request until a specific person approves. Simply @mention me.
Fixes #3707
Moves the Deferred Event Helpers to the main .NET package and leaves the UWP-specific extensions in the Microsoft.Toolkit.Uwp package.
PR Type
What kind of change does this PR introduce?
What is the current behavior?
The deferred events pattern is only available in UWP.
What is the new behavior?
The deferred events pattern can be used in .NET.
PR Checklist
Please check if your PR fulfills the following requirements:
Other information