[WIP] Rewrite and rename to 'async-hatch' #18
Conversation
…ments that turned up in the process.
…_ptr_hatch`, update docs
good. I appreciate your efforts.
I have two additional notes on the stuff I annotated:

- I think we should wrap the `Flags` in an additional newtype struct, so that we can easily mark `flags` (the struct field) on `Sender` and `Receiver` as `pub`, and put the get/set methods on them. I also think that the set-methods should take `&mut self` instead of `self` and then return `&mut Self`. This would be more ergonomic and easier to compose (see the sketch after this list).
- I'm not sure I correctly understand these multiple layers of `Option` wrapping, and why they are done... It might be a good idea to invest some time in trying to wrap this into a move-semantic workflow/type-state pattern instead (see also), but that could also be done after the merge (although, if we do that, it would be a good idea to get it ready before a release, because otherwise it would be annoying to have such a rapidly changing API).
src/sender.rs (Outdated)

```rust
} else if state.recv() {
    Poll::Ready(Ok(that))
```

```rust
#[cfg(feature="async")]
#[must_use = "Wait does nothing unless you oll it as a Future."]
```
I don't think this `must_use` is required, because if `Wait` is a `Future`, it would issue a warning anyways.
And the comment has a typo: `oll`.
I don't think they're automatic. I guess I'll have a play and figure it out. And fix the typo.
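For what it's worth, a minimal sketch of the behaviour in question (the `Wait` stub here is hypothetical): the `#[must_use]` on the `Future` trait itself only fires for `impl Trait`/`dyn Trait` return types, not for a concrete struct that lacks its own attribute.

```rust
use core::future::Future;
use core::pin::Pin;
use core::task::{Context, Poll};

struct Wait; // note: no type-level #[must_use]

impl Future for Wait {
    type Output = ();
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        Poll::Ready(())
    }
}

fn concrete() -> Wait { Wait }
fn opaque() -> impl Future<Output = ()> { Wait }

fn main() {
    concrete(); // no warning: `Wait` carries no #[must_use] of its own
    opaque();   // warns: the trait-level #[must_use] on `Future`
                // applies to `impl Future` return types
}
```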
For a few months of experimentation, I actually had them as a `repr(u8)` enum. I just ultimately got fed up with it being a lot more code than was necessary. It was pretty nice though.

The options are really just so we can disable the destructors when we know there is no further cleanup to do. I love typestate-style APIs, but they make life difficult for storing them inside an object (such as a future...)
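A minimal sketch of that rationale, with hypothetical names: holding the shared pointer in an `Option` lets a side that has finished its cleanup `take()` it, turning its `Drop` into a no-op.

```rust
use core::ptr::NonNull;

struct Hatch<T> { value: Option<T> /* wakers, flags, ... */ }

struct Sender<T> {
    // `None` once we know there is no further cleanup to do.
    hatch: Option<NonNull<Hatch<T>>>,
}

impl<T> Sender<T> {
    fn mark_finished(&mut self) {
        // Disables the destructor's cleanup path.
        let _ = self.hatch.take();
    }
}

impl<T> Drop for Sender<T> {
    fn drop(&mut self) {
        if let Some(hatch) = self.hatch.take() {
            // cleanup: mark ourselves closed, wake the other
            // side, possibly free the shared allocation...
            let _ = hatch;
        }
    }
}
```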
Correct me if I'm wrong, but I think broadly, you don't dislike the public API. The internal API is incredibly up for grabs. Here are a few things we could do:

The more flags we use, the harder it gets to keep the locking logic simple and readable. It could also impact performance negatively. In terms of performance, while we are very fast, we're probably not optimal. I think with rather more work than seems strictly necessary, we could probably shave some further time off, though as the times aren't very long already (except destruction), it seems like something we might do gradually. One thing I think is essential is to keep the
I have mostly been working on the (forthcoming shortly) async-spsc this week, which offers a bounded alternative. I am wondering whether I should just put them both into async-spsc to deal with the naming issue. Or at least to cause a different naming issue 🙄 They are both SPSC channels differing primarily in buffer capacity, although since my rewrite, this library is becoming a slightly more versatile synchronisation primitive than most channels. Does anyone have any strong feelings about this?
I'm thinking no at the current time.
ok, the options are gone and so are a bunch of potential branches thanks to some neat bit fiddling. the code is also somewhat shorter and easier to read.
This might be interesting/contributing to the
That is interesting, yes. I thought a lot about the Send/Sync bounds the other week and I'm happy that if

My main observation is that while it's more convenient to go from

Because logically there are only two parties, the entire library was based around this assumption, which
… the tests temporarily
I decided the `SendError` splitting was worthwhile and went with it. On the other hand it broke all the tests and I can't be fucked to fix them today. Also: fine-grained the async/sync send operation
…and run with async disabled and miri run quicker. Benchmarks still broken.
src/sender.rs (Outdated)

```rust
let flags = self.sender.lock();
// Our cleanup is to remove the waker. Also release the lock.
let shared = unsafe { &mut *self.sender.hatch.inner.get() };
```
Safety remarks?
It seemed a little excessive given the lock is taken in the previous line.
ah ok, so they are directly related, e.g. `sender.lock` locks the hatch?
yes. the spinlock (taken via `sender.lock` / `receiver.lock`) provides mutual exclusion for the value and wakers (in the async case).

we assume that the other side has always made a change (acquire) and we release our changes when we unlock.
I know we used to have individual locks, but it didn't seem to be winning us anything while it was increasing complexity. now we just have the one lock for all the mutable hatch data.

`sender.lock` / `receiver.lock` return either when they have the lock or when they detect that the other side has closed (by reading the returned flags). in the latter case, they will update the sender or receiver's local copy of the flags, allowing us to elide further atomics.
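A minimal sketch of that scheme, assuming a single atomic holding both the lock bit and the close flags (bit assignments are illustrative, not the crate's actual layout):

```rust
use core::hint::spin_loop;
use core::sync::atomic::{AtomicUsize, Ordering};

// Illustrative bit assignments.
const LOCK: usize = 1 << 0;
const SENDER_CLOSED: usize = 1 << 1;
const RECEIVER_CLOSED: usize = 1 << 2;

struct Hatch {
    flags: AtomicUsize,
    // the value and wakers live behind the lock, e.g. in an UnsafeCell
}

impl Hatch {
    /// Spin until we hold the lock or observe `other_closed`.
    /// The Acquire half of AcqRel pairs with the Release in `unlock`,
    /// so we see the other side's writes once we hold the lock.
    fn lock(&self, other_closed: usize) -> usize {
        loop {
            let prev = self.flags.fetch_or(LOCK, Ordering::AcqRel);
            if prev & LOCK == 0 || prev & other_closed != 0 {
                return prev; // locked, or the other side has closed
            }
            spin_loop();
        }
    }

    /// Release our changes to whoever takes the lock next.
    fn unlock(&self) {
        self.flags.fetch_and(!LOCK, Ordering::Release);
    }
}
```

The sender side would call something like `hatch.lock(RECEIVER_CLOSED)`; if the returned flags show the receiver has closed, it caches that locally and skips further atomics.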
a mutex is the wrong primitive: it implies that on failure (usually after a few spins) it will use a `Condvar` to get the OS to sleep the current thread. That is the absolute last thing we want in an async environment because it is just asking for the executor to be held up.

so we do the next best thing, a spinlock.
actually, you accidentally stumbled upon something quite interesting about the current design of async rust - there is no better way to handle this than a spin loop.

imagine the sender and receiver are on different threads. the sender takes the lock and then the OS deschedules it, then the receiver tries to take the lock. that spinlock is not going to succeed until the sender thread gets rescheduled.

in a non-async environment, we would actually want the OS to be involved, but in an async environment, that is not acceptable.

I have mooted previously that it would be nice if you could ask to reschedule the current async task. There is a hack for doing that (wake yourself, return pending - see the sketch below), but it only does a reasonable job if you have fair scheduling (e.g. tokio's scheduler uses a 'next' slot which would in all likelihood just cause the same future to be executed again; it's a common optimisation). the workaround (also scheduling a dummy) would penalise users of executors that do provide fair queuing if we didn't lock it behind (yet another) feature flag. also it's just horrible and I'm hopeful that eventually we can get better support from the rust stdlib.
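The hack mentioned above, as a minimal self-contained sketch (the type name is hypothetical):

```rust
use core::future::Future;
use core::pin::Pin;
use core::task::{Context, Poll};

/// "Wake yourself, return pending": asks the executor to poll us
/// again later. Only behaves like a real yield under fair scheduling.
struct YieldNow(bool);

impl Future for YieldNow {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.0 {
            Poll::Ready(())
        } else {
            self.0 = true;
            cx.waker().wake_by_ref(); // reschedule ourselves...
            Poll::Pending             // ...and give up the thread
        }
    }
}

// Usage: YieldNow(false).await;
```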
It is definitely possible to implement a mutex that is compatible with `async`, although it definitely takes a bit more work... Have you already looked at `async-lock`?
no, it's not.

because ultimately, you either have to do whatever it is you're doing or set a waker to be woken up again. but the lock construct sets a waker! under the hood, even `atomic-waker` is a spinlock.
Additionally, we'd no longer be able to squeeze all our flags down to a single atomic if we used an existing implementation.
… performance blooper where the fast path was never taken
The benchmarks are working again and they've been extended somewhat. I figured out how to do the remaining difficult benchmarks (at least with boxes). Some of our benchmark times are slightly worse than they were before because of a change in benchmarking methodology: not unwrapping options. I suspect that LLVM was being devious and that this should better reflect real-world performance. I've also done a little comparative testing against tokio's oneshot and we're a few percent faster for oneshot send+receive (sync and async) on my dev machine.
I still think that
hrm, the reason for that was that I anticipated you would want to immediately configure them, and it turns it into a one-liner. I suppose we could return
Ah hrm, so that allows you to chain subsequent modifications, but you still need to have the original data stored, so it means you lose the ability to do a one-liner. I tried it just now and got this error:
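A minimal hypothetical sketch of the problem being described: with `&mut self` setters, the construct-and-configure one-liner borrows a temporary, which the compiler rejects.

```rust
#[derive(Default)]
struct Flags(u8);

impl Flags {
    fn set_mark(&mut self, on: bool) -> &mut Self {
        if on { self.0 |= 1 } else { self.0 &= !1 }
        self
    }
}

fn main() {
    // Fine: configure an existing binding, chaining as desired.
    let mut flags = Flags::default();
    flags.set_mark(true);

    // Rejected: `Flags::default()` is a temporary dropped at the end
    // of the statement, so the returned `&mut Flags` would dangle
    // (error E0716: temporary value dropped while borrowed).
    // let flags = Flags::default().set_mark(true);
}
```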
Maybe we should compromise and have both 👀
I would be fine with that.
… and the addition of set_* methods for the Sender flags
I've just finished some improvements to the Sender. Mostly in the way of documentation and comments, but it includes these.
I've ripped out the borrowed pointer stuff. I also can't think of a use case that is still not served by other means.
thanks
Sorry to necrobump, but has this been abandoned?
yes. the new maintainer of this crate didn't want it and I don't want to maintain it. feel free to do something with it.
The library has undergone significant improvements as part of another rewrite. The API has changed slightly to accommodate the improvements. Most importantly, you need to call `sender.send(42).now()` instead of `sender.send(42)` (though your code should start throwing warnings if you don't - see the sketch at the end).

Notable changes:

`unsafe`.

There's a little left to do before release:
I also only have benchmarks for boxed hatches because criterion makes writing the others quite difficult.
Feedback is very much wanted. I haven't had much so far from people who actually use it. Wouldn't complain if someone wrote those tests either ;)
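For illustration, a sketch of the new call style; only `send(42).now()` comes from the description above, and the constructor and receive-side names are assumptions:

```rust
// Hypothetical names throughout, except `send(...).now()`.
let (mut sender, mut receiver) = async_hatch::hatch::<i32>();

// Old API: `sender.send(42)` performed the send directly.
// New API: `send` builds an operation; run it synchronously with `.now()`:
sender.send(42).now().unwrap();

// Leaving the operation unused (forgetting `.now()` or an `.await`)
// is what produces the warning mentioned above.
let _value = receiver.receive().now();
```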