
Add ability to inject a tokio runtime as the backend for libevent.#15

Open
jmaygarden wants to merge 12 commits into jmagnuson:master from jmaygarden:feature/tokio-backend

Conversation

@jmaygarden

An optional tokio backend for handling libevent I/O and signal readiness is provided. It is not patched into libevent directly, but is substituted at run time with a call to libevent::inject_tokio.

The primary motivation for this feature is to allow native tokio and libevent tasks to co-exist within a single event loop on the same thread. This is especially useful when gradually migrating a C/libevent project to Rust/tokio, where FFI between the C and Rust code prevents running the event loops on separate threads.

The feature requires bundling the libevent C code built by libevent-sys, due to dependencies on internal, non-public data structures within the library. It could work with a system-installed build of libevent if the versions match, but that approach would not be without risk.

src/backend.rs Outdated
base.evbase = Box::into_raw(backend).cast();
}

/// Convenience function that returns true if the signal event bit is set.
Collaborator

I would guess the compiler will figure it out, but these might be decent candidates for explicitly requesting inlining via #[inline(always)].

src/backend.rs Outdated
self.runtime.block_on(async move {
match timeout {
// spawned tasks are serviced during the sleep time
Some(timeout) => tokio::time::sleep(timeout).await,
Collaborator

The issue I see with this approach is that an I/O event that libevent has registered with tokio might result in new timers being registered that would then be at the top of the timer heap. This implementation would not return control to libevent in time to execute those actions by the deadlines of the new timers.

Take for example the following scenario:

  1. I register a 30s timer with libevent
  2. Libevent calls dispatch, there are 29.99s until the next timer needs to fire so that is the timeout passed into dispatch
  3. After 0.99s an I/O event occurs and a single-shot 250ms timer is registered with libevent.
  4. The remaining 29s elapses and libevent sees that it needs to service the 250ms one-shot timer, but by then we are 28.75s past its intended deadline.

I think this situation could occur with this implementation.

Author

I think that you are right. We could use a Notify to break out of the loop, but that seems like a big performance hit. I need to do some profiling.

Author

I modified the time-test sample program to verify this. I extended the timeout to 10 seconds. Then I set a SIGALRM handler for 1 second (so it doesn't use libevent timers). In that handler, I set another 0-second timeout that then sets up another SIGALRM.

The original version of this PR delayed the signal handling and the 0-second timeout until the initial 10-second timeout completed. That's not acceptable. So, I added a notification mechanism to allow the block_on to break out early if an event occurs.

With the change, the SIGALRM is handled every second and the 0 second timeout occurs immediately afterwards.

Author

[libevent-benchmark results image]

I adapted libevent/test/bench.c to run with the tokio backend. It is definitely slower than using the native kqueue backend on my MacBook.

Author

[kqueue-vs-tokio results image]

I haven't been able to get performance on this benchmark closer than about half that of straight kqueue. We could still end up doing better in practice because of better coordination between tokio and libevent.

Commits:

- …ent. Verified that the dispatch needs to break when an event occurs and implemented that with a notification.
- …ent. Added a benchmark program from libevent for comparing performance of the tokio backend.
- …ent. Added two more samples from libevent for testing the tokio backend.
- …ent. Optimize dispatch function by not constructing a sleep if the given duration is 0.
- …ent. Removed use of fdinfo in favor of a hash map.
@jmaygarden jmaygarden force-pushed the feature/tokio-backend branch from 193f285 to 645e3cf Compare October 25, 2021 21:57
@jmaygarden jmaygarden force-pushed the feature/tokio-backend branch from 15c260d to 2c1a6a9 Compare October 29, 2021 17:21