feat: Upstream NIOAsyncRuntime from PassiveLogic to swift-nio #3487
scottmarchant wants to merge 1 commit into apple:main
Conversation
Force-pushed f04675d to b12f019
    private let _swiftTaskEnqueueGlobalHook: UnsafeMutablePointer<EnqueueGlobalHook?> =
        dlsym(dlopen(nil, RTLD_LAZY), "swift_task_enqueueGlobal_hook").assumingMemoryBound(
            to: EnqueueGlobalHook?.self
        )

I can't find any evidence of this file being used.

Yep, you're right. Copy-pasted from the NIOPosix benchmark, never removed. Good find, thank you!
I also noticed I didn't update the benchmark Package.swift to include an executable target for this. I fixed that as well.
    import struct Foundation.UUID

    #if canImport(Dispatch)

Let's not keep unreachable code here.

The code associated with this import is used in the unit tests (provided in a separate PR to keep the diff size in this PR at least somewhat tenable), so it is reachable in that context.
    // MARK: - AsyncEventLoop -

    /// A single‑threaded `EventLoop` implemented solely with Swift Concurrency.

Let's have a more thorough explanation of why this is here, including the note that this event loop is really only intended to be used as a transitional step. It should be used only by projects that currently use MTELG, and only use it for scheduling with no use of I/O at all. Those projects should eventually move to Swift Concurrency in its entirety, but for now they can use this to help transition.
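A sketch of what the expanded doc comment could look like, based on the note above (the wording is illustrative, not the final text):

```swift
/// A single-threaded `EventLoop` implemented solely with Swift Concurrency.
///
/// This event loop is intended only as a transitional aid. It should be used
/// by projects that currently use `MultiThreadedEventLoopGroup` purely for
/// scheduling, with no use of I/O at all, for example to enable compilation
/// for targets such as WASI. Such projects should eventually move to Swift
/// Concurrency in its entirety.
```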
    /// A single‑threaded `EventLoop` implemented solely with Swift Concurrency.
    @available(macOS 10.15, iOS 13, tvOS 13, watchOS 6, *)
    public final class AsyncEventLoop: EventLoop, @unchecked Sendable {
        public enum AsynceEventLoopError: Error {

Suggested change:

    -    public enum AsynceEventLoopError: Error {
    +    public enum AsyncEventLoopError: Error {

🤦♂️ Oh man, how have I not caught this yet. Fixed, thank you!
    import Atomics
    import NIOCore

    import struct Foundation.UUID

Let's avoid the Foundation dependency here. A global atomic counter should be fine.
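A minimal sketch of that suggestion, assuming the UUID is only used as a unique per-loop identifier (names here are illustrative; this uses the standard library's `Synchronization` module rather than swift-atomics):

```swift
import Synchronization

// Process-wide monotonic counter as a stand-in for Foundation.UUID when all
// that is needed is a unique identifier per event loop.
let globalLoopIDCounter = Atomic<UInt64>(0)

func makeLoopID() -> UInt64 {
    // wrappingAdd returns (oldValue, newValue); use newValue so IDs start at 1.
    globalLoopIDCounter.wrappingAdd(1, ordering: .relaxed).newValue
}
```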
    /// Drop‑in stand‑in for `NIOThreadPool`, powered by Swift Concurrency.
    @available(macOS 10.15, iOS 13, tvOS 13, watchOS 6, *)
    public final class NIOThreadPool: @unchecked Sendable {

@Lukasa The naming of the public classes and enums in this file was intentionally chosen to exactly match the names in NIOPosix. Happy to change the names, but we would lose the drop-in compatibility with code that currently uses NIOPosix. If we rename, then downstream code would have to use a typealias or conditional refactoring to add wasm support. As-is, downstream packages only need to conditionalize the dependency and the import; no additional code changes are needed. The same logic applies for the errors. All public-facing APIs were intentionally chosen to have drop-in compatibility to minimize the changes required downstream to enable wasm compilation.
Do you still want me to rename, or would you prefer to keep the drop-in compatibility?
If I rename, then I'll update the readme and the downstream code in vapor to use typealiases.
UPDATE: Actually, I think you have a great point here. I've been looking at this as a drop-in replacement, so the identical names used to make sense. But now that it is in the same package, it is worth the small extra work of typealiasing in downstream consumers to avoid duplicate names. Otherwise, when people search in swift-nio, they'll see duplicate symbols with the same MTELG name (etc.).
Will update the names to avoid conflict.
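For reference, after a rename the downstream shim could be as small as this (the renamed type names here are hypothetical, not the final names):

```swift
#if os(WASI)
import NIOAsyncRuntime
// Hypothetical renamed types, for illustration only.
typealias PlatformEventLoopGroup = AsyncMultiThreadedEventLoopGroup
typealias PlatformThreadPool = AsyncNIOThreadPool
#else
import NIOPosix
typealias PlatformEventLoopGroup = MultiThreadedEventLoopGroup
typealias PlatformThreadPool = NIOThreadPool
#endif
```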
    /// Drop‑in stand‑in for `NIOThreadPool`, powered by Swift Concurrency.
    @available(macOS 10.15, iOS 13, tvOS 13, watchOS 6, *)
    public final class NIOThreadPool: @unchecked Sendable {

Suggested change:

    -    public final class NIOThreadPool: @unchecked Sendable {
    +    public final class NIOThreadPool: Sendable {
    startWorkersIfNeeded()

    Task {
        await self.workQueue.enqueue(IdentifiableWorkItem(workItem: body, id: nil))

Is there any need to close over self?

Nope, removed. Thank you!
    guard workerTasks.isEmpty else { return }
    for index in 0..<numberOfThreads {
        workerTasks.append(
            Task.detached { [weak self] in

This is another special case where I think [weak self] is probably required to prevent a retain cycle, since the closure is appended to the workerTasks member variable. If you feel strongly, I'm happy to remove it. But it seems like this one is necessary to avoid leaking memory.
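A self-contained sketch of the cycle in question (simplified, with illustrative names): the object retains its worker `Task`s, and each task's closure would retain the object back if it captured `self` strongly.

```swift
final class WorkerPool {
    private var workerTasks: [Task<Void, Never>] = []

    func startWorkers(count: Int) {
        for _ in 0..<count {
            workerTasks.append(
                Task.detached { [weak self] in
                    // A strong capture here would create a cycle:
                    // self -> workerTasks -> Task -> closure -> self.
                    while !Task.isCancelled {
                        guard let self else { return }  // pool was deallocated
                        _ = self  // placeholder for real work
                        await Task.yield()
                    }
                }
            )
        }
    }

    deinit {
        // Without [weak self], long-lived workers would keep `self` alive
        // and this deinit would never run.
        workerTasks.forEach { $0.cancel() }
    }
}
```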
    private var isShuttingDown = false

    func enqueue(_ item: IdentifiableWorkItem) {
        if let continuation = waiters.popLast() {

popLast is unexpected: is this intentionally LIFO?

Yes, this is intentionally done this way, but not necessarily for LIFO. The order in which available waiters are selected out of the pool to run a new task doesn't matter as far as I'm aware. But popLast is O(1), whereas popFirst may have cases where it is O(n) or worse.
Note that this is for the pool of waiters (i.e. "threads"), not the queue of work items.
Do you want me to investigate unit test and benchmark differences if I switch to popFirst?
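A reduced sketch of that trade-off (illustrative types, not the PR's actual code): any idle waiter may be resumed, so taking from the back of the array keeps the operation O(1), whereas removing from the front of an `Array` shifts every remaining element.

```swift
// Pool of parked waiters; which one resumes next is deliberately unspecified.
struct WaiterPool<Waiter> {
    private var waiters: [Waiter] = []

    mutating func park(_ waiter: Waiter) {
        waiters.append(waiter)
    }

    /// Returns an arbitrary idle waiter. LIFO order falls out of using
    /// popLast, which is O(1); Array.removeFirst would be O(n).
    mutating func takeAny() -> Waiter? {
        waiters.popLast()
    }
}
```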
Force-pushed e7c6cd2 to b12f019
scottmarchant left a comment:
Hi @Lukasa, thanks so much for the great review! I believe I have addressed all comments except for a few that I had questions about or wanted to confirm first.
Please let me know if there are any more improvements or changes you would like.
    #if os(WASI) || canImport(Testing)

    import Atomics

This was just following the existing paradigm in swift-nio, which much more heavily uses import Atomics compared to import Synchronization. Also, at the time of writing this code, I'm not sure that Synchronization was available in wasm builds. But it looks like it works now. It requires availability updates, but that is fine for NIOAsyncRuntime.
I updated this to import Synchronization and removed the swiftAtomics dependency in the manifest.
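The swap described above roughly looks like this (a sketch; note that `Synchronization` requires newer platform availability than swift-atomics, e.g. Swift 6 toolchains):

```swift
import Synchronization

// Before, with the swift-atomics package:
//   import Atomics
//   let counter = ManagedAtomic<Int>(0)
//   counter.wrappingIncrement(ordering: .relaxed)

// After, with the standard library's Synchronization module:
let counter = Atomic<Int>(0)
let (oldValue, newValue) = counter.wrappingAdd(1, ordering: .relaxed)
```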
    import Atomics
    import NIOCore

    import struct Foundation.UUID

    /// A single‑threaded `EventLoop` implemented solely with Swift Concurrency.
    @available(macOS 10.15, iOS 13, tvOS 13, watchOS 6, *)
    public final class AsyncEventLoop: EventLoop, @unchecked Sendable {

Great catch, thanks! This is done
    // inject themselves in the middle of a clean run.
    cancelScheduledWakeUp()

    let newTask: Task<Void, Never> = Task {

Great call out, thank you. I improved the documentation a bit.
Force-pushed b12f019 to 93ac3b5
Force-pushed 486dd69 to 6cbaace
Something to note: I noticed some failures running the new benchmark in CI. Haven't seen those failures locally. I'll investigate tomorrow.
UPDATE: One of the new benchmarks in NIOAsyncRuntime crashes in CI only, but not locally. The likely root cause is running out of memory. This means that NIOAsyncRuntime likely uses more memory than NIOPosix, which is not unexpected. The benchmark creates conditions (millions of long-deadline tasks) that are unlikely to be replicated in real-world usage. Because this is more of a test-facing issue than a production-facing issue, I elected to skip the benchmark for now. As this is a new benchmark and new code, it is not nearly as concerning to do this compared to skipping an existing benchmark.
Force-pushed 5b5f7b0 to 42701f7
Force-pushed e125982 to acefafe
…ncRuntime is a new module that provides alternative implementations of MTELG and NIOThreadPool using Swift Concurrency. It compiles to wasm and runs in the browser.
Force-pushed acefafe to b2b5bb0
Upstreams NIOAsyncRuntime to swift-nio as a new module that provides wasm-compatible MTELG and NIOThreadPool implementations.
Motivation:
Several sqlite-related Vapor repositories currently depend on NIOPosix, but use only the MTELG and NIOThreadPool implementations from NIOPosix. It would be really nice to be able to use sqlite in the browser.
This PR unlocks key wasm compilation ability for several downstream packages that currently depend on NIOPosix only for the MTELG. It also provides an off-ramp for such packages to eventually migrate to Swift Concurrency, as the ability to use only NIOAsyncRuntime and not NIOPosix definitively proves the package could use just Swift Concurrency in place of MTELG.
Modifications:
Result:
Using this new module along with changes in vapor repositories, I have successfully compiled vapor's sqlite implementation to wasm and run it in the browser.
Note that the included benchmarks indicate that NIOAsyncRuntime's implementation of MTELG and NIOThreadPool is less performant than the implementation in NIOPosix. This is not surprising.
Related PRs
The following PR adds testing and CI for this module
#3488
Testing Done
`Thread` import under wasm (latest builds in Swift 6.4 may improve this).