All of these tests also exist in test_capability.py. The only difference is the
way the .capnp file is loaded. But that could be tested with much less code.
Python 3.7 seems to have trouble deallocating objects in a timely fashion. We
rely on timely deallocation, because the C++ destructors need to run before the
KJ event loop is closed. Hence, we do it manually.
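A minimal sketch of that manual cleanup (the Wrapper class is a stand-in, not an actual pycapnp type):

    import gc

    class Wrapper:
        # Stand-in for an object backed by a C++ resource whose destructor
        # must run before the KJ event loop is closed.
        def __del__(self):
            pass  # the real class releases KJ resources here

    obj = Wrapper()
    # Python 3.7 may delay deallocation, so drop the reference and collect
    # manually while the KJ loop is still running.
    del obj
    gc.collect()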
* Integrate the KJ event loop into Python's asyncio event loop
Fixes #256
This PR attempts to remove the slow and expensive polling behavior for asyncio
in favor of proper linking of the KJ event loop to the asyncio event loop.
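For illustration, usage under the linked loops might look like the sketch below; `capnp.kj_loop()` is assumed here as the entry point that ties the two loops together, and is not verbatim from this PR:

    import asyncio
    import capnp

    async def main():
        # With the KJ loop driven by asyncio there is no polling loop; the
        # KJ event loop runs for the duration of this block.
        async with capnp.kj_loop():
            pass  # open connections and await RPC calls here

    asyncio.run(main())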
* Don't memcopy buffer
* Improve promise cancellation and prepare for timer implementation
* Add attribution for asyncProvider.cpp
* Implement timeout
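Since promises are now ordinary awaitables on the asyncio loop, a timeout can be expressed with standard asyncio tooling; `cap.foo()` below is a placeholder RPC call:

    import asyncio

    async def call_with_timeout(cap, seconds=1.0):
        # asyncio.wait_for wraps the awaitable promise in a task and
        # cancels it if it has not resolved within the timeout.
        return await asyncio.wait_for(cap.foo(), timeout=seconds)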
* Cleanup
* First round of simplifications
* Add more a_wait functions and a shutdown function
* Fix edge-cases with loop shutdown
* Clean up calculator examples
* Cleanup
* Cleanup
* Reformat
* Fix warnings
* Reformat again
* Compatibility with macos
* Inline the asyncio loop in some places where this is feasible
* Add todo
* Fix
* Remove synchronous wait
* Wrap fd listening callbacks in a class
* Remove poll_forever
* Remove the thread-local/thread-global optimization
This will soon not matter much anyway, and removing it simplifies things
* Share promise code by using fused types
* Improve refcounting of python objects in promises
We replace many instances of PyObject* with Own<PyRefCounter> for more automatic
reference management.
* Implement wrapPyFunc in a similar way to wrapPyFuncNoArg
* Refactor capabilityHelper, fix several memory bugs for promises and add __await__
* Improve promise ownership, reduce memory leaks
Promise wrappers now hold a Own<Promise<Own<PyRefCounter>>> object. This might
seem like excessive nesting of objects (which to some degree it is, but with
good reason):
- The outer Own is needed because Cython cannot stack-allocate objects that lack
a nullary constructor (Promise doesn't have one). Additionally, I believe it
would be difficult or impossible to detect when a promise is cancelled/moved if
we used a bare Promise.
- Every promise returns an owned PyRefCounter. PyRefCounter makes sure that a
reference to the returned object keeps existing until the promise is fulfilled
or cancelled. Previously, this was attempted using attach, which is redundant
and makes reasoning about Py_INCREF and Py_DECREF very difficult.
- Because a promise holds an Own<Promise<...>>, when we perform any kind of
action on that promise (a_wait, then, ...), we have to explicitly move() the
ownership around. This will leave the original promise with a NULL pointer,
which we can easily detect as a cancelled promise (a hypothetical sketch
follows this list).
Promises now only hold references to their 'parents' when strictly needed. This
should reduce memory pressure.
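A hypothetical illustration of the user-visible consequence of the move-based ownership described above (`cap.foo()` is a placeholder RPC call):

    async def demo(cap):
        promise = cap.foo()      # placeholder call returning a promise
        result = await promise   # moves the inner Own<Promise<...>> out
        # The wrapper now holds a NULL pointer; touching the promise again
        # is detected as a consumed/cancelled promise instead of invoking
        # undefined behavior.
        await promise            # raises rather than crashing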
* Simplify and test the promise joining functionality
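With promises being plain awaitables, joining can be sketched with standard asyncio combinators (again with placeholder calls `cap.foo()` and `cap.bar()`):

    import asyncio

    async def join_example(cap):
        # Each awaitable promise is wrapped in a task and the results are
        # collected once all of them have resolved.
        return await asyncio.gather(cap.foo(), cap.bar())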
* Attach forgotten parent
* Catch exceptions in add_reader and friends
* Further cleanup of memory leaks
* Get rid of a_wait() in examples
* Cancel all fd read operations when the python asyncio loop is closed
* Formatting
* Remove support for capnp < 7000
* Bring asyncProvider.cpp more in line with upstream async-io-unix.c++
It was originally copied from the Node.js implementation, which in turn was
copied from async-io-unix.c++. But that copy is pretty old by now.
* Fix a bug that caused file descriptors to never be closed
* Implement AsyncIoStream based on Python transports and protocols
* Get rid of asyncProvider
All asyncio now goes through _AsyncIoStream
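A rough sketch of the transport/protocol approach (not pycapnp's actual `_AsyncIoStream`, just the shape of it): incoming bytes are buffered by an `asyncio.Protocol`, with flow control via `pause_reading()`/`resume_reading()`.

    import asyncio

    class ExampleStreamProtocol(asyncio.Protocol):
        # Illustrative only: buffers bytes from the transport and applies
        # backpressure, roughly what a protocol-based stream has to do.
        def __init__(self):
            self._buffer = bytearray()
            self._transport = None

        def connection_made(self, transport):
            self._transport = transport

        def data_received(self, data):
            self._buffer.extend(data)
            if len(self._buffer) > 64 * 1024:  # illustrative threshold
                self._transport.pause_reading()

        def connection_lost(self, exc):
            # The real implementation must wake up pending reads with EOF
            # or the error here.
            pass

Such a protocol would be wired up with `loop.create_connection(ExampleStreamProtocol, host, port)`.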
* Formatting
* Add __dict__ to PyAsyncIoStreamProtocol for python 3.7
* Reintroduce strange ipv4/ipv6 selection code to make ci happy
* Extra pause_reading()
* Work around more python bugs
* Be careful to only close transport when this is still possible
* Move pause_reading() workaround
* Use cibuildwheel in ci
`cibuildwheel` is a system that automatically compiles and repairs wheels for
many Python versions and architectures at once. This has some advantages over
the old situation:
- Macosx wheels had inconsistent minimum versions, ranging between 10.9 and
11.0. I'm not sure why this happened, but for some users it meant having to
build from source on macosx. With cibuildwheel, the build is consistent, with
10.9 as the minimum for x86 and 11.0 for arm64.
- Consolidation between the packaging tests and manylinux tests.
- Addition of musllinux targets and additional cross-compilation to ppc64le and
s390x.
- With cibuildwheel, new python versions should be automatically picked up.
- Separation of the sdist build and lint checks. There is no reason to run
those that many times.
All possible build targets succeed, except for ARM64 on Windows, where the
upstream capnp build fails; I've disabled that target.
The cross-compilation builds on linux are pretty slow. This could potentially be
sped up by separating the builds of manylinux and musllinux, but I'm not sure if
it's worth the extra complexity. (One can also contemplate disabling these
targets.)
Tests for macosx arm64 cannot be run (but they also couldn't be run in the
previous system). This should be remedied once Apple Silicon becomes available
on the CI.
I've also added some commented-out code that can automatically take care of
uploading a build to PyPI when a release is created. One might contemplate using this.
* Set CMAKE_OSX_ARCHITECTURES for arm64 and disable universal2
* Add Python 3.11 to the GitHub Actions, in the manylinux2014 build as well as
the packaging test.
I added a second image, x86_64, to the manylinux2014 build matrix - the i686
build didn't immediately pass, so I left it out.
* Move the bundled capnproto library forward to 0.10.3.
* Remove the 2010 build entirely, and update the 2014 build to include i686.
The example async server code uses timeouts around read() operations.
However, this has a race condition where data can be read, the timeout
fires, and the data is lost.
These timeouts are not really needed in this example code, so I removed
them to prevent people from having strange issues with lost messages
and undefined RPC behavior when using the example code.
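For reference, the removed pattern looked roughly like this (`stream.read()` is a placeholder); if the read completes just as the timeout fires, the cancelled task's data is silently dropped:

    import asyncio

    async def racy_read(stream):
        try:
            # Race: the read may already have consumed bytes when the
            # timeout cancels it, and those bytes are then lost.
            return await asyncio.wait_for(stream.read(), timeout=0.1)
        except asyncio.TimeoutError:
            return None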
This patch fixes a problem of reading random values for reader options in
pycapnp. The code that adds the task to the list captures 'opts' by reference,
which causes a problem when 'opts' is allocated on the caller's stack. By the
time the task is handled, the stack frame holding 'opts' is gone, leaving a
dangling reference to 'opts' in the lambda's captures. As a result, pycapnp
reads random values for the reader options, which sometimes causes unexpected
errors (for example, an error that the nesting level is negative).
* Fixing issue with m1 build
- prevent earlier build step from running on arm64
- forced clean up of bundled dir
- adding --force-bundled-libcapnp to trigger a rebuild
- echoing out env variables and showing final wheel size
* Disable tests on Apple Silicon cross compile
* Added macos arm64 Apple Silicon to CI
- Added missing python 3.10 metadata to setup.py
* Need all python versions in the matrix
* Fixing github action matrix syntax
* Changing MACOSX_DEPLOYMENT_TARGET for older python versions
- Technically no one would be running python 3.7 on M1 anyway
* Adding windows visualcpp build tools and updated cmake
* Increase max-parallel for all matrix builds
* yaml indent
* trying visualstudio2022-workload-vctools
* disabled everything but windows and added tunshell for debugging
* try older powershell
* trying again
* try python instead of powershell
* bash with wget
* Using old windows client on other side
* wrong way
* backwards
* :(
* urgh back to caveman debugging
- trying older windows runners
* try it again with extra build tools
* Reverting back to windows-2019 runner for windows builds
Fixing error:
    CMake Error at CMakeLists.txt:2 (project):
      Generator
        Ninja
      does not support platform specification, but platform
        x64
      was specified.