I'm using pycapnp in a project where we compile `.capnp` files directly to
Cython instead of using the dynamic interface (for speed). For this, we need
access to the `reraise_kj_exception` C function defined by pycapnp. This is not
possible, because Cython does not automatically make this function available to
downstream users.
My previous solution, in #301, was rather flawed. The file `capabilityHelper.cpp`, where
`reraise_kj_exception` is defined, was bundled into the distribution, so that
this file could be included in downstream libraries. This turns out to be a
terrible idea, because it redefines a bunch of other things like
`ReadPromiseAdapter`. For reasons not entirely clear to me, this leads to
segmentation faults. This PR reverts #301.
Instead, in this PR I've made `reraise_kj_exception` a Cython-level function that
can be used by downstream libraries. The C-level variant has been renamed to
`c_reraise_kj_exception`.
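As a rough illustration, a downstream `.pyx` could now look along these lines. This is a minimal sketch: the cimport path and the schema/method names are assumptions for illustration, not pycapnp's verbatim API, so check the installed `.pxd` files for the actual declaration.

```cython
# distutils: language = c++
# Hypothetical downstream module; "myschema.capnp.h" and MyStruct are made up.
from capnp.lib.capnp cimport reraise_kj_exception  # cimport path is an assumption

cdef extern from "myschema.capnp.h":
    cdef cppclass MyStructReader "MyStruct::Reader":
        # Declaring the call with `except +reraise_kj_exception` lets Cython
        # translate a thrown kj::Exception into the matching Python exception
        # instead of letting it escape.
        int getValue() except +reraise_kj_exception
```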
* Add capnp_api.h to .gitignore
* Change type of read_min_bytes from size to int
Not sure why this was not causing issues before, or whether this is the right
fix, but it seems to be fine :)
* Adapt python_requires to >=3.8
This was overlooked when 3.7 was deprecated. The CI no longer
works with Python 3.7, and cibuildwheel relies on python_requires.
* Replace deprecated find_module with find_spec (importlib)
find_module was deprecated in Python 3.4, and Python 3.12
removed it (https://docs.python.org/3.12/whatsnew/3.12.html#importlib).
The replacement is find_spec, which only required a few adaptations.
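To make the change concrete, here is a minimal sketch of a find_spec-based import hook; the class names are illustrative, not pycapnp's actual ones.

```python
import importlib.abc
import importlib.util


class _CapnpLoader(importlib.abc.Loader):
    def create_module(self, spec):
        return None  # fall back to default module creation

    def exec_module(self, module):
        # The real hook would locate and parse the .capnp schema here.
        pass


class _CapnpFinder(importlib.abc.MetaPathFinder):
    # find_spec replaces the removed find_module entry point.
    def find_spec(self, fullname, path, target=None):
        if not fullname.endswith("_capnp"):
            return None  # not a capnp schema import; let other finders handle it
        return importlib.util.spec_from_loader(fullname, _CapnpLoader())
```

The finder is still registered on `sys.meta_path` as before; only the lookup entry point changed from find_module to find_spec.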
- Update CHANGELOG.md
- Update to bundled capnproto-1.0.1
  * Compiles with capnproto-0.8.0 and higher
- *Breaking Change* Remove allow_cancellation (see
  https://capnproto.org/news/2023-07-28-capnproto-1.0.html)
  * This is tricky to handle for older versions of capnproto. Instead of
    dealing with a lot of complication, it has been removed entirely.
- Fix some documentation after the build backend support was added
- Update tox.ini to support 3.8 to 3.12
- Update cibuildwheel to 2.16.1
  * Adds Python 3.12 support and implicitly deprecates EOL 3.7 (though it's
    still built)
* Integrate the KJ event loop into Python's asyncio event loop
Fixes #256
This PR attempts to remove the slow and expensive polling behavior for asyncio
in favor of proper linking of the KJ event loop to the asyncio event loop.
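A rough sketch of what this enables on the user side follows; the `capnp.kj_loop()` entry point named here is an assumption, so consult the current pycapnp documentation for the exact API.

```python
import asyncio
import capnp


async def main():
    # Assumed context manager that ties the KJ event loop to the running
    # asyncio loop for the duration of the block; no background polling
    # task is needed anymore.
    async with capnp.kj_loop():
        ...  # create TwoPartyClient/TwoPartyServer objects and await RPCs here


asyncio.run(main())
```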
* Don't memcopy buffer
* Improve promise cancellation and prepare for timer implementation
* Add attribution for asyncProvider.cpp
* Implement timeout
* Cleanup
* First round of simplifications
* Add more a_wait functions and a shutdown function
* Fix edge-cases with loop shutdown
* Clean up calculator examples
* Cleanup
* Cleanup
* Reformat
* Fix warnings
* Reformat again
* Compatibility with macos
* Inline the asyncio loop in some places where this is feasible
* Add todo
* Fix
* Remove synchronous wait
* Wrap fd listening callbacks in a class
* Remove poll_forever
* Remove the thread-local/thread-global optimization
This will not matter much soon anyway, and simplifies things
* Share promise code by using fused types (see the sketch after this commit list)
* Improve refcounting of Python objects in promises
We replace many instances of PyObject* with Own<PyRefCounter> for more automatic
reference management.
* Code wrapPyFunc in a similar way to wrapPyFuncNoArg
* Refactor capabilityHelper, fix several memory bugs for promises and add __await__
* Improve promise ownership, reduce memory leaks
Promise wrappers now hold an Own<Promise<Own<PyRefCounter>>> object. This might
seem like excessive nesting of objects (which to some degree it is, but with
good reason):
- The outer Own is needed because Cython cannot allocate objects without a
nullary constructor on the stack (Promise doesn't have a nullary constructor).
Additionally, I believe it would be difficult or impossible to detect when a
promise is cancelled/moved if we use a bare Promise.
- Every promise returns an owned PyRefCounter. PyRefCounter makes sure that a
reference to the returned object keeps existing until the promise is fulfilled
or cancelled. Previously, this was attempted using attach, which is redundant
and makes reasoning about Py_INCREF and Py_DECREF very difficult.
- Because a promise holds an Own<Promise<...>>, when we perform any kind of
action on that promise (a_wait, then, ...), we have to explicitly move() the
ownership around. This will leave the original promise with a NULL-pointer,
which we can easily detect as a cancelled promise.
Promises now only hold references to their 'parents' when strictly needed. This
should reduce memory pressure.
* Simplify and test the promise joining functionality
* Attach forgotten parent
* Catch exceptions in add_reader and friends
* Further cleanup of memory leaks
* Get rid of a_wait() in examples
* Cancel all fd read operations when the python asyncio loop is closed
* Formatting
* Remove support for capnp < 7000
* Bring asyncProvider.cpp more in line with upstream async-io-unix.c++
It was originally copied from the nodejs implementation, which in turn copied
from async-io-unix.c++. But that copy is pretty old.
* Fix a bug that caused file descriptors to never be closed
* Implement AsyncIoStream based on Python transports and protocols
* Get rid of asyncProvider
All asyncio now goes through _AsyncIoStream
* Formatting
* Add __dict__ to PyAsyncIoStreamProtocol for python 3.7
* Reintroduce strange ipv4/ipv6 selection code to make ci happy
* Extra pause_reading()
* Work around more python bugs
* Be careful to only close transport when this is still possible
* Move pause_reading() workaround
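The "fused types" commit above refers to a Cython feature; a generic illustration of it (not pycapnp's actual promise code) looks like this:

```cython
# A fused type generates one specialization per concrete type, so several
# promise-wrapper variants can share a single implementation.
ctypedef fused number_t:
    int
    double

cdef number_t twice(number_t x):
    return x * 2
```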
* Use cibuildwheel in CI
`cibuildwheel` is a system that automatically compiles and repairs wheels for
many Python versions and architectures at once. This has some advantages over the
old situation:
- macOS wheels had inconsistent minimum versions ranging between 10.9 and
11.0. I'm not sure why this happened, but for some users it meant they had
to build from source on macOS. With cibuildwheel, the build is
consistent, with 10.9 as the minimum for x86 and 11.0 for arm64.
- Consolidation between the packaging tests and manylinux tests.
- Addition of musllinux targets and additional cross-compilation to ppc64le and
s390x.
- With cibuildwheel, new python versions should be automatically picked up.
- Separation of the sdist build and lint checks. There is no reason to run them
that many times.
All possible build targets succeed, except for ARM64 on Windows, where the upstream
capnp build fails; I've disabled that target.
The cross-compilation builds on Linux are pretty slow. This could potentially be
sped up by separating the builds of manylinux and musllinux, but I'm not sure if
it's worth the extra complexity. (One can also contemplate disabling these
targets.)
Tests for macOS arm64 cannot be run (but also couldn't be run in the previous
system). This should be remedied once Apple Silicon becomes available on the CI.
I've also added some commented-out code that can automatically take care of
uploading a build to PyPI when a release is created. One might contemplate using this.
* Set CMAKE_OSX_ARCHITECTURES for arm64 and disable universal2
* Added macos arm64 Apple Silicon to CI
- Added missing python 3.10 metadata to setup.py
* Need all python versions in the matrix
* Fixing github action matrix syntax
* Changing MACOSX_DEPLOYMENT_TARGET for older python versions
- Technically no one would be running python 3.7 on M1 anyway
* Adding windows visualcpp build tools and updated cmake
* Increase max-parallel for all matrix builds
* yaml indent
* trying visualstudio2022-workload-vctools
* disabled everything but windows and added tunshell for debugging
* try older powershell
* trying again
* try python instead of powershell
* bash with wget
* Using old windows client on other side
* wrong way
* backwards
* :(
* urgh back to caveman debugging
- trying older windows runners
* try it again with extra build tools
* Reverting back to windows-2019 runner for windows builds
Fixing error:
  CMake Error at CMakeLists.txt:2 (project):
    Generator
      Ninja
    does not support platform specification, but platform
      x64
    was specified.
- Full cleanup of all the docs
- General sphinx housekeeping
- Updated all the old/bad links
- More reliable tests
- Updated Changelog
- Removed dead/deprecated code
- Added documentation generation test
- Includes some test stabilization
- Fixes manylinux2010 build issues (linker flag order due to old gcc)
- More rigorous python setup.py clean
- Requires capnproto v0.8.0 or greater
- Including system libcapnp include path for import (e.g. import
stream_capnp)
- Bundle libcapnp .capnp files when not using system libcapnp
- Removing more distutils usage. Now using pkg-config to determine the
system version of libcapnp (mainly for Linux, but should work on macOS
with brew); a sketch of such a probe appears at the end of these notes
- Removed dead code
Resolves issues #215, #216, #217
Lots of fixes for Issue #218 (all sorts of retry methods needed for
GitHub Actions)
- Validates that the manylinux2010 environment can build pycapnp
- Handle lib and lib64 cases
- Has patch for aligned_alloc symbol (not available prior to glibc 2.16)
https://github.com/capnproto/capnproto/issues/743
- Fixes #197
- Should be ready to prepare a v1.0.0b1 release
- Basic tests are working
- May need some adjustments to get all tests working
- Cleaned up bundling to take Python arch into account when building
with multiple architectures
- Breaking some earlier compatibility to clean up build messages
- As well as being able to publish PyPI releases
* Builds can be complicated to package correctly
- Windows support
`flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics --exclude benchmark`
Excluding the benchmark directory (due to protobuf-generated files)
Also removing some Python 2-specific code
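For reference, the pkg-config probe mentioned in the packaging notes above can be done along these lines. This is an illustrative sketch, not pycapnp's exact build code; it assumes capnproto's `capnp.pc` is on the pkg-config search path.

```python
import subprocess


def system_libcapnp_version():
    """Query the installed capnproto version via pkg-config (sketch)."""
    result = subprocess.run(
        ["pkg-config", "--modversion", "capnp"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()  # e.g. "1.0.1"
```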