See haata/pycapnp#1 for a discussion. The cause of this bug is still unknown to
me, but it has likely been fixed in Python 3.10. For some crazy reason, you can
just keep retrying the offending call, and the attribute will magically
'reappear'.
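A minimal sketch of that retry workaround, with `obj` and `attr` standing in
for the offending call site (names are illustrative):

```python
def retry_getattr(obj, attr, attempts=100):
    # Keep retrying the attribute access until it 'reappears'.
    # This papers over the bug described above; it does not fix it.
    for _ in range(attempts):
        try:
            return getattr(obj, attr)
        except AttributeError:
            continue
    raise AttributeError(attr)
```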
* Add capnp_api.h to .gitignore
* Change type of read_min_bytes from size to int
Not sure why this wasn't causing issues before, or whether this is the
right fix ... but it seems to be fine :)
* Adapt python_requires to >=3.8
This was overlooked when 3.7 was deprecated. The CI no longer
works with Python 3.7, and cibuildwheel uses python_requires ...
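For illustration, the packaging knob in question; pycapnp's actual setup.py
passes many more arguments:

```python
from setuptools import setup

setup(
    name="pycapnp",
    # Keep this in sync with the versions the CI actually tests;
    # cibuildwheel consults it when deciding which wheels to build.
    python_requires=">=3.8",
)
```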
* Replace deprecated find_module with find_spec (importlib)
find_module was deprecated in Python 3.4 and removed in Python 3.12
(https://docs.python.org/3.12/whatsnew/3.12.html#importlib).
The replacement is find_spec, which only required a few adaptations.
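A minimal sketch of the migration, using the standard importlib machinery
(the module name here is just an example):

```python
import importlib.util

# Old, removed in Python 3.12:
#   finder.find_module("capnp")
# New: look the module up via its spec, then load it if needed.
spec = importlib.util.find_spec("capnp")
if spec is not None:
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
```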
- Update CHANGELOG.md
- Update to bundled capnproto-1.0.1
* Compiles with capnproto-0.8.0 and higher
- *Breaking Change* Remove allow_cancellation (see
https://capnproto.org/news/2023-07-28-capnproto-1.0.html)
* This is tricky to handle for older versions of capnproto. Instead of
dealing with lots of complications, it was removed entirely.
- Fix some documentation after the build backend support was added
- Update tox.ini to support 3.8 to 3.12
- Update cibuildwheel to 2.16.1
* Adds Python 3.12 support and implicitly deprecates EOL 3.7 (though it's
still built)
* Pin Cython to below version 3
Cython 3 includes backwards-incompatible changes that break installing
pycapnp from source.
* Add py311 environment
I'm not sure if this is necessary, but 3.11 is out so might as well?
Cap'n Proto provides a schema loader, which can be used to dynamically
load schemas at runtime. To port this functionality to pycapnp,
two new classes are provided: `C_SchemaLoader`, which exposes the Cap'n
Proto C++ interface, and `SchemaLoader`, which is part of the pycapnp
library.
The specific use case for this is when a capnp message contains
a Node.Reader: the schema for a previously unseen message type can be loaded
dynamically, allowing future messages of that type to be properly processed.
If the message is a struct containing other structs, the schemas for
every nested struct must be loaded to correctly parse the message. See
https://github.com/DaneSlattery/capnp_generic_poc for a
proof-of-concept.
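A hedged sketch of what this enables; the method names (`load`, `get`,
`as_struct`) are assumed to mirror the C++ SchemaLoader interface rather than
quoted from the final pycapnp API:

```python
import capnp

loader = capnp.SchemaLoader()

def handle_schema(node_reader):
    # node_reader is a schema.capnp Node.Reader carried inside a message.
    # Load it (and likewise every nested struct's node) so that later
    # messages of this type can be parsed.
    loader.load(node_reader)

def struct_schema_for(type_id):
    # Look up a previously loaded schema by its 64-bit node id.
    return loader.get(type_id).as_struct()
```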
Add docs and cleanup
Add more docs
Reduce changes
Fix flake8 formatting
Fix get datatype
All of these tests also exist in test_capability.py. The only difference is the
way the .capnp file is loaded, which could be tested with much less code.
Python 3.7 seems to have trouble deallocating objects in a timely fashion. We
rely on timely deallocation, because the C++ destructors need to run before the
KJ event loop is closed. Hence, we do it manually.
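A sketch of that manual cleanup under these assumptions (the fixture shape is
illustrative):

```python
import gc

def close_fixture(state):
    # Drop the test's references explicitly, then force a collection so the
    # C++ destructors run while the KJ event loop still exists. Python 3.7
    # may otherwise defer deallocation past the loop's shutdown.
    state.clear()
    gc.collect()
```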
* Integrate the KJ event loop into Python's asyncio event loop
Fix #256
This PR attempts to remove the slow and expensive polling behavior for asyncio
in favor of properly linking the KJ event loop to the asyncio event loop; the
resulting usage is sketched after the commit list below.
* Don't memcopy buffer
* Improve promise cancellation and prepare for timer implementation
* Add attribution for asyncProvider.cpp
* Implement timeout
* Cleanup
* First round of simplifications
* Add more a_wait functions and a shutdown function
* Fix edge-cases with loop shutdown
* Clean up calculator examples
* Cleanup
* Cleanup
* Reformat
* Fix warnings
* Reformat again
* Compatibility with macos
* Inline the asyncio loop in some places where this is feasible
* Add todo
* Fix
* Remove synchronous wait
* Wrap fd listening callbacks in a class
* Remove poll_forever
* Remove the thread-local/thread-global optimization
This will not matter much soon anyway, and removing it simplifies things
* Share promise code by using fused types
* Improve refcounting of python objects in promises
We replace many instances of PyObject* by Own<PyRefCounter> for more automatic
reference management.
* Code wrapPyFunc in a similar way to wrapPyFuncNoArg
* Refactor capabilityHelper, fix several memory bugs for promises and add __await__
* Improve promise ownership, reduce memory leaks
Promise wrappers now hold a Own<Promise<Own<PyRefCounter>>> object. This might
seem like excessive nesting of objects (which to some degree it is, but with
good reason):
- The outer Own is needed because Cython cannot stack-allocate objects that
lack a nullary constructor (Promise doesn't have one). Additionally, I believe
it would be difficult or impossible to detect when a promise is
cancelled/moved if we used a bare Promise.
- Every promise returns an owned PyRefCounter. PyRefCounter makes sure that a
reference to the returned object keeps existing until the promise is fulfilled
or cancelled. Previously, this was attempted using attach(), which is redundant
and makes reasoning about Py_INCREF and Py_DECREF very difficult.
- Because a promise holds an Own<Promise<...>>, when we perform any kind of
action on that promise (a_wait, then, ...), we have to explicitly move() the
ownership around. This leaves the original promise with a NULL pointer,
which we can easily detect as a cancelled promise.
Promises now only hold references to their 'parents' when strictly needed. This
should reduce memory pressure.
* Simplify and test the promise joining functionality
* Attach forgotten parent
* Catch exceptions in add_reader and friends
* Further cleanup of memory leaks
* Get rid of a_wait() in examples
* Cancel all fd read operations when the python asyncio loop is closed
* Formatting
* Remove support for capnp < 7000
* Bring asyncProvider.cpp more in line with upstream async-io-unix.c++
It was originally copied from the nodejs implementation, which in turn copied
it from async-io-unix.c++, but that copy is pretty old.
* Fix a bug that caused file descriptors to never be closed
* Implement AsyncIoStream based on Python transports and protocols
* Get rid of asyncProvider
All asyncio now goes through _AsyncIoStream
* Formatting
* Add __dict__ to PyAsyncIoStreamProtocol for python 3.7
* Reintroduce strange ipv4/ipv6 selection code to make ci happy
* Extra pause_reading()
* Work around more python bugs
* Be careful to only close transport when this is still possible
* Move pause_reading() workaround
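As referenced above, a sketch of the usage this integration enables. The
helper names (`capnp.run`, `capnp.AsyncIoStream.create_connection`,
`capnp.TwoPartyClient`) follow the pycapnp documentation for this rework;
treat them as assumptions if your version differs:

```python
import asyncio
import capnp

async def main():
    # The KJ event loop now runs inside asyncio's own loop: no polling.
    stream = await capnp.AsyncIoStream.create_connection(
        host="localhost", port=60000
    )
    client = capnp.TwoPartyClient(stream)
    # Promises implement __await__, so RPC calls compose with plain await
    # instead of the removed a_wait()/poll_forever() helpers, e.g.:
    #   cap = client.bootstrap().cast_as(my_schema.MyInterface)
    #   result = await cap.myMethod()

if __name__ == "__main__":
    asyncio.run(capnp.run(main()))
```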
* Use cibuildwheel in CI
`cibuildwheel` is a system that automatically compiles and repairs wheels for
many Python versions and architectures at once. This has some advantages over
the old situation:
- macOS wheels had inconsistent minimum versions ranging between 10.9 and
11.0. I'm not sure why this happens, but for some users this means they have
to build from source on macOS. With cibuildwheel, the build is
consistent, with 10.9 as the minimum for x86 and 11.0 for arm64.
- Consolidation between the packaging tests and manylinux tests.
- Addition of musllinux targets and additional cross-compilation to ppc64le and
s390x.
- With cibuildwheel, new python versions should be automatically picked up.
- Separation of the sdist build and lint checks. There is no reason to run
those that many times.
All possible build targets succeed, except for ARM64 on Windows, where the
upstream capnp build fails; I've disabled it.
The cross-compilation builds on linux are pretty slow. This could potentially be
sped up by separating the builds of manylinux and musllinux, but I'm not sure if
it's worth the extra complexity. (One can also contemplate disabling these
targets.)
Tests for macOS arm64 cannot be run (but also couldn't be run in the previous
system). This should be remedied once Apple silicon becomes available on the CI.
I've also added some commented-out code that can automatically take care of
uploading a build to PyPI when a release is created. One might contemplate
using this.
* Set CMAKE_OSX_ARCHITECTURES for arm64 and disable universal2
* Add Python 3.11 to the GitHub Actions, in the manylinux2014 build as well as
the packaging test.
I added a second image, x86_64, to the manylinux2014 build matrix; the i686
build didn't immediately pass, so I left it out.
* Move the bundled capnproto library forward to 0.10.3.
* Remove the 2010 build entirely, and update the 2014 build to include i686.
The example async server code uses timeouts around read() operations.
However, this has a race condition: data can be read, then the timeout
fires, and the data is lost.
These timeouts are not really needed in this example code, so I removed
them to prevent people from hitting strange issues with lost messages
and undefined RPC behavior when using the example code.
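An illustrative sketch of the racy pattern and its replacement, assuming an
asyncio-streams-based read loop (not a quote of the actual example code):

```python
import asyncio

async def pump(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    # Racy pattern (removed): if the timeout fires just as read() completes,
    # wait_for cancels the read and the data that was read is lost:
    #   data = await asyncio.wait_for(reader.read(4096), timeout=1.0)
    # Safer pattern: await the read directly. read() returns b"" at EOF,
    # so the loop still terminates when the peer disconnects.
    while True:
        data = await reader.read(4096)
        if not data:
            break
        writer.write(data)
        await writer.drain()
```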