- Stop adding the directory of every .capnp file to the import path. If a .capnp
file wants to import a file in its own directory, it should use a relative
import. Fixes #278
- Stop using /usr/include/capnp as an import path. This is incorrect. It should
only be /usr/include.
- Stop allowing additional paths to be specified for magic imports. This leads
to inconsistencies. More specifically, the way that a nested import like
`ma.mb.mc_capnp` gets imported by Python is to first import `ma`, then import
`ma.mb`, and finally `ma.mb.mc_capnp`. Pycapnp's magic importing is only
involved in the last step, so any additional paths specified don't work for
nested imports. It is very confusing to only have this for non-nested imports.
Users with folder layouts that don't follow Python's import paths can still use
`capnp.load(.., .., imports=[blah])`, as sketched below.
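For illustration, a minimal sketch of the two import styles (file names and paths here are hypothetical): the magic import goes through Python's normal import machinery, while `capnp.load()` takes explicit import paths.
```
import capnp  # installs the *_capnp magic import hook

# Magic import: addressbook.capnp must be findable through Python's normal
# import machinery (e.g. it sits next to this module on sys.path).
import addressbook_capnp

# Explicit load: folder layouts that don't follow Python's import paths can
# still pass extra search directories here.
other_capnp = capnp.load("schemas/other.capnp", imports=["/usr/include"])
```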
When a server method is cancelled, but it nonetheless raises an exception (other
than `CancelledError`), this exception cannot be reported to the caller (because
it has cancelled that call).
The only place where it can go is to the asyncio exception handler...
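A hedged illustration of the scenario (not pycapnp code): once the call has been cancelled, an exception raised afterwards has no caller left to report to.
```
import asyncio

async def server_method(context):      # illustrative server method
    try:
        await asyncio.sleep(3600)      # the caller cancels while we wait
    except asyncio.CancelledError:
        # Cleanup goes wrong: this RuntimeError cannot be delivered to the
        # caller (the call is already cancelled), so the only remaining
        # destination is the asyncio exception handler.
        raise RuntimeError("cleanup failed") from None
```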
- The `KjException._to_python()` function neglected to check if the wrapper was
set when attempting to convert to `AttributeError`, leading to exceptions while
raising an exception.
- The syntax `raise A, B, C` hasn't existed since Python 3. The only reason it
works is that Cython still supports it. Let's get rid of it.
- There was an attempt to convert a certain kind of `KjException` to an
`AttributeError`. However, the original exception remains in the context when
the new exception is raised. This is confusing. We get rid of the original
exception by doing `raise e._to_python() from None`.
See the test for an explanation.
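A minimal illustration of the mechanism in plain Python (not the pycapnp internals): `from None` suppresses the implicit exception context, so only the converted exception shows up in the traceback.
```
def to_attribute_error(exc):
    # Stand-in for KjException._to_python(); purely illustrative.
    return AttributeError(str(exc))

try:
    raise KeyError("no such field")
except KeyError as e:
    # Without "from None" the traceback would also print the original
    # KeyError under "During handling of the above exception ...".
    raise to_attribute_error(e) from None
```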
Note that I'm not sure what the purpose of `_setDynamicFieldWithField` and
`_setDynamicFieldStatic` is. They do not appear to be used. I've kept them for
now (they are a public API), but perhaps they can be removed.
I'm using Pycapnp in a project, where we compile `.capnp` files directly to
Cython instead of using the dynamic interface (for speed). For this, we need
access to the `reraise_kj_exception` C function defined by Pycapnp. This is not
possible, because Cython does not automatically make this function available to
downstream users.
My previous solution, in #301, was rather flawed. The file `capabilityHelper.cpp`, where
`reraise_kj_exception` is defined, was bundled into the distribution, so that
this file could be included in downstream libraries. This turns out to be a
terrible idea, because it redefines a bunch of other things like
`ReadPromiseAdapter`. For reasons not entirely clear to me, this leads to
segmentation faults. This PR reverts #301.
Instead, in this PR I've made `reraise_kj_exception` a Cython-level function
that can be used by downstream libraries. The C-level variant has been renamed
to `c_reraise_kj_exception`.
This was already fixed in c9bea05f44, but the fix does not seem to work.
This commit uses a set union, which should be more robust. It also adds
a couple of assertions to verify that it indeed works.
In the last commit touching this line, a ')' was put in the wrong place, leading to errors like this one:
```
File "capnp/lib/capnp.pyx", line 2172, in capnp.lib.capnp._DynamicCapabilityClient.__dir__
TypeError: unsupported operand type(s) for +: 'set' and 'tuple'
```
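The fix boils down to building the result with a set union instead of concatenating a set and a tuple. A sketch of the shape of the fixed `__dir__`, assuming the RPC method names come from `self.schema.method_names_inherited` (the exact attribute may differ in the real code):
```
def __dir__(self):
    # Combine class attributes and the capability's RPC method names using
    # sets only, so no set + tuple concatenation can occur.
    return list(set(dir(self.__class__)) | set(self.schema.method_names_inherited))
```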
In its current form, when a server callback throws an exception, it is
completely swallowed. Only when the asyncio loop is being shut down might one
possibly see that error. On top of that, the connection is never closed, causing
any clients to hang and leaking memory in the server.
This is a proposed fix that reports the exception to the asyncio exception
handler. It also makes sure that the connection is always closed, even if the
callback doesn't close it explicitly.
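A rough sketch of the proposed behavior (the wrapper name `_serve` is illustrative, not the actual pycapnp code): report the exception through `loop.call_exception_handler()` and close the connection in a `finally` block.
```
import asyncio

async def _serve(callback, connection):
    try:
        await callback(connection)
    except Exception as exc:
        # The caller is gone, so the only place left to report the error is
        # the asyncio exception handler.
        asyncio.get_running_loop().call_exception_handler({
            "message": "unhandled exception in pycapnp server callback",
            "exception": exc,
        })
    finally:
        connection.close()  # always close, so clients don't hang forever
```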
Note that the design of AsyncIoStream is directly based on the design of
Python's asyncio streams: https://docs.python.org/3/library/asyncio-stream.html
These streams appear to have exactly the same flaw. I've reported this here:
https://github.com/python/cpython/issues/110894. Since I don't really know what
I'm doing, it might be worth seeing what kind of solution they come up with and
modeling our solution after theirs.
Logic bug: We are looping over segments sent by the C++ library and sending them
over a Python transport. If the last message is larger than the transport pause
threshold, this causes the transport to pause us. In that case, we forget to
increment the current write_index, causing us to retransmit the same message in
an infinite loop.
This is a serious bug, because it causes messages to become corrupted.
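An illustrative sketch of the loop shape and the fix (not the actual pycapnp code): the index has to be advanced before bailing out on a paused transport, otherwise the same segment is written again after every resume.
```
def _write_segments(self):
    while self.write_index < len(self.segments):
        self.transport.write(self.segments[self.write_index])
        self.write_index += 1   # the fix: advance even if write() paused us
        if self.paused:         # pause_writing() fired during write()
            return              # resume_writing() continues with the next segment
```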
* Update documentation to async code (#331)
This commit updates the documentation for the latest changes introduced
in pycapnp 2.0.0.
* Remove non-existent classes/functions from the reference documentation
* Adapt the quickstart to the latest changes, mainly the new RPC handling,
which is now done exclusively through asyncio.
* DOC: Add a section about sending and receiving messages over a socket
Since #313 it has been possible to read and write messages over a socket.
This commit adds a small section on reading and writing to the quickstart.
See haata/pycapnp#1 for a discussion. The cause of this bug is still unknown to
me. But it likely has been fixed in Python 3.10. For some crazy reason, you can
just keep retrying the offending call, and the attribute will magically
'reappear'.
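For reference, the workaround amounts to something like this hedged sketch (the helper name and retry count are arbitrary):
```
def getattr_with_retry(obj, name, attempts=5):
    # Work around the spurious AttributeError seen before Python 3.10:
    # simply retrying the lookup makes the attribute "reappear".
    for _ in range(attempts - 1):
        try:
            return getattr(obj, name)
        except AttributeError:
            continue
    return getattr(obj, name)  # final attempt; let the error propagate
```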
* Add capnp_api.h to gitignore
* Change type of read_min_bytes from size to int
Not sure why this was not causing issues before or if that
is the right fix ... but it seems to be fine :)
* Adapt python_requires to >=3.8
This was overlooked when 3.7 was deprecated. The CI no longer
works with Python 3.7, and cibuildwheel uses python_requires ...
* Replace deprecated find_module with find_spec (importlib)
find_module was deprecated in Python 3.4 and removed in Python 3.12
(https://docs.python.org/3.12/whatsnew/3.12.html#importlib).
The replacement is find_spec, which only required a few adaptations
(see the sketch below).
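For context, the modern hook shape looks roughly like this (the class names are illustrative, not pycapnp's actual finder): `find_spec` returns a `ModuleSpec` instead of the old `find_module`/`load_module` pair.
```
import importlib.abc
import importlib.util

class CapnpFinder(importlib.abc.MetaPathFinder):
    def find_spec(self, fullname, path, target=None):
        if not fullname.endswith("_capnp"):
            return None  # not a magic capnp import; defer to the other finders
        return importlib.util.spec_from_loader(fullname, CapnpLoader(fullname))

class CapnpLoader(importlib.abc.Loader):
    def __init__(self, fullname):
        self.fullname = fullname

    def create_module(self, spec):
        return None  # use Python's default module object

    def exec_module(self, module):
        pass  # here the matching .capnp schema would be located and compiled
```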
- Update CHANGELOG.md
- Update to bundled capnproto-1.0.1
* Compiles with capnproto-0.8.0 and higher
- *Breaking Change* Remove allow_cancellation (see
https://capnproto.org/news/2023-07-28-capnproto-1.0.html)
* This is tricky to handle for older versions of capnproto. Instead of
dealing with lots of complications, it has been removed entirely.
- Fix some documentation after the build backend support was added
- Update tox.ini to support 3.8 to 3.12
- Update cibuildwheel to 2.16.1
* Adds Python 3.12 support and implicitly deprecates EOL 3.7 (though it's
still built)
Cap'n Proto provides a schema loader, which can be used to dynamically
load schemas at runtime. To port this functionality to pycapnp,
two new classes are provided: `C_SchemaLoader`, which exposes the Cap'n
Proto C++ interface, and `SchemaLoader`, which is part of the pycapnp
library.
The specific use case for this is when a capnp message contains
a Node.Reader: the schema for a yet-unseen message can be loaded
dynamically, allowing the future message to be properly processed.
If the message is a struct containing other structs, all the schemas for
every struct must be loaded to correctly parse the message. See
https://github.com/DaneSlattery/capnp_generic_poc for a
proof-of-concept.
* Add docs and cleanup
* Add more docs
* Reduce changes
* Fix flake8 formatting
* Fix get datatype
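A hedged usage sketch of the new loader, assuming the Python wrapper mirrors the C++ `SchemaLoader` interface (`load` takes a schema.Node reader, `get` looks up by node id); the exact method names may differ:
```
import capnp

loader = capnp.SchemaLoader()             # assumed constructor name
for node in schema_nodes_from_the_wire:   # schema.Node readers received at runtime
    loader.load(node)                     # register every nested struct's node too

struct_schema = loader.get(root_type_id).as_struct()  # look up the root by node id
```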
Python 3.7 seems to have trouble deallocating objects in a timely fashion. We
rely on timely deallocation, because the C++ destructors need to run before the
KJ event loop is closed. Hence, we do it manually.
* Integrate the KJ event loop into Python's asyncio event loop
Fix #256
This PR attempts to remove the slow and expensive polling behavior for asyncio
in favor of proper linking of the KJ event loop to the asyncio event loop.
* Don't memcopy buffer
* Improve promise cancellation and prepare for timer implementation
* Add attribution for asyncProvider.cpp
* Implement timeout
* Cleanup
* First round of simplifications
* Add more a_wait functions and a shutdown function
* Fix edge-cases with loop shutdown
* Clean up calculator examples
* Cleanup
* Cleanup
* Reformat
* Fix warnings
* Reformat again
* Compatibility with macos
* Inline the asyncio loop in some places where this is feasible
* Add todo
* Fix
* Remove synchronous wait
* Wrap fd listening callbacks in a class
* Remove poll_forever
* Remove the thread-local/thread-global optimization
This will not matter much soon anyway, and simplifies things
* Share promise code by using fused types
* Improve refcounting of python objects in promises
We replace many instances of PyObject* by Own<PyRefCounter> for more automatic
reference management.
* Code wrapPyFunc in a similar way to wrapPyFuncNoArg
* Refactor capabilityHelper, fix several memory bugs for promises and add __await__
* Improve promise ownership, reduce memory leaks
Promise wrappers now hold a Own<Promise<Own<PyRefCounter>>> object. This might
seem like excessive nesting of objects (which to some degree it is, but with
good reason):
- The outer Own is needed because Cython cannot allocate objects without a
nullary constructor on the stack (Promise doesn't have a nullary constructor).
Additionally, I believe it would be difficult or impossible to detect when a
promise is cancelled/moved if we use a bare Promise.
- Every promise returns a Owned PyRefCounter. PyRefCounter makes sure that a
reference to the returned object keeps existing until the promise is fulfilled
or cancelled. Previously, this was attempted using attach, which is redundant
and makes reasoning about Py_INCREF and Py_DECREF very difficult.
- Because a promise holds a Own<Promise<...>>, when we perform any kind of
action on that promise (a_wait, then, ...), we have to explicitly move() the
ownership around. This will leave the original promise with a NULL-pointer,
which we can easily detect as a cancelled promise.
Promises now only hold references to their 'parents' when strictly needed. This
should reduce memory pressure.
* Simplify and test the promise joining functionality
* Attach forgotten parent
* Catch exceptions in add_reader and friends
* Further cleanup of memory leaks
* Get rid of a_wait() in examples
* Cancel all fd read operations when the python asyncio loop is closed
* Formatting
* Remove support for capnp < 7000
* Bring asyncProvider.cpp more in line with upstream async-io-unix.c++
It was originally copied from the nodejs implementation, which in turn was
copied from async-io-unix.c++. But that copy is pretty old.
* Fix a bug that caused file descriptors to never be closed
* Implement AsyncIoStream based on Python transports and protocols
* Get rid of asyncProvider
All asyncio now goes through _AsyncIoStream
* Formatting
* Add __dict__ to PyAsyncIoStreamProtocol for python 3.7
* Reintroduce strange ipv4/ipv6 selection code to make ci happy
* Extra pause_reading()
* Work around more python bugs
* Be careful to only close transport when this is still possible
* Move pause_reading() workaround
This patch fixes a problem of reading random values for reader options
in pycapnp. The code which adds a task to the list captures 'opts' by
reference, and that causes a problem when 'opts' is allocated on
the caller's stack. By the time the task is handled, the stack frame
holding 'opts' is gone, which leaves a dangling reference to 'opts' in
the lambda's captures. As a result, pycapnp reads random values for reader
options, which sometimes causes unexpected errors (for example an error
that the nesting level is negative).
There is no test coverage for these exception clauses;
however, the invocation of obj.__str__() for client
objects could raise any exception, hence the very broad
exception catch.
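Illustratively, the guarded call looks like this hedged sketch (the fallback text is made up):
```
try:
    text = obj.__str__()
except Exception:
    # A client object may be disconnected or otherwise broken, so any
    # exception can surface here; fall back to a generic description.
    text = "<capability client>"
```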