This fixes build issues for installations that cannot use the wheels published
on PyPI, and should also fix issue #364.
Python build tools like `python -m build` first build a source distribution
(sdist) and then use it (and only it) to build a binary wheel. This breaks if
one of the build scripts is missing from the sdist, which was the case prior
to this patch.
- Stop adding the directory of every .capnp file to the import path. If a .capnp
file wants to import a file in its own directory, it should use a relative
import. Fixes #278
- Stop using /usr/include/capnp as an import path. This is incorrect. It should
only be /usr/include.
- Stop allowing additional paths to be specified for magic imports. This leads
to inconsistencies. More specifically, the way a nested import like
`ma.mb.mc_capnp` gets imported by Python is to first import `ma`, then import
`ma.mb`, and finally `ma.mb.mc_capnp`. Pycapnp's magic importing is only
involved in the last step, so any additional paths specified don't work for
nested imports, and it is very confusing to support this only for non-nested
imports. Users with folder layouts that don't follow Python's import paths can
still use `capnp.load(.., .., imports=[blah])`, as sketched below.
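A minimal sketch of that explicit-load alternative (the schema path and import
directory below are placeholders, not part of this change):

```python
import capnp

# Load a schema explicitly instead of relying on magic imports; pass the
# directories that contain any imported .capnp files via `imports`.
addressbook_capnp = capnp.load(
    "schemas/addressbook.capnp",
    imports=["/path/to/shared/schemas"],
)
person = addressbook_capnp.Person.new_message()
```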
When a server method is cancelled but nonetheless raises an exception (other
than `CancelledError`), that exception cannot be reported to the caller,
because the caller has already cancelled the call.
The only place it can go is the asyncio exception handler...
- The `KjException._to_python()` function neglected to check whether the wrapper
was set when attempting to convert to `AttributeError`, leading to an exception
being raised while raising an exception.
- The syntax `raise A, B, C` hasn't existed since Python 3. The only reason it
works is that Cython still supports it. Let's get rid of it.
- There was an attempt to convert a certain kind of `KjException` to an
`AttributeError`. However, the original exception remains in the context when
the new exception is raised. This is confusing. We get rid of the original
exception by doing `raise e._to_python() from None`.
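A minimal sketch of the effect, using a stand-in exception instead of the real
`KjException`:

```python
# Without `from None`, Python would also report the original exception with
# "During handling of the above exception, another exception occurred".
try:
    raise KeyError("no such field")          # stands in for the KjException
except KeyError:
    raise AttributeError("no such field") from None
```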
See the test for an explanation.
Note that I'm not sure what the purpose of `_setDynamicFieldWithField` and
`_setDynamicFieldStatic` is. They do not appear to be used. I've kept them for
now (they are a public API), but perhaps they can be removed.
I'm using Pycapnp in a project, where we compile `.capnp` files directly to
Cython instead of using the dynamic interface (for speed). For this, we need
access to the `reraise_kj_exception` C function defined by Pycapnp. This is not
possible, because Cython does not automatically make this function available to
downstream users.
My previous solution, in #301, was rather flawed. The file `capabilityHelper.cpp`, where
`reraise_kj_exception` is defined, was bundled into the distribution, so that
this file could be included in downstream libraries. This turned out to be a
terrible idea, because it redefines a bunch of other things, like
`ReadPromiseAdapter`. For reasons not entirely clear to me, this leads to
segmentation faults. This PR reverts #301.
Instead, in this PR I've made `reraise_kj_exception` a Cython-level function
that can be used by downstream libraries. The C-level variant has been renamed
to `c_reraise_kj_exception`.
This was already fixed in c9bea05f44, but the fix does not seem to work.
This commit uses a set union, which should be more robust. It also adds
a couple of assertions to verify that it indeed works.
In the last commit touching this line, a ')' was put in the wrong place, leading to errors like this one:
```
File "capnp/lib/capnp.pyx", line 2172, in capnp.lib.capnp._DynamicCapabilityClient.__dir__
TypeError: unsupported operand type(s) for +: 'set' and 'tuple'
```
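A minimal sketch of the intended shape of that `__dir__` (the class and schema
attribute names are illustrative, not pycapnp's exact internals):

```python
class _DynamicCapabilityClientSketch:
    def __dir__(self):
        # Build the result from a set union instead of concatenating a set
        # with a tuple, then hand back the list Python expects.
        return list(set(super().__dir__()) | set(self.schema.method_names))
```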
Apparently GitHub has added ninja to all of their runners now. This
causes the Windows build to fail. This is expected, because we add the
architecture as a compiler argument, which ninja does not understand.
Even with this, the build fails.
This commit disables ninja on Windows for now. Once we have fixed the
underlying issue with ninja and Windows, we can re-enable it.
In its current form, when a server callback throws an exception, it is
completely swallowed. Only when the asyncio loop is shut down might one
possibly see that error. On top of that, the connection is never closed,
causing any clients to hang and a memory leak in the server.
This is a proposed fix that reports the exception to the asyncio exception
handler. It also makes sure that the connection is always closed, even if the
callback doesn't close it explicitly.
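A minimal sketch of the idea, assuming a `callback` coroutine and a
`connection` object with a `close()` method (not pycapnp's exact code):

```python
import asyncio

async def _serve(callback, connection):
    try:
        await callback(connection)
    except Exception as exc:
        # Report the error to the asyncio exception handler instead of
        # swallowing it.
        asyncio.get_running_loop().call_exception_handler({
            "message": "server callback raised an exception",
            "exception": exc,
        })
    finally:
        # Always close the connection, even if the callback forgot to,
        # so clients don't hang and the server doesn't leak memory.
        connection.close()
```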
Note that the design of AsyncIoStream is directly based on the design of
Python's asyncio streams: https://docs.python.org/3/library/asyncio-stream.html
These streams appear to have exactly the same flaw. I've reported this here:
https://github.com/python/cpython/issues/110894. Since I don't really know what
I'm doing, it might be worth seeing what kind of solution they might come up
with and model our solution after theirs.
Logic bug: we loop over the segments produced by the C++ library and send them
over a Python transport. If the last message is larger than the transport's
pause threshold, the transport pauses us. In that case we forget to increment
the current write_index, causing us to retransmit the same message in an
infinite loop.
This is a serious bug, because it causes messages to become corrupted.
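A minimal sketch of the corrected loop, written against an
`asyncio.StreamWriter` for illustration (the real code writes to a
transport/protocol pair):

```python
async def write_segments(segments, writer):
    write_index = 0
    while write_index < len(segments):
        writer.write(segments[write_index])
        # Advance *before* drain() can suspend us; otherwise a paused
        # transport would make us resend the same segment forever.
        write_index += 1
        await writer.drain()
```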
* Update documentation for async code (#331)
This commit updates the documentation to the latest changes added
with pycapnp 2.0.0.
* Remove non-existing classes/functions from the reference documentation
* Adapt the quickstart to the latest changes, mainly the new RPC handling,
which is now done exclusively through asyncio.
* DOC: Add a section about sending and receiving messages over a socket
Since #313 it has been possible to read and write messages over a socket.
This commit adds a small section on reading and writing to the quickstart.
See haata/pycapnp#1 for a discussion. The cause of this bug is still unknown to
me, but it has likely been fixed in Python 3.10. For some crazy reason, you can
just keep retrying the offending call, and the attribute will magically
'reappear'.
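A hedged sketch of that retry workaround (the helper is hypothetical, not part
of pycapnp):

```python
def retry_attribute(obj, name, attempts=10):
    """Work around the flaky AttributeError by simply retrying the lookup."""
    for _ in range(attempts):
        try:
            return getattr(obj, name)
        except AttributeError:
            continue
    raise AttributeError(name)
```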
* Add capnp_api.h to .gitignore
* Change the type of read_min_bytes from size to int
Not sure why this was not causing issues before, or whether this is the
right fix ... but it seems to be fine :)
* Adapt python_requires to >=3.8
This was overlooked when 3.7 was deprecated. The CI no longer
works with Python 3.7, and cibuildwheel uses python_requires ...
* Replace deprecated find_module with find_spec (importlib)
find_module was deprecated in Python 3.4 and removed in Python 3.12
(https://docs.python.org/3.12/whatsnew/3.12.html#importlib).
The replacement is find_spec, which only required a few adaptations.
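A minimal sketch of the shape of the change (the finder and loader here are
illustrative, not pycapnp's actual importer):

```python
import importlib.abc
import importlib.util


class SchemaFinder(importlib.abc.MetaPathFinder):
    def __init__(self, loader):
        self.loader = loader  # an importlib.abc.Loader that compiles .capnp files

    # Old hook, removed in Python 3.12:
    #   def find_module(self, fullname, package_path=None): ...
    # New hook:
    def find_spec(self, fullname, path, target=None):
        if not fullname.endswith("_capnp"):
            return None  # not a schema module; let other finders handle it
        return importlib.util.spec_from_loader(fullname, self.loader)
```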
- Update CHANGELOG.md
- Update to bundled capnproto-1.0.1
* Compiles with capnproto-0.8.0 and higher
- *Breaking Change* Remove allow_cancellation (see
https://capnproto.org/news/2023-07-28-capnproto-1.0.html)
* This is tricky to handle for older versions of capnproto. Instead of
dealing with lots of complication, it has been removed entirely.
- Fix some documentation after the build backend support was added
- Update tox.ini to support 3.8 to 3.12
- Update cibuildwheel to 2.16.1
* Adds Python 3.12 support and implicitly deprecates EOL 3.7 (though it's
still built)
* Pin cython to below version 3
Cython 3 includes backwards-incompatible changes, so it is no longer
possible to install pycapnp from source with it.
* Add py311 environment
I'm not sure if this is necessary, but 3.11 is out so might as well?
Cap'n Proto provides a schema loader, which can be used to dynamically
load schemas at runtime. To port this functionality to pycapnp, two new
classes are provided: `C_SchemaLoader`, which exposes the Cap'n Proto C++
interface, and `SchemaLoader`, which is part of the pycapnp library.
The specific use case is a capnp message that contains a `Node.Reader`:
the schema for a yet-unseen message can be loaded dynamically, allowing
that future message to be properly processed. If the message is a struct
containing other structs, the schemas for all of the structs must be
loaded to correctly parse the message. See
https://github.com/DaneSlattery/capnp_generic_poc for a
proof-of-concept.
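A hedged usage sketch: the `load()` and `get()` calls mirror the C++
`capnp::SchemaLoader` interface and may differ slightly in pycapnp, and the
message field carrying the `Node.Reader` is hypothetical:

```python
import capnp

loader = capnp.SchemaLoader()

def handle(msg):
    # `msg.schema_node` is a hypothetical field carrying a schema Node.Reader
    # for a struct type we have not seen before.
    loader.load(msg.schema_node)           # register the schema node
    return loader.get(msg.schema_node.id)  # look it up later by node id
```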
Add docs and cleanup
Add more docs
Reduce changes
Fix flake8 formatting
Fix get datatype
All of these tests also exist in test_capability.py. The only difference is the
way the .capnp file is loaded. But that could be tested with much less code.
Python 3.7 seems to have trouble deallocating objects in a timely fashion. We
rely on timely deallocation, because the C++ destructors need to run before the
KJ event loop is closed. Hence, we do it manually.
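A minimal sketch of the manual cleanup (the helper is illustrative):

```python
import gc

def drop_wrapped_objects(objs):
    # Explicitly drop the references and collect, so the wrapped C++ objects'
    # destructors run before the KJ event loop is closed.
    objs.clear()
    gc.collect()
```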