pycapnp/examples/async_ssl_calculator_client.py

#!/usr/bin/env python3
import argparse
import asyncio
import os
import ssl
import socket
import capnp
import calculator_capnp
this_dir = os.path.dirname(os.path.abspath(__file__))


class PowerFunction(calculator_capnp.Calculator.Function.Server):
"""An implementation of the Function interface wrapping pow(). Note that
we're implementing this on the client side and will pass a reference to
the server. The server will then be able to make calls back to the client."""
async def call(self, params, **kwargs):
"""Note the **kwargs. This is very necessary to include, since
protocols can add parameters over time. Also, by default, a _context
variable is passed to all server methods, but you can also return
results directly as python objects, and they'll be added to the
results struct in the correct order"""
        return pow(params[0], params[1])
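To see why the `**kwargs` matters independently of capnp, here is a plain-Python sketch (the class name, the `precision` argument, and the caller are our own illustration, not part of pycapnp): a newer protocol version may pass parameters that an older implementation does not know about, and `**kwargs` absorbs them.

```python
import asyncio


class PowerFunctionSketch:
    """Plain-Python stand-in for the capnp server class above (no capnp needed)."""

    async def call(self, params, **kwargs):
        # Parameters added by newer protocol versions land in **kwargs
        # and are safely ignored by this older implementation.
        return pow(params[0], params[1])


# A caller from a "newer" protocol passes an extra argument; the call still works.
result = asyncio.run(PowerFunctionSketch().call([2, 10], precision="double"))
```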


def parse_args():
    parser = argparse.ArgumentParser(
        description="Connects to the Calculator server at the given address and does some RPCs"
    )
    parser.add_argument("host", help="HOST:PORT")
    return parser.parse_args()


async def main(host):
    addr, port = host.split(":")
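Note that a bare `host.split(":")` breaks on IPv6 literals such as `[::1]:6000`, which contain colons of their own. A more defensive parse (the helper name is ours, not part of the example) could look like:

```python
def split_host_port(host):
    # Split on the *last* colon so IPv6 literals keep their inner colons,
    # then strip the surrounding brackets if present.
    addr, _, port = host.rpartition(":")
    return addr.strip("[]"), int(port)
```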
    # Set up the SSL context
    ctx = ssl.create_default_context(
        ssl.Purpose.SERVER_AUTH, cafile=os.path.join(this_dir, "selfsigned.cert")
    )

    # Handle both the IPv4 and IPv6 cases
    try:
        print("Try IPv4")
        stream = await capnp.AsyncIoStream.create_connection(
            addr, port, ssl=ctx, family=socket.AF_INET
        )
    except Exception:
        print("Try IPv6")
        stream = await capnp.AsyncIoStream.create_connection(
            addr, port, ssl=ctx, family=socket.AF_INET6
        )
    client = capnp.TwoPartyClient(stream)

    # Bootstrap the Calculator interface
    calculator = client.bootstrap().cast_as(calculator_capnp.Calculator)
"""Make a request that just evaluates the literal value 123.
What's interesting here is that evaluate() returns a "Value", which is
another interface and therefore points back to an object living on the
server. We then have to call read() on that object to read it.
However, even though we are making two RPC's, this block executes in
*one* network round trip because of promise pipelining: we do not wait
for the first call to complete before we send the second call to the
server."""
print("Evaluating a literal... ", end="")
# Make the request. Note we are using the shorter function form (instead
# of evaluate_request), and we are passing a dictionary that represents a
# struct and its member to evaluate
eval_promise = calculator.evaluate({"literal": 123})
    # This is equivalent to:
    """
    request = calculator.evaluate_request()
    request.expression.literal = 123

    # Send it, which returns a promise for the result (without blocking).
    eval_promise = request.send()
    """
    # Using the promise, create a pipelined request to call read() on the
    # returned object. Note that here we are using the shortened method call
    # syntax read(), which is mostly just sugar for read_request().send()
    read_promise = eval_promise.value.read()

    # Now that we've sent all the requests, wait for the response. Until this
    # point, we haven't waited at all!
    response = await read_promise
    assert response.value == 123
    print("PASS")
"""Make a request to evaluate 123 + 45 - 67.
The Calculator interface requires that we first call getOperator() to
get the addition and subtraction functions, then call evaluate() to use
them. But, once again, we can get both functions, call evaluate(), and
then read() the result -- four RPCs -- in the time of *one* network
round trip, because of promise pipelining."""
print("Using add and subtract... ", end="")
# Get the "add" function from the server.
add = calculator.getOperator(op="add").func
# Get the "subtract" function from the server.
subtract = calculator.getOperator(op="subtract").func
    # Build the request to evaluate 123 + 45 - 67. Note the form is 'evaluate'
    # + '_request', where 'evaluate' is the name of the method we want to call
    request = calculator.evaluate_request()
    subtract_call = request.expression.init("call")
    subtract_call.function = subtract
    subtract_params = subtract_call.init("params", 2)
    subtract_params[1].literal = 67.0

    add_call = subtract_params[0].init("call")
    add_call.function = add
    add_params = add_call.init("params", 2)
    add_params[0].literal = 123
    add_params[1].literal = 45

    # Send the evaluate() request, read() the result, and wait for read() to finish.
    eval_promise = request.send()
    read_promise = eval_promise.value.read()
    response = await read_promise
    assert response.value == 101
    print("PASS")
"""
Note: a one liner version of building the previous request (I highly
recommend not doing it this way for such a complicated structure, but I
just wanted to demonstrate it is possible to set all of the fields with a
dictionary):
eval_promise = calculator.evaluate(
{'call': {'function': subtract,
'params': [{'call': {'function': add,
'params': [{'literal': 123},
{'literal': 45}]}},
{'literal': 67.0}]}})
"""
"""Make a request to evaluate 4 * 6, then use the result in two more
requests that add 3 and 5.
Since evaluate() returns its result wrapped in a `Value`, we can pass
that `Value` back to the server in subsequent requests before the first
`evaluate()` has actually returned. Thus, this example again does only
one network round trip."""
print("Pipelining eval() calls... ", end="")
# Get the "add" function from the server.
add = calculator.getOperator(op="add").func
# Get the "multiply" function from the server.
multiply = calculator.getOperator(op="multiply").func
    # Build the request to evaluate 4 * 6
    request = calculator.evaluate_request()
    multiply_call = request.expression.init("call")
    multiply_call.function = multiply
    multiply_params = multiply_call.init("params", 2)
    multiply_params[0].literal = 4
    multiply_params[1].literal = 6
    multiply_result = request.send().value

    # Use the result in two calls that add 3 and add 5.
    add_3_request = calculator.evaluate_request()
    add_3_call = add_3_request.expression.init("call")
    add_3_call.function = add
    add_3_params = add_3_call.init("params", 2)
    add_3_params[0].previousResult = multiply_result
    add_3_params[1].literal = 3
    add_3_promise = add_3_request.send().value.read()

    add_5_request = calculator.evaluate_request()
    add_5_call = add_5_request.expression.init("call")
    add_5_call.function = add
    add_5_params = add_5_call.init("params", 2)
    add_5_params[0].previousResult = multiply_result
    add_5_params[1].literal = 5
    add_5_promise = add_5_request.send().value.read()

    # Now wait for the results.
assert (await add_3_promise).value == 27
assert (await add_5_promise).value == 29
print("PASS")
"""Our calculator interface supports defining functions. Here we use it
to define two functions and then make calls to them as follows:
f(x, y) = x * 100 + y
    g(x) = f(x, x + 1) * 2
f(12, 34)
g(21)
Once again, the whole thing takes only one network round trip."""
print("Defining functions... ", end="")
# Get the "add" function from the server.
add = calculator.getOperator(op="add").func
# Get the "multiply" function from the server.
multiply = calculator.getOperator(op="multiply").func
# Define f.
request = calculator.defFunction_request()
request.paramCount = 2
# Build the function body.
add_call = request.body.init("call")
add_call.function = add
add_params = add_call.init("params", 2)
add_params[1].parameter = 1 # y
multiply_call = add_params[0].init("call")
multiply_call.function = multiply
multiply_params = multiply_call.init("params", 2)
multiply_params[0].parameter = 0 # x
multiply_params[1].literal = 100
f = request.send().func
# Define g.
request = calculator.defFunction_request()
request.paramCount = 1
# Build the function body.
multiply_call = request.body.init("call")
multiply_call.function = multiply
multiply_params = multiply_call.init("params", 2)
multiply_params[1].literal = 2
f_call = multiply_params[0].init("call")
f_call.function = f
f_params = f_call.init("params", 2)
f_params[0].parameter = 0
add_call = f_params[1].init("call")
add_call.function = add
add_params = add_call.init("params", 2)
add_params[0].parameter = 0
add_params[1].literal = 1
g = request.send().func
# OK, we've defined all our functions. Now create our eval requests.
# f(12, 34)
f_eval_request = calculator.evaluate_request()
f_call = f_eval_request.expression.init("call")
f_call.function = f
f_params = f_call.init("params", 2)
f_params[0].literal = 12
f_params[1].literal = 34
f_eval_promise = f_eval_request.send().value.read()
# g(21)
g_eval_request = calculator.evaluate_request()
g_call = g_eval_request.expression.init("call")
g_call.function = g
g_call.init("params", 1)[0].literal = 21
g_eval_promise = g_eval_request.send().value.read()
# Wait for the results.
assert (await f_eval_promise).value == 1234
assert (await g_eval_promise).value == 4244
print("PASS")
"""Make a request that will call back to a function defined locally.
Specifically, we will compute 2^(4 + 5). However, exponent is not
defined by the Calculator server. So, we'll implement the Function
interface locally and pass it to the server for it to use when
evaluating the expression.
This example requires two network round trips to complete, because the
server calls back to the client once before finishing. In this
particular case, this could potentially be optimized by using a tail
call on the server side -- see CallContext::tailCall(). However, to
keep the example simpler, we haven't implemented this optimization in
the sample server."""
print("Using a callback... ", end="")
# Get the "add" function from the server.
add = calculator.getOperator(op="add").func
# Build the eval request for 2^(4+5).
request = calculator.evaluate_request()
pow_call = request.expression.init("call")
pow_call.function = PowerFunction()
pow_params = pow_call.init("params", 2)
pow_params[0].literal = 2
add_call = pow_params[1].init("call")
add_call.function = add
add_params = add_call.init("params", 2)
add_params[0].literal = 4
add_params[1].literal = 5
# Send the request and wait.
response = await request.send().value.read()
assert response.value == 512
print("PASS")
if __name__ == "__main__":
# Using asyncio.run hits an asyncio ssl bug
# https://bugs.python.org/issue36709
    # asyncio.run(main(parse_args().host), debug=True)
loop = asyncio.get_event_loop()
loop.run_until_complete(capnp.run(main(parse_args().host)))