Feature or enhancement
Proposal:
Summary:
Add a high-level asynchronous API to run Python functions in subinterpreters from asyncio, using InterpreterPoolExecutor. This enables CPU-bound tasks to execute in parallel without being limited by the GIL.
Motivation:
Currently, asyncio provides mechanisms for running code in threads (run_in_executor, to_thread) or processes (ProcessPoolExecutor), but there is no built-in high-level support for subinterpreters.
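Today this gap has to be bridged manually, for example along the following lines (a sketch assuming Python 3.14+, where concurrent.futures.InterpreterPoolExecutor is available; executor sizing and lifetime are entirely the caller's responsibility):

import asyncio
from concurrent.futures import InterpreterPoolExecutor

def cpu_task(x):  # must be a top-level function the worker interpreter can import
    return x * x

async def main():
    loop = asyncio.get_running_loop()
    # The executor has to be created, sized, and shut down by hand.
    with InterpreterPoolExecutor(max_workers=4) as executor:
        result = await loop.run_in_executor(executor, cpu_task, 10)
    print(result)

asyncio.run(main())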
Proposed API:
async def run_in_subinterpreter(func, /, *args, **kwargs):
    """
    Run a Python function in a subinterpreter asynchronously.

    This function executes the given callable `func` in a subinterpreter
    using `InterpreterPoolExecutor`. It chooses the number of
    subinterpreters based on the CPU count of the host system,
    defaulting to half the available cores if detectable.

    Args:
        func (Callable): The function to run in the subinterpreter.
        *args: Positional arguments to pass to `func`.
        **kwargs: Keyword arguments to pass to `func`.

    Returns:
        Any: The result returned by `func`.

    Raises:
        RuntimeError: If the CPU count cannot be determined and a default
            cannot be safely chosen.
    """

Details:
The function automatically determines a reasonable number of subinterpreters based on the system CPU count (defaulting to half the available cores).
Functions must be top-level module functions (no lambdas or locally defined functions), and their arguments and return values must be shareable across interpreters.
It provides a simple way to execute CPU-bound functions in parallel without manually managing an InterpreterPoolExecutor; a possible implementation is sketched below.
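A minimal sketch of how run_in_subinterpreter could be built on top of the existing pieces (illustration only, not a concrete implementation; it assumes Python 3.14+ for concurrent.futures.InterpreterPoolExecutor, uses a hypothetical lazily created module-level pool, and binds keyword arguments with functools.partial because run_in_executor only forwards positional arguments):

import asyncio
import functools
import os
from concurrent.futures import InterpreterPoolExecutor

# Hypothetical module-level pool, created lazily on first use.
_executor = None

def _get_executor():
    global _executor
    if _executor is None:
        cpu_count = os.cpu_count()
        if cpu_count is None:
            raise RuntimeError("CPU count cannot be determined")
        # Default to half the available cores, but keep at least one worker.
        _executor = InterpreterPoolExecutor(max_workers=max(1, cpu_count // 2))
    return _executor

async def run_in_subinterpreter(func, /, *args, **kwargs):
    loop = asyncio.get_running_loop()
    # run_in_executor only accepts positional arguments, so bind kwargs first.
    call = functools.partial(func, *args, **kwargs)
    return await loop.run_in_executor(_get_executor(), call)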
Example usage with the high-level API:
import asyncio
from asyncio import run_in_subinterpreter

def cpu_task(x):
    return x * x

async def main():
    result = await run_in_subinterpreter(cpu_task, 10)
    # or
    result = await asyncio.run_in_subinterpreter(cpu_task, 10)
    print(result)

asyncio.run(main())

Impact:
This API would provide a high-level, asyncio-friendly way to leverage subinterpreters for real parallelism in CPU-bound code, filling a current gap in Python’s concurrency model.
Has this already been discussed elsewhere?
This is a minor feature that does not require prior discussion elsewhere.
Links to previous discussion of this feature:
No response