SpaceCore lets you write numerical algorithms once, independently of the array backend.
For example, the same algorithm can run with NumPy for debugging, JAX for JIT/autodiff, and Torch for tensor workflows, while preserving the same mathematical spaces and linear operators.
Numerical algorithms often start as clear NumPy code and later need to move to JAX, Torch, or another array system. Without a backend boundary, that migration usually leaks through the whole implementation: array constructors, dtype handling, inner products, sparse support, and linear-operator conventions all become backend-specific.
SpaceCore keeps those choices in a `Context`, while algorithms work with mathematical objects:

- a `Space` knows the structure and geometry of its elements;
- a `LinOp` maps one space to another;
- backend-specific array creation and operations live behind `BackendOps`.
The result is ordinary Python code whose core numerical logic is not tied to one array library.
Mental model:

```
BackendOps -> Context -> Space/LinOp -> Algorithm
```
This gradient descent loop uses only the `Space` and `LinOp` APIs. It does not know whether the arrays are NumPy arrays, JAX arrays, or Torch tensors.
```python
import numpy as np

import spacecore as sc


def as_numpy(x):
    """Convert a NumPy, JAX, or Torch array to a plain NumPy array."""
    if hasattr(x, "detach"):  # Torch tensors must be detached and moved to CPU
        return x.detach().cpu().numpy()
    return np.asarray(x)


def make_problem(ctx):
    """Build spaces, operator, and data for a small least-squares problem."""
    X = sc.VectorSpace((3,), ctx)
    Y = sc.VectorSpace((2,), ctx)
    A = sc.DenseLinOp(
        ctx.asarray([[1.0, 2.0, 3.0], [0.0, 1.0, 0.0]]),
        dom=X,
        cod=Y,
        ctx=ctx,
    )
    x = ctx.asarray([1.0, 0.0, -1.0])
    b = ctx.asarray([0.5, 0.25])
    return X, Y, A, x, b


def gradient_step(X, A, x, b, eta):
    r = A.apply(x) - b            # residual A x - b
    grad = A.rapply(r)            # gradient of 0.5 * ||A x - b||^2 via the adjoint
    return X.axpy(-eta, grad, x)  # x - eta * grad


def run_gradient_descent(X, A, x, b, eta, steps):
    for _ in range(steps):
        x = gradient_step(X, A, x, b, eta)
    return x
```

Run it with NumPy:
```python
np_ctx = sc.Context(sc.NumpyOps(), dtype="float64")
X, Y, A, x, b = make_problem(np_ctx)
x_numpy = run_gradient_descent(X, A, x, b, eta=0.1, steps=5)
print(as_numpy(x_numpy))
```

Later, run the same problem and the same `run_gradient_descent` with JAX:
```python
import jax

jax.config.update("jax_enable_x64", True)

jax_ctx = sc.Context(sc.JaxOps(), dtype="float64")
X, Y, A, x, b = make_problem(jax_ctx)
x_jax = run_gradient_descent(X, A, x, b, eta=0.1, steps=5)
print(as_numpy(x_jax))
print(np.allclose(as_numpy(x_numpy), as_numpy(x_jax)))
```

Run it the same way with Torch:
```python
torch_ctx = sc.Context(sc.TorchOps(), dtype="float64")
X, Y, A, x, b = make_problem(torch_ctx)
x_torch = run_gradient_descent(X, A, x, b, eta=0.1, steps=5)
print(as_numpy(x_torch))
print(np.allclose(as_numpy(x_numpy), as_numpy(x_torch)))
```

All three backends produce the same result:
```
[ 1.184125 0.3411875 -0.447625 ]
[ 1.184125 0.3411875 -0.447625 ]
True
[ 1.184125 0.3411875 -0.447625 ]
True
```
If you do not want to enable JAX 64-bit mode, use a supported dtype such as `"float32"`.
SpaceCore is not an optimizer and not a NumPy/JAX/Torch replacement. It provides backend-aware spaces, operators, and context handling so you can write your own algorithms without wiring them to one array library.
A `Context` specifies how objects are represented:

- backend operations (`NumpyOps`, `JaxOps`, `TorchOps`, etc.);
- the default dtype;
- runtime validation behavior.
Constructors resolve contexts in priority order: an explicit `ctx=...`, then contexts inferred from inputs, then the global default context. Advanced code that needs this resolution step directly can call `spacecore.resolve_context_priority(...)`.
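A minimal sketch of that order, using only constructors shown earlier; the inferred and global-default tiers are described by the text above and only noted in comments here:

```python
import spacecore as sc

np_ctx = sc.Context(sc.NumpyOps(), dtype="float64")

# Tier 1: an explicit context always wins, as in make_problem above.
X = sc.VectorSpace((3,), np_ctx)

# Tier 2: with no explicit ctx, constructors fall back to a context
# inferred from their inputs (e.g. arrays created through a Context).
# Tier 3: failing that, the global default context is used.
```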
A `Space` describes the structure and geometry of values:

- `VectorSpace` for Euclidean vectors and tensors;
- `HermitianSpace` for Hermitian or symmetric matrices;
- `ProductSpace` for Cartesian products of spaces.
Algorithms should use space methods such as `zeros`, `add`, `scale`, `axpy`, `inner`, `norm`, `flatten`, and `unflatten` instead of hard-coding backend array operations.
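For instance, a few of these methods on the `X` and `x` from `make_problem` (a sketch: the `axpy(a, u, v) = a * u + v` convention follows `gradient_step` above, while the two-argument `inner` and one-argument `norm` signatures are assumptions):

```python
# Continues the NumPy run above (np_ctx, make_problem, as_numpy).
X, Y, A, x, b = make_problem(np_ctx)

v = X.zeros()          # backend-appropriate zero element of X
w = X.axpy(2.0, x, v)  # w = 2.0 * x + v, via the space, not the backend
print(as_numpy(X.inner(w, w)))  # inner product in X's geometry
print(as_numpy(X.norm(w)))      # induced norm
```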
A `LinOp` represents a linear operator between spaces:

- `DenseLinOp` for dense matrix or tensor operators;
- `SparseLinOp` for sparse operators;
- `BlockDiagonalLinOp` for block-diagonal product-space operators;
- `StackedLinOp` for operators from one space into a product space;
- `SumToSingleLinOp` for operators from a product space into one space.
Operators expose `apply` and `rapply`, so algorithms can use a linear map and its adjoint without depending on the storage format.
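For example, the adjoint pairing `<A x, y> == <x, A^* y>` can be sanity-checked on the problem built earlier (a sketch; the two-argument `inner` signature is an assumption):

```python
# Continues the NumPy run above (np_ctx, make_problem, as_numpy).
X, Y, A, x, b = make_problem(np_ctx)

lhs = Y.inner(A.apply(x), b)   # pair A x with b in the codomain Y
rhs = X.inner(x, A.rapply(b))  # pair x with the adjoint image of b in X
print(np.allclose(as_numpy(lhs), as_numpy(rhs)))  # True if rapply is the adjoint
```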
SpaceCore is aimed at people writing optimization, inverse-problem, optimal transport, semidefinite programming, or scientific ML algorithms that should not be tied to one backend.
It is most useful when you want the mathematical model to stay stable while the execution backend changes.
Base install:

```bash
pip install spacecore
```

With JAX support:

```bash
pip install "spacecore[jax]"
```

With PyTorch support:

```bash
pip install "spacecore[torch]"
```

- `spacecore[jax]` installs optional JAX support.
- GPU users should install the appropriate CUDA-enabled JAX build first, following the official JAX installation guide.
- `spacecore[torch]` installs optional PyTorch support for `torch.Tensor` backends.
- GPU users should install the appropriate CUDA-enabled PyTorch build first, following the official PyTorch installation guide.
For local development:

```bash
python -m pip install -e ".[dev]"
```

For a complete example of a regularized optimal transport problem, where the model is written once and solved with the NumPy and JAX backends, see the accompanying notebook.
The hosted documentation is available here.
The documentation website is built with Sphinx from `docs/source`.

Install the documentation dependencies:

```bash
python -m pip install -e ".[docs]"
```

Build the local HTML documentation:

```bash
sphinx-build -b html docs/source docs/build/html
```

SpaceCore is currently experimental and under active development. The public API may still evolve.
Apache License 2.0