This library provides a fiber abstraction intended as a basic building block for non-local control flow constructs such as async/await, yield-style generators, lightweight threads, and first-class continuations in C programs compiled to Wasm.
This library offers two backends:
- Stackless fibers, using Binaryen's Asyncify transform.
- Native stackful fibers, powered by the WasmFX instruction set.
Clone the repository:
git clone https://github.com/wasmfx/fiber-c.git
To compile the benchmark programs in examples from C to Wasm binaries, you will need:
- WASI-SDK 30.0
- The Wasm reference interpreter from the stack-switching proposal
- An up-to-date Binaryen
- A Wasm execution engine with support for the stack-switching instruction set, e.g. Wasmtime, V8 canary, or Wizard.
Install and build these packages according to their documentation. Make sure they are all installed under the same root directory,
then update the ROOT field of make.config accordingly.
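For illustration, assuming make.config uses plain Makefile-style variable assignments (check the file itself for the actual field names), the update might look like:

```make
# Hypothetical sketch -- consult make.config for the real variable names.
# ROOT points at the common install prefix shared by the dependencies above.
ROOT=/opt/wasm-tools
```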
Note: Instead of installing these dependencies manually, you can use the Docker container built from http://github.com/wasmfx/benchtainer and run compilation and benchmarking within that container. See the instructions there.
After this, you should be able to run
make all
All binaries and compilation artefacts are stored in the out directory.
To discard all existing artefacts, run
make clean
The benchmarking script bench.py can build, run, and time each benchmark. To get it working, you will need to:
- Install and update the engine dependencies. You will need to update the path to each engine in config.yml.
- Install hyperfine.
- Install python3 and the following dependencies for chart generation: pyyaml, matplotlib, numpy.
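As a rough sketch of the engine-path step, assuming config.yml maps engine names to binary paths (the key names below are guesses; consult the repository's config.yml for the actual schema):

```yaml
# Hypothetical layout -- check the shipped config.yml for the real keys.
engines:
  wasmtime: /opt/wasmtime/target/release/wasmtime
  d8: /opt/v8/out/x64.release/d8
```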
You can now invoke the benchmarking script by running:
# run all benchmarks on all engines by default
./bench.py
which generates benchmarking scripts into the /run-scripts directory, and saves the benchmarking results
into bench_results/results. Charts displaying the absolute and relative runtimes of the benchmarks across
the WasmFX and Asyncify backends are saved to bench_results/results/charts.
You can also configure the benchmarks and engines you want to use:
# run selected benchmarks and engines, saving results to `bench_results/results_dir`
./bench.py --benchmarks sieve1 itersum --engines d8 wasmtime -o results_dir
bench.py works by invoking build.py, which runs the Makefile and generates the scripts containing the commands for hyperfine to time (e.g. the scripts in the
/run-scripts directory). You can invoke build.py on its own to inspect these scripts without
starting the benchmarking process:
./build.py