Batch Sealing with SupraSeal
This page explains how to set up the SupraSeal batch sealer in Curio.
Disclaimer: SupraSeal batch sealing is currently in BETA. Use with caution and expect potential issues or changes in future versions. Currently some additional manual system configuration is required.
SupraSeal is an optimized batch sealing implementation for Filecoin that allows sealing multiple sectors in parallel. It can significantly improve sealing throughput compared to sealing sectors individually.
Key Features
Seals multiple sectors (up to 128) in a single batch
Up to 16x better core utilisation efficiency
Optimized to utilize CPU and GPU resources efficiently
Uses raw NVMe devices for layer storage instead of RAM
Requirements
CPU with at least 4 cores per CCX (AMD) or equivalent
NVMe drives with high IOPS (10-20M total IOPS recommended)
GPU for PC2 phase (NVIDIA RTX 3090 or better recommended)
1GB hugepages configured (minimum 36 pages)
Ubuntu 22.04 or compatible Linux distribution (gcc-11 required, doesn't need to be system-wide)
At least 256GB RAM, ALL MEMORY CHANNELS POPULATED
Without all memory channels populated, sealing performance will suffer drastically
NUMA-Per-Socket (NPS) set to 1
Storage Recommendations
You need 2 sets of NVMe drives:
Drives for layers:
Total 10-20M IOPS
Capacity for 11 layers x 32 GiB x batchSize x pipelines
Raw unformatted block devices (SPDK will take them over)
Each drive should be able to sustain ~2GiB/s of writes
This requirement isn't well understood yet; lower write rates may be fine. More testing is needed.
Drives for P2 output:
With a filesystem
Fast with sufficient capacity (~70G x batchSize x pipelines)
Can be remote storage if fast enough (~500MiB/s/GPU)
Hardware Recommendations
Currently, the community is trying to determine the best hardware configurations for batch sealing. Some general observations are:
Single socket systems will be easier to use at full capacity
You want a lot of NVMe slots; on PCIe Gen4 platforms with large batch sizes you may use 20-24 3.84TB NVMe drives
In general you'll want to make sure all memory channels are populated
You need 4-8 physical cores (not threads) for batch-wide compute; in addition, on each CCX you'll lose one core to a "coordinator"
Each thread computes 2 sectors
On Zen 2 and earlier, hashers compute only one sector per thread
Large (many-core) CCX-es are typically better
Please consider contributing to the SupraSeal hardware examples.
Benchmark NVMe IOPS
Before proceeding with further configuration, benchmark the raw NVMe IOPS to verify that the IOPS requirements are met.
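One way to get a rough number, sketched here with fio rather than any Curio-specific tool; the device name, job count, and queue depth below are placeholders to adapt to your system:

```bash
# Hypothetical example: measure 4K random-read IOPS on one raw NVMe device.
# Repeat per drive (or list several with --filename=/dev/nvme0n1:/dev/nvme1n1)
# and sum the reported IOPS across all drives.
sudo fio --name=nvme-iops \
  --filename=/dev/nvme0n1 \
  --ioengine=libaio --direct=1 \
  --rw=randread --bs=4k \
  --iodepth=128 --numjobs=4 \
  --time_based --runtime=30 \
  --group_reporting
```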
Ideally, the devices should deliver more than 10M IOPS in total across all drives.
Setup
Dependencies
CUDA 12.x is required; 11.x won't work. The build process depends on GCC 11.x either system-wide or with gcc-11/g++-11 installed locally.
On Arch, install gcc11 from the AUR: https://aur.archlinux.org/packages/gcc11
Ubuntu 22.04 has GCC 11.x by default
On newer Ubuntu releases, install the gcc-11 and g++-11 packages.
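For example, on an apt-based system:

```bash
sudo apt update
sudo apt install -y gcc-11 g++-11
```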
For calibnet, build Curio with batch sealing support for the calibration network.
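A sketch of the build step, assuming the Curio Makefile exposes dedicated batch targets for the SupraSeal build (verify the exact target names against the Curio repository you are building from):

```bash
# Assumed target names - check the Makefile in your Curio checkout before use.
make clean batch-calibnet   # calibration-network build with batch sealing support
# make clean batch          # mainnet build with batch sealing support
```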
The build should be run on the target machine. Binaries won't be portable between CPU generations due to different AVX512 support.
Configuration
Run curio calc batch-cpu on the target machine to determine the batch sizes supported by your CPU.
Create a new layer configuration for the batch sealer, e.g. batch-machine1:
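A minimal sketch of what such a layer might contain, assuming the batch sealer is enabled via an EnableBatchSeal subsystem flag (verify the field name against the configuration reference for your Curio version); the layer can then be added through the Curio web UI or the curio config CLI:

```bash
# Hypothetical layer contents; the field name is an assumption - verify before use.
cat > batch-machine1.toml <<'EOF'
[Subsystems]
EnableBatchSeal = true
EOF
# Add batch-machine1.toml as a new configuration layer via the web UI or `curio config`.
```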
Configure hugepages:
The batch sealer needs 36 1G hugepages. One way to allocate them is at boot time, by adding kernel parameters to /etc/default/grub.
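For example (standard kernel hugepage parameters, appended to whatever is already set in GRUB_CMDLINE_LINUX_DEFAULT):

```bash
# /etc/default/grub - keep your existing parameters and add the hugepage settings.
GRUB_CMDLINE_LINUX_DEFAULT="... default_hugepagesz=1G hugepagesz=1G hugepages=36"
```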
Then run sudo update-grub and reboot the machine.
Or at runtime:
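For example, via the standard sysfs interface (allocation can fail if memory is already fragmented, in which case the boot-time parameters above are more reliable):

```bash
# Allocate 36 x 1G hugepages at runtime.
sudo sh -c 'echo 36 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages'
```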
Then check /proc/meminfo to verify the hugepages are available:
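```bash
grep Huge /proc/meminfo
```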
Expect output along these lines (other fields and exact values will vary):
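```
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
FileHugePages:         0 kB
HugePages_Total:      36
HugePages_Free:       36
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
Hugetlb:        37748736 kB
```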
Check that HugePages_Free is equal to 36; the kernel can sometimes use some of the hugepages for other purposes.
Set up NVMe devices for SPDK:
This is only needed while batch sealing is in beta; future versions of Curio will handle this automatically.
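A sketch of binding the drives to SPDK with its standard setup script; the SPDK path is a placeholder (use the SPDK tree that was fetched/built for your Curio SupraSeal build) and the PCI addresses are examples:

```bash
# Placeholder path and example PCI addresses - adjust to your system.
cd /path/to/spdk
# Claim only the drives intended for layer storage.
sudo env NRHUGE=36 PCI_ALLOWED="0000:01:00.0 0000:02:00.0" ./scripts/setup.sh
```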
PC2 output storage
Attach scratch space storage for the PC2 output (the batch sealer needs ~70GiB per sector in the batch: 32GiB for the sealed sector and ~36GiB for the cache directory with TreeC/TreeR and aux files)
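One possible way to do this, sketched with an assumed CLI invocation (verify the exact subcommand, flags, and path against the Curio storage documentation before use):

```bash
# Hypothetical flags and placeholder path - verify against your Curio version.
curio cli storage attach --init --seal /fast-scratch/pc2-out
```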
Usage
Start the Curio node with the batch sealer layer
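For example, assuming the layer created earlier is named batch-machine1:

```bash
curio run --layers batch-machine1
```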
Add a batch of CC sectors:
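A sketch of queueing sectors from the CLI; the actor address and sector count are placeholders, and the exact flags may differ between Curio versions (check `curio seal start --help`):

```bash
# f01234 is a placeholder miner actor ID; 32 is an example batch size.
curio seal start --now --cc --count 32 --actor f01234 --layers batch-machine1
```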
Monitor progress - you should see a "Batch..." task running in the Curio GUI
PC1 will take 3.5-5 hours, followed by PC2 on GPU
After batch completion, the storage will be released for the next batch
Optimization
Balance batch size, CPU cores, and NVMe drives to keep PC1 running constantly
Ensure sufficient GPU capacity to complete PC2 before next PC1 batch finishes
Monitor CPU, GPU and NVMe utilization to identify bottlenecks
Monitor hasher core utilisation
Troubleshooting
Node doesn't start / isn't visible in the UI
Ensure hugepages are configured correctly
Check NVMe device IOPS and capacity
If SPDK setup fails, try running wipefs -a on the NVMe devices (this will wipe partitions from the devices, be careful!)
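For example (the device name is a placeholder; make sure you are targeting a drive intended for layer storage):

```bash
# Destroys all filesystem/partition signatures on the device - double-check the target!
sudo wipefs -a /dev/nvme0n1
```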
Performance issues
You can monitor performance by looking at "hasher" core utilisation in e.g. htop.
To identify hasher cores, run curio calc supraseal-config --batch-size 128 (with the correct batch size) and look for the coordinator entries.
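The generated configuration lists a coordinator core and a hasher count per group; the exact layout may differ between versions, but the relevant part looks roughly like this (values shown here are only for illustration):

```
coordinators = (
  { core = 59; hashers = 8; },
  { core = 64; hashers = 14; },
  { core = 72; hashers = 14; },
  { core = 80; hashers = 14; },
  { core = 88; hashers = 14; }
)
```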
In this example, cores 59, 64, 72, 80, and 88 are "coordinators", and each hasher core runs two hasher threads, meaning that
In the first group, core 59 is the coordinator and cores 60-63 are hashers (4 hasher cores / 8 hasher threads)
In the second group, core 64 is the coordinator and cores 65-71 are hashers (7 hasher cores / 14 hasher threads)
And so on
Coordinator cores will usually sit at 100% utilisation. Hasher threads should also sit at 100% utilisation; anything less indicates a bottleneck in the system, such as insufficient NVMe IOPS, insufficient memory bandwidth, or an incorrect NUMA setup.
To troubleshoot:
Read the requirements at the top of this page very carefully
Validate GPU setup if PC2 is slow
Review logs for any errors during batch processing
Slower than expected NVMe speed
If the NVMe benchmark shows lower than expected IOPS, you can try formatting the NVMe devices with SPDK:
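A sketch using SPDK's interactive nvme_manage example tool (the path is a placeholder and assumes the SPDK examples were built; the drives must already be bound to SPDK):

```bash
# Placeholder path - point at the SPDK build on this machine.
cd /path/to/spdk
sudo ./build/examples/nvme_manage
```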
Follow the interactive menus to format each device, then re-run the NVMe IOPS benchmark; you may see a noticeable improvement in performance.