feat(raiko): enable kzg blob check #148
Conversation
Looks good, though the building complications are indeed unfortunate. I got it to compile with risc0, but for some reason I cannot get the SP1 guest to compile; looks like we need that nightly toolchain. For building with extra options, it seems we can do that using #133 and/or a Makefile like in the zkevm circuits: https://github.com/privacy-scaling-explorations/zkevm-circuits/blob/main/Makefile so we can just run simple make targets.
For SP1, I need to use `+nightly` to avoid this error:
I did not mention that because I thought it was common knowledge for building SP1, and it looks irrelevant to my change, no?
Where did you download the riscv toolchain? I can create an install.sh within this PR and fix CI.
Run
I've already copied the correct toolchain file from EACH repo, and this script copies and pastes them when run for the different guest targets.
Nice! I guess you can leave it in the comment for now, and after the pipeline in #133 is merged I can easily add this CC flag to the R0 builder.
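For context, here is a rough sketch of what wiring that CC flag into the R0 builder could look like once the #133 pipeline exists. The env-var name, target triple, and paths are assumptions based on the `cc` crate's conventions, not the actual pipeline code:

```rust
use std::process::Command;

/// Hypothetical builder step: point the `cc` crate at the prebuilt riscv GCC so
/// that c-kzg's C sources cross-compile for the risc0 guest target.
fn build_risc0_guest(guest_dir: &str, riscv_gcc_path: &str) -> std::io::Result<()> {
    let status = Command::new("cargo")
        .current_dir(guest_dir)
        // The `cc` crate looks up CC_<target> (dashes replaced by underscores)
        // when cross-compiling C dependencies.
        .env("CC_riscv32im_risc0_zkvm_elf", riscv_gcc_path)
        .args(["build", "--release"])
        .status()?;
    if status.success() {
        Ok(())
    } else {
        Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            "risc0 guest build failed",
        ))
    }
}
```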
I just downloaded it and did not build it, and it worked okay for me to build the risc0 guest. No need to build it, I think, because only the headers are used?
He's talking about https://github.com/stnolting/riscv-gcc-prebuilt I think, the thing we need to download now to be able to build c-kzg.
Yeah, we need to run that download.
Yes, we need to add this download to the current build script or the future compile pipeline.
Keeping the built ELF in the repo still seems error-prone; I can double-check tomorrow, and maybe some help is needed from the SP1 team @CeciliaZ030
I think this is because of
This should be fixed when #133 is merged.
I think it was SP1's issue (probably due to no_std, based on the error messages). Yesterday I could not compile, but today I ran sp1up again and compilation works. Now to compile, use
BTW: please use the latest build script.
I rarely see a toolchain managed this way, and using scripts to manage toolchain files isn't a suitable approach. If we have lots of toolchain versions, maybe we can manage them another way, e.g. using the cargo command line with a toolchain flag in the Makefile as Brechtpd said: `cargo +nightly-xx-xx build ...` Please see rust-lang/rustup#3546. Or we can identify a minimal compatible version, e.g. some nightly-xx suitable for both build targets.

BTW, perhaps we can split zkVMs and SGX into different repos. Each target has its own approach to building; SP1, for example, has a build config in the repo's root, and this configuration can be used in various complex situations:

```json
{
  "entrypoints": [
    {
      "name": "step",
      "preset": "circomx",
      "baseDir": ".",
      "buildCommand": "yarn install && yarn ncc build src/step.ts -o build && node build/index.js build",
      "proveCommand": "node build/index.js prove"
    },
    {
      "name": "rotate",
      "preset": "circomx",
      "baseDir": ".",
      "buildCommand": "yarn install && yarn ncc build src/rotate.ts -o build && node build/index.js build",
      "proveCommand": "node build/index.js prove"
    }
  ]
}
```

So we don't need to convert SP1's config to our build scripts. Typically, zkVMs and SGX operate in entirely different environments; we don't need to build all of them at the same time. I think separating them can reduce management complexity and facilitate development, and of course they can share some libraries; let's make sure they do.
Another issue is that we don't need any build features to split the different targets.
Can we still keep a monorepo? Otherwise I think we'll directly end up with four separate repos, which is quite cumbersome if PRs need to cover all 4 (like a trait change).
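To illustrate the monorepo option: the prover backends can sit behind Cargo features and share a single trait, so a trait change stays within one repo. A hypothetical sketch (the trait and type names are illustrative, not raiko's actual API):

```rust
/// Shared input type for all backends (illustrative only).
pub struct ProofRequest {
    pub block_number: u64,
    pub tx_data: Vec<u8>,
}

/// One trait that every prover backend implements.
pub trait Prover {
    fn prove(&self, req: &ProofRequest) -> Result<Vec<u8>, String>;
}

#[cfg(feature = "risc0")]
pub mod risc0 {
    use super::{ProofRequest, Prover};

    pub struct Risc0Prover;

    impl Prover for Risc0Prover {
        fn prove(&self, _req: &ProofRequest) -> Result<Vec<u8>, String> {
            // Would run the risc0 guest ELF here.
            Ok(Vec::new())
        }
    }
}

#[cfg(feature = "sp1")]
pub mod sp1 {
    use super::{ProofRequest, Prover};

    pub struct Sp1Prover;

    impl Prover for Sp1Prover {
        fn prove(&self, _req: &ProofRequest) -> Result<Vec<u8>, String> {
            // Would run the SP1 guest ELF here.
            Ok(Vec::new())
        }
    }
}
```

Building with `--features risc0` or `--features sp1` then selects a backend without splitting the repo.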
CI passed with some modifications to the scripts.
Ah yeah this approach seems nice and simple!
The fear was that SP1 required a very specific compiler even for the host, so if any of the other prover backends also needed a very specific compiler version we would be in trouble, and it would not be possible to compile all of them inside a single binary. Things seem less finicky now (though the latest SP1 again requires some unstable features, I think); hopefully it stays that way.
I would also prefer keeping everything in the same repo, because otherwise we'll have lots of version management, multiple PRs, etc. I think a bit more complexity on the build side will be worth it.
Simplicity above all. 🫡
I think @johntaiko has a good point! My toolchain.sh was really a hack: the CI looks for a toolchain file to install Rust, so I had to copy and paste a different one for each guest's CI process, but for sure there's a better way.
Commits:
- update path for chmod default.json
- Added step install nodejs
- feat(raiko): enable kzg blob check (taikoxyz#148): update build script and CI, fix CI for the sgx docker build and risc0, update CI env
- feat(raiko): generalized build pipeline for ZkVMs guests (taikoxyz#133): pipeline crate with risc0/SP1 builders and drivers, test harness, Makefile, CI updates, CC support for c-kzg, download & verify riscv-gcc-prebuilt, README updates
- fix(lib): temporarily disable kzg check in sgx/sp1 provers (taikoxyz#157)
- fast on-chain register + change path before clone repo
- Update README_Docker_and_RA.md (#145): this command does not need $
- Streamline FMSPC retrieval instructions (#124)
- fix sgx ckzg issue (#162)
- revert unstable config change
Calculating the KZG commitment from the input tx_data in guest code to make it part of the verification (a rough sketch of such a check follows after the list below). The change seems simple enough, but two tricky things are involved:
1. The reference C libraries that c-kzg uses have no riscv ABI build; see risc0/risc0#443. This also means a riscv toolchain is needed locally (the whole build can be moved into a preconfigured Docker env to save some effort, but local debugging still requires the toolchain).
2. Can we make a copy of the riscv toolchain somewhere and install it (like `cargo install taiko-riscv`) before compiling? @johntaiko, then maybe there is no other repo dependency.
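For reference, a minimal sketch of what such a guest-side check could look like, assuming the `c-kzg` crate's `Blob`/`KzgCommitment` API and the `sha2` crate for the EIP-4844 versioned hash; the function names, signatures, and how the blob and expected hash reach the guest are illustrative assumptions, not raiko's actual code:

```rust
use c_kzg::{Blob, KzgCommitment, KzgSettings};
use sha2::{Digest, Sha256};

/// EIP-4844 version byte for KZG versioned hashes.
const VERSIONED_HASH_VERSION_KZG: u8 = 0x01;

/// Versioned hash = sha256(commitment) with the first byte replaced by 0x01.
fn kzg_to_versioned_hash(commitment_bytes: &[u8]) -> [u8; 32] {
    let mut hash: [u8; 32] = Sha256::digest(commitment_bytes).into();
    hash[0] = VERSIONED_HASH_VERSION_KZG;
    hash
}

/// Hypothetical guest-side check: `blob_bytes` is the blob assembled from the
/// input tx_data, `expected_versioned_hash` is the value committed on chain.
fn check_blob_kzg(
    blob_bytes: &[u8],
    expected_versioned_hash: &[u8; 32],
    settings: &KzgSettings,
) -> bool {
    // Parse the raw bytes into a blob (fails if the length is not 4096 * 32).
    let Ok(blob) = Blob::from_bytes(blob_bytes) else {
        return false;
    };
    // Recompute the KZG commitment inside the guest.
    let Ok(commitment) = KzgCommitment::blob_to_kzg_commitment(&blob, settings) else {
        return false;
    };
    // Accessor names may differ slightly between c-kzg versions.
    let commitment_bytes = commitment.to_bytes();
    &kzg_to_versioned_hash(commitment_bytes.as_slice()) == expected_versioned_hash
}
```

The commitment computation is the only piece that needs c-kzg's C code, which is what forces the riscv cross-compilation discussed in point 1.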