
feat(raiko): enable kzg blob check #148

Merged · 8 commits · May 4, 2024
Conversation

smtmfft
Contributor

@smtmfft smtmfft commented May 2, 2024

Calculating the kzg commitment from the input tx_data in guest code to make it part of the verification. The change seems simple enough, but 2 tricky things are involved:

  1. These 2 environment variables are needed to compile:
export CC=gcc
export CC_riscv32im_risc0_zkvm_elf=/opt/riscv/bin/riscv32-unknown-elf-gcc 

This is because the reference libs c-kzg uses have no riscv ABI build, see: risc0/risc0#443. It also means a riscv toolchain is needed locally (the whole build can be moved into a preconfigured docker env to save some effort, but local debugging still requires the toolchain).
Can we make a copy of the riscv toolchain somewhere and install it (like cargo install taiko-riscv) before the compile? @johntaiko, then maybe there is no other repo dependency.

  2. The other trick is renaming the optimized c-kzg to c-kzg-taiko to avoid a troublesome native lib conflict. An alternative would be to make all 3rd-party libs link to that one, but the current approach seems easier. Optimized c-kzg lib ref: ethereum/c-kzg-4844@main...smtmfft:c-kzg-4844:for-alpha7

@Brechtpd
Contributor

Brechtpd commented May 2, 2024

Looks good; the build complications are indeed unfortunate.

I got it to compile with risc0, but for some reason I cannot get the SP1 guest to compile using cargo build --features sp1 anymore, so I haven't been able to get that working.

Looks like we need that install.sh script, so we can install/download everything necessary to build all the provers without having to read the README and follow the instructions. Downloading the riscv toolchain to some folder seems easy enough. This install.sh can then also be used by the CI to set things up, so we can be sure it works and keeps working.
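A minimal install.sh along these lines could look like the sketch below. The release tag, archive name, and install prefix are assumptions (check the actual assets on the stnolting/riscv-gcc-prebuilt releases page before using); the download itself is left commented out so the sketch is side-effect free:

```shell
#!/usr/bin/env sh
# Sketch of an install.sh for the riscv toolchain c-kzg needs.
# TOOLCHAIN_TAG and ARCHIVE are assumptions -- verify them against the
# release assets of stnolting/riscv-gcc-prebuilt before running for real.
set -eu

TOOLCHAIN_TAG="rv32i-4.0.0"
ARCHIVE="riscv32-unknown-elf.gcc-12.1.0.tar.gz"
TOOLCHAIN_URL="https://github.com/stnolting/riscv-gcc-prebuilt/releases/download/${TOOLCHAIN_TAG}/${ARCHIVE}"
INSTALL_DIR="/opt/riscv"

echo "would fetch ${TOOLCHAIN_URL} into ${INSTALL_DIR}"
# mkdir -p "${INSTALL_DIR}"
# curl -L "${TOOLCHAIN_URL}" | tar -xz -C "${INSTALL_DIR}"

# The two variables from the PR description then point at the result:
# export CC=gcc
# export CC_riscv32im_risc0_zkvm_elf="${INSTALL_DIR}/bin/riscv32-unknown-elf-gcc"
```

The same script can then be invoked from CI, so the setup path that developers use locally is the one that is continuously exercised.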

For building with extra options, it seems we can do that using #133 and/or a makefile like in the zkevm circuits: https://github.com/privacy-scaling-explorations/zkevm-circuits/blob/main/Makefile. That way we can just do things like make risc0 and the makefile will do everything needed to make that happen. The makefile seems like a nice-to-have in general: a single place for all the kinds of things we will want to do, usable by everybody (including the CI).

@smtmfft
Contributor Author

smtmfft commented May 2, 2024

For SP1, I need to use +nightly to avoid this error:

error[E0554]: `#![feature]` may not be used on the stable release channel
  --> /home/ubuntu/.cargo/git/checkouts/sp1-20c98843a1ffc860/12c841b/core/src/lib.rs:15:1
   |
15 | #![feature(generic_const_exprs)]
   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

I did not mention that because I thought it was common knowledge for building SP1, since it looks unrelated to my change, no?
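One way to make the `+nightly` requirement explicit rather than tribal knowledge is a tiny wrapper; this is a sketch, and the plain `nightly` channel plus the dated nightly shown in the usage line are assumptions, not values tested against SP1:

```shell
#!/usr/bin/env sh
# Sketch: build the SP1 guest on a nightly toolchain without changing the
# default toolchain. SP1 needs nightly because of
# #![feature(generic_const_exprs)]; the channel names below are assumptions.
sp1_build_cmd() {
    toolchain="${1:-nightly}"
    printf 'cargo +%s build --features sp1\n' "${toolchain}"
}

sp1_build_cmd                     # prints: cargo +nightly build --features sp1
sp1_build_cmd nightly-2024-04-23  # pin a dated nightly for reproducibility
```

Running the printed command through `rustup` requires the chosen channel to be installed first (`rustup toolchain install nightly`).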

@smtmfft
Contributor Author

smtmfft commented May 2, 2024

Where did you download the riscv toolchain? I can create an install.sh within this PR and fix the CI.
Mine comes from the source code repo and needs make/install, which is too complex for this simple requirement.

@CeciliaZ030
Contributor

CeciliaZ030 commented May 2, 2024

Run ./toolchain.sh sp1 or ./toolchain.sh risc0 before building any guests; it automatically sets the right toolchain:
https://github.com/taikoxyz/raiko/blob/13ca16a920a091bb38eba348e235b86baf459d0e/toolchain.sh

Can we make a copy of the riscv toolchain somewhere and install it (like cargo install taiko-riscv) before the compile? @johntaiko, then maybe there is no other repo dependency.

I've already copied the correct toolchain file from EACH repo, and this script copies and pastes them when running against different guest targets.

For SP1, I need to use +nightly to avoid this error:

error[E0554]: `#![feature]` may not be used on the stable release channel
  --> /home/ubuntu/.cargo/git/checkouts/sp1-20c98843a1ffc860/12c841b/core/src/lib.rs:15:1
   |
15 | #![feature(generic_const_exprs)]
   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

I did not mention that because I thought it was common knowledge for building SP1, since it looks unrelated to my change, no?

@CeciliaZ030
Contributor

export CC=gcc
export CC_riscv32im_risc0_zkvm_elf=/opt/riscv/bin/riscv32-unknown-elf-gcc 

Nice! I guess you can leave it in the comment for now, and after the Pipeline in #133 is merged I can easily add this CC flag to the R0 builder.

@Brechtpd
Contributor

Brechtpd commented May 2, 2024

Where did you download the riscv toolchain? I can create an install.sh within this PR and fix the CI. Mine comes from the source code repo and needs make/install, which is too complex for this simple requirement.

I just downloaded it and did not build it, and that worked okay for building the risc0 guest. No need to build it, I think, because only the headers are used?

@Brechtpd
Contributor

Brechtpd commented May 2, 2024

I've already copied the correct toolchain file from EACH repo, and this script copies and pastes them when running against different guest targets.

He's talking about https://github.com/stnolting/riscv-gcc-prebuilt I think, the thing we need to download now to be able to build c-kzg.

I did not mention that because I thought it was common knowledge for building SP1, since it looks unrelated to my change, no?

Yeah, you need to run ./toolchain.sh sp1 before it. But my problem is that the SP1 guest binary simply does not get built. The compilation is successful, but the guest ELF is not generated. If you already have an ELF it might look okay because the existing ELF is used, but I deleted my ELF to make sure it got rebuilt, and it doesn't get regenerated for me when doing cargo build --features sp1; using cargo prove in the sp1 guest folder doesn't seem to pick up the necessary extra flags.

@smtmfft
Contributor Author

smtmfft commented May 2, 2024

He's talking about https://github.com/stnolting/riscv-gcc-prebuilt I think, the thing we need to download now to be able to build c-kzg.

Yes, we need to add this download to the current build script or the future compile pipeline.

Yeah, you need to run ./toolchain.sh sp1 before it. But my problem is that the SP1 guest binary simply does not get built. The compilation is successful, but the guest ELF is not generated. If you already have an ELF it might look okay because the existing ELF is used, but I deleted my ELF to make sure it got rebuilt, and it doesn't get regenerated for me when doing cargo build --features sp1; using cargo prove in the sp1 guest folder doesn't seem to pick up the necessary extra flags.

It seems keeping the built ELF in the repo is still error-prone. I can double-check tomorrow, and maybe some help is needed from the SP1 folks @CeciliaZ030

@CeciliaZ030
Contributor

But my problem is that the SP1 guest binary simply does not get built. The compilation is successful, but the guest ELF is not generated. If you already have an ELF it might look okay because the existing ELF is used, but I deleted my ELF to make sure it got rebuilt

I think this is because of rerun_if_changed, which gets invoked in build.rs. It compiles because the build script and the main binary get compiled, but it detects that the guest folder is unchanged, so it doesn't rebuild the guest ELF. Try adding some extra lines in the guest code and running again.
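The staleness rule behind `cargo:rerun-if-changed` can be sketched in plain shell: a cached output is reused while it is newer than the watched input, and touching the input (or adding a line to the guest code, as suggested above) forces the rebuild. The file names here are purely illustrative:

```shell
#!/usr/bin/env sh
# Demo of mtime-based change detection, the mechanism behind
# `cargo:rerun-if-changed`. guest.rs stands in for the watched guest source,
# guest.elf for the cached build output; both names are made up.
set -eu
dir=$(mktemp -d)

touch "${dir}/guest.rs"    # watched input
sleep 1
touch "${dir}/guest.elf"   # cached output, now newer than the input

needs_rebuild() {
    [ "$1" -nt "$2" ]      # rebuild only if input is newer than output
}

first=$(needs_rebuild "${dir}/guest.rs" "${dir}/guest.elf" && echo rebuild || echo skip)
sleep 1
touch "${dir}/guest.rs"    # simulate editing the guest code
second=$(needs_rebuild "${dir}/guest.rs" "${dir}/guest.elf" && echo rebuild || echo skip)

echo "${first} ${second}"  # prints: skip rebuild
rm -rf "${dir}"
```

This is why a freshly deleted ELF reappears only after the guest sources change: cargo sees an unchanged watched path and skips the guest build step entirely.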

It seems keeping the built ELF in the repo is still error-prone. I can double-check tomorrow, and maybe some help is needed from the SP1 folks @CeciliaZ030

this should be fixed when #133 is merged

@smtmfft
Contributor Author

smtmfft commented May 3, 2024

Yeah, you need to run ./toolchain.sh sp1 before it. But my problem is that the SP1 guest binary simply does not get built. The compilation is successful, but the guest ELF is not generated. If you already have an ELF it might look okay because the existing ELF is used, but I deleted my ELF to make sure it got rebuilt, and it doesn't get regenerated for me when doing cargo build --features sp1; using cargo prove in the sp1 guest folder doesn't seem to pick up the necessary extra flags.

I think it was an SP1 issue (I guess due to no_std, based on the error messages). Yesterday I could not compile, but today I ran sp1up again and compilation works. Now, to compile, use

source toolchain.sh sp1  # using `source` to keep the exports

BTW: please use the latest build script.

@johntaiko
Member

johntaiko commented May 4, 2024

I rarely see a toolchain managed this way, and using scripts to manage toolchain files isn't a suitable approach. If we have lots of toolchain versions, maybe we can use another approach to manage them, such as using the cargo command line with a toolchain flag in a Makefile, as Brechtpd said:

cargo +nightly-xx-xx build ..

Please see rust-lang/rustup#3546

Or we can identify a minimal compatible version, i.e. some nightly-xx suitable for both build targets.

BTW, perhaps we can split the zkVMs and SGX into different repos. Each target has its own way to build; SP1, for example, has a build config in the repo's root, and this configuration can be used in various complex situations:
https://github.com/succinctlabs/telepathyx/blob/main/succinct.json

{
    "entrypoints": [
        {
            "name": "step",
            "preset": "circomx",
            "baseDir": ".",
            "buildCommand": "yarn install && yarn ncc build src/step.ts -o build && node build/index.js build",
            "proveCommand": "node build/index.js prove"
        },
        {
            "name": "rotate",
            "preset": "circomx",
            "baseDir": ".",
            "buildCommand": "yarn install && yarn ncc build src/rotate.ts -o build && node build/index.js build",
            "proveCommand": "node build/index.js prove"
        }
    ]
}

So we don't need to convert SP1's config to our build scripts. Typically, zkVMs and SGX operate in entirely different environments, and we don't need to build all of them at the same time. I think separating them can reduce management complexity and facilitate development; of course, they can still share some libraries, like lib and primitives.

Let's ensure KISS. What do you all think? @smtmfft @Brechtpd @CeciliaZ030

@johntaiko
Member

johntaiko commented May 4, 2024

Another issue: we don't need any build features to split the different targets in the host, since the host likely needs to support all targets simultaneously, and the host is not a performance bottleneck. The features have added complexity to the code base.

@mratsim
Contributor

mratsim commented May 4, 2024

BTW, perhaps we can split zkVMs and SGX into different repos.

Can we still keep a monorepo? Otherwise I think we'll immediately have

  • raiko-libs for traits
  • raiko-zkvms
  • raiko-sgx
  • raiko-host

which is quite cumbersome if PRs need to cover all 4 (like a trait change).

@smtmfft
Contributor Author

smtmfft commented May 4, 2024

CI passed with some modifications to the scripts.

@Brechtpd
Contributor

Brechtpd commented May 4, 2024

If we have lots of toolchain versions, maybe we can use another approach to manage them, such as using the cargo command line with a toolchain flag in a Makefile, as Brechtpd said:

cargo +nightly-xx-xx build ..

Please see rust-lang/rustup#3546

Ah yeah, this approach seems nice and simple!

Another issue: we don't need any build features to split the different targets in the host, since the host likely needs to support all targets simultaneously, and the host is not a performance bottleneck. The features have added complexity to the code base.

The fear was that SP1 required a very specific compiler even for the host, so if any of the other prover backends also needed a very specific compiler version we would be in trouble, and it would not be possible to compile all of them into a single binary. Things seem less finicky now (though the latest SP1 again requires some unstable features, I think); hopefully it stays that way.

BTW, perhaps we can split the zkVMs and SGX into different repos. Each target has its own way to build; SP1, for example, has a build config in the repo's root, and this configuration can be used in various complex situations:

I would also prefer keeping everything in the same repo, because otherwise we'll have lots of version management and multiple PRs etc. I think a bit more complexity on the building side will be worth it.

Let's ensure KISS. What do you all think?

Simplicity above all. 🫡

@Brechtpd Brechtpd merged commit 9865b4c into taiko/unstable May 4, 2024
8 checks passed
@Brechtpd Brechtpd deleted the c-kzg-verify branch May 4, 2024 17:30
@CeciliaZ030
Contributor

CeciliaZ030 commented May 4, 2024

I think @johntaiko has a good point! My toolchain.sh was really a hack: the CI looks for a toolchain file to install rust, so I had to copy and paste a different one for each guest's CI process, but for sure there's a better way.
Let me do this: we remove all rust-toolchain files and use a Makefile only, like

RISC0_TOOLCHAIN = +nightly-2024-02-01
SP1_TOOLCHAIN = +nightly-2024-04-23
SGX_TOOLCHAIN = +stable

risc0:
	@cargo $(RISC0_TOOLCHAIN) run --bin risc0-builder && cargo $(RISC0_TOOLCHAIN) build --release --features risc0

sp1:
	@cargo $(SP1_TOOLCHAIN) run --bin sp1-builder && cargo $(SP1_TOOLCHAIN) build --release --features sp1

quangtuyen88 pushed a commit to quangtuyen88/raiko that referenced this pull request May 6, 2024
update path for chmod default.json

Added step install nodejs

feat(raiko): enable kzg blob check (taikoxyz#148)

* feat(raiko): enable kzg blob check

* update build script

* update ci

* fix ci

* keep fixing ci

* keep fixing ci - sgx docker build

* fix ci risc0

* update ci env

feat(raiko): generalized build pipeline for ZkVMs guests (taikoxyz#133)

* init + some taiko_util -> util

* pipeline

* example

* r0 bins & tests done

* exmple-sp1 done

* gen proof with unittests

* test harness for sp1 doen + split the builder and driver

* make example a workspace

* change dest param

* harness upgrade

* image id compute

* cleanup

* pipeline api change

* r0 switch builder-driver

* r0 switch builder-driver

* sp1 prover -> driver rename

* sp1 builder

* cargo check passed

* name changes

* commented out sp1 & ris0 flags in irrelavent setup

* fixes + makefile

* update

* clean up

* update CI r0

* update sp1

* add builder to workspace

* sp1 CI permission denied

* add example test in guest + clippy

* add test to CI

* fmt & fix CI bug

* test with --release + toolchain

* fix

* fix docker

* add CC support for c-kzg

* trait Pipeline impl in specific builder

* fix ci

* download & verify riscv-gcc-prebuilt

* Update pipeline/src/lib.rs

* update script

* Updated readme + fix local building

* cargo: -> cargo::

---------

Co-authored-by: Brechtpd <[email protected]>

fix(lib): temporarily disable kzg check in sgx/sp1 provers (taikoxyz#157)

* temporarily disable kzg check in sgx/sp1 provers

* fix typo

fast on-chain register + change path before clone repo
smtmfft added a commit to smtmfft/raiko that referenced this pull request May 7, 2024
* feat(raiko): enable kzg blob check

* update build script

* update ci

* fix ci

* keep fixing ci

* keep fixing ci - sgx docker build

* fix ci risc0

* update ci env
johntaiko pushed a commit that referenced this pull request May 8, 2024
* Update README_Docker_and_RA.md (#145)

this command no need $

* Streamline FMSPC retrieval instructions (#124)

* streamline FMSPC retrieval instructions

* fix: remove redirection to /dev/null in PCKID retrieval command

* refactoring

* revert "refactoring"

---------

Co-authored-by: Roger <[email protected]>

* feat(raiko): enable kzg blob check (#148)

* feat(raiko): enable kzg blob check

* update build script

* update ci

* fix ci

* keep fixing ci

* keep fixing ci - sgx docker build

* fix ci risc0

* update ci env

* fix sgx ckzg issue (#162)

* revert unstable config change

---------

Co-authored-by: RogerRabbit <[email protected]>
Co-authored-by: Ivan Lagunovsky <[email protected]>
Co-authored-by: Roger <[email protected]>