Broken Pipe errors #154
Comments
If your SGX ID registration succeeds, you are able to run as a prover.
I also encountered a broken pipe error. The following command occasionally results in an error, with a roughly 50% occurrence rate.
[error log omitted]
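To quantify an intermittent failure like this, one approach is to run the same proof request in a loop and count failures. A minimal sketch, assuming the prove_block.sh script and arguments quoted later in this thread (the block number is just an example):

    # Run the same proof request repeatedly to estimate the failure rate.
    for i in $(seq 1 10); do
      ./prove_block.sh taiko_a7 sgx 110568 && echo "run $i: ok" || echo "run $i: FAILED"
    done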
Hey, same issue here:
kzg check enabled!
thread 'tokio-runtime-worker' panicked at /opt/raiko/provers/sgx/prover/src/lib.rs:280:48:
prover address: 0xDD8897F0729D1D0E7eCF36Df004562CC4F243E11
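For panics like the one above, a full backtrace narrows down the failing call. A minimal sketch, assuming the host-side prover is launched directly from a shell via cargo; with Docker or Gramine in between, the variable has to be forwarded explicitly:

    # RUST_BACKTRACE is standard for Rust binaries; launching via `cargo run`
    # is an assumption about this particular setup.
    RUST_BACKTRACE=full cargo run --release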
What is the block number?
Does it matter which block we interrogate? In this case it was 96419.
Not really, but we can check that block on our side to see if proof generation itself is OK.
Same issue for 110568, for example.
Do these 2 blocks work for you?
Yes, I used Infura's Holesky RPC & @isekaitaiku's beacon RPC (above) to run these 3 suspicious blocks: 96419, 107906, and 110568. All good. Have you made any other changes?
What do you mean when you say changes?
My prover address is 0xDD8897F0729D1D0E7eCF36Df004562CC4F243E11. Should it hold TTKOh for the call to work properly?
I tried running "prove_block.sh taiko_a7 sgx 110568". The logs showed progress beyond block_number=107906, the curl case that failed earlier.
Error case: block_number=110568, 3 transactions.
Did you change the config & then re-run the SGX setup, including bootstrap?
Weird... can you disable the setup & bootstrap in the script & try again?
Here's what I did when updating to the latest version (cc0e8d8):
Merged a refinement PR, #182, to better show error messages.
Thank you for your assistance. Proving failed on 3 out of 5 attempts.
Out-of-memory in the library OS... I've never seen that before. How much memory do you have?
If the prover is not running, it looks like there is about 18 GB of memory left over.
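Two quick host-side checks for an out-of-memory report like this: overall free memory, and what the kernel logged about the SGX EPC (enclave page cache). The dmesg pattern is an assumption and varies by kernel version:

    free -h                    # overall memory, e.g. ~18G available when idle
    sudo dmesg | grep -i sgx   # look for "EPC section" lines reporting EPC size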
Can this be increased? Like to 4 GB or even bigger? I remember ours is 16 GB (half of 32), if I'm not mistaken. And what's your OVH instance info? Another user also reported a rare failure on OVH, related to an Intel certification failure; maybe he can switch to another instance like yours.
It seems that the memory allocated for SGX is capped at a maximum of 256 MB.
It's a hardware (CPU) limit, I think.
It seems there is a fixed maximum limit per CPU.
I rented a server capable of allocating 512 MB to SGX and ran the Raiko prove_test on it.
SGX bootstrap stderr: Gramine is starting. Parsing TOML manifest file, this may take some time...
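Gramine ships a diagnostic tool that reports SGX support for the current machine, including the EPC size discussed above; whether it is on PATH depends on how Gramine was installed:

    # Prints SGX capability flags and the EPC size.
    is-sgx-available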
What server and provider do you use?
@isekaitaiku, which server did you finally use?
ovh advanced-1
That means the PCCS server is not working properly; a common reason is an incorrect config. Could you double-check the settings in the https://github.com/taikoxyz/raiko/blob/taiko/alpha-7/README_Docker_and_RA.md#raiko-docker section? BTW: which platform are you using? We saw a platform on OVH that cannot support PCCS because it is not registered with Intel; see intel/SGXDataCenterAttestationPrimitives#398. Hopefully yours is just an incorrect config.
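A common smoke test for a PCCS deployment is fetching the root CA CRL from the caching service. The port and API version below are assumptions; match them to your PCCS config and /etc/sgx_default_qcnl.conf:

    # A CRL blob in the response means the caching service is reachable.
    curl -k -s https://localhost:8081/sgx/certification/v4/rootcacrl && echo " <- PCCS reachable"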
It's useful for me, thanks.
Closing: since the issue was raised 2 months ago, we've gone to mainnet, the codebase has undergone numerous changes, and SGX is used in production. Feel free to comment and we'll reopen for investigation if there is still an issue.
Describe the bug
I have managed to register my prover & receive TTKOh tokens, but when trying to test the proof generation via curl, I receive numerous task *** panicked errors no matter what block height I set or which Holesky RPC I use. I enabled the Rust backtrace and here are my logs:
Because of this error, I am not sure whether my prover is suitable for participation in the Hekla testnet. The tokens remain untouched.
FYI, my prover address is 0x18c2942c51d0947d4d51ddf863e2aa9bc409a241, instance ID 206.
Steps to reproduce
No response