lpw_stable_diffusion pipeline not working when "from_single_file" is used #7881
Comments
Hi @digitardz
but in the script you provided, you only used `StableDiffusionPipeline`, did I miss anything?

ahh I see!!! I don't think that works with `from_single_file` directly. Can you do this: after

```python
pipeline = StableDiffusionPipeline.from_single_file(
    'model.safetensors',
    torch_dtype=torch.float16,
)
```

and then

```python
pipe_lpw = DiffusionPipeline.from_pipe(
    pipeline,
    custom_pipeline="lpw_stable_diffusion",
).to("cuda")
```

(see the docs for more about `from_pipe`)

I tried this but apparently `from_pipe` isn't available in my installed version.

ohh sorry, you have to install diffusers from source, since the PR was merged after the last release:

```bash
pip install git+https://github.com/huggingface/diffusers
```

Alright, that worked, no truncation error with your code.
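For anyone landing here later, here is a consolidated sketch of the workaround above with the imports filled in. The checkpoint filename, prompt, and output path are placeholders, and it assumes a diffusers build recent enough to include `from_pipe` (i.e. installed from source, as noted above):

```python
# Workaround sketch: load a single-file checkpoint, then wrap it with the
# lpw_stable_diffusion community pipeline via from_pipe.
# Requires diffusers installed from source (from_pipe is newer than v0.27.2).
import torch
from diffusers import DiffusionPipeline, StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_single_file(
    "model.safetensors",            # placeholder checkpoint path
    torch_dtype=torch.float16,
)

pipe_lpw = DiffusionPipeline.from_pipe(
    pipeline,
    custom_pipeline="lpw_stable_diffusion",
).to("cuda")

long_prompt = "masterpiece, best quality, ultra detailed, " * 16  # > 77 tokens
image = pipe_lpw(long_prompt).images[0]  # lpw handles the full prompt
image.save("out.png")
```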
Describe the bug

I'm using `StableDiffusionPipeline` with `lpw_stable_diffusion` to do txt2img generations from long prompts. When loading the safetensors checkpoint file using `.from_single_file`, prompts longer than 77 tokens still get truncated. However, after converting the checkpoint to diffusers format with your `convert_from_ckpt` script and loading its path with `.from_pretrained`, prompts get handled properly with no truncation.

Reproduction
Model used for test: https://civitai.com/models/25694/epicrealism

1. Load the checkpoint with `StableDiffusionPipeline.from_single_file`, generate an image.

   Output:

   

   The prompt gets truncated.

2. Convert the checkpoint to diffusers format, load it with `StableDiffusionPipeline.from_pretrained`, generate an image.

   Output:

   

   No truncation.
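The original scripts aren't included above, so here is a minimal sketch of the two steps. The checkpoint file `epicrealism.safetensors`, the converted folder `./epicrealism-diffusers`, and the prompt are placeholders, not values from the original report:

```python
# Repro sketch; paths and prompt are placeholders.
import torch
from diffusers import DiffusionPipeline, StableDiffusionPipeline

long_prompt = "masterpiece, best quality, ultra detailed, " * 16  # > 77 tokens

# Step 1: single-file checkpoint -- the long prompt gets truncated.
pipe = StableDiffusionPipeline.from_single_file(
    "epicrealism.safetensors",
    torch_dtype=torch.float16,
).to("cuda")
image_truncated = pipe(long_prompt).images[0]

# Step 2: converted diffusers-format folder loaded with the lpw community
# pipeline -- the long prompt is handled without truncation.
pipe_lpw = DiffusionPipeline.from_pretrained(
    "./epicrealism-diffusers",      # output of the convert_from_ckpt script
    custom_pipeline="lpw_stable_diffusion",
    torch_dtype=torch.float16,
).to("cuda")
image_full = pipe_lpw(long_prompt).images[0]
```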
Note that the "Token indices sequence length" warning is a false alarm according to the community pipeline documentation.
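For context, a quick way to confirm that a prompt actually exceeds the 77-token CLIP limit the warning refers to (the tokenizer here is the standard SD 1.x text encoder, and the prompt is a placeholder):

```python
# Count CLIP tokens for a prompt; counts above 77 trigger the warning.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
prompt = "masterpiece, best quality, ultra detailed, " * 16
print(len(tokenizer(prompt).input_ids))  # includes BOS/EOS tokens
```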
System Info

- Google Colab clean environment with T4 GPU
- transformers 4.40.2
- diffusers 0.27.2
Who can help?
@DN6, maybe @SkyTNT?
I think the handling of long prompts is a great feature for diffusers pipelines, I hope it gets maintained over time :)