We have not tested it on the dev model, and this is experimental code. It is possible that eta=2 is too high for dev models.

Could I also suggest playing with the parameter `skip_slider_timestep_till` during inference? I would try values 1-10 if you are using 50 total inference steps:
```python
image = pipe(
    target_prompt,
    height=height,
    width=width,
    guidance_scale=guidance_scale,
    num_inference_steps=num_inference_steps,
    max_sequence_length=max_sequence_length,
    num_images_per_prompt=1,
    generator=torch.Generator().manual_seed(seed),
    from_timestep=0,
    till_timestep=None,
    output_type='pil',
    network=networks[net],
    skip_slider_timestep_till=0,  # skips adding the slider on the first step of generation ('1' skips the first 2 steps)
)
```
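As a side note, the skip semantics described in the inline comment (a value of `k` skips the slider on the first `k + 1` denoising steps) can be sketched as a small helper for reasoning about how many steps the slider actually influences. This is an illustration of the comment's described behavior, not code from the repository, so please verify it against the actual implementation:

```python
def slider_active_steps(num_inference_steps, skip_slider_timestep_till):
    """Return the step indices where the slider would be applied,
    assuming a value of k skips the first k + 1 steps (as stated in
    the inline comment above; verify against the repo's code)."""
    return list(range(skip_slider_timestep_till + 1, num_inference_steps))

# With 50 inference steps and skip_slider_timestep_till=9, the slider
# would be active on steps 10..49, i.e. the last 40 steps.
print(len(slider_active_steps(50, 9)))
```

Sweeping `skip_slider_timestep_till` from 1 to 10 trades off slider strength against composition stability: the more early steps you skip, the more the base model fixes the overall layout before the slider kicks in.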
After 50-60 steps, the image composition for val -1 and val +1 changes a lot; for example, the person's face and pose change.

Could it be that a value like eta=2 is too high for dev but okay for schnell?