Hi everyone,
I recently trained a character LoRA for the ltx-2.3 model, but I'm running into a frustrating issue during inference and was hoping to get some advice.
When I apply the character LoRA at a weight of 1.0, the generated output loses almost all movement, resulting in very static videos. Furthermore, it seems to completely override or ignore most of my text prompts.
I tried lowering the LoRA weight to 0.5, hoping to give the base model more room to generate motion. However, even at 0.5, simple action prompts like "walking" are still almost entirely ignored, and the character remains mostly stationary.
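For context, my understanding is that the inference-time LoRA weight linearly scales the low-rank delta added to each base weight matrix, so 0.5 should halve the LoRA's influence rather than remove it. A minimal numpy sketch of the standard LoRA formulation (shapes, `alpha`, and the scaling convention are assumptions for illustration, not ltx-2.3 internals):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes for a single linear layer of the base model.
d_out, d_in, rank = 8, 8, 4

W = rng.normal(size=(d_out, d_in))   # frozen base weight
A = rng.normal(size=(rank, d_in))    # LoRA down-projection
B = rng.normal(size=(d_out, rank))   # LoRA up-projection
alpha = 4.0                          # LoRA alpha chosen at training time

def merged_weight(scale: float) -> np.ndarray:
    """Effective weight when the LoRA is applied at `scale` (0.0 = pure base model)."""
    return W + scale * (alpha / rank) * (B @ A)

# At scale 0.5 the LoRA delta is exactly half of the delta at scale 1.0.
delta_full = merged_weight(1.0) - W
delta_half = merged_weight(0.5) - W
print(np.allclose(delta_half, 0.5 * delta_full))  # True
```

So if motion is still suppressed at 0.5, the LoRA delta itself presumably dominates the motion-relevant directions, which is why I suspect a training-side cause rather than just the inference scale.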
My questions:
Has anyone else experienced this severe loss of motion and prompt adherence when using character LoRAs with ltx-2.3?
Are there specific training parameters (like rank/alpha, learning rate, or dataset captioning methods) that help preserve the base model's motion capabilities?
Are there any recommended inference settings or workarounds to fix this?
Any insights, tips, or guidance would be greatly appreciated!

Thank you.
https://github.com/user-attachments/assets/5eae534d-fcc1-40ad-9b02-37ba1f47b584