Replies: 5 comments
-
What particular feature in Flux from
-
I'm not aware of a method provided by diffusers which would allow generating the text embeddings (or image embeddings) for a given model. If there were, that would be good enough, I suppose, since pipelines typically support taking embeddings as an input as far as I can tell, so the rest can be worked out. To put it differently, it's nice that the pipeline method takes care of all three steps: embed, transform, and VAE. But it also hides the intermediate states, and if you want to do something with them you essentially have to reimplement the whole pipeline yourself.
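To illustrate the "embeddings as an input" part: many pipelines accept `prompt_embeds` directly, and some (e.g. `StableDiffusionPipeline`) expose an `encode_prompt` helper that produces them, though availability and signatures vary by pipeline and diffusers version. A minimal sketch of the two-step flow (the checkpoint name is just an example):

```python
# Sketch only: encode_prompt availability and signature vary by
# pipeline and diffusers version.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Step 1: embed the prompt once.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="a watercolor fox in a snowy forest",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="blurry, low quality",
)
pipe.text_encoder.to("cpu")  # free VRAM; the embeddings are already computed

# Step 2: reuse the embeddings across seeds without re-encoding.
for seed in range(4):
    image = pipe(
        prompt_embeds=prompt_embeds,
        negative_prompt_embeds=negative_prompt_embeds,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    image.save(f"out_{seed}.png")
```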
-
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
-
Do you want to take a look at this custom block?
-
Thank you, it does look relevant and configurable. Will give it a try.
-
Is your feature request related to a problem? Please describe.
Text-to-image models require generating text embeddings as a first step. This step is relatively quick when VRAM is not a constraint, but it often is, and then the text model(s) need to be loaded, unloaded, and the transformer loaded in their place, which causes significant overhead and slowdown. If a user wants to generate multiple images from the same prompt with different seeds, this process has to happen over and over (increasing the batch size is usually not an option).
Describe the solution you'd like.
We could have an option to cache embeddings either in RAM or on disk, where the hash of the prompt text is the key and the vector embedding is the value. When disk is used, the values would have to be stored in a folder specific to the model.
When caching is enabled, the diffusers pipeline would check the cache first and, on a hit, use the cached embedding.
On a miss, the embedding is generated and saved to the cache.
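A minimal sketch of what such a cache could look like, assuming embeddings are plain tensors (or tuples of tensors); the class name and on-disk layout are hypothetical, not an existing diffusers API:

```python
# Hypothetical sketch of the proposed cache: prompt text hashed to a key,
# embeddings kept in RAM and persisted on disk per model.
import hashlib
from pathlib import Path

import torch

class EmbeddingCache:
    def __init__(self, root: str, model_id: str):
        # One sub-folder per model so embeddings from different
        # text encoders never collide.
        self.dir = Path(root) / model_id.replace("/", "--")
        self.dir.mkdir(parents=True, exist_ok=True)
        self.ram: dict[str, object] = {}

    def _key(self, prompt: str) -> str:
        return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

    def get(self, prompt: str):
        key = self._key(prompt)
        if key in self.ram:
            return self.ram[key]
        path = self.dir / f"{key}.pt"
        if path.exists():
            embeds = torch.load(path)
            self.ram[key] = embeds
            return embeds
        return None

    def put(self, prompt: str, embeds) -> None:
        key = self._key(prompt)
        self.ram[key] = embeds
        torch.save(embeds, self.dir / f"{key}.pt")
```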
Describe alternatives you've considered.
I have custom code which performs embedding generation as a separate step
https://github.com/xhinker/sd_embed/blob/main/src/sd_embed/embedding_funcs.py
and then feeds the embeddings into the pipeline as a second step.
This is effectively the solution popularised by @sayakpaul, except that I also cache the embeddings.
This method works up to Flux.1, but new models keep appearing, e.g. Qwen Image, for which there is currently no support.
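Putting the two pieces together, a cache-in-front wrapper for such a separate embedding step might look like this. Here `encode_fn` is a stand-in for whatever per-model embedding function is available (e.g. one of the sd_embed helpers or a pipeline's `encode_prompt`), and `EmbeddingCache` is the hypothetical class sketched above; all names are illustrative:

```python
def embeds_for(prompt: str, cache: "EmbeddingCache", encode_fn):
    # Check RAM, then disk; run the text encoder only on a miss.
    embeds = cache.get(prompt)
    if embeds is None:
        embeds = encode_fn(prompt)
        cache.put(prompt, embeds)
    return embeds
```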
Additional context.
Adding this should not break anything, since the cache can always be disabled (the default) and wiped.