Proposal: Contribution of Backend and Device Selection for Inference, and Suggestion to Add LoRA and Model Fine-Tuning Support #676
DenisMontes started this conversation in General
Hello everyone,

I would like to make myself available to contribute to the project. However, I was unable to find a roadmap or a "TO DO" list where I could identify areas to work on and submit new pull requests for.

In the meantime, I have made some modifications in a local branch that implement functionality allowing users to select both the backend and the device used for inference (a short sketch of the idea follows below). I believe this could be a useful feature for the community, so I would like to ask whether you consider it appropriate for me to submit these changes, or whether you would prefer that I focus on other pending issues.
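For concreteness, here is a minimal sketch of what the user-facing selection could look like. It assumes a PyTorch/diffusers-style pipeline; the flag names, the `resolve_device` helper, and the example checkpoint are illustrative placeholders, not the project's actual API:

```python
# Hypothetical sketch only: the CLI flags and resolve_device helper are
# illustrative; the project's real entry point and pipeline may differ.
import argparse

import torch
from diffusers import StableDiffusionPipeline  # assumed dependency


def resolve_device(backend: str, device_index: int) -> torch.device:
    """Map a requested backend to a concrete torch device, falling back to CPU."""
    if backend == "cuda" and torch.cuda.is_available():
        return torch.device(f"cuda:{device_index}")
    if backend == "mps" and torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")  # "cpu", or any unavailable backend


def main() -> None:
    parser = argparse.ArgumentParser(description="Inference with backend/device selection.")
    parser.add_argument("--backend", choices=["cuda", "mps", "cpu"], default="cuda")
    parser.add_argument("--device-index", type=int, default=0)
    parser.add_argument("--prompt", default="a portrait of a recurring character")
    args = parser.parse_args()

    device = resolve_device(args.backend, args.device_index)
    # Example checkpoint id; any Stable Diffusion checkpoint would do here.
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe.to(device)  # e.g. `--backend cpu` forces inference onto the CPU
    pipe(args.prompt).images[0].save("out.png")


if __name__ == "__main__":
    main()
```

Usage would then be something like `python infer.py --backend cpu` or `python infer.py --backend cuda --device-index 1`.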
Additionally, I would like to propose adding support for LoRA and model fine-tuning. I am currently developing a tool designed to create characters using Stable Diffusion and subsequently train LoRA models, or even the base model itself, so that it can consistently generate the same character (a rough sketch of this follows below as well).
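As a rough sketch of the LoRA side, this is roughly how adapters could be injected into the UNet of a Stable Diffusion checkpoint. It assumes the Hugging Face diffusers and peft libraries as dependencies; the rank, alpha, and target module names are common defaults rather than a final design:

```python
# Rough sketch only: assumes diffusers + peft; hyperparameters are placeholders.
import torch
from diffusers import UNet2DConditionModel
from peft import LoraConfig, get_peft_model

# Load only the denoising UNet of an example Stable Diffusion checkpoint.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# Inject low-rank adapters into the attention projections; the base weights
# stay frozen and only the small adapter matrices are trained.
lora_config = LoraConfig(
    r=8,                # rank of the low-rank update
    lora_alpha=16,      # scaling applied to the update
    lora_dropout=0.05,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
unet = get_peft_model(unet, lora_config)
unet.print_trainable_parameters()  # typically well under 1% of the model

optimizer = torch.optim.AdamW(
    (p for p in unet.parameters() if p.requires_grad), lr=1e-4
)
# ... the training loop over the character's images would go here ...
```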
I would appreciate hearing your thoughts and feedback on these proposals.
Replies: 1 comment

- Speaking only for myself here (and I'm just a minor contributor), I don't think a PR would be inappropriate, especially if it's already fully implemented and working! I have myself missed a way to force inference to run on the CPU. So I'd suggest submitting it anyway, to see how well it's received.
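On the CPU point: assuming a PyTorch backend, one stop-gap that already works is hiding the GPUs from the process, although a proper device option as proposed above would be cleaner:

```python
# Sketch of a stop-gap: hide all CUDA devices so PyTorch falls back to the CPU.
# This must run before torch initializes CUDA (i.e. before the first CUDA call).
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import torch
print(torch.cuda.is_available())  # False: all work now defaults to the CPU
```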