debugging custom models #107

Description

@dribnet

TL;DR: custom training is great! Is there a good config or way to debug the quality of results on small-ish datasets?


I've managed to train my own custom models using the excellent additions provided by @rom1504 in #54 and have hooked this up to CLIP + VQGAN backpropagation successfully. However, so far the samples from my models are a bit glitchy. For example, with a custom dataset of images such as the following:

[example training image]

I'm only able to get a sample that looks something like this:

[sample output: painting_16_06]

Or similarly when I train on a dataset of sketches and images like these:

[example training image: Sketch (40)]

My CLIP + VQGAN backpropagation of "spider" with that model turns out like this:

[sample output: sunset_ink1_15_01]

So there is evidence that the model is picking up some gross information such as color distributions, but the results are far from what I would expect from a simpler model such as StyleGAN on the same dataset.

So my questions:

  • Is there an easy change that would instead more lightly fine-tune an existing model on my dataset? This would probably be sufficient for my purposes, is perhaps better suited to a low-data regime (e.g. a 200-2000 image training set), and hopefully more robust to collapse, etc. (A config sketch of what I mean is included after this list.)
  • Is there a recommended strategy to monitor / diagnose / fix the training regimen? The reconstructions during training in the logs directory look fine. Other issues such as "Very confused by the discriminator loss" #93 seem to hint that the discriminator loss is the main metric to watch, but aren't clear on how to course-correct, etc. (See the log-inspection sketch below for what I'm currently checking.)
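
For the first question, here is a minimal sketch of what I have in mind: the custom-dataset config from #54, but with the model initialized from a pretrained checkpoint via `ckpt_path` instead of from scratch. The checkpoint path, batch size, and `disc_start` value are placeholders/assumptions on my part, and I'm assuming `VQModel`'s `ckpt_path` loading is the intended way to warm-start:

```yaml
# hypothetical fine-tuning config, adapted from configs/custom_vqgan.yaml (#54);
# the pretrained checkpoint path and the small batch size are assumptions
model:
  base_learning_rate: 4.5e-6
  target: taming.models.vqgan.VQModel
  params:
    ckpt_path: pretrained/vqgan_imagenet_f16_1024.ckpt  # assumed: warm-start from an existing VQGAN
    embed_dim: 256
    n_embed: 1024
    ddconfig:
      double_z: false
      z_channels: 256
      resolution: 256
      in_channels: 3
      out_ch: 3
      ch: 128
      ch_mult: [1, 1, 2, 2, 4]
      num_res_blocks: 2
      attn_resolutions: [16]
      dropout: 0.0
    lossconfig:
      target: taming.modules.losses.vqperceptual.VQLPIPSWithDiscriminator
      params:
        disc_conditional: false
        disc_in_channels: 3
        disc_start: 0          # assumed: discriminator active from the start when fine-tuning
        disc_weight: 0.8
        codebook_weight: 1.0

data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 4
    num_workers: 8
    train:
      target: taming.data.custom.CustomTrain
      params:
        training_images_list_file: data/my_train.txt   # hypothetical file lists
        size: 256
    validation:
      target: taming.data.custom.CustomTest
      params:
        test_images_list_file: data/my_val.txt
        size: 256
```

and then launch as usual with something like `python main.py --base configs/custom_vqgan_finetune.yaml -t True --gpus 0,`. Is this roughly the right way to do a light fine-tune, or is there a better-supported path?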
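
For the second question, this is roughly what I'm doing now to keep an eye on the discriminator: dumping the logged scalars from the TensorBoard event files in the run directory so I can compare the autoencoder and discriminator curves. The run path and tag names below are assumptions on my side (I just pick whatever `ea.Tags()` actually reports), not a definitive list of what the loss module logs:

```python
# sketch: print logged scalar curves from a taming-transformers run directory;
# run_dir and the tag names are assumptions, adjust to whatever ea.Tags() reports
from tensorboard.backend.event_processing import event_accumulator

run_dir = "logs/2021-08-01T00-00-00_custom_vqgan/testtube/version_0"  # hypothetical path
ea = event_accumulator.EventAccumulator(run_dir)
ea.Reload()

print("available scalar tags:", ea.Tags()["scalars"])

for tag in ["train/aeloss", "train/discloss", "train/rec_loss"]:  # assumed tag names
    if tag in ea.Tags()["scalars"]:
        events = ea.Scalars(tag)
        last = events[-1]
        print(f"{tag}: {len(events)} points, last value {last.value:.4f} at step {last.step}")
```

But even with these curves in front of me, it's not obvious what a "healthy" discriminator loss looks like for a small dataset, or what to change when it drifts, so any guidance there would help.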
