Unexpected results for training accuracy. #916
-
Hello all, I'm hoping someone has ideas about some questions I have regarding the accuracy of the DeepEdit and DeepGrow models in MONAILabel. Back in the Clara SDK days I had a very high-performing model that could segment the vagina, bladder, rectum, and uterus on a custom dataset. I switched to MONAILabel as suggested, but have not achieved anything even remotely close.

I'm focused on DeepEdit since it is easier to get inference results from, which is helpful for a paper I am writing. I tried labeling and uploading a couple of samples and "training as I go", which gave good results even when applying the 2-image trained model to the "next sample" images. My confusion is that when I label and submit my entire dataset (19 MRI volumes), the accuracy of the model stays around 50% no matter how many epochs, and it performs badly. Essentially, the more samples I add, the worse the model gets.

I've had an equally confusing and frustrating experience with DeepGrow 3D in MONAILabel, using the same dataset that trained the Clara model mentioned above but with only 2 unique labels for simplicity. Training reports high scores, but foreground clicks return inaccurate segmentations, and background clicks don't do much to guide or correct the results.

Thanks for reading. Please let me know if you have any ideas. Much appreciated.
-
I can't comment much on DeepEdit, as it is more of a balancing act between samples during training. For DeepGrow, however, there should be no difference between MONAILabel and Clara. Since you have written training code outside MONAILabel, you can compare it against the reference training task defined in MONAILabel; there may be a mismatch. If the transforms, optimizer, loss function, input data, PyTorch dependencies, etc. are all the same, there should be no difference in performance (a sketch of the pieces worth diffing follows below).

In case you didn't notice, the example DeepGrow 2D and DeepGrow 3D models are aliased versions of the same Clara models (from NGC, and they use the same pre-trained weights). MONAILabel only defines one kind of training definition per model and tries to keep things simple, so feel free to add, change, or customize things. But training a good model first needs a lot of training and more examples. If I remember correctly, for those Clara DeepGrow 2D/3D models I had to run training for a couple of days, 200-300 epochs (especially for 3D), to get good results on the MSD dataset. New data needs new tuning.
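To make that comparison concrete, here is a minimal sketch, in plain MONAI/PyTorch, of the kinds of pieces worth diffing between a standalone script and the MONAILabel training task. The specific transforms, loss flags, network, and learning rate below are illustrative assumptions, not MONAILabel's actual defaults:

```python
# Illustrative sketch (not MONAILabel's actual defaults): collect the pieces
# that must match between a standalone DeepGrow training script and the
# reference training task, then diff them side by side.
import torch
from monai.losses import DiceLoss
from monai.networks.nets import BasicUNet
from monai.transforms import (
    Compose,
    EnsureChannelFirstd,
    LoadImaged,
    NormalizeIntensityd,
    Orientationd,
    Spacingd,
)

# 1) Pre-transforms: both the order and the arguments matter.
pre_transforms = Compose([
    LoadImaged(keys=("image", "label")),
    EnsureChannelFirstd(keys=("image", "label")),
    Orientationd(keys=("image", "label"), axcodes="RAS"),
    Spacingd(keys=("image", "label"), pixdim=(1.0, 1.0, 1.0), mode=("bilinear", "nearest")),
    NormalizeIntensityd(keys="image"),
])

# 2) Network, loss, optimizer: DeepGrow-style input is image plus positive
#    and negative guidance channels, hence in_channels=3 here (assumption).
network = BasicUNet(spatial_dims=3, in_channels=3, out_channels=1)
loss_function = DiceLoss(sigmoid=True, squared_pred=True)
optimizer = torch.optim.Adam(network.parameters(), lr=1e-4)

# Print these and diff them against the equivalents defined in the MONAILabel
# app's trainer; a mismatch in intensity scaling, spacing, loss flags, or
# learning rate is enough to explain a large gap in Dice between the setups.
for t in pre_transforms.transforms:
    print(type(t).__name__)
print(loss_function)
print(optimizer)
```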
-
Thanks for opening this discussion, @sabino-ramirez. I can think of these reasons why the DeepEdit results you're getting are not great:

1/ The default pre-transforms might not be the most appropriate for your MRI scans, especially the ones that involve intensity changes: https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/radiology/lib/trainers/deepedit.py#L105-L112 I'd suggest customising this one for your dataset: https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/radiology/lib/trainers/deepedit.py#L107 (see the pre-transform sketch below)

2/ Is the labelling consistent across the training dataset? Depending on which software you've used to create the labels, the label numbering might not be the same for all the samples. I mean that the bladder should always be represented by the same number, and the same applies to the other organs. (The label-audit snippet below can help verify this.)

3/ Are the labels vagina, bladder, rectum, and uterus present in all the images? It could be that some images have missing labels, which impacts the model performance.

Hope this helps,
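For point 1/, here is a minimal sketch of what an MRI-oriented intensity block could look like in a customised trainer, assuming the defaults at the linked lines use fixed-range scaling of the kind that suits CT; the exact transforms shipped with MONAILabel may differ:

```python
# Hypothetical MRI-oriented replacement for the intensity pre-transforms in a
# custom DeepEdit trainer. Fixed-range scaling (ScaleIntensityRanged) suits CT
# Hounsfield units; for MRI, per-volume normalisation is usually a safer start.
from monai.transforms import (
    Compose,
    EnsureChannelFirstd,
    LoadImaged,
    NormalizeIntensityd,
    Orientationd,
    ScaleIntensityRangePercentilesd,
)

mri_pre_transforms = Compose([
    LoadImaged(keys=("image", "label")),
    EnsureChannelFirstd(keys=("image", "label")),
    Orientationd(keys=("image", "label"), axcodes="RAS"),
    # Option A: zero-mean / unit-std per volume, ignoring background zeros.
    NormalizeIntensityd(keys="image", nonzero=True),
    # Option B (alternative): robust percentile scaling to [0, 1]; use instead
    # of Option A, not in addition to it.
    # ScaleIntensityRangePercentilesd(
    #     keys="image", lower=1, upper=99, b_min=0.0, b_max=1.0, clip=True
    # ),
])
```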
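For points 2/ and 3/, one quick way to audit the dataset is to print the unique label values found in each submitted segmentation; every volume should report the same set of integers if the numbering is consistent and no organ is missing. A minimal sketch using nibabel, with an assumed labels directory:

```python
# Hypothetical label audit: list the unique label IDs in every submitted
# segmentation so inconsistent numbering or missing organs stand out.
import glob
import os

import nibabel as nib
import numpy as np

labels_dir = "labels/final"  # assumed path; adjust to where your app stores labels

for path in sorted(glob.glob(os.path.join(labels_dir, "*.nii.gz"))):
    ids = np.unique(nib.load(path).get_fdata()).astype(int)
    print(f"{os.path.basename(path)}: labels {ids.tolist()}")

# Expect the same set, e.g. [0, 1, 2, 3, 4] (background + vagina, bladder,
# rectum, uterus), in every file: a shorter list means a missing organ, a
# different mapping means inconsistent numbering across tools/annotators.
```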