
new picai seg #310

Merged: 67 commits merged into master, May 28, 2023
Conversation


@itaijj itaijj commented May 7, 2023

No description provided.

@itaijj itaijj requested a review from mosheraboh May 7, 2023 11:30
@itaijj itaijj mentioned this pull request May 7, 2023
@itaijj itaijj requested a review from alex-golts May 14, 2023 16:36
@itaijj itaijj requested a review from simona-rc May 14, 2023 16:40
alex-golts previously approved these changes May 16, 2023

@alex-golts alex-golts left a comment


LGTM. Some small comments inline.

run:
running_modes : ['train', 'infer', 'eval']
train:
target : seg3d
Collaborator

It would be useful to add comments describing the meaning and possible values of non-obvious params.

Collaborator Author

@itaijj itaijj May 17, 2023

Added comments in the Hydra config on non-trivial params; will also add a README once we finalize the code.
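For example, an annotated Hydra config entry might look like this (the comments shown are illustrative, not the actual ones added):

```yaml
train:
  batch_size: 16      # samples per training step
  learning_rate: 1e-3
  run_sample: 0       # cap on dataset size for quick test runs; 0 means use the full dataset
```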

batch_size: 16
learning_rate : 1e-3
weight_decay : 0
run_sample : 0
Collaborator

what's this param?

Collaborator Author

It limits the dataset size. I use it for the test (instead of saving a "golden" version like Moshiko used to do).
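A minimal sketch of how such a cap could be applied; the helper name is hypothetical and the actual implementation may differ:

```python
def maybe_subsample(sample_ids: list, run_sample: int) -> list:
    """Return at most `run_sample` sample ids; 0 disables the cap.

    Hypothetical helper illustrating the `run_sample` param discussed above.
    """
    if run_sample and run_sample < len(sample_ids):
        return sample_ids[:run_sample]
    return sample_ids
```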

total_mask_prob: 0
loss_config :
segmentation_lambda : 0.2
classification_lamba : 0.8
Collaborator

typo (lamba?)

Collaborator Author

Those params will be removed; they are actually left over from an old version.

from fuse.data.utils.split import dataset_balanced_division_to_folds
from fuse.dl.losses.loss_default import LossDefault

# from report_guided_annotation import extract_lesion_candidates
Collaborator

Remove the commented-out code.

Collaborator Author

@itaijj itaijj May 17, 2023

I'm keeping it for now because this may turn into a detection task, and then I'll need those imports.


# assert (
# "PICAI_DATA_PATH" in os.environ
# ), "Expecting environment variable CMMD_DATA_PATH to be set. Follow the instruction in example README file to download and set the path to the data"
Collaborator

you mean PICAI_DATA_PATH, no?

Collaborator Author

fixed
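The corrected guard could look roughly like this; the function name and message wording are illustrative:

```python
import os


def get_picai_data_path(env_var: str = "PICAI_DATA_PATH") -> str:
    """Fail fast with a clear message if the dataset path is not configured."""
    if env_var not in os.environ:
        raise EnvironmentError(
            f"Expecting environment variable {env_var} to be set. "
            "Follow the instructions in the example README to download the data."
        )
    return os.environ[env_var]
```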

if "infer" in cfg["run.running_modes"]:
run_infer(NDict(cfg["infer"]), NDict(cfg["paths"]), NDict(cfg["train"]))
#
# analyze - skipping as it crushes without metrics
Collaborator

so why leave the run_eval function?

Collaborator Author

Same as above; I'll try to convert it to one of the two solutions.

Collaborator

You have a metric now, so you can delete this comment.

trainer:
accelerator : gpu
devices : 1
num_epochs : 100
Collaborator

you do 100 epochs just for a unit test?

Collaborator Author

@itaijj itaijj May 17, 2023

I actually override it in the test. I will try to read the same config file as in a regular run instead of creating a separate test config.

cfg = NDict(hydra.utils.instantiate(cfg))
cfg["paths.working_dir"] = root
cfg["train.run_sample"] = 100
cfg["train.trainer.num_epochs"] = 1
Collaborator

oh you modify it here

Collaborator Author

same as above

sample_dict[key_out + seq] = image_data
return sample_dict
except:
return None
Collaborator

maybe you should display a clearer warning when this happens?

Collaborator Author

@itaijj itaijj May 17, 2023

I'll add a printed error message before returning None.
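One way to narrow the bare `except` and surface the failure, as a sketch (the function and parameter names are hypothetical):

```python
from typing import Optional


def safe_load_sequence(sample_dict: dict, key_out: str, seq: str, loader) -> Optional[dict]:
    """Store one loaded sequence in sample_dict; warn and return None on failure."""
    try:
        sample_dict[key_out + seq] = loader()
        return sample_dict
    except Exception as e:  # narrowed from a bare `except`; report the cause
        print(f"Warning: failed to load sequence '{seq}': {e}")
        return None
```

Returning None keeps the existing skip-on-failure behavior while making the cause visible in the logs.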

super().__init__(**kwargs)
self._dir_path = dir_path
self._data_dir = data_dir
# self._sequences = seuqences
Collaborator

remove comment

Collaborator Author

done

@itaijj itaijj merged commit 7e60c7e into master May 28, 2023