diff --git "a/nohup.out" "b/nohup.out"
new file mode 100644
--- /dev/null
+++ "b/nohup.out"
@@ -0,0 +1,2232 @@
+12/12/2024 14:55:29 - INFO - __main__ - Distributed environment: NO
+Num processes: 1
+Process index: 0
+Local process index: 0
+Device: cuda
+
+Mixed precision type: fp16
+
+{'force_upcast', 'scaling_factor'} was not found in config. Values will be initialized to default values.
+12/12/2024 14:55:33 - INFO - src.models.unet_2d_condition - loaded temporal unet's pretrained weights from ckpts/sd-image-variations-diffusers/unet ...
+{'encoder_hid_dim', 'resnet_out_scale_factor', 'time_embedding_act_fn', 'upcast_attention', 'addition_embed_type_num_heads', 'cross_attention_norm', 'reverse_transformer_layers_per_block', 'conv_in_kernel', 'addition_time_embed_dim', 'resnet_skip_time_act', 'time_embedding_type', 'num_attention_heads', 'attention_type', 'projection_class_embeddings_input_dim', 'encoder_hid_dim_type', 'mid_block_only_cross_attention', 'transformer_layers_per_block', 'addition_embed_type', 'dropout', 'class_embeddings_concat', 'resnet_time_scale_shift', 'time_cond_proj_dim', 'time_embedding_dim', 'timestep_post_act', 'conv_out_kernel', 'class_embed_type'} was not found in config. Values will be initialized to default values.
+12/12/2024 14:55:42 - INFO - src.models.unet_2d_condition - Loaded 0.0M-parameter motion module
+12/12/2024 14:55:42 - INFO - src.models.unet_3d - loaded temporal unet's pretrained weights from ckpts/sd-image-variations-diffusers/unet ...
+{'resnet_time_scale_shift', 'class_embed_type', 'upcast_attention'} was not found in config. Values will be initialized to default values.
+12/12/2024 14:55:52 - INFO - src.models.unet_3d - Load motion module params from ckpts/MotionModule/mm_sd_v15_v2.ckpt
+12/12/2024 14:55:55 - INFO - src.models.unet_3d - Loaded 453.20928M-parameter motion module
+12/12/2024 14:56:01 - INFO - __main__ - Total trainable params 546
+12/12/2024 14:56:01 - INFO - __main__ - ***** Running training *****
+12/12/2024 14:56:01 - INFO - __main__ - Num examples = 10
+12/12/2024 14:56:01 - INFO - __main__ - Num Epochs = 10000
+12/12/2024 14:56:01 - INFO - __main__ - Instantaneous batch size per device = 4
+12/12/2024 14:56:01 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 4
+12/12/2024 14:56:01 - INFO - __main__ - Gradient Accumulation steps = 1
+12/12/2024 14:56:01 - INFO - __main__ - Total optimization steps = 30000
+  0%|          | 0/30000 [00:00