Runtime error
Exit code: 1. Reason:

```
████████| 4/4 [03:07<00:00, 46.92s/it]
You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
Loading checkpoint shards:   0%|          | 0/4 [00:00<?, ?it/s]
Loading checkpoint shards: 100%|██████████| 4/4 [00:01<00:00, 4.00it/s]
generation_config.json:   0%|          | 0.00/243 [00:00<?, ?B/s]
generation_config.json: 100%|██████████| 243/243 [00:00<00:00, 1.38MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 37, in <module>
    model.to(device)
  File "/home/user/.pyenv/versions/3.10.18/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3157, in to
    return super().to(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.18/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1174, in to
    return self._apply(convert)
  File "/home/user/.pyenv/versions/3.10.18/lib/python3.10/site-packages/torch/nn/modules/module.py", line 780, in _apply
    module._apply(fn)
  File "/home/user/.pyenv/versions/3.10.18/lib/python3.10/site-packages/torch/nn/modules/module.py", line 805, in _apply
    param_applied = fn(param)
  File "/home/user/.pyenv/versions/3.10.18/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1160, in convert
    return t.to(
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.02 GiB. GPU 0 has a total capacity of 14.74 GiB of which 488.19 MiB is free. Process 2162620 has 14.26 GiB memory in use. Of the allocated memory 14.02 GiB is allocated by PyTorch, and 146.13 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
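What the traceback says: app.py builds the model on CPU (which is what triggers the Flash Attention 2.0 warning) and then calls `model.to(device)` on line 37. The weights alone come to roughly 14 GiB, so the copy runs out of room on the 14.74 GiB GPU when it tries to move the next 1.02 GiB tensor. Two changes usually resolve this: let transformers/accelerate place the weights on the GPU as the checkpoint shards load instead of calling `.to()` afterwards, and shrink the weights so they actually fit. Below is a minimal sketch of that load path, assuming the app uses a transformers causal LM; the model id is a placeholder, and 4-bit quantization via bitsandbytes is a suggested substitution, not something the log shows the original app doing.

```python
import os

# Suggested by the OOM message itself: ask PyTorch's CUDA allocator to use
# expandable segments to reduce fragmentation. Must be set before CUDA starts.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "your-org/your-model"  # placeholder; the log does not name the model

tokenizer = AutoTokenizer.from_pretrained(model_id)

# ~14 GiB of weights on a 14.74 GiB card leaves no headroom for activations,
# so half precision alone is unlikely to help here. 4-bit quantization cuts the
# weight footprint to roughly a quarter (assumption: bitsandbytes is installed).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # FA2 needs fp16/bf16 compute
)

# device_map="auto" dispatches each checkpoint shard straight to the GPU, so
# the model is never fully materialized on CPU. This removes the need for the
# model.to(device) call that raised the OOM, and it also avoids the
# "Flash Attention 2.0 with a model not initialized on GPU" warning.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
```

With this load path, the explicit `model.to(device)` at app.py line 37 should be deleted: accelerate has already placed the weights, and transformers rejects `.to(device)` on bitsandbytes-quantized models.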