# GGUF models for Depth-Anything V2
Depth-Anything V2 is a model for monocular depth estimation: it predicts a dense depth map from a single RGB image. The weights in this repository are converted to GGUF for lightweight inference on consumer hardware with vision.cpp.

- Original repository: DepthAnything/Depth-Anything-V2 (GitHub)
- Original weights: depth-anything (Hugging Face)
## Run

Example inference with vision.cpp:

### CLI

```sh
vision-cli depth-anything -m Depth-Anything-V2-Small-F16.gguf -i input.png -o depth.png
```
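If the flags behave as in the example above (one input and one output file per invocation — an assumption, since the CLI may also support other modes), a plain shell loop can batch a whole folder. The sketch below prints each command instead of running it; `room.png` and `street.png` are placeholder filenames created only for the demo, and dropping the `echo` would actually execute the commands.

```sh
# Hedged batch sketch: one vision-cli invocation per PNG in the directory.
cd "$(mktemp -d)" && touch room.png street.png   # placeholder demo inputs
model=Depth-Anything-V2-Small-F16.gguf
for img in *.png; do
  # Derive the output name from the input name, e.g. room.png -> depth-room.png.
  echo vision-cli depth-anything -m "$model" -i "$img" -o "depth-${img%.png}.png"
done
```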
### C++

```cpp
// Load the input image and initialize a compute backend.
image_data image = image_load("input.png");
backend_device device = backend_init();

// Load the GGUF weights and run depth estimation.
depthany_model model = depthany_load_model("Depth-Anything-V2-Small-F16.gguf", device);
image_data depth = depthany_compute(model, image);

// Convert the float depth map to an 8-bit single-channel image and save it.
image_data output = image_f32_to_u8(depth, image_format::alpha_u8);
image_save(output, "depth.png");
```
## License

- Depth-Anything-V2-Small: Apache-2.0
- Depth-Anything-V2-Base: CC-BY-NC-4.0
- Depth-Anything-V2-Large: CC-BY-NC-4.0