This build works on Python 3.13.1x... unfortunately, it doesn't work perfectly.

Update 24.10.2025

Unfortunately, it doesn't work with ComfyUI_LayerStyle_Advance... however, it works with the Inspire-Pack.

Update 16.11.2025

Uploaded another build, "uncomplete-mediapipe-v1-0.10.21-cp313-cp313-win_amd64.whl". It still doesn't work perfectly, but it does work with Reactor at least. I uploaded it for Mediapipe only.

Note that the newest Reactor for ComfyUI & A1111 works without Mediapipe.
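
If you want to verify the install, here is a minimal sanity-check sketch. It assumes the wheel above has already been installed with pip, and that the legacy solutions API is functional in this build (remember, the build is not perfect):

```python
# Assumes the wheel was installed first, e.g.:
#   pip install uncomplete-mediapipe-v1-0.10.21-cp313-cp313-win_amd64.whl
import mediapipe as mp

# Confirm the build imports cleanly and reports the expected version.
print(mp.__version__)  # expected: 0.10.21

# Constructing a solution forces the bundled LiteRT models to load,
# which is a quick smoke test for face-related nodes such as Reactor.
with mp.solutions.face_detection.FaceDetection(model_selection=0) as detector:
    print("FaceDetection initialized:", detector is not None)
```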


📌 About the ProtectAI “LiteRT Model Contains Unknown Operators” Warning

This repository provides a custom-built Mediapipe wheel (Python 3.13, Windows), compiled directly from the official Mediapipe source code. Some security scanners—including ProtectAI Guardian—may display the warning:

“LiteRT Model Contains Unknown Operators”

This warning is not an actual security issue. It occurs because Mediapipe internally contains TensorFlow Lite (LiteRT) models such as:

  • *.tflite models
  • FlatBuffer-based graph definitions
  • Custom operators required by Mediapipe tasks
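
As a quick illustration, a sketch like the following (assuming the wheel is installed) lists the *.tflite models bundled inside the package; the exact set varies by Mediapipe version:

```python
from pathlib import Path

import mediapipe

# Walk the installed mediapipe package and list every bundled
# LiteRT/TFLite model file; these are what security scanners inspect.
pkg_root = Path(mediapipe.__file__).parent
for model in sorted(pkg_root.rglob("*.tflite")):
    print(model.relative_to(pkg_root))
```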

TensorFlow Lite uses several custom operations (Custom Ops) that are not part of the minimal LiteRT operator set. ProtectAI flags these custom ops as “unknown,” even though they are:

  • Official Mediapipe components
  • Required for normal operation
  • Safe and expected in any Mediapipe build

This is a false positive caused by how LiteRT models are analyzed. There is no malicious code, no dynamic execution payloads, and no non-standard behavior in this wheel.
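
You can reproduce the same condition outside the scanner. This is an illustrative sketch only, assuming TensorFlow is installed; the model filename is one example from the bundled set, and whether it loads cleanly depends on whether that particular model uses custom ops:

```python
import tensorflow as tf

# Example model name; substitute any *.tflite found by the listing above.
MODEL = "face_detection_short_range.tflite"

try:
    # The stock TFLite runtime only knows the builtin operator set.
    interpreter = tf.lite.Interpreter(model_path=MODEL)
    interpreter.allocate_tensors()
    print("All operators resolved by the builtin TFLite op set.")
except Exception as err:
    # Mediapipe's custom ops are not builtins, so they surface as
    # "unknown" here, just as they do in ProtectAI's report.
    print(f"Custom operator outside the builtin set: {err}")
```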

✔ Summary

  • The warning is normal for any Mediapipe build.
  • Custom TFLite operators are officially used by Mediapipe.
  • The wheel is built from the unmodified upstream source.
  • There is no security risk and the warning can be safely ignored.

If additional verification is needed, users may inspect the wheel’s contents or FlatBuffer schemas, but the presence of TFLite custom ops is expected by design.
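
For example, since a wheel is just a ZIP archive, its bundled models can be listed without installing anything (a sketch using the wheel filename from the update above):

```python
import zipfile

WHEEL = "uncomplete-mediapipe-v1-0.10.21-cp313-cp313-win_amd64.whl"

# A wheel is a standard ZIP archive, so its contents can be audited
# directly; here we list every embedded LiteRT/TFLite model.
with zipfile.ZipFile(WHEEL) as wheel:
    for name in wheel.namelist():
        if name.endswith(".tflite"):
            print(name)
```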
