Exciting times for the document AI community!
We're thrilled to announce the release of some of the largest OCR datasets available to the public. With over 26 million pages, 18 billion text tokens, and 6TB of data, these resources are a significant leap forward for document AI research.
The datasets are hosted on the Hugging Face Hub, so you can stream them directly and integrate them seamlessly into your projects with the Hugging Face `datasets` library. On the Hub, you can find them here:
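Streaming can look like the sketch below. Note that `"org/ocr-dataset"` is a placeholder ID, not the real dataset name; substitute the ID from the dataset card you want to use.

```python
# Minimal sketch: streaming a large OCR dataset with the Hugging Face
# `datasets` library, so you never download the full ~6TB up front.
from itertools import islice


def take(stream, n):
    """Materialize the first n examples from a (possibly very large) stream."""
    return list(islice(stream, n))


def preview_dataset(dataset_id, n=3):
    # Imported lazily so the helper above works without `datasets` installed.
    from datasets import load_dataset  # pip install datasets

    # streaming=True returns an IterableDataset: examples are fetched
    # lazily as you iterate instead of being downloaded to disk first.
    ds = load_dataset(dataset_id, split="train", streaming=True)
    return take(ds, n)


# Example (requires network and `pip install datasets`; placeholder ID):
#   rows = preview_dataset("org/ocr-dataset")
```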
We owe a huge thank you to Peter Wyatt, Kate Tasker, Rachel Taketa, Ali Furkan Biten, Ruben Tito, and their colleagues for their contributions. Their work putting these datasets together has been invaluable.
Looking Ahead:
We're on a mission to enhance document AI capabilities, and these datasets are just the beginning. With your engagement and innovation, we're confident in the community's ability to develop robust OCR solutions. We encourage you to explore these datasets, experiment with the code, and contribute to the collective progress in document AI.
For detailed information on usage and licensing, please refer to the dataset cards on the Hugging Face Hub.
This is the closest I've seen to a scalable AI/LLM operating system - it has all the major ingredients of a feasible AI OS architecture:
- Extends classical OS functionality with an LLM kernel.
- Multi-agent-centric approach.
- Optimized resource allocation that lets LLM-based tasks and classical OS tasks coexist.
- An agent scheduler that supports classical OS scheduling policies (FIFO, RR).
- A context manager to improve alignment.
- A lazy memory manager for agents (data is stored and accessible only while the agent is active).
- An enhanced security module for the AI-driven environment.
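To make the scheduler point concrete, here is a toy sketch of an agent scheduler supporting both policies named above. The class and method names are illustrative, not the actual AIOS API; a real scheduler would dispatch LLM calls rather than count abstract steps.

```python
# Toy agent scheduler: FIFO runs each agent to completion in arrival order;
# round-robin (RR) preempts after a fixed quantum and requeues the agent.
from collections import deque


class AgentScheduler:
    def __init__(self, policy="fifo", quantum=1):
        self.policy = policy      # "fifo" or "rr"
        self.quantum = quantum    # steps an agent runs before preemption (RR only)
        self.queue = deque()      # pending [agent_id, remaining_steps] entries

    def submit(self, agent_id, steps):
        self.queue.append([agent_id, steps])

    def run(self):
        """Return the order in which single execution steps are granted."""
        trace = []
        while self.queue:
            agent_id, steps = self.queue.popleft()
            if self.policy == "fifo":
                trace.extend([agent_id] * steps)   # run to completion
            else:                                  # round-robin
                burst = min(self.quantum, steps)
                trace.extend([agent_id] * burst)
                if steps - burst > 0:              # unfinished: back of the queue
                    self.queue.append([agent_id, steps - burst])
        return trace
```

With two agents needing 2 steps each, FIFO yields `A, A, B, B` while RR with a quantum of 1 interleaves them as `A, B, A, B` - the classic trade-off between throughput and responsiveness that the post alludes to.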
It does hit all the checkpoints, doesn't it? An upscale version of @karpathy's.