Context as Memory: Scene-Consistent Interactive Long Video Generation with Memory Retrieval

SIGGRAPH Asia 2025

[Project page] [ArXiv] [Dataset]

# File Structure

To prepare the dataset for use, merge the split parts into a single zip file with the following command:

```bash
cat Context-as-Memory-Dataset_* > Context-as-Memory-Dataset.zip
```

After extracting `Context-as-Memory-Dataset.zip`, the dataset is organized as follows:

```
Context-as-Memory-Dataset
├── frames
│   ├── AncientTempleEnv_0
│   │   ├── 0000.png
│   │   ├── 0001.png
│   │   ├── 0002.png
│   │   └── ...
│   ├── AncientTempleEnv_1
│   │   ├── 0000.png
│   │   ├── 0001.png
│   │   ├── 0002.png
│   │   └── ...
│   └── ...
├── jsons
│   ├── AncientTempleEnv_0.json
│   ├── AncientTempleEnv_1.json
│   └── ...
├── overlap_labels
│   ├── AncientTempleEnv_0
│   │   ├── 0.json
│   │   ├── 1.json
│   │   ├── 2.json
│   │   └── ...
│   ├── AncientTempleEnv_1
│   │   ├── 0.json
│   │   ├── 1.json
│   │   ├── 2.json
│   │   └── ...
│   └── ...
└── captions.txt
```

# Explanation of Dataset Parts

- **`frames/`**: 100 subdirectories, each containing 7,601 video frame images.
- **`jsons/`**: 100 JSON files, each storing the camera pose (position + rotation) of every frame in the corresponding long video.
- **`overlap_labels/`**: 100 subdirectories, each containing 7,601 JSON files; each file records the indices of the frames that overlap with that frame (see the loading example below).
- **`captions.txt`**: Captions annotated for segments of the long videos, each covering a given start frame to an end frame.
- We also provide a simple code file, `tools.py`, which converts (x, y, z, yaw, pitch) into an RT matrix and can align the RTs of all other frames to the coordinate system of a selected reference frame (see the sketch below).
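The conversion performed by `tools.py` can be summarized as follows. This is a minimal sketch only: it assumes yaw and pitch are given in degrees and uses one common Z-up, right-handed convention. The exact rotation order and handedness for this dataset are defined in `tools.py`, so treat the function names and conventions below as illustrative.

```python
import numpy as np

def pose_to_RT(x, y, z, yaw, pitch):
    """Build a 4x4 camera-to-world RT matrix from (x, y, z, yaw, pitch).

    Illustrative only: angle units, rotation order, and handedness must
    match the convention actually used in tools.py.
    """
    yaw, pitch = np.deg2rad([yaw, pitch])
    # Rotation about the vertical axis (yaw) ...
    R_yaw = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                      [np.sin(yaw),  np.cos(yaw), 0.0],
                      [0.0,          0.0,         1.0]])
    # ... followed by rotation about the lateral axis (pitch).
    R_pitch = np.array([[ np.cos(pitch), 0.0, np.sin(pitch)],
                        [ 0.0,           1.0, 0.0],
                        [-np.sin(pitch), 0.0, np.cos(pitch)]])
    RT = np.eye(4)
    RT[:3, :3] = R_yaw @ R_pitch
    RT[:3, 3] = [x, y, z]
    return RT

def align_to_reference(RT_all, ref_idx):
    """Re-express every pose in the coordinate system of a chosen reference frame."""
    ref_inv = np.linalg.inv(RT_all[ref_idx])
    return [ref_inv @ RT for RT in RT_all]
```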
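For memory retrieval, the overlap labels can be used to look up which earlier frames cover the same region as a query frame. The snippet below is a hypothetical reader: the internal layout of each overlap-label JSON (plain list vs. a keyed dict such as `"overlap_indices"`) is an assumption, so inspect one file to confirm the actual schema before relying on it.

```python
import json
from pathlib import Path

def load_overlap_frames(root, env="AncientTempleEnv_0", frame_idx=0):
    """Return the image paths of frames that overlap with a given frame.

    Assumption: each overlap-label JSON stores either a plain list of frame
    indices or a dict with an "overlap_indices" key; adjust to the real schema.
    """
    root = Path(root)
    label_path = root / "overlap_labels" / env / f"{frame_idx}.json"
    with open(label_path) as f:
        overlap = json.load(f)
    indices = overlap if isinstance(overlap, list) else overlap.get("overlap_indices", [])
    # Frame images are zero-padded to four digits, e.g. 0000.png.
    return [root / "frames" / env / f"{i:04d}.png" for i in indices]
```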