Operating rooms (ORs) are complex, high-stakes environments requiring precise
understanding of interactions among medical staff, tools, and equipment for
enhancing surgical assistance, situational awareness, and patient safety.
Current datasets fall short in scale and realism, and do not capture the
multimodal nature of OR scenes, limiting progress in OR modeling. To this end, we
introduce MM-OR, a realistic and large-scale multimodal spatiotemporal OR
dataset, and the first dataset to enable multimodal scene graph generation.
MM-OR captures comprehensive OR scenes containing RGB-D data, detail views,
audio, speech transcripts, robotic logs, and tracking data, and is annotated
with panoptic segmentations, semantic scene graphs, and downstream task labels.
Further, we propose MM2SG, the first multimodal large vision-language model for
scene graph generation, and through extensive experiments, demonstrate its
ability to effectively leverage multimodal inputs. Together, MM-OR and MM2SG
establish a new benchmark for holistic OR understanding, and open the path
towards multimodal scene analysis in complex, high-stakes environments. Our
code and data are available at this https URL.