Segment Anything
Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, Ross Girshick
Code
- github.com/facebookresearch/segment-anything (official, in paper; PyTorch, ★ 53,742)
- github.com/huggingface/transformers (PyTorch, ★ 158,292)
- github.com/IDEA-Research/Grounded-Segment-Anything (PyTorch, ★ 17,472)
- github.com/kornia/kornia (PyTorch, ★ 11,125)
- github.com/PaddlePaddle/PaddleSeg (Paddle, ★ 9,319)
- github.com/geekyutao/inpaint-anything (PyTorch, ★ 7,606)
- github.com/ux-decoder/segment-everything-everywhere-all-at-once (PyTorch, ★ 4,771)
- github.com/syscv/sam-hq (PyTorch, ★ 4,201)
- github.com/umass-foundation-model/3d-llm (PyTorch, ★ 1,187)
- github.com/computational-cell-analytics/micro-sam (PyTorch, ★ 666)
Abstract
We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy-respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive, often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and the corresponding dataset (SA-1B) of 1B masks and 11M images at https://segment-anything.com to foster research into foundation models for computer vision.
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| MVSEC-SEG | SAM | mIoU | 0.26 | — | Unverified |
| RGBE-SEG | SAM | mIoU | 0.26 | — | Unverified |
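The table above reports mIoU (mean intersection-over-union), the standard segmentation metric. As a hedged sketch of how such a score is computed, the snippet below averages per-mask IoU over a dataset of binary masks; the toy 4×4 masks are hypothetical illustration data, not results from the paper's benchmarks, and note that some benchmarks instead average IoU per class or per image.

```python
import numpy as np

def binary_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU between two boolean masks: |pred ∩ gt| / |pred ∪ gt|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union > 0 else 1.0

def mean_iou(preds, gts) -> float:
    """mIoU as the average per-mask IoU over a dataset (one convention)."""
    return float(np.mean([binary_iou(p, g) for p, g in zip(preds, gts)]))

# Toy example: predicted mask overlaps 4 of the 6 ground-truth pixels.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)
gt   = np.array([[1, 1, 1, 0],
                 [1, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)

print(binary_iou(pred, gt))  # 4 / 6 ≈ 0.667
```

A verified score in the table would come from running SAM's predicted masks against the ground-truth masks of MVSEC-SEG or RGBE-SEG through exactly this kind of averaging.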