Referring expression generation
Generate referring expressions
Papers
Showing 1–10 of 84 papers
Benchmark Results
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ColonGPT (w/ LoRA, w/o extra data) | Accuracy | 99.96 | — | Unverified |
| 2 | LLaVA-v1.5 (w/ LoRA, w/ extra data) | Accuracy | 99.32 | — | Unverified |
| 3 | LLaVA-Med-v1.5 (w/ LoRA, w/o extra data) | Accuracy | 99.30 | — | Unverified |
| 4 | MGM-2B (w/o LoRA, w/ extra data) | Accuracy | 98.75 | — | Unverified |
| 5 | LLaVA-v1.5 (w/ LoRA, w/o extra data) | Accuracy | 98.58 | — | Unverified |
| 6 | MGM-2B (w/o LoRA, w/o extra data) | Accuracy | 98.17 | — | Unverified |
| 7 | MobileVLM-1.7B (w/ LoRA, w/ extra data) | Accuracy | 97.87 | — | Unverified |
| 8 | MobileVLM-1.7B (w/o LoRA, w/ extra data) | Accuracy | 97.78 | — | Unverified |
| 9 | LLaVA-Med-v1.0 (w/o LoRA, w/o extra data) | Accuracy | 97.74 | — | Unverified |
| 10 | LLaVA-Med-v1.0 (w/o LoRA, w/ extra data) | Accuracy | 97.35 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaVA-Med-v1.5 (w/ LoRA, w/ extra data) | Accuracy | 70.00 | — | Unverified |
| 2 | LLaVA-v1 (w/ LoRA, w/ extra data) | Accuracy | 46.85 | — | Unverified |