Few-Shot Learning
Few-Shot Learning is an example of meta-learning, where a learner is trained on several related tasks during the meta-training phase so that it can generalize well to unseen (but related) tasks with just a few examples during the meta-testing phase. An effective approach to the Few-Shot Learning problem is to learn a common representation across tasks and to train task-specific classifiers on top of this representation.
Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
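The representation-plus-classifier approach above can be sketched with a minimal nearest-centroid (prototype-style) classifier built on top of a shared embedding. This is an illustrative toy, not the cited paper's method: the `embed` function stands in for a meta-trained representation (here just a fixed projection), and the 5-way 1-shot task uses synthetic Gaussian-blob data.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x):
    # Stand-in for the shared representation learned during meta-training.
    # A real system would meta-train these weights; here they are fixed.
    W = np.linspace(-1.0, 1.0, 16 * 8).reshape(16, 8)
    return np.tanh(x @ W)

def few_shot_classify(support_x, support_y, query_x, n_way):
    """Classify queries for one task using only its few support examples."""
    z_support = embed(support_x)
    # Task-specific "classifier": one prototype per class, the mean
    # embedding of that class's support examples.
    prototypes = np.stack(
        [z_support[support_y == c].mean(axis=0) for c in range(n_way)]
    )
    z_query = embed(query_x)
    # Assign each query to the class of its nearest prototype.
    dists = np.linalg.norm(
        z_query[:, None, :] - prototypes[None, :, :], axis=-1
    )
    return dists.argmin(axis=1)

# Toy 5-way 1-shot episode: each class is a Gaussian blob in input space.
n_way, dim = 5, 16
centers = rng.normal(size=(n_way, dim)) * 3.0
support_x = centers + rng.normal(scale=0.1, size=(n_way, dim))
support_y = np.arange(n_way)
query_x = np.repeat(centers, 4, axis=0) + rng.normal(
    scale=0.1, size=(n_way * 4, dim)
)
query_y = np.repeat(np.arange(n_way), 4)

preds = few_shot_classify(support_x, support_y, query_x, n_way)
print("query accuracy:", (preds == query_y).mean())
```

Because only the per-class prototypes are computed at meta-test time, adapting to a new task requires no gradient updates — one reason representation-sharing approaches are attractive for the few-shot setting.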
Papers
2,964 papers address this task; the first 10 benchmark entries are listed below.
Datasets: MedConceptsQA, DTD, FGVC-Aircraft, Mini-ImageNet (5-shot), Mini-ImageNet 5-way (1-shot), Stanford Cars, Mini-ImageNet (1-shot), PubMedQA, Caltech101, CaseHOLD, EuroSAT, Flowers-102
Benchmark Results
| # | Model | Metric | Claimed (%) | Verified (%) | Status |
|---|-------|--------|-------------|--------------|--------|
| 1 | gpt-4-0125-preview | Accuracy | 61.91 | — | Unverified |
| 2 | gpt-4-0125-preview | Accuracy | 52.49 | — | Unverified |
| 3 | gpt-3.5-turbo | Accuracy | 41.48 | — | Unverified |
| 4 | gpt-3.5-turbo | Accuracy | 37.06 | — | Unverified |
| 5 | johnsnowlabs/JSL-MedMNX-7B | Accuracy | 25.63 | — | Unverified |
| 6 | yikuan8/Clinical-Longformer | Accuracy | 25.55 | — | Unverified |
| 7 | BioMistral/BioMistral-7B-DARE | Accuracy | 25.06 | — | Unverified |
| 8 | yikuan8/Clinical-Longformer | Accuracy | 25.04 | — | Unverified |
| 9 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.92 | — | Unverified |
| 10 | PharMolix/BioMedGPT-LM-7B | Accuracy | 24.75 | — | Unverified |