How Many Citations Do You Need for EB1?
Practitioner-observed thresholds for EB1A, EB1B, O-1A, and NIW — 2026 Guide
If your immigration attorney just asked "is your citation count enough," this is the page to bookmark before you pay the retainer. We map every common visa category to the citation count, h-index, and distribution profile USCIS adjudicators actually reward.
Quick answer: citation thresholds by visa
EB1A typically requires 100+ Google Scholar citations with a rising trajectory. EB1B is generally satisfied by 50+ citations for early-career researchers. Pure citation count isn't sufficient — adjudicators weigh the distribution across independent citing groups, top-venue placements, and your overall extraordinary-ability evidence package.
| Visa | Typical Citations | Typical h-index | Notes |
|---|---|---|---|
| EB1A — Extraordinary Ability | 100+ | 10+ | Sustained acclaim required |
| EB1B — Outstanding Researcher | 50+ | 6+ | Tenure-track or industry research |
| O-1A — Extraordinary Ability | 30+ | 4+ | Lower bar, temporary visa |
| NIW — National Interest Waiver | 30+ | 4+ | Field-specific |
These are practitioner-observed benchmarks compiled from approved I-140 petitions, not USCIS-published thresholds. USCIS does not publish a numeric bar. Field norms vary — theoretical math runs lower, applied ML and biomedicine run higher.
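The h-index column in the table can be checked directly against a per-paper citation list (for example, one exported from a Google Scholar profile). A minimal sketch with hypothetical paper counts:

```python
def h_index(citations):
    """h-index: the largest h such that h papers each have >= h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # this paper still has at least `rank` citations
        else:
            break
    return h

# Hypothetical profile: 8 papers with these per-paper citation counts
papers = [45, 30, 22, 10, 8, 6, 3, 1]
print(h_index(papers))  # 6 — at the EB1B benchmark row in the table
```

Note the benchmark rows pair citation totals with h-index because a single highly cited paper (high count, h-index 1) reads very differently from a body of consistently cited work.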
EB1A vs EB1B citation requirements
EB1A and EB1B are both first-preference employment-based green cards, but they evaluate very different evidentiary profiles. The citation bar reflects that.
EB1A
Extraordinary Ability
Requires sustained national or international acclaim. The petitioner must be among the small percentage at the top of their field. No employer sponsor required — you can self-petition.
- 100+ citations in applied fields, with rising trajectory
- Lower totals (often well under 100) can suffice in mature theoretical fields (math, theoretical physics) where citation norms run low
- h-index 10+ typical, higher in long-publishing fields
- Citations from 15+ countries for the international-acclaim prong
- At least 3 of the 10 regulatory criteria satisfied, evaluated under the two-step Kazarian analysis
EB1B
Outstanding Researcher
Requires international recognition as outstanding in a specific academic area, plus 3+ years of research experience and a permanent research-track role at a U.S. employer (university or qualifying industry research division).
- 50+ citations typical for early-career researchers
- h-index 6+ typical entry point
- Citations from 10+ countries sufficient
- 2 of 6 EB1B criteria satisfied
- Permanent research role required (no self-petition)
Why citation distribution matters more than count
USCIS adjudicators are explicitly trained to look past the headline Google Scholar number. Their internal guidance and recent RFEs (Requests for Evidence) repeatedly probe the independence and geographic spread of citations. A petition that leads with raw count and ignores distribution risks an RFE.
Weaker profile
500 citations
- 70% from same lab + collaborators
- Concentrated in 2 countries
- Mostly conference workshops
- High self-citation rate (>25%)
Stronger profile
200 citations
- Spread across 80+ independent labs
- 30 countries, 6 continents
- Mix of top-tier journals + flagship conferences
- Self-citation rate <8%
The 200-citation profile is the more compelling I-140 evidence package. Adjudicators read it as sustained international acclaim. The 500-citation profile reads as local community recognition, which doesn't clear the EB1A bar. A geographic citation map makes the distribution visible in a single figure — which is why attorneys increasingly attach one as Exhibit B or C.
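The independence filter behind the two profiles above can be sketched in code. The author names and counts here are hypothetical; a real petition would start from an exported citing-papers list with author metadata:

```python
def citation_breakdown(citing_papers, petitioner_circle):
    """Split citations into independent vs self/co-author, the distinction
    adjudicators apply when discounting intra-lab citations.

    citing_papers: list of author-name lists, one per citing paper.
    petitioner_circle: set of names — the petitioner plus known co-authors.
    """
    circle = set(petitioner_circle)
    self_cites = sum(1 for authors in citing_papers if circle & set(authors))
    total = len(citing_papers)
    return {
        "total": total,
        "independent": total - self_cites,
        "self_citation_rate": self_cites / total if total else 0.0,
    }

# Hypothetical: 10 citing papers, 3 involving the petitioner's co-authors
circle = {"A. Petitioner", "B. Collaborator"}
papers = [["X. Independent"]] * 7 + [["A. Petitioner", "Y. Other"]] * 3
print(citation_breakdown(papers, circle))
# {'total': 10, 'independent': 7, 'self_citation_rate': 0.3}
```

Presenting the `independent` subtotal up front, rather than the raw total, is exactly the framing the stronger profile relies on.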
How adjudicators evaluate citations
USCIS officers follow a two-step Kazarian analysis: (1) does the evidence meet the regulatory criteria, and (2) does the totality demonstrate the required acclaim? Citations show up in both steps, but they're weighed differently.
1. Raw count clears the regulatory threshold
For criterion 5 (original contributions of major significance) and criterion 6 (scholarly authorship), the headline Google Scholar number is the first thing adjudicators see. Below the working benchmarks (100+ for EB1A, 50+ for EB1B), expect an RFE no matter how strong the rest of the package is.
2. Distribution and venue determine the totality decision
In the totality step, adjudicators discount: self-citations, intra-lab citations, citations from predatory journals (Beall's List heuristic), and citations from low-impact venues. They reward: citations in top-quartile journals, citations from independent groups in other countries, and citations from named senior researchers.
3. Trajectory matters
A rising citation curve over the last 24 months reads as sustained acclaim, which is the EB1A standard. A flat or declining curve raises questions about whether the acclaim is current. Petitions for early-career researchers should chart the cumulative-citation curve explicitly.
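The cumulative-citation curve and its growth multiple over a recent window can be computed from per-year citation counts. The per-year figures below are hypothetical:

```python
def trajectory(citations_per_year):
    """Cumulative citation curve from per-year counts, plus the growth
    multiple over the last two full years (a rough 24-month window)."""
    years = sorted(citations_per_year)
    cumulative, running = {}, 0
    for y in years:
        running += citations_per_year[y]
        cumulative[y] = running
    # Compare the latest cumulative total to the total two years earlier
    baseline_year = years[-3] if len(years) >= 3 else years[0]
    baseline = cumulative[baseline_year]
    growth = cumulative[years[-1]] / baseline if baseline else float("inf")
    return cumulative, growth

# Hypothetical per-year counts for an early-career researcher
counts = {2022: 20, 2023: 30, 2024: 60, 2025: 110}
cum, growth = trajectory(counts)
print(cum)     # {2022: 20, 2023: 50, 2024: 110, 2025: 220}
print(growth)  # 4.4 — cumulative citations grew 4.4x over the window
```

Charting `cum` as a curve, rather than quoting a single total, is what makes a rising trajectory legible to an adjudicator.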
4. Field benchmarks matter
A 100-citation profile in theoretical mathematics is exceptional; the same number in applied machine learning is mid-tier. Strong petitions cite a field benchmark — for example, Times Higher Education subject-rank citation medians — to anchor the count against peers.
Using a citation map as I-140 evidence
A geographic citation map directly addresses the criteria adjudicators score lowest in cited-research petitions: international acclaim and independence of citing sources. It transforms a paragraph of bullet-point claims into a single labeled exhibit.
What goes in the exhibit
- The world map. 2048×1024 PNG showing every citing institution as a marker. Country count and continent count printed at the bottom.
- Top citing institutions list. 10–20 named institutions, ideally including Harvard / MIT / Oxford / Stanford / Tsinghua-tier names.
- Per-country citation counts. CSV appendix listing every citing institution with country, lat/long, and per-paper citation counts.
- Self-citation disclosure. Petition language should explicitly state that the citation map excludes co-author groups.
- Data-source attribution. Footnote pointing to your Google Scholar profile so USCIS can verify.
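The CSV appendix and the country count printed on the map can both be produced from a single citing-institution list. The institution records below are illustrative, not drawn from any real petition:

```python
import csv
import io
from collections import Counter

def build_appendix(citing_institutions):
    """Build the CSV appendix (institution, country, lat, lon, citations)
    and the distinct-country count shown on the map exhibit."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["institution", "country", "lat", "lon", "citations"])
    for row in citing_institutions:
        writer.writerow(row)
    countries = Counter(row[1] for row in citing_institutions)
    return buf.getvalue(), len(countries)

# Illustrative citing-institution records
rows = [
    ("ETH Zurich", "Switzerland", 47.3769, 8.5417, 12),
    ("Tsinghua University", "China", 40.0000, 116.3265, 9),
    ("Stanford University", "United States", 37.4275, -122.1697, 15),
    ("MIT", "United States", 42.3601, -71.0942, 8),
]
csv_text, n_countries = build_appendix(rows)
print(n_countries)  # 3 distinct citing countries
```

Keeping the CSV and the map driven by the same list ensures the printed country count always matches the appendix USCIS can verify line by line.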
Common citation mistakes that trigger RFEs
These are the patterns that show up in denial letters and RFEs we see most often. They are almost always avoidable with the right exhibit framing.
Submitting a raw Google Scholar number with no breakdown
If your petition leads with "500 citations on Google Scholar" and stops, the adjudicator will manually count self-citations and likely RFE you on independence. Always present filtered totals (excluding self / co-author citations) up front.
Citing yourself extensively without disclosure
A 25%+ self-citation rate is a known red flag. Disclose it proactively in the brief and present an "independent citation" subtotal. Hiding it invites an RFE that will probably ask for the same breakdown.
Counting predatory-journal citations
Citations from predatory or pay-to-publish journals (MDPI in some fields, OMICS, most journals on Beall's List) get discounted to zero by experienced adjudicators. Audit your citing-papers list and exclude them before computing the headline number.
Ignoring venue quality
10 citations in Nature or NeurIPS outweigh 100 in low-impact venues for the totality determination. Strong petitions name the top-quartile journal and conference venues citing your work.
No geographic context
If your petition asserts "international acclaim" and provides only a citation count, the adjudicator has to take it on faith. A citation map exhibit converts that assertion into a verifiable, visible figure.
Real cases — what worked, what didn't (anonymized)
Three composite cases drawn from approved and denied petitions we've seen referenced in immigration-attorney case studies. Names and identifying details are anonymized; profiles are representative.
Postdoc, applied ML, 4 years out of PhD
Profile: 240 citations, h-index 9, 3 papers in NeurIPS / ICML. Self-citation rate 6%. Cited by groups in 22 countries including Stanford, ETH Zurich, Tsinghua, and DeepMind.
What worked: Petition led with the geographic citation map as Exhibit B, cited the rising trajectory (citations grew 3.2x in 24 months), and attached recommendation letters from named senior researchers at three of the citing institutions. Approved without RFE.
Industry researcher, biomedical, 6 years post-PhD
Profile: 380 citations, h-index 11. Strong on raw numbers but 35% of citations were from co-author lab + collaborators.
What happened: Initial petition led with raw 380 number, did not disclose the self/co-author share. RFE asked for independent citation breakdown. Response added a citation map filtered to exclude co-author groups (250 independent citations across 18 countries) plus a peer-comparison table. Approved on RFE response.
Assistant professor, materials science, 8 years post-PhD
Profile: 620 citations, h-index 14. Looks strong on paper, but 70% of citations were from a 4-lab consortium the petitioner had been part of, and 22% were from MDPI-tier journals.
What happened: Adjudicator independently computed an 80-citation independent-and-quality-filtered subtotal, well below the EB1A bar for materials science (where field norms are higher). Denied. Refiled successfully as EB1B 14 months later with a stronger venue mix.
The pattern: raw count alone never carries the petition. Distribution, venue, and independence are what convert a number into acclaim.
Free citation map for your I-140 petition
Generate the exhibit your immigration attorney is going to ask for anyway. 2048×1024 PNG, full CSV of every citing institution, no watermark, no account required.
Disclaimer. This guide reflects practitioner-observed patterns and attorney commentary, not official USCIS policy. USCIS publishes no numerical citation threshold for EB1, EB2-NIW, or O-1. Outcomes depend on field norms, the totality of your evidence, and adjudicator discretion. Do not rely on this page as a substitute for licensed immigration-attorney advice.