
When reviewers lack a shared framework, the selection process doesn't find the best startups; it finds the startups each reviewer individually likes. GALI research found that programs implementing standardised scoring rubrics saw measurable improvement in cohort quality metrics within two cycles.
The four-pillar framework is used by over 80% of top accelerators. Implementing it takes four steps:
→ Define non-negotiables — criteria resulting in automatic rejection if not met.
→ Set weighted criteria using the four-pillar framework.
→ Write explicit score anchors — replace "good traction" with "Score 5 = paying customers with 2+ months of revenue data."
→ Calibrate your team with pilot applications before the review period opens.
Recommended interview structure: 20–30 minutes per founding team.
Red flags: defensive responses to challenge, inability to cite specific numbers, no answer to "why are you better than your closest competitor."
The most common biases: affinity bias (preference for similar founders), recency bias (later applications score higher), narrative bias (compelling storytelling overrides weak fundamentals). Mitigation: blind first-pass scoring removes names and photos; diverse review panels surface differing perspectives; structured interview questions applied consistently prevent deviation.
What do accelerators look for in startup teams?
Domain expertise, execution track record, and coachability. The most common reason for rejection is a founding team without sufficient relevant experience to execute in their chosen space.
How do accelerators evaluate market size?
Reviewers look for TAM of at least $500M, preferably $1B+. Bottom-up calculations carry more weight than top-down industry figures. Market growth trajectory matters as much as absolute size.
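The bottom-up sizing reviewers prefer is simple arithmetic: count reachable customers and multiply by what each would pay. A quick sketch, with every figure invented purely for illustration:

```python
# Hypothetical bottom-up TAM calculation. Both inputs are made-up
# figures; reviewers expect founders to justify each one.
target_customers = 120_000      # e.g. mid-size clinics in the target market
annual_contract_value = 12_000  # average yearly price per customer, USD

tam = target_customers * annual_contract_value
print(f"Bottom-up TAM: ${tam:,}")  # Bottom-up TAM: $1,440,000,000
```

A figure built this way carries more weight than quoting a top-down industry report, because each input can be challenged and defended in the interview.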
How many startups apply to top accelerators?
Y Combinator receives 40,000+ applications per batch. Techstars receives 3,000–5,000 per city program. Regional programs at their first or second cohort typically see 100–500 applications. Acceptance rates consistently fall in the 1–5% range.
What is a blind review in accelerator selection?
Blind review anonymises application materials — removing names, photos, and university affiliations — before the first scoring pass to reduce affinity bias. It applies to written applications only; interviews are non-anonymous.
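Mechanically, a blind first pass just strips identity fields before reviewers see the written application. A minimal sketch, assuming a dictionary-shaped application with hypothetical field names:

```python
# Identity fields removed before the first scoring pass; the field
# names here are hypothetical.
IDENTITY_FIELDS = {"founder_names", "photos", "university"}

def anonymise(application: dict) -> dict:
    """Return a copy of the application without identity fields."""
    return {k: v for k, v in application.items() if k not in IDENTITY_FIELDS}

raw = {
    "founder_names": ["A. Founder"],
    "photos": ["headshot.jpg"],
    "university": "Example U",
    "traction": "paying customers, 4 months of revenue data",
    "market": "bottom-up TAM of $1.4B",
}
blind = anonymise(raw)
print(sorted(blind))  # ['market', 'traction']
```

Reviewers score the blinded copy first; identity is only revealed at the interview stage, which is why blind review applies to written applications alone.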
AcceleratorApp's scoring tools bring consistency to cohort selection → See How It Works