April 2023. I'm sitting at my desk at Kapital Bank in Baku, Azerbaijan, staring at 14 browser tabs. hellojob.az. vakansiya.az. banker.az. boss.az. LinkedIn. offer.az. jobsearch.az. rabota.az. Tab after tab after tab.
Every single morning. The same ritual. Open 14 sites. Scroll through listings. Try to remember which ones I already saw yesterday. Miss half of them anyway because some sites update at weird hours.
I was a fraud analyst. A good one, I think. But every morning before I started catching fraudsters, I'd spend 30 minutes just checking if something better existed. Something that didn't involve 14 tabs and a bad memory.
That something didn't exist. So I built it.
This is the story of birjob.com — Azerbaijan's biggest job aggregation platform. 10,000+ active listings from 99 sources. 30,000+ candidate profiles. 459 blog articles. Running on $25 a month.
No venture capital. No co-founder. No budget. Just a bank employee who was tired of browser tabs.
The Numbers Nobody Talks About
Here's something most tech content won't tell you: not every startup story starts in San Francisco.
Azerbaijan's startup ecosystem ranks #74 globally. Baku, the capital, sits at #297 by city. The entire ecosystem grew +24.5% in 2025, which sounds impressive until you learn there are only 151 startups total in the country. Total funding in Baku? $4.97 million. For the entire city.
There's no real VC scene. The first two local venture funds — Caucasus Ventures and InMerge Ventures — just launched recently. Before that? Nothing. If you wanted to build something, you funded it yourself or you didn't build it.
I funded it myself. With $25 a month.
Compare that to the average VC-backed SaaS burning $50K-100K/month on infrastructure alone. I'm not saying my approach is better for everyone. I'm saying it's the only approach that existed for me. And it turns out, bootstrapped SaaS companies grow at a median rate of 23% annually. The top 25% reach $1M ARR in just 2 years — only 4 months slower than VC-backed companies. And 82% of bootstrapped founders report higher satisfaction than their VC-backed counterparts, according to research cited by Harvard Business Review.
You don't need VC money. You need a $12 domain and stubbornness.
The $25 Stack
People ask me about my tech stack like they're expecting some exotic answer. It's not exotic. It's boring. Boring is good. Boring means cheap and reliable.
Here's every single thing running birjob.com and what it costs:
| Component | Tool | Monthly Cost |
|---|---|---|
| Orchestration | GitHub Actions (cron at 08:00 UTC) | Free (2,000 min/month) |
| Database | PostgreSQL on Neon | ~$5 |
| Frontend | Next.js 14 on Vercel | $20 (Pro plan) |
| CDN | Cloudflare | Free |
| Object Storage | AWS S3 | ~$0.50 |
| Email | Resend | Free tier |
| Error Tracking | Sentry | Free tier |
| AI Analytics | Google Gemini 2.5 Flash | Free tier |
| Notifications | Telegram Bot | Free |
| Total | | ~$25.50 |
That's it. Twenty-five dollars and fifty cents. The Vercel Pro plan is the biggest expense, and I could probably drop to the hobby tier if I optimized harder. But $20/month for a production frontend with edge functions, analytics, and automatic deployments? I'll take it.
The free tier ecosystem in 2026 is insane. GitHub Actions gives you 2,000 free minutes per month for private repos and unlimited for public ones. They even reduced pricing by up to 39% in January 2026. Neon gives you 0.5GB of storage with scale-to-zero on the free tier. Cloudflare's free plan includes unlimited bandwidth.
Every one of those free tiers is a business decision by a company betting you'll grow into a paid customer. They're subsidizing your startup. Let them.
What I Actually Built
Let me get technical. birjob.com isn't complicated in concept — it's a job aggregator. Scrape jobs from many sources, deduplicate them, present them in one place. The complexity is in the execution.
The Scraper Architecture
I have 99 Python scraper scripts spread across 128 repositories on GitHub. Each script targets a specific job board or company career page. They all run on GitHub Actions with a cron trigger at 08:00 UTC every morning.
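The workflow files themselves aren't shown in this article, but the shape is standard GitHub Actions. A minimal sketch, with hypothetical file and script names:

```yaml
# .github/workflows/scrape.yml -- hypothetical name
name: daily-scrape
on:
  schedule:
    - cron: "0 8 * * *"    # 08:00 UTC every day
  workflow_dispatch:        # manual re-runs when a scraper breaks

jobs:
  scrape:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install httpx psycopg2-binary
      - run: python scrape.py   # hypothetical entry point
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
```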
Here's what a simplified scraper looks like:
```python
import hashlib
from datetime import datetime

import httpx


def scrape_jobsite(url: str) -> list[dict]:
    """Fetch jobs from a site's hidden JSON API."""
    resp = httpx.get(
        f"{url}/api/vacancies",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    raw_jobs = resp.json().get("data", [])

    jobs = []
    for item in raw_jobs:
        title = item["title"].strip()
        company = item.get("company", "Unknown")
        apply_link = item["url"]

        # MD5 content hash for cross-source dedup
        content_hash = hashlib.md5(
            f"{title.lower()}|{company.lower()}".encode()
        ).hexdigest()

        jobs.append({
            "title": title,
            "company": company,
            "apply_link": apply_link,
            "content_hash": content_hash,
            "source": "jobsite.az",
            "scraped_at": datetime.utcnow().isoformat(),
        })
    return jobs
```
Notice something? No Playwright. No headless browser. No Selenium. Just plain HTTP requests.
Here's the thing most people don't realize: 95% of "dynamic" websites have hidden JSON APIs. Open DevTools, check the Network tab, and you'll find them. That company career page that looks like it needs JavaScript rendering? It's fetching from /api/jobs behind the scenes. Hit that endpoint directly and you skip the entire browser rendering pipeline.
Direct HTTP calls are roughly 15x faster than headless browser scraping. My full pipeline — all 99 scrapers — runs in 3-4 minutes. That's a throughput of about 35.7 requests per second across all sources. With Playwright, the same pipeline would take 45-60 minutes and cost real money in compute.
Three-Level Deduplication
When you aggregate from 99 sources, duplicates are your biggest enemy. The same job might appear on hellojob.az, boss.az, and the company's own career page. Sometimes with slightly different titles or formatting.
I built a three-level deduplication system:
Level 1: Python hash normalization. Before inserting anything, each scraper normalizes the job title (lowercase, strip whitespace, remove special characters) and generates an MD5 hash of the normalized title + company name. This catches obvious duplicates within a single source.
Level 2: Database UPSERT on apply_link. The PostgreSQL database has a unique constraint on the apply_link column. When a scraper tries to insert a job with a URL that already exists, the UPSERT updates the existing record instead of creating a duplicate. This catches cross-source duplicates that share the same application URL.
Level 3: MD5 content hash for cross-source matching. For jobs posted on multiple boards with different URLs but identical content, the content_hash column catches them. Same job title + same company = same hash, regardless of which site it came from. This catches about 85%+ of remaining duplicates.
```python
from psycopg2.extras import execute_values


def upsert_jobs(conn, jobs: list[dict]):
    """Insert jobs with three-level deduplication."""
    # Level 2: the unique constraint on apply_link turns re-scraped
    # jobs into updates instead of duplicate rows.
    query = """
        INSERT INTO jobs (title, company, apply_link, content_hash, source, scraped_at)
        VALUES %s
        ON CONFLICT (apply_link) DO UPDATE SET
            scraped_at = EXCLUDED.scraped_at,
            title = EXCLUDED.title
        WHERE jobs.scraped_at < EXCLUDED.scraped_at
    """
    values = [
        (j["title"], j["company"], j["apply_link"],
         j["content_hash"], j["source"], j["scraped_at"])
        for j in jobs
    ]
    execute_values(conn.cursor(), query, values)
    conn.commit()
```
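That code covers Levels 1 and 2. How the Level 3 pass is wired up isn't shown above; a minimal sketch of the idea, assuming the schema from the earlier snippets:

```python
def find_cross_source_duplicates(conn) -> list[tuple]:
    """Level 3 as a separate pass: list content hashes that appear
    under more than one apply URL, i.e. the same title + company
    scraped from different boards."""
    cur = conn.cursor()
    cur.execute("""
        SELECT content_hash,
               COUNT(DISTINCT apply_link) AS copies
        FROM jobs
        GROUP BY content_hash
        HAVING COUNT(DISTINCT apply_link) > 1
        ORDER BY copies DESC
    """)
    return cur.fetchall()
```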
Is it perfect? No. Some duplicates still slip through — jobs with slightly different company names ("Kapital Bank" vs "KAPITAL BANK ASC" vs "Joint-Stock Commercial Bank Kapital") are hard to catch with simple hashing. But it handles the vast majority of cases without any machine learning or fuzzy matching.
The Database
45 tables. 50+ indexes. PostgreSQL on Neon.
The schema evolved organically. I didn't design it upfront. I added tables when I needed them: jobs, companies, sources, scraper_runs, candidates, blog_posts, categories, tags, analytics, error_logs. Some tables are elegant. Some are held together with duct tape and good intentions.
The Failures
I could write a whole article just about the things that went wrong. Here are the highlights.
I Lost 3,000 Jobs
Early on, I had a scraper bug that deleted jobs instead of updating them. A bad SQL query — a DELETE where there should've been an UPDATE. By the time I noticed, 3,000 job listings were gone. Poof.
I didn't have backups. Of course I didn't have backups. I was a fraud analyst writing scrapers at midnight. Backups weren't on my mind.
After that day, I implemented soft deletes everywhere. Nothing gets hard-deleted from the database anymore. When a job expires, it gets a deleted_at timestamp. When a scraper "removes" a job, it's just a soft delete. I can always recover.
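The pattern is one column and one changed filter. A minimal sketch, assuming a nullable deleted_at timestamp on the jobs table:

```python
def expire_job(conn, apply_link: str):
    """Soft-delete: stamp the row instead of removing it."""
    cur = conn.cursor()
    cur.execute(
        "UPDATE jobs SET deleted_at = NOW() WHERE apply_link = %s",
        (apply_link,),
    )
    conn.commit()

# Live listings are simply rows that were never stamped:
#   SELECT * FROM jobs WHERE deleted_at IS NULL;
# Recovering from a bad run means setting deleted_at back to NULL.
```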
Lesson learned the expensive way: never trust a DELETE in production. Ever.
Cloudflare Blocks Everything
About 15% of the sites I scrape are behind Cloudflare's bot protection. And Cloudflare is good at its job. Really good.
Some days, a scraper that worked fine yesterday just... stops. Cloudflare updated their fingerprinting, or the site turned on a stricter protection level, or my IP got flagged for too many requests. The scraper returns a 403 or a Cloudflare challenge page, and I have to figure out what changed.
I've tried rotating user agents. I've tried adding random delays. I've tried mimicking browser TLS fingerprints. Some of it works, some of the time. It's an arms race with no permanent solution.
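For what it's worth, the user-agent and delay part is simple; the TLS fingerprint side needs specialized clients and isn't shown here. A sketch, with a hypothetical user-agent pool:

```python
import random
import time

import httpx

# Hypothetical pool; in practice you want a longer, regularly refreshed list.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36",
]


def polite_get(url: str, max_retries: int = 3) -> httpx.Response:
    """GET with a rotated user agent and growing random delays on 403s."""
    for attempt in range(max_retries):
        resp = httpx.get(
            url,
            headers={"User-Agent": random.choice(USER_AGENTS)},
            timeout=30,
        )
        if resp.status_code != 403:
            return resp
        # Back off 2-8 seconds, scaled by how many times we've been blocked.
        time.sleep(random.uniform(2, 8) * (attempt + 1))
    return resp
```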
The honest truth? 15% of my scrapers are broken at any given time. I fix them, and new ones break. It's a constant maintenance treadmill.
3-5 Hours Per Week, Every Week
This is the part nobody tells you about running scrapers. They break. Constantly. Sites change their HTML structure. APIs add authentication. Rate limits get stricter. Cloudflare rotates challenges. Companies redesign their career pages.
I spend 3-5 hours every week just maintaining scrapers. Not building new features. Not growing the product. Just keeping existing scrapers alive. It's the most tedious part of running birjob.com, and it never ends.
If you're thinking about building a scraper-based product, budget for this. It's not a one-time build cost. It's a permanent operational cost.
The SEO Grind
459 blog articles. Four hundred and fifty-nine.
I wrote bilingual content — Azerbaijani and English — covering job search tips, career advice, company reviews, salary guides. All for SEO. All to get Google to notice birjob.com.
Was it worth it? Probably. Organic search is our biggest traffic channel. But writing 459 articles as a solo founder while also maintaining 99 scrapers and building features? I don't recommend it for your mental health.
Some of those articles are good. Some are glorified keyword-stuffing. I'm not proud of all of them. But they rank, and they bring users, and that's what matters when you're bootstrapping.
The Backlink Hack
Here's something I haven't seen other founders talk about openly: I built an entire second website purely as a backlink engine.
reklamyeri.az is an advertising and media news site. 182 articles across 20 categories covering advertising trends, marketing strategies, and digital media in Azerbaijan. It looks like a legitimate content site because it is one — the articles are real and useful.
But the primary purpose is strategic. Every article on reklamyeri.az contains contextual links back to birjob.com. "Looking for marketing jobs? Check out the latest listings on birjob.com." Natural, relevant backlinks from a domain with its own authority.
Bought through a link-building service, those 182 contextual backlinks would cost an estimated $15,000+. I built the site for the cost of a domain registration and the time to write the content.
Is this gray hat? Maybe. But every media company does this. Every SaaS blog exists partly for SEO juice. I'm just transparent about it.
The key is that reklamyeri.az provides genuine value. The articles are useful. The site has its own audience. The backlinks are contextual and relevant, not spammy. Google's guidelines say links should be editorially placed and relevant. These are.
The $0 Startup Playbook
If I had to distill everything I've learned into a framework for other bootstrappers, it would be this:
1. Pick a local market problem.
Global markets are crowded. But your local market? Nobody's building for it. Azerbaijan had no job aggregator. Your country probably has a similar gap — maybe in real estate listings, restaurant reviews, government services, or local classifieds. Local problems are easier to monopolize because the big players don't care about markets with 10 million people.
2. Abuse free tiers.
In 2026, you can run a legitimate SaaS on free tiers alone if you're clever. GitHub Actions for compute. Neon or Supabase for databases (Supabase gives you 500MB and 50K monthly active users free, though it pauses after 1 week of inactivity). Cloudflare for CDN. Resend for email. Sentry for error tracking. The only thing you need to pay for is a domain name and maybe hosting once you outgrow free tiers.
3. Ship weekly.
Not monthly. Not "when it's ready." Weekly. Every Friday, something new goes live. A new scraper. A filter feature. A blog post. An email notification. Small increments compound. After a year of weekly ships, you've made 52 improvements. That's a product.
4. Don't raise money until you have users.
This is the most counterintuitive advice for people in ecosystems without VC. You think you need money to start. You don't. You need money to scale. And you shouldn't be thinking about scale until you have users who would be angry if your product disappeared.
I ran birjob.com for over a year before I even thought about monetization. The product needed to be good first. Money is a distraction when you're still figuring out what to build.
5. Automate the boring stuff.
My entire pipeline runs without me touching anything. GitHub Actions triggers at 08:00 UTC. Scrapers run. Data deduplicates. The frontend updates. A Telegram bot pings me if something fails (there's a sketch of that call after this list). I wake up, check the notification, and either everything worked or I know exactly what broke.
If you're doing anything manually more than twice, write a script. Your time is the most expensive resource you have.
6. Write about it.
Not just for SEO. Writing forces you to think clearly about what you're building. Every blog post I wrote for birjob.com made me understand the product better. And the side effect is organic traffic.
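The Telegram ping from point 5 is a single HTTP call against the real Bot API. The environment variable names here are my guesses, not necessarily what birjob uses:

```python
import os

import httpx

# sendMessage is the real Bot API endpoint; the env var names are hypothetical.
TOKEN = os.environ["TELEGRAM_BOT_TOKEN"]
CHAT_ID = os.environ["TELEGRAM_CHAT_ID"]


def ping(text: str):
    """Send a one-line failure alert to my Telegram chat."""
    httpx.post(
        f"https://api.telegram.org/bot{TOKEN}/sendMessage",
        json={"chat_id": CHAT_ID, "text": text},
        timeout=10,
    )

# e.g. inside a scraper's exception handler:
#   ping(f"scraper {name} failed: {exc}")
```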
What I Actually Think
I'm going to be direct here because I think a lot of startup advice is dishonest.
VC is a tool, not a goal. Somewhere along the way, "raising a round" became the definition of startup success. It's not. It's a financing mechanism. You take VC money when you have product-market fit and need to grow faster than revenue allows. Taking it before that point means you're paying for the privilege of having someone else's priorities on your roadmap.
$25/month means you literally cannot run out of money. This is the most underrated advantage of bootstrapping. My burn rate is the cost of a pizza dinner. If birjob.com gets zero users next month, I lose $25. If a VC-backed competitor gets zero users next month, they lose $50,000 and have to explain it to a board. The asymmetry is massive.
The cost of failure is a weekend of debugging, not a board meeting. When my scrapers break — and they break often — I spend a Saturday fixing them. I don't write a post-mortem. I don't schedule a review meeting. I don't update investors. I just fix it. This speed of iteration is impossible at scale, and it's the biggest advantage small bootstrapped products have.
The best time to bootstrap is when you can't afford not to. I didn't choose bootstrapping because I read a Paul Graham essay about it. I chose it because there were no VCs in Baku and I had $25 to spare. Constraints breed creativity. When you can't buy your way out of a problem, you build your way out. That makes you a better engineer.
Look. I'm not saying everyone should bootstrap. If you're in San Francisco with access to tier-1 VCs and you're building something that needs massive upfront investment — take the money. But if you're in Baku, or Lagos, or Dhaka, or any city where the VC infrastructure doesn't exist, know this: you don't need it. The tools are free. The knowledge is free. The only thing it costs is your time and stubbornness.
birjob.com is proof. 10,000+ active job listings. 30,000+ candidates. 99 scraper scripts. One person. $25 a month.
The 14 browser tabs are closed. They have been for a while now.
Sources
- Azerbaijan Startup Ecosystem — StartupBlink
- Baku Startup Ecosystem — StartupBlink
- Benchmarking Metrics for Bootstrapped SaaS Companies — SaaS Capital 2025
- About Billing for GitHub Actions — GitHub Docs
- Indie Hacker SaaS Stack 2026 — TLDL
- GitHub Actions Pricing Reduced — GitHub Blog
- Bootstrapping a Startup — FounderPath