⚡ Google Cloud & Web Scraping Engineering Intern
Remote – India | 3–6 Months | Perform or Don’t Apply

WHO WE ARE
We’re not another “real estate company.” We’re an automation company disguised as one, building data infrastructure that eats the real estate market alive. Every property lead, every contact, every data point is scraped, enriched, and processed by systems you’ll help build.

WHAT YOU’LL DO
You won’t “assist.” You’ll own. You’ll deploy scraping clusters across 100+ Google Cloud instances, optimize cost and throughput, and build anti-fragile automation pipelines. If you’re still thinking about “college projects,” this isn’t for you. If you want to see your code touch millions of records in live systems, welcome aboard.
- Architect and deploy high-scale web scrapers (Python + Playwright + GCP); a sketch of this stack follows the posting.
- Manage distributed queues, proxies, and CAPTCHA bypass.
- Monitor costs, speed, and data integrity across large pipelines.
- Work directly with founders and AI engineers, with no middle layers.

THE KIND OF PERSON WHO THRIVES HERE
- You move fast. You don’t wait for permission.
- You think in systems. You see a failure once and fix it forever.
- You’re resource-hungry. You’ll Google, code, and test until it works.
- You’re obsessed with results. You measure yourself by output, not effort.

MINIMUMS
- Solid Python fundamentals.
- Comfort with APIs, async, and headless browsers.
- Understanding of Google Cloud (Compute Engine, Cloud Functions, Cloud Storage); a storage sketch also follows below.

BONUSES
- Experience with Supabase, n8n, or data enrichment.
- Previous scraping at scale (anything beyond hobby level).
- Interest in AI automation and real-world product impact.

WHAT YOU GET
- Real infrastructure experience.
- Founders who actually review your code.
- Potential long-term contract if you dominate.
- Remote, flexible hours, but expect accountability.

Job Types: Full-time, Internship, Volunteer
Contract length: 12 months
Pay: ₹18,106.50 - ₹93,186.71 per month
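For context, the stack named under WHAT YOU’LL DO (Python + Playwright, run headless on GCP instances) implies scrapers roughly along these lines. This is a minimal illustrative sketch, not the company’s actual code: the target URL, CSS selector, and record schema are hypothetical placeholders.

```python
# Minimal sketch of a Playwright-based scraper of the kind the posting
# describes. The URL, selector, and record schema are hypothetical
# placeholders, not details from the job ad.
import asyncio
import json

from playwright.async_api import async_playwright

LISTING_URL = "https://example.com/listings"  # hypothetical target
CARD_SELECTOR = ".listing-card"               # hypothetical selector


async def scrape(url: str) -> list[dict]:
    async with async_playwright() as p:
        # Headless Chromium keeps the footprint small on cloud instances.
        browser = await p.chromium.launch(headless=True)
        page = await browser.new_page()
        await page.goto(url, timeout=30_000)
        # Pull one text field per listing card; a real pipeline would
        # extract structured fields and validate them before storage.
        titles = await page.locator(CARD_SELECTOR).all_inner_texts()
        await browser.close()
    return [{"title": t.strip()} for t in titles]


if __name__ == "__main__":
    records = asyncio.run(scrape(LISTING_URL))
    print(json.dumps(records[:5], indent=2))
```

At the scale the posting describes, each worker would pull URLs from a shared queue and rotate proxies rather than fetch a single hard-coded page.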
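The MINIMUMS section asks for familiarity with Cloud Storage; a common pattern for pipelines like this is writing each scraped batch to a bucket as newline-delimited JSON. The bucket and object names below are hypothetical, and the sketch assumes the google-cloud-storage client library is installed and application-default credentials are configured.

```python
# Sketch of persisting a scraped batch to Google Cloud Storage as
# newline-delimited JSON. Bucket and object names are hypothetical;
# assumes `pip install google-cloud-storage` and default credentials.
import json
from datetime import datetime, timezone

from google.cloud import storage

BUCKET_NAME = "scraper-output-bucket"  # hypothetical bucket


def upload_batch(records: list[dict], bucket_name: str = BUCKET_NAME) -> str:
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    # Timestamped object names keep batches immutable and easy to replay
    # when checking data integrity downstream.
    object_name = f"listings/{datetime.now(timezone.utc):%Y/%m/%d/%H%M%S}.jsonl"
    payload = "\n".join(json.dumps(r) for r in records)
    bucket.blob(object_name).upload_from_string(
        payload, content_type="application/x-ndjson"
    )
    return object_name


if __name__ == "__main__":
    print(upload_batch([{"title": "example listing"}]))
```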