Senior Crawler

Experience: 5 years


Posted: 1 week ago | Platform: LinkedIn


Work Mode: Remote

Job Type: Full Time

Job Description

Job Title: Senior Crawler

Location: Fully Remote


Company Overview:

Founded in 2019, Elife Transfer is a fast-growing startup headquartered in San Francisco, California, at the heart of Silicon Valley. We are an all-in-one global ground transportation marketplace, enabling travelers to book airport transfers, ride-hailing, shared rides, private cars and rail tickets. Trusted by over 40 million travelers across 182+ countries, we are rapidly scaling to become the world’s go-to platform for seamless, end-to-end ground mobility.


Role Overview:

A Crawler Engineer is primarily responsible for designing and developing web crawler systems to scrape, clean, and analyze data from various platforms. This position requires a deep understanding of how web crawlers work, familiarity with common anti-crawling techniques and countermeasures, and the ability to handle large-scale data processing.


Key Responsibilities:

  • Design and develop efficient web crawler systems to meet business data scraping requirements.
  • Conduct scraping strategy analysis on target websites and formulate optimal scraping plans.
  • Maintain and optimize existing crawler systems to improve data scraping speed and accuracy.
  • Clean and process scraped data to ensure data quality and availability.
  • Keep track of and research the latest crawler technologies and anti-crawling mechanisms to continuously enhance the performance of crawler systems.


Requirements:

  • 5 years of experience in a similar position.
  • Strong preference for candidates with a background in ride-hailing or logistics platforms (e.g., Uber, Didi, Lyft, Grab, Bolt, Ola).
  • Bachelor's degree or above in Computer Science or a related field, with a solid foundation in computer science.
  • Proficient in Python programming; familiar with commonly used crawler frameworks (e.g., Scrapy, PySpider) and information-extraction techniques such as regular expressions (see the brief sketch after this list).
  • Familiar with HTTP/HTTPS protocols, cookie mechanisms, and web scraping principles.
  • Proficient in JavaScript and XPath.
  • Proficient in databases such as MySQL and BigQuery.
  • Knowledge of common anti-crawling techniques and countermeasures, with the ability to tackle a variety of anti-crawling challenges.
  • Experience designing and developing distributed systems; familiar with multithreading, asynchronous programming, and related technologies.
  • Strong problem-solving skills and a collaborative mindset; able to work under pressure.
  • Preference will be given to candidates with experience scraping data from large platforms and handling massive datasets.
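
For context, here is a minimal, hypothetical sketch of the kind of work these requirements describe: a Scrapy spider that selects listings with XPath and cleans prices with a regular expression. The target URL, selectors, and field names are placeholders, not details of Elife's actual systems.

```python
# Hypothetical sketch only: target site, selectors, and fields are placeholders.
import re

import scrapy


class FareSpider(scrapy.Spider):
    name = "fare_spider"
    start_urls = ["https://example.com/airport-transfers"]  # placeholder URL

    custom_settings = {
        # Polite defaults; real settings depend on the target site's policies.
        "DOWNLOAD_DELAY": 1.0,
        "CONCURRENT_REQUESTS_PER_DOMAIN": 4,
    }

    def parse(self, response):
        # Extract each listing row with XPath, then clean the price with a regex.
        for row in response.xpath('//div[@class="listing"]'):
            raw_price = row.xpath('.//span[@class="price"]/text()').get(default="")
            match = re.search(r"(\d+(?:\.\d+)?)", raw_price)
            yield {
                "route": row.xpath('.//h3/text()').get(default="").strip(),
                "price": float(match.group(1)) if match else None,
            }

        # Follow pagination if present.
        next_page = response.xpath('//a[@rel="next"]/@href').get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```

A spider like this could be run standalone with `scrapy runspider fare_spider.py -o fares.json`, with delays, concurrency, and proxying tuned per target site.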


Our Apps:

  • Single-page web app that runs on both desktop and mobile devices.
  • Python/MySQL-based RESTful services hosted on Amazon Web Services (a minimal sketch follows).
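
For illustration only, a minimal sketch of a Python/MySQL RESTful endpoint of the general shape described above. Flask and PyMySQL are illustrative choices, and the route, schema, and connection details are hypothetical; the posting specifies only Python, MySQL, and AWS.

```python
# Hypothetical sketch only: framework choice, schema, and credentials are placeholders.
import pymysql
from flask import Flask, jsonify

app = Flask(__name__)


def get_connection():
    # Placeholder connection details; real values would come from AWS config/secrets.
    return pymysql.connect(
        host="db.example.internal",
        user="app",
        password="change-me",
        database="transfers",
        cursorclass=pymysql.cursors.DictCursor,
    )


@app.route("/api/transfers/<int:transfer_id>", methods=["GET"])
def get_transfer(transfer_id):
    # Fetch one record and return it as JSON; 404 if it does not exist.
    conn = get_connection()
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT id, origin, destination, price FROM transfers WHERE id = %s",
                (transfer_id,),
            )
            row = cur.fetchone()
    finally:
        conn.close()
    if row is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(row)


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

Run locally (e.g., `python app.py`, a hypothetical filename), this exposes GET /api/transfers/<id> returning a single JSON record.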
