
315 Scrapy Jobs - Page 3

Set up a Job Alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

2.0 - 4.0 years

2 - 4 Lacs

Himatnagar

Work from Office

Responsibilities:
* Design, develop & maintain web scrapers using the Scrapy framework.
* Optimize crawl speed & handle anti-scraping measures.
* Ensure data accuracy & compliance with privacy policies.
Work from home
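
For context, a minimal sketch of the kind of Scrapy spider such a role typically maintains; the start URL, CSS selectors, and item fields below are hypothetical placeholders, not part of this posting:

```python
# A minimal illustrative spider; the URL, selectors, and fields are hypothetical.
import scrapy


class ProductSpider(scrapy.Spider):
    name = "products"
    start_urls = ["https://example.com/catalog"]  # hypothetical listing page

    custom_settings = {
        "DOWNLOAD_DELAY": 1.0,         # be polite; tune per target site
        "AUTOTHROTTLE_ENABLED": True,  # adapt crawl speed to server load
    }

    def parse(self, response):
        # Extract one item per product card (selectors are assumptions).
        for card in response.css("div.product-card"):
            yield {
                "title": card.css("h2.title::text").get(),
                "price": card.css("span.price::text").get(),
                "url": response.urljoin(card.css("a::attr(href)").get()),
            }
        # Follow pagination if a "next" link is present.
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```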

Posted 1 week ago

Apply

0.0 - 2.0 years

0 - 0 Lacs

Mohali, Punjab

On-site

Male applicants are preferred. We are looking for an enthusiastic and proactive Python Developer with core Python expertise and hands-on experience in Generative AI (GenAI) to join our development team.
Experience Required: 2-3 Years. Mode of Work: On-Site Only (Mohali, Punjab). Mode of Interview: Face to Face (On-Site). Contact for Queries: +91-9872993778 (Mon–Fri, 11 AM – 6 PM). Note: This number will be unavailable on weekends and public holidays.
Key Responsibilities: Backend Development: Assist in the development of clean, efficient, and scalable Python applications to meet business needs. Generative AI Experience: Working knowledge and experience in building applications using GenAI technologies is mandatory. API Integration: Support the creation, management, and optimization of RESTful APIs to connect backend and frontend components. Collaboration: Work closely with frontend developers to integrate backend services into ReactJS applications, ensuring smooth data flow and functionality. Testing and Debugging: Help with debugging, troubleshooting, and optimizing applications for performance and reliability. Code Quality: Write readable, maintainable, and well-documented code while following best practices. Learning and Development: Continuously enhance your skills by learning new technologies and methodologies.
Required Skills and Experience: Problem Solving: Strong analytical skills with an ability to identify and resolve issues effectively; previous working experience with LLMs and AI agents is a plus. Teamwork: Ability to communicate clearly and collaborate well with cross-functional teams. Programming Languages: Python (Core and Advanced), JavaScript, HTML, CSS. Frameworks: Django, Flask, FastAPI, LangChain. Libraries & Tools: Pandas, NumPy, Selenium, Scrapy, BeautifulSoup, Git, Postman, OpenAI API, REST APIs. Databases: MySQL, PostgreSQL, SQLite. Cloud & Deployment: Hands-on experience with AWS services (EC2, S3, etc.); building and managing scalable cloud-based applications. Automation: Familiarity with Retrieval-Augmented Generation (RAG) architecture; automation of workflows and intelligent systems using Python.
Preferred Qualifications: Education: A degree in Computer Science, Software Engineering, or a related field (or equivalent practical experience).
Job Types: Full-time, Permanent. Pay: ₹35,000.00 - ₹40,000.00 per month. Experience: Python: 2 years (Preferred). Work Location: In person.

Posted 1 week ago

Apply

2.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Title: Senior Python Developer – Web Scraping & Automation. Company: Actowiz Solutions. Location: Ahmedabad. Job Type: Full-time. Working Days: 5 Days a Week.
About Us: Actowiz Solutions is a leading provider of data extraction, web scraping, and automation solutions. We empower businesses with actionable insights by delivering clean, structured, and scalable data through cutting-edge technology. Join our fast-growing team and lead projects that shape the future of data intelligence.
Role Overview: We are seeking an experienced Senior Python Developer with proven expertise in Scrapy (must-have) and strong skills in web scraping and automation. The ideal candidate will design, develop, and optimize large-scale scraping solutions that power data-driven decision-making.
Key Responsibilities:
• Design, develop, and maintain scalable web scraping frameworks using Scrapy (mandatory).
• Work with additional libraries/tools such as BeautifulSoup, Selenium, Playwright, Requests, etc.
• Implement robust error handling, data parsing, and data storage mechanisms (JSON, CSV, SQL/NoSQL databases).
• Build and optimize asynchronous scraping workflows and handle multithreading/multiprocessing.
• Collaborate with product managers, QA, and DevOps teams to ensure timely delivery.
• Research and adopt new scraping technologies to improve performance, scalability, and efficiency.
Requirements:
• 2+ years of experience in Python development with Scrapy expertise (must-have).
• Proficiency with automation libraries such as Playwright or Selenium.
• Experience with REST APIs, asynchronous programming, and concurrency.
• Familiarity with databases (SQL/NoSQL) and cloud-based data pipelines.
• Strong problem-solving skills and ability to meet deadlines in an Agile environment.
Preferred Qualifications:
• Knowledge of DevOps tools such as Docker, GitHub Actions, or CI/CD pipelines.
Benefits:
• Competitive salary.
• 5-day work week (Monday–Friday).
• Flexible and collaborative work environment.
• Ample opportunities for career growth and skill development.
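
To make the crawl-speed and error-handling responsibilities concrete, a hedged sketch of Scrapy project settings a crawler like this might start from; all values are illustrative defaults to be tuned per target site, not Actowiz's actual configuration:

```python
# settings.py — illustrative starting point only.
BOT_NAME = "crawler"

# Concurrency: Scrapy is already asynchronous (Twisted); these knobs bound it.
CONCURRENT_REQUESTS = 32
CONCURRENT_REQUESTS_PER_DOMAIN = 8

# Politeness / adaptive throttling.
DOWNLOAD_DELAY = 0.5
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_TARGET_CONCURRENCY = 4.0

# Robust error handling: retry transient failures and common blocking codes.
RETRY_ENABLED = True
RETRY_TIMES = 3
RETRY_HTTP_CODES = [429, 500, 502, 503, 504]

# Export scraped items to JSON Lines (CSV or a database pipeline also work).
FEEDS = {"output/items.jsonl": {"format": "jsonlines", "overwrite": True}}
```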

Posted 1 week ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Title: Lead Data Scraping Engineer. Location: Ahmedabad/WFO. Experience Level: Senior (4+ years). Employment Type: Full-time.
Job Summary: We are seeking a highly skilled and experienced Lead Data Scraping Engineer to join our team. The ideal candidate will have a minimum of 4 years of hands-on experience in IT scraping, with at least 2 years leading a team of 5+ developers. This role requires deep technical knowledge in advanced scraping techniques, reverse engineering, automation, and leadership skills to drive the team towards success.
Key Responsibilities:
• Design and develop scalable data scraping solutions using tools like Scrapy and Python libraries.
• Lead and mentor a team of 5+ developers, managing project timelines and deliverables.
• Implement advanced blocking and captcha-solving techniques to bypass scraping restrictions.
• Conduct source code reverse engineering and automate web and app interactions.
• Manage proxies, IP rotation, and SSL unpinning to ensure effective scraping.
• Maintain and improve API integrations and data pipelines.
• Ensure code quality through effective version control, error handling, and documentation.
• Collaborate with cross-functional teams for project planning and execution.
• Monitor performance and provide solutions under high-pressure environments.
Required Skills and Experience:
• Data Scraping: Minimum 4 years in the IT scraping industry
• Leadership: Minimum 2 years leading a team of 5+ developers
• Scraping Tools: Scrapy, Threading, requests, web automation
• Technical Proficiency: Advanced Python; captcha solving and blocking handling; source reverse engineering; proxy management & IP rotation; app automation, SSL unpinning, Frida; API management, version control systems; error handling, SQL, MongoDB, Pandas
Leadership Skills:
• Basic project management
• Moderate documentation
• Team handling
• Pressure management
• Flexibility and adaptability
• High accountability
Preferred (Good to Have):
• Experience with Linux
• Knowledge of Appium, Fiddler, Burp Suite
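
As an illustration of the proxy management and IP rotation mentioned above, a minimal sketch of a Scrapy downloader middleware that rotates requests across a proxy pool; the class name and proxy URLs are placeholders:

```python
# middlewares.py — illustrative proxy-rotation middleware; proxy URLs are placeholders.
import random


class RotatingProxyMiddleware:
    # In practice the pool would come from settings or a provider API, not be hard-coded.
    PROXIES = [
        "http://user:pass@proxy1.example.com:8000",
        "http://user:pass@proxy2.example.com:8000",
    ]

    def process_request(self, request, spider):
        # Attach a randomly chosen proxy to every outgoing request;
        # Scrapy's built-in HttpProxyMiddleware honours request.meta["proxy"].
        request.meta["proxy"] = random.choice(self.PROXIES)


# Enable it in settings.py (the priority number is a reasonable default, not mandated):
# DOWNLOADER_MIDDLEWARES = {"myproject.middlewares.RotatingProxyMiddleware": 350}
```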

Posted 1 week ago

Apply

1.0 - 3.0 years

0 Lacs

Dwarka, Delhi, India

On-site

Position: Data Mining Analyst. This is a Delhi-based position, work from office only! Work Location: Sector 23 Dwarka, Delhi.
We are seeking a skilled Data Mining Analyst with expertise in automating data extraction processes from web platforms. The ideal candidate will be experienced in Python, Selenium, Pandas, SQL, and APIs, with the ability to design and implement efficient and scalable data scraping systems. If you have a passion for working with data and a solid understanding of web technologies, we want to hear from you!
Key Responsibilities: Design, develop, and maintain robust web scraping solutions to extract structured and unstructured data from various websites and APIs. Use tools like Python, Selenium, BeautifulSoup, Scrapy, and Pandas for data scraping and processing. Build and manage automated scripts to scrape dynamic websites, including handling JavaScript-driven content. Optimize scraping workflows to ensure data extraction is efficient, accurate, and scalable. Work with APIs to gather and integrate data, ensuring proper rate limits and authentication handling. Clean, preprocess, and store extracted data in databases (SQL) or cloud-based systems. Collaborate with data analysts and other stakeholders to provide required data for further analysis and reporting. Debug and troubleshoot issues in scraping pipelines and scripts. Ensure compliance with ethical data scraping standards, including legal considerations like website terms of use and robots.txt policies.
Required Skills & Qualifications: Experience: 1-3 years of hands-on experience in web scraping and data extraction. Technical Skills: Strong proficiency in Python. Experience with web scraping frameworks and libraries like Selenium, Scrapy, BeautifulSoup, and Requests. Experience with data manipulation libraries like Pandas. Familiarity with API integration (REST, GraphQL, etc.). Proficiency in SQL for data querying, database design, and managing large datasets. Knowledge of JavaScript and front-end technologies to work with dynamic web pages. Experience with version control (Git) and collaborative development environments. Other Skills: Problem-solving skills with attention to detail. Ability to write clean, maintainable code and automate workflows. Good understanding of HTTP, HTML, CSS, and JavaScript. Familiarity with cloud services (AWS, Azure, GCP) is a plus.
Nice to Have: Experience with cloud-based scraping tools or services (e.g., AWS Lambda, Google Cloud Functions). Familiarity with distributed scraping and data pipeline management. Experience with large-scale data collection and storage systems. Knowledge of ethical and legal issues related to web scraping.
About Nuvoretail (www.nuvoretail.com): Nuvoretail Enlytical Technologies Private Limited is an e-commerce analytics and automation company. Our proprietary digital shelf analytics and automation platform, Enlytical.ai, helps e-commerce brands solve the complexities of today's e-commerce landscape by offering a unified and all-encompassing business view of the various aspects of e-commerce. Our platform leverages insights drawn from multiple data points that help our clients win in e-commerce by gaining a competitive edge with data-driven insights for sharper decision-making. The insights cover all aspects of e-commerce such as digital product portfolio analysis, supply chain analytics, e-commerce operations automation, pricing and competitor benchmarking, and Amazon advertising automation using our proprietary algorithms.
As a leading e-commerce service provider, we offer comprehensive end-to-end e-commerce solutions to brands, both in India and abroad. From preparing a road map for our clients' e-commerce success stories to assisting them in increasing their online sales, we do everything via our diverse e-commerce services and bespoke strategies and technology. Our services span the brand's e-commerce enablement, including content and digital asset creation for product listings, on-platform and off-platform marketing services with deep expertise in Amazon Marketing Services (AMS), Amazon SEO through keyword research, e-commerce operations across various platforms, website development, social media marketing, and AI-enabled e-commerce MIS dashboards.
Awards & Recognition: Thanks to the faith reposed in us by our clients, NuvoRetail has been featured as "The Most Promising Ecommerce Technology Service Providers in India 2020" by CIOReviewIndia Magazine. Our leadership is often acknowledged by leading e-commerce services, digital marketing, consulting, and other e-commerce programs around the world. We are now one of the very few companies in India that have become an Amazon Ads Advanced Partner.
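
To illustrate the dynamic, JavaScript-driven scraping this role describes, a hedged Selenium sketch; the URL and selectors are hypothetical, and a locally available Chrome/chromedriver is assumed:

```python
# Illustrative only: URL and CSS selectors are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")  # run without a visible browser window
driver = webdriver.Chrome(options=options)

try:
    driver.get("https://example.com/listings")
    # Wait until the JavaScript-rendered cards actually appear in the DOM.
    WebDriverWait(driver, 15).until(
        EC.presence_of_all_elements_located((By.CSS_SELECTOR, "div.listing"))
    )
    rows = [
        {"title": el.find_element(By.CSS_SELECTOR, "h3").text,
         "price": el.find_element(By.CSS_SELECTOR, ".price").text}
        for el in driver.find_elements(By.CSS_SELECTOR, "div.listing")
    ]
    print(rows)
finally:
    driver.quit()
```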

Posted 1 week ago

Apply

3.0 - 7.0 years

8 - 18 Lacs

Mohali

Remote

A Python Developer is responsible for building, debugging, and implementing application projects using the Python programming language, and for developing program specifications and coded modules according to specifications and client standards.
Responsibilities:
Advanced Python Programming: Extensive experience in Python, with a deep understanding of Python principles, design patterns, and best practices. Proficiency in developing scalable and efficient Python code, with a focus on automation, data processing, and backend services. Demonstrated ability with automation libraries like PyAutoGUI for GUI automation tasks, enabling the automation of mouse and keyboard actions. Experience with Selenium for web automation: capable of automating web browsers to mimic user actions, scrape web data, and test web applications.
Python Frameworks and Libraries: Strong experience with popular Python frameworks and libraries relevant to data processing, web application development, and automation, such as Flask or Django for web development, Pandas and NumPy for data manipulation, and Celery for task scheduling.
SQL Server Expertise: Advanced knowledge of SQL Server management and development.
API Development and Integration: Experience in developing and consuming APIs. Understanding of API security best practices. Familiarity with integrating third-party services and APIs into the existing ecosystem.
Version Control and CI/CD: Proficiency in using version control systems, such as Git. Experience with continuous integration and continuous deployment (CI/CD) pipelines.
Unit Testing and Debugging: Strong understanding of testing practices, including unit testing and integration testing. Experience with Python testing. Skilled in debugging and performance profiling.
Containerization and Virtualization: Familiarity with containerization and orchestration tools, such as Docker and Kubernetes, to enhance application deployment and scalability.
Requirements & Skills:
Analytical Thinking: Ability to analyze complex problems and break them down into manageable parts. Strong logical reasoning and troubleshooting skills.
Communication: Excellent verbal and written communication skills. Ability to effectively articulate technical challenges and solutions to both technical and non-technical team members.
Team Collaboration: Experience working in agile development environments. Ability to work collaboratively in cross-functional teams and with stakeholders from different backgrounds.
Continuous Learning: A strong desire to learn new technologies and frameworks. Keeping up to date with industry trends and advancements in healthcare RCM, AI, and automation technologies.
Additional Preferred Skills: Industry-specific knowledge is a plus: familiarity with healthcare industry standards and regulations, such as HIPAA, is highly advantageous. Understanding of healthcare revenue cycle management processes and challenges. Experience with healthcare data formats and standards (e.g., HL7, FHIR) is beneficial.
Educational Qualifications: Bachelor's degree in a related field. Relevant technical certifications are a plus.

Posted 1 week ago

Apply

2.0 years

8 - 18 Lacs

Delhi

On-site

Job description: We're looking for a hands-on Data Engineer to manage and scale our data scraping pipelines across 60+ websites. The job involves handling OCR-processed PDFs, ensuring data quality, and building robust, self-healing workflows that fuel AI-driven insights.
You'll Work On: Managing and optimizing Airflow scraping DAGs. Implementing validation checks, retry logic & error alerts. Cleaning and normalizing OCR text (Tesseract / AWS Textract). Handling deduplication, formatting, and missing data. Maintaining MySQL/PostgreSQL data integrity. Collaborating with ML engineers on downstream pipelines.
What You Bring: 2–5 years of hands-on experience in Python data engineering. Experience with Airflow, Pandas, and OCR tools. Solid SQL skills and schema design (MySQL/PostgreSQL). Comfort with CSVs and building ETL pipelines.
Required: 1. Scrapy or Selenium experience 2. CAPTCHA handling 3. Experience with PyMuPDF and Regex 4. AWS S3 5. LangChain, LLMs, FastAPI 6. Streamlit 7. Matplotlib
Job Type: Full-time, day shift. Pay: ₹70,000.00 - ₹150,000.00 per month. Work Location: In person.
Application Question(s): Total years of experience in web scraping / data extraction. Have you worked with large-scale data pipelines? Are you proficient in writing complex Regex patterns for data extraction and cleaning? Have you implemented or managed data pipelines using tools like Apache Airflow? Years of experience with PDF parsing and OCR tools (e.g., Tesseract, Google Document AI, AWS Textract, etc.). Years of experience handling complex PDF tables with merged rows, rotated layouts, or inconsistent formatting. Are you willing to relocate to Delhi if selected? Current CTC. Expected CTC.
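
A hedged sketch of what an Airflow scraping DAG with retry logic and error alerts can look like; the DAG id, schedule, task callables, and alert address are assumptions, not this employer's pipeline:

```python
# Illustrative DAG only; ids, schedule, callables, and the alert address are assumptions.
# On Airflow versions before 2.4 the "schedule" argument is called "schedule_interval".
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def scrape_site(**context):
    """Fetch pages for one source site (implementation omitted)."""


def validate_rows(**context):
    """Run row-count/schema checks; raise to fail the task and trigger the alert email."""


default_args = {
    "retries": 3,                          # retry transient scrape failures
    "retry_delay": timedelta(minutes=10),
    "email": ["data-alerts@example.com"],  # hypothetical alert address
    "email_on_failure": True,
}

with DAG(
    dag_id="scrape_source_site",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    scrape = PythonOperator(task_id="scrape", python_callable=scrape_site)
    validate = PythonOperator(task_id="validate", python_callable=validate_rows)
    scrape >> validate
```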

Posted 1 week ago

Apply

0 years

9 - 13 Lacs

Gurgaon

On-site

Job description: Python + Web Scraping Developer
· Work experience as a Python Developer
· Expertise in at least one popular Python framework (like Django, Flask, or Pyramid)
· Familiarity with front-end technologies (like JavaScript and HTML5)
· Strong experience using web services and APIs in Python
· Hands-on experience with web scraping, Beautiful Soup, Snowflake, Scrapy, ZMQ/Kafka
· Implementation of security and data protection
· Integration of data storage solutions
· Performance tuning, improvement, balancing, usability, automation
· Work collaboratively with the design team to understand end-user requirements, provide technical solutions, and implement new software features
Specialization: Python, web crawling, web scraping, Beautiful Soup, Snowflake, Scrapy, ZMQ/Kafka/RMQ, ETL, BigQuery, SQL Server.
Responsibilities:
-- Develop customized tools to automate critical business functions and meet project objectives.
-- Use skills in Python and Selenium to design and deploy automated tools.
-- Understand databases and manage data and databases for internal reporting purposes.
-- Build solutions to source data from a variety of enterprise sources and/or external data sources to create complex competitive intelligence reports.
-- Scrape data from different websites based on business needs and requirements.
Job Types: Full-time, Permanent
Pay: ₹900,000.00 - ₹1,300,000.00 per year
Benefits: Provident Fund
Work Location: In person
Speak with the employer: +91 7042844223

Posted 1 week ago

Apply

0.0 - 5.0 years

0 - 1 Lacs

Delhi, Delhi

On-site

Job description: We're looking for a hands-on Data Engineer to manage and scale our data scraping pipelines across 60+ websites. The job involves handling OCR-processed PDFs, ensuring data quality, and building robust, self-healing workflows that fuel AI-driven insights.
You'll Work On: Managing and optimizing Airflow scraping DAGs. Implementing validation checks, retry logic & error alerts. Cleaning and normalizing OCR text (Tesseract / AWS Textract). Handling deduplication, formatting, and missing data. Maintaining MySQL/PostgreSQL data integrity. Collaborating with ML engineers on downstream pipelines.
What You Bring: 2–5 years of hands-on experience in Python data engineering. Experience with Airflow, Pandas, and OCR tools. Solid SQL skills and schema design (MySQL/PostgreSQL). Comfort with CSVs and building ETL pipelines.
Required: 1. Scrapy or Selenium experience 2. CAPTCHA handling 3. Experience with PyMuPDF and Regex 4. AWS S3 5. LangChain, LLMs, FastAPI 6. Streamlit 7. Matplotlib
Job Type: Full-time, day shift. Pay: ₹70,000.00 - ₹150,000.00 per month. Work Location: In person.
Application Question(s): Total years of experience in web scraping / data extraction. Have you worked with large-scale data pipelines? Are you proficient in writing complex Regex patterns for data extraction and cleaning? Have you implemented or managed data pipelines using tools like Apache Airflow? Years of experience with PDF parsing and OCR tools (e.g., Tesseract, Google Document AI, AWS Textract, etc.). Years of experience handling complex PDF tables with merged rows, rotated layouts, or inconsistent formatting. Are you willing to relocate to Delhi if selected? Current CTC. Expected CTC.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Kochi, Kerala

On-site

Techversant is currently looking for an experienced Python Team Lead to join our team. As a Python Team Lead, you will be responsible for overseeing the development team, driving best practices, and ensuring the successful delivery of high-quality software solutions. We are seeking an individual with over 6 years of experience in Python development, who possesses excellent leadership skills and has a strong passion for mentoring and guiding team members.
Key Responsibilities:
- Create technical designs to support various client requirements
- Evaluate project requirements and prepare estimates
- Write code that passes unit tests and thrives in an agile and test-driven development environment
- Manage software, code, and modules in production
- Work on projects individually or as part of a team, following AGILE methodologies
Preferred Skills:
- Strong hands-on experience in Django, Flask, and Python
- Knowledge of RDBMS such as PostgreSQL, MySQL
- Familiarity with HTML5, HTML, CSS, Javascript, jQuery
- Experience with frontend frameworks like ReactJS or Angular
- Proficiency in the Linux platform and understanding of deployment architecture
- Hands-on experience with Docker and Kubernetes
- Experience with CI/CD, including Jenkins and Git workflow
- Knowledge of cloud services such as AWS, Azure, GCP, or DigitalOcean
- Proficiency in source control management tools, with Git being preferred
- Excellent communication, interpersonal, and presentation skills
- Positive approach, self-motivated, and well-organized
If you are a seasoned Python developer with a proven track record of leadership and a desire to contribute to high-quality software solutions, we encourage you to apply for this position. Join us at Techversant and be part of a dynamic team dedicated to delivering excellence in software development.

Posted 1 week ago

Apply

1.0 - 3.0 years

2 - 5 Lacs

Mumbai

Work from Office

We are seeking an experienced Web Scraping Engineer with deep expertise in Scrapy to develop, maintain, and optimize web crawlers. The ideal candidate will have a strong background in extracting, processing, and managing large-scale web data efficiently.
Responsibilities:
- Write and maintain web scraping scripts using Python; optimize custom web scraping tools and workflows.
- Familiarity with Python and web scraping frameworks (Scrapy, Selenium, BeautifulSoup, Requests, Playwright).
- Troubleshoot and resolve scraping challenges, including CAPTCHAs, rate limiting, and IP blocking.
- Experience with proxy management (rotating IPs, VPNs, residential proxies, etc.)
- Handle dynamic content using headless browsers and solve CAPTCHAs & IP bans.
- Collaborate with senior developers to improve code quality and efficiency.
Skills Required:
- Python and scraping libraries such as Scrapy and BeautifulSoup
- Experience with proxy management, CAPTCHA bypass techniques, and anti-bot evasion
- Ability to optimize crawlers for performance and minimize website detection risks
- Strong background in databases and data storage systems
- Prior experience scraping large-scale e-commerce websites would be a plus
- Willingness to learn and take feedback constructively
- Excellent communication and leadership abilities
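
To make the rate-limiting and IP-blocking challenges concrete, a small hedged helper that retries with exponential backoff and honours the Retry-After header; the URL is hypothetical:

```python
# Illustrative backoff helper for rate-limited endpoints; the URL is hypothetical.
import time
import requests


def fetch_with_backoff(url, max_attempts=5, base_delay=2.0):
    """Retry on 429/5xx with exponential backoff; honour Retry-After when sent."""
    for attempt in range(max_attempts):
        resp = requests.get(url, timeout=30)
        if resp.status_code == 200:
            return resp.text
        if resp.status_code in (429, 500, 502, 503):
            wait = float(resp.headers.get("Retry-After", base_delay * 2 ** attempt))
            time.sleep(wait)
            continue
        resp.raise_for_status()
    raise RuntimeError(f"Giving up on {url} after {max_attempts} attempts")


html = fetch_with_backoff("https://example.com/products?page=1")
```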

Posted 1 week ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Position: Lead Python Developer. Experience: 4+ Years. Location: Ahmedabad (Onsite). Employment Type: Full-Time. Salary: As per industry standards.
Position Summary: We are seeking a skilled and experienced Backend Developer with strong expertise in TypeScript, Python, and web scraping. You will be responsible for designing, developing, and maintaining scalable backend services and APIs that power our data-driven products. Your role will involve collaborating with cross-functional teams, optimizing system performance, ensuring data integrity, and contributing to the design of efficient and secure architectures.
Job Responsibilities:
● Design, develop, and maintain backend systems and services using Python and TypeScript.
● Develop and maintain web scraping solutions to extract, process, and manage large-scale data from multiple sources.
● Work with relational and non-relational databases, ensuring high availability, scalability, and performance.
● Implement authentication, authorization, and security best practices across services.
● Write clean, maintainable, and testable code following best practices and coding standards.
● Collaborate with frontend engineers, data engineers, and DevOps teams to deliver robust solutions, and troubleshoot, debug, and upgrade existing applications.
● Stay updated with backend development trends, tools, and frameworks to continuously improve processes. Utilize core crawling experience to design efficient strategies for scraping data from different websites and applications.
● Collaborate with technology and data collection teams to build end-to-end technology-enabled ecosystems, and partner in research projects to analyze massive data inputs.
● Take responsibility for the design and development of web crawlers, independently solving the various problems encountered during development.
● Stay updated with the latest web scraping techniques, tools, and industry trends to continuously improve scraping processes.
Job Requirements:
● 4+ years of professional experience in backend development with TypeScript and Python.
● Strong understanding of TypeScript-based server-side frameworks (e.g., Node.js, NestJS, Express) and Python frameworks (e.g., FastAPI, Django, Flask).
● Experience with tools and libraries for web scraping (e.g., Scrapy, BeautifulSoup, Selenium, Puppeteer).
● Hands-on experience with Temporal for creating and orchestrating workflows.
● Proven hands-on experience in web scraping, including crawling, data extraction, deduplication, and handling dynamic websites.
● Proficient in implementing proxy solutions and handling bot-detection challenges (e.g., Cloudflare).
● Experience working with Docker, containerized deployments, and cloud environments (GCP or Azure).
● Proficiency with database systems such as MongoDB and ElasticSearch.
● Hands-on experience with designing and maintaining scalable APIs.
● Knowledge of software testing practices (unit, integration, end-to-end).
● Familiarity with CI/CD pipelines and version control systems (Git).
● Strong problem-solving skills, attention to detail, and ability to work in agile environments.
● Great communication skills and ability to navigate undirected situations.
Job Exposure:
● Opportunity to apply creative methods in acquiring and filtering North American government and agency data from various websites and sources.
● In-depth industry exposure to data harvesting techniques to build and scale a robust, sustainable model using open-source applications.
● Effective collaboration with the IT team to design tailor-made solutions based on client requirements.
● Unique opportunity to research various agencies, vendors, and products, as well as technology tools, to compose a solution.

Posted 1 week ago

Apply

2.0 years

1 - 2 Lacs

India

Remote

Location: Remote / Flexible. Job Type: Contract / Freelance / Full-time (open for discussion).
About the Role: We are looking for an experienced Web Scraper Developer to build and maintain robust scrapers for leading Indian beauty platforms (Nykaa, Tira Beauty, and Purplle). The scrapers should be designed for scalability, stealth, and resilience, ensuring smooth operation even when websites update their structures. You will be responsible for architecting, coding, and optimizing scrapers that can handle large volumes of product data quickly and efficiently, with minimal downtime and easy maintainability.
Responsibilities: Design and implement scrapers for Nykaa, Tira Beauty, and Purplle to extract product listings, prices, descriptions, ingredients, and availability. Ensure scrapers are modular and resilient to DOM/classname changes (easy fixes with minimum effort). Implement anti-bot evasion techniques (stealth browsing, rotating proxies, headless browsing, rate limiting, retries). Enable multi-worker/concurrent scraping for faster data collection. Optimize for speed and low resource usage. Create simple configurations or scripts to quickly update/fix selectors when sites change layouts. Implement logging, error handling, and monitoring for scraper reliability. Deliver clean, well-documented code with clear setup instructions.
Requirements: Strong hands-on experience with Python (Scrapy, Playwright, Selenium, BeautifulSoup, Requests) or Node.js (Puppeteer, Playwright, Axios, Cheerio). Familiarity with stealth scraping techniques: proxy rotation, headless browsing, human-like request simulation. Experience in building scalable scraping systems with multiple workers/threads. Knowledge of data storage pipelines (MySQL, PostgreSQL, MongoDB, or even CSV/JSON). Ability to build scrapers that are resilient to frequent site changes. Good debugging and problem-solving skills.
Nice to Have: Experience with cloud deployment (AWS, GCP, Azure, or VPS environments). Prior work on e-commerce scraping. Understanding of product data normalization (brands, categories, pricing).
What We Offer: Flexible working hours (remote). Competitive compensation (hourly/project-based or retainer). Long-term opportunity to manage and scale scraping pipelines for multiple platforms.
Job Types: Part-time, Internship, Contractual / Temporary, Freelance. Contract length: 24 months. Pay: ₹10,000.00 - ₹20,000.00 per month. Expected hours: 20 per week. Experience: Web Scraping: 2 years (Required).
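
One common way to keep scrapers resilient to DOM/classname changes, as this posting asks, is to hold selectors in per-site configuration rather than in spider code. A minimal sketch using Scrapy-style selectors; the site keys and selectors are invented, not the platforms' real markup:

```python
# Selectors live in one config dict (or a JSON/YAML file) per site, so a layout
# change is fixed by editing config, not spider code. All values are invented.
SITE_CONFIG = {
    "site_a": {
        "product_card": "div.product-card",
        "name": "h2.name::text",
        "price": "span.price::text",
    },
    "site_b": {
        "product_card": "li.item",
        "name": "a.title::text",
        "price": "div.offer-price::text",
    },
}


def parse_listing(response, site):
    """Yield product dicts from a Scrapy response using the site's configured selectors."""
    cfg = SITE_CONFIG[site]
    for card in response.css(cfg["product_card"]):
        yield {
            "name": card.css(cfg["name"]).get(),
            "price": card.css(cfg["price"]).get(),
        }
```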

Posted 2 weeks ago

Apply

2.0 years

4 - 4 Lacs

Mohali

On-site

Male applicants are preferred. We are looking for an enthusiastic and proactive Python Developer with core Python expertise and hands-on experience in Generative AI (GenAI) to join our development team.
Experience Required: 2-3 Years. Mode of Work: On-Site Only (Mohali, Punjab). Mode of Interview: Face to Face (On-Site). Contact for Queries: +91-9872993778 (Mon–Fri, 11 AM – 6 PM). Note: This number will be unavailable on weekends and public holidays.
Key Responsibilities: Backend Development: Assist in the development of clean, efficient, and scalable Python applications to meet business needs. Generative AI Experience: Working knowledge and experience in building applications using GenAI technologies is mandatory. API Integration: Support the creation, management, and optimization of RESTful APIs to connect backend and frontend components. Collaboration: Work closely with frontend developers to integrate backend services into ReactJS applications, ensuring smooth data flow and functionality. Testing and Debugging: Help with debugging, troubleshooting, and optimizing applications for performance and reliability. Code Quality: Write readable, maintainable, and well-documented code while following best practices. Learning and Development: Continuously enhance your skills by learning new technologies and methodologies.
Required Skills and Experience: Problem Solving: Strong analytical skills with an ability to identify and resolve issues effectively; previous working experience with LLMs and AI agents is a plus. Teamwork: Ability to communicate clearly and collaborate well with cross-functional teams. Programming Languages: Python (Core and Advanced), JavaScript, HTML, CSS. Frameworks: Django, Flask, FastAPI, LangChain. Libraries & Tools: Pandas, NumPy, Selenium, Scrapy, BeautifulSoup, Git, Postman, OpenAI API, REST APIs. Databases: MySQL, PostgreSQL, SQLite. Cloud & Deployment: Hands-on experience with AWS services (EC2, S3, etc.); building and managing scalable cloud-based applications. Automation: Familiarity with Retrieval-Augmented Generation (RAG) architecture; automation of workflows and intelligent systems using Python.
Preferred Qualifications: Education: A degree in Computer Science, Software Engineering, or a related field (or equivalent practical experience).
Job Types: Full-time, Permanent. Pay: ₹35,000.00 - ₹40,000.00 per month. Experience: Python: 2 years (Preferred). Work Location: In person.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Punjab

On-site

You will be part of a dedicated team at Zapbuild, specializing in building cutting-edge technology solutions for the transportation and logistics industry. Our primary goal is to support and propel the Transportation & Logistics sector by providing adaptive and innovative solutions that enable players to thrive in the face of rapidly evolving supply chains. Your responsibilities will revolve around aspects such as achieving sustainability objectives, harnessing new technologies for streamlined logistics operations, and minimizing supply chain expenses. As your technology partner, our commitment remains unwavering: we are dedicated to keeping you at the forefront of the industry with forward-looking solutions. This entails ensuring that you are fully prepared to tackle upcoming challenges, embrace technological advancements, and adapt to changes in your supply chain landscape.
Key technical skills you should possess include proficiency in Python and the Django REST framework, hands-on experience with MySQL (covering schema design, query optimization, and performance tuning), familiarity with web scraping using tools like BeautifulSoup, Scrapy, or Selenium, knowledge of Docker for containerizing applications and managing environments, front-end development capabilities using JavaScript, HTML, and basic CSS, expertise in building and consuming RESTful APIs, understanding of version control using Git, and experience with Pandas/NumPy.
The ideal candidate for this role should have 2 to 4 years of relevant experience and be able to join within a notice period of up to 30 days. The work schedule includes 5 days a week, with office hours from 9:30 am to 6:30 pm. If you are passionate about leveraging technology to drive innovation in the transportation and logistics industry, and you are looking for a dynamic work environment where you can contribute your expertise to shape the future of supply chain solutions, we encourage you to apply for this exciting opportunity at Zapbuild.

Posted 2 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Web Scraper at our Pune office, you will undertake the full-time responsibility of extracting, transforming, and loading data from various websites and online sources. Your primary role will involve developing robust web scraping scripts using Python and relevant libraries to ensure data quality and accuracy for our B2B marketing and sales solutions. Additionally, you will collaborate with internal teams to support data-driven decision-making.
Your key responsibilities will include designing, developing, and maintaining efficient web scraping scripts, performing data extraction, transformation, and loading processes, implementing data quality assurance measures, collaborating with internal teams for requirements gathering, troubleshooting scraper performance, and staying updated with industry trends in web scraping technologies.
To excel in this role, you should possess a Bachelor's degree in Computer Science or a related field, along with 1 to 5 years of hands-on experience in web scraping. Strong proficiency in Python, experience with core web scraping libraries such as Beautiful Soup and Scrapy, and knowledge of web automation tools like Selenium will be essential. You must also have excellent problem-solving skills, communication abilities, and a basic understanding of data security and ethical considerations related to web scraping.
Your success in this role will be driven by your analytical and critical thinking abilities, attention to detail, problem-solving capabilities, communication skills, and eagerness to work both independently and collaboratively within a team. If you are proactive, open to learning new technologies, and passionate about leveraging data for informed decision-making, we encourage you to apply for this exciting opportunity.

Posted 2 weeks ago

Apply

1.0 years

0 Lacs

India

Remote

Full Stack Developer (MERN + Python). Urgent Hiring - Immediate Joiners Preferred.
Location: Fully Remote, onsite after 3 months (if suitable for company and candidate). Type: Full-time. Compensation: ₹4 to ₹10 LPA (fixed, in-hand).
We're looking to fill this position this week itself, so if you're currently available or can start within a few days, you'll be given preference.
About Iced Automations: Iced Automations LLC is a fast-moving automation company that helps cold email agencies streamline and scale their operations. We build systems on top of Smartlead, Instantly, n8n, Clay, Slack, and other platforms to help our clients save time and grow faster. We've worked with over 100 agencies and are now shifting focus towards building SaaS products, internal tools, and deeper automation infrastructure.
What You'll Be Doing: You will be building and deploying full-stack web apps, automation tools, and internal products end-to-end. No micro tasks or ticket-based work - you'll own projects, ship fast, and iterate directly with the founder. Some things you might do: Build web apps using Node.js, Angular or React, and MongoDB. Use Python and Selenium or Playwright for browser automations. Integrate payment gateways like Stripe or Razorpay. Deploy projects to AWS or other cloud platforms. Work with tools like n8n, Make, OpenAI APIs, or LangChain. Research and use AI-powered dev tools to speed things up.
Minimum Requirements: Strong with the MERN stack. Basic to intermediate Python, especially for web scraping or browser automation. Able to build and deploy a complete app (frontend, backend, database, and hosting). Familiarity with APIs, authentication, and payment gateway integration. Comfortable figuring things out independently; fast learner.
Good to Have (or should be ready to learn quickly): Docker, cloud infrastructure (AWS/GCP), DNS setup. n8n, LangChain, Zapier, or similar automation tools. AI integration experience (OpenAI, Claude, etc.). Web scraping with Playwright, Scrapy, or Selenium.
Who We're Looking For: We don't care about degrees or brand names. What matters is your ability to build things and learn fast. This role is open to: developers with 1 to 3 years of experience; freshers who have built strong personal projects or done serious freelance work; indie hackers or ex-freelancers who can move fast and think in products. We need someone who is willing to take ownership, work without handholding, and build production-ready projects quickly.
Work Culture: Remote trial for 2 to 3 months, followed by in-office work from Delhi NCR if there's a good fit. Work on your own schedule; we don't care about hours, just ownership and delivery. No micromanagement, but no excuses either: deliver what you own. Average workload is 35 to 45 hours/week; some weeks will be lighter, others heavier. This is not a corporate job and we're not pretending to be a family. If you help us grow, we will compensate you well; if not, we part ways.
To increase your chances, send a Loom video introducing yourself and walking us through something you've built. Alternatively, you can just email us with your craziest project. Email: abhishek+hr@icedautomations.com
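
For the Python browser-automation item above, a hedged Playwright sketch; the URL, form fields, and selector are hypothetical:

```python
# Illustrative Playwright automation; URL, field ids, and selectors are hypothetical.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com/dashboard")
    page.fill("#email", "user@example.com")   # hypothetical login form fields
    page.fill("#password", "secret")
    page.click("button[type=submit]")
    page.wait_for_selector("table.report")    # wait for post-login content to render
    rows = page.locator("table.report tr").all_inner_texts()
    print(rows)
    browser.close()
```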

Posted 2 weeks ago

Apply

2.0 - 4.0 years

2 - 4 Lacs

Himatnagar

Work from Office

Responsibilities:
* Develop web applications using Python & Scrapy
* Collaborate with cross-functional teams on project requirements
* Optimize scraped data for performance & accuracy
Work from home. Annual bonus.

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: AI Developer - Intelligent Web Automation. Location: Noida. Employment Type: Full Time/Contract. Experience: 0-3 Years in AI + Automation.
Position Overview: We are looking for an experienced AI/ML Developer with strong expertise in web automation to design and build an intelligent system that can interact with websites, perform smart actions (clicking, downloading, form filling, navigation), and extract data automatically. The role involves combining AI techniques with web crawling/automation tools to create a scalable and efficient solution.
Key Responsibilities: Design and develop an AI-powered web automation system capable of interacting with websites intelligently. Implement web crawling, scraping, and automation workflows (e.g., navigating pages, handling dynamic content, downloading files). Integrate Natural Language Processing (NLP) or Reinforcement Learning techniques to make the system adaptable and smart (e.g., understanding page layouts, deciding next actions). Work with automation frameworks (e.g., Selenium, Playwright, Puppeteer, Scrapy) and AI frameworks (e.g., PyTorch, TensorFlow, Hugging Face). Ensure compliance with robots.txt, rate-limiting, and ethical scraping practices. Optimize the solution for scalability, reliability, and performance. Collaborate with stakeholders to refine requirements and deliver end-to-end features.
Required Skills & Qualifications: Strong proficiency in Python (preferred) or similar programming languages. Hands-on experience with web automation frameworks (Selenium, Playwright, Puppeteer, Scrapy). Solid knowledge of AI/ML techniques: NLP, reinforcement learning, or deep learning. Experience in data extraction, crawling, and structured data processing (JSON, XML, CSV, databases). Familiarity with cloud platforms (AWS, Azure, GCP) for deployment and scaling. Strong understanding of HTTP, DOM, JavaScript rendering, cookies, and authentication mechanisms. Problem-solving mindset and ability to build intelligent decision-making systems.
Good to Have: Experience with LangChain, LLMs (ChatGPT, GPT-4, etc.), or AI agents. Knowledge of RPA (Robotic Process Automation) tools like UiPath, Blue Prism, or Automation Anywhere. Background in distributed crawling (e.g., using Kafka, Celery, or queue systems). Prior experience in building search engines, bots, or intelligent assistants.
Interested candidates are encouraged to apply directly via LinkedIn with their updated profile/resume, or send their CV to careers@stamenssoftware.com.
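
Since the posting calls out compliance with robots.txt and rate limiting, a small hedged sketch of checking permission before fetching; the base URL, paths, and user-agent string are placeholders:

```python
# Illustrative robots.txt check before fetching; all names are placeholders.
import time
from urllib import robotparser
from urllib.parse import urljoin

import requests

BASE = "https://example.com"
USER_AGENT = "example-research-bot"  # hypothetical crawler identifier

rp = robotparser.RobotFileParser()
rp.set_url(urljoin(BASE, "/robots.txt"))
rp.read()

for path in ["/reports/2024", "/admin"]:
    url = urljoin(BASE, path)
    if rp.can_fetch(USER_AGENT, url):
        resp = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=30)
        print(url, resp.status_code)
        time.sleep(2)  # simple rate limit between requests
    else:
        print("Disallowed by robots.txt:", url)
```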

Posted 2 weeks ago

Apply

0.0 - 2.0 years

0 - 0 Lacs

Mohali, Punjab

On-site

Male applicants are preferred. We are looking for an enthusiastic and proactive Python Developer with core Python expertise and hands-on experience in Generative AI (GenAI) to join our development team.
Experience Required: 2-3 Years. Mode of Work: On-Site Only (Mohali, Punjab). Mode of Interview: Face to Face (On-Site). Contact for Queries: +91-9872993778 (Mon–Fri, 11 AM – 6 PM). Note: This number will be unavailable on weekends and public holidays.
Key Responsibilities: Backend Development: Assist in the development of clean, efficient, and scalable Python applications to meet business needs. Generative AI Experience: Working knowledge and experience in building applications using GenAI technologies is mandatory. API Integration: Support the creation, management, and optimization of RESTful APIs to connect backend and frontend components. Collaboration: Work closely with frontend developers to integrate backend services into ReactJS applications, ensuring smooth data flow and functionality. Testing and Debugging: Help with debugging, troubleshooting, and optimizing applications for performance and reliability. Code Quality: Write readable, maintainable, and well-documented code while following best practices. Learning and Development: Continuously enhance your skills by learning new technologies and methodologies.
Required Skills and Experience: Problem Solving: Strong analytical skills with an ability to identify and resolve issues effectively; previous working experience with LLMs and AI agents is a plus. Teamwork: Ability to communicate clearly and collaborate well with cross-functional teams. Programming Languages: Python (Core and Advanced), JavaScript, HTML, CSS. Frameworks: Django, Flask, FastAPI, LangChain. Libraries & Tools: Pandas, NumPy, Selenium, Scrapy, BeautifulSoup, Git, Postman, OpenAI API, REST APIs. Databases: MySQL, PostgreSQL, SQLite. Cloud & Deployment: Hands-on experience with AWS services (EC2, S3, etc.); building and managing scalable cloud-based applications. Automation: Familiarity with Retrieval-Augmented Generation (RAG) architecture; automation of workflows and intelligent systems using Python.
Preferred Qualifications: Education: A degree in Computer Science, Software Engineering, or a related field (or equivalent practical experience).
Job Types: Full-time, Permanent. Pay: ₹35,000.00 - ₹40,000.00 per month. Experience: Python: 2 years (Preferred). Work Location: In person.

Posted 2 weeks ago

Apply

2.0 - 4.0 years

0 - 2 Lacs

Mumbai

Work from Office

Role & Responsibilities
Technical Skills:
• Proficiency in Python and libraries like BeautifulSoup, Scrapy, and Selenium.
• Experience with regular expressions (Regex) for data parsing.
• Strong knowledge of HTTP protocols, cookies, headers, and user-agent rotation.
• Familiarity with databases (SQL and NoSQL) for storing scraped data.
• Hands-on experience with data manipulation libraries such as pandas and NumPy.
• Experience working with APIs and managing third-party integrations.
• Familiarity with version control systems like Git.
Bonus Skills:
• Knowledge of containerization tools like Docker.
Preferred candidate profile:
• Develop and maintain automated web scraping scripts using Python libraries such as BeautifulSoup, Scrapy, and Selenium.
• Optimize scraping pipelines for performance, scalability, and resource efficiency.
• Handle dynamic websites, CAPTCHA-solving, and implement IP rotation techniques for uninterrupted scraping.
• Process and clean raw data, ensuring accuracy and integrity in extracted datasets.
• Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.
• Leverage APIs when web scraping is not feasible, managing authentication and request optimization.
• Document processes, pipelines, and troubleshooting steps for maintainable and reusable scraping solutions.
• Ensure compliance with legal and ethical web scraping practices, implementing security safeguards.
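
To illustrate the header and user-agent rotation requirement, a hedged requests-based sketch; the user-agent strings and URL are examples only, not a recommended production list:

```python
# Illustrative user-agent rotation with requests; UA strings and URL are just examples.
import random
import requests

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 Chrome/124.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 13_5) AppleWebKit/605.1.15 Version/16.5 Safari/605.1.15",
]

session = requests.Session()  # the session keeps cookies across requests


def get(url):
    # Pick a different User-Agent header for each request.
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    return session.get(url, headers=headers, timeout=30)


resp = get("https://example.com/category?page=1")  # hypothetical URL
print(resp.status_code, len(resp.text))
```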

Posted 2 weeks ago

Apply

2.0 years

8 - 12 Lacs

Gurgaon

Remote

We are seeking a highly motivated and results-driven Sales and Revenue Manager to join our sales team. The successful candidate will be responsible for supporting overall sales operations, strengthening client relationships, and contributing to revenue growth. A core aspect of this role will be lead generation and data acquisition from digital platforms to build and maintain a qualified sales pipeline.
Key Responsibilities: Lead Generation: Identify, qualify, and generate prospective leads through online tools, platforms, and databases (e.g., LinkedIn, industry events, company websites, and email sources). Client-facing capabilities, including the ability to pitch new business and ideas to potential leads and clients. Sales Operations Support: Assist in managing the sales pipeline, ensuring systematic follow-up and accurate documentation of prospects and client interactions. CRM Management: Maintain accurate and up-to-date records of sales activities within the Customer Relationship Management (CRM) system. Market Research: Conduct research to monitor market trends, analyze competitor activity, and identify new business opportunities. Pipeline Development: Establish and maintain a consistent pipeline of qualified prospects. Tools & Reporting: Prepare presentations, reports, and sales materials using MS Office (PowerPoint, Excel) and Canva.
Qualifications: Experience: Minimum of 2 years of experience in Business Development or Sales; prior experience in an agency or news media environment is mandatory. Technical Skills: Familiarity with lead generation platforms, CRM systems (Salesforce, HubSpot), and web scraping tools (Scrapy, BeautifulSoup, LinkedIn Sales Navigator). Communication: Strong verbal and written communication skills with the ability to engage effectively with clients and internal teams. Organizational Skills: Demonstrated ability to manage multiple priorities, meet deadlines, and perform effectively in a fast-paced environment. Analytical & Problem-Solving: Ability to assess challenges, identify solutions, and support informed decision-making. Education: Bachelor's degree in Business, Marketing, Communications, or a related field is required.
Job Type: Full-time. Pay: ₹800,000.00 - ₹1,200,000.00 per year. Benefits: Food provided, Provident Fund, Work from home. Work Location: In person.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a skilled professional with over 7 years of experience, you will be responsible for reviewing and understanding business requirements to ensure timely completion of development tasks, with rigorous testing to minimize defects. Collaborating with a software development team is crucial to implement best practices and enhance the performance of data applications, meeting client needs effectively.
In this role, you will collaborate with various teams within the company and engage with customers to comprehend, translate, define, and design innovative solutions for their business challenges. Your tasks will also involve researching new Big Data technologies to evaluate their maturity and alignment with business and technology strategies. Operating within a rapid and agile development process, you will focus on accelerating speed to market while upholding necessary controls.
Your qualifications should include a BE/B.Tech/MCA degree with a minimum of 6 years of IT experience, including 4 years of hands-on experience in design and development using the Hadoop technology stack and various programming languages.
Furthermore, you are expected to have proficiency in multiple areas such as Hadoop, HDFS, MapReduce, Spark Streaming, Spark SQL, Spark ML, Kafka/Flume, Apache NiFi, Hortonworks Data Platform, Hive, Pig, Sqoop, NoSQL databases (HBase, Cassandra, Neo4j, MongoDB), visualization and reporting frameworks (D3.js, Zeppelin, Grafana, Kibana, Tableau, Pentaho), Scrapy for web crawling, Elasticsearch, Google Analytics data streaming, and data security protocols (Kerberos, OpenLDAP, Knox, Ranger).
A strong knowledge of the current technology landscape and industry trends, along with experience in Big Data integration with metadata management, data quality, master data management solutions, and structured/unstructured data, is essential. Your active participation in the community through articles, blogs, or speaking engagements at conferences will be highly valued in this role.

Posted 2 weeks ago

Apply

0.0 - 3.0 years

0 Lacs

Punjab

On-site

About Us: At Zapbuild, we specialize in developing advanced technology solutions tailored for the transportation and logistics sector. Our primary goal is to support the industry and its stakeholders in embracing adaptive and innovative solutions to navigate ever-evolving supply chains effectively. Whether your objectives revolve around achieving sustainability targets, adopting cutting-edge technologies for streamlined logistics, or reducing operational costs within the supply chain, we are committed to keeping you at the forefront with forward-looking solutions. Our focus as your technology partner is to ensure that you are fully prepared to tackle upcoming challenges, embrace advancements, and adapt to changes within your supply chain landscape.
Job Requirements:
- Proficiency in Python and the Django REST framework.
- Hands-on experience with MySQL, encompassing schema design, query optimization, and performance tuning.
- Familiarity with web scraping utilizing Python tools such as BeautifulSoup, Scrapy, or Selenium.
- Proficiency in Docker for containerizing applications and managing environments.
- Competency in front-end development with JavaScript, HTML, and basic CSS.
- Experience in constructing and utilizing RESTful APIs.
- Knowledge of version control using Git.
- Experience with Pandas/NumPy.
Qualifications:
- Freshers or individuals with up to 6 months of experience.
- Willingness to work from the office.
- Availability during 9:30 am to 6:30 pm timings.
- Currently residing in Mohali, Punjab.
Job Types: Full-time, Permanent, Fresher
Benefits: Health insurance, Provident Fund
Location Type: In-person
Schedule: Day shift, fixed shift, Monday to Friday
Application Question(s):
- Do you possess a solid understanding of Python and the Django REST framework?
- Are you available to join us immediately?
Education: Bachelor's degree (Required)
Location: Mohali, Punjab (Required)
Shift availability: Day Shift (Required)
Work Location: In person

Posted 2 weeks ago

Apply

2.0 years

4 - 4 Lacs

Mohali

On-site

Male applicants are preferred. We are looking for an enthusiastic and proactive Python Developer with core Python expertise and hands-on experience in Generative AI (GenAI) to join our development team.
Experience Required: 2-3 Years. Mode of Work: On-Site Only (Mohali, Punjab). Mode of Interview: Face to Face (On-Site). Contact for Queries: +91-9872993778 (Mon–Fri, 11 AM – 6 PM). Note: This number will be unavailable on weekends and public holidays.
Key Responsibilities: Backend Development: Assist in the development of clean, efficient, and scalable Python applications to meet business needs. Generative AI Experience: Working knowledge and experience in building applications using GenAI technologies is mandatory. API Integration: Support the creation, management, and optimization of RESTful APIs to connect backend and frontend components. Collaboration: Work closely with frontend developers to integrate backend services into ReactJS applications, ensuring smooth data flow and functionality. Testing and Debugging: Help with debugging, troubleshooting, and optimizing applications for performance and reliability. Code Quality: Write readable, maintainable, and well-documented code while following best practices. Learning and Development: Continuously enhance your skills by learning new technologies and methodologies.
Required Skills and Experience: Problem Solving: Strong analytical skills with an ability to identify and resolve issues effectively; previous working experience with LLMs and AI agents is a plus. Teamwork: Ability to communicate clearly and collaborate well with cross-functional teams. Programming Languages: Python (Core and Advanced), JavaScript, HTML, CSS. Frameworks: Django, Flask, FastAPI, LangChain. Libraries & Tools: Pandas, NumPy, Selenium, Scrapy, BeautifulSoup, Git, Postman, OpenAI API, REST APIs. Databases: MySQL, PostgreSQL, SQLite. Cloud & Deployment: Hands-on experience with AWS services (EC2, S3, etc.); building and managing scalable cloud-based applications. Automation: Familiarity with Retrieval-Augmented Generation (RAG) architecture; automation of workflows and intelligent systems using Python.
Preferred Qualifications: Education: A degree in Computer Science, Software Engineering, or a related field (or equivalent practical experience).
Job Types: Full-time, Permanent. Pay: ₹35,000.00 - ₹40,000.00 per month. Experience: Python: 2 years (Preferred). Work Location: In person.

Posted 2 weeks ago

Apply