4.0 - 8.0 years
0 - 0 Lacs
Haryana
On-site
You are someone who thrives in a full-stack Python + QA + Database + AI environment and is excited about technical growth. If you are a passionate Python engineer with hands-on experience in building web services, supporting QA/automation workflows, and working confidently across SQL/NoSQL databases, this opportunity is for you.

Responsibilities:
- Develop, test, and maintain Python-based backend services using Django, FastAPI, or Flask.
- Execute efficient web scraping tasks using BeautifulSoup or Scrapy.
- Automate browser workflows using Selenium.
- Write scalable database queries and schema designs for SQL (PostgreSQL/MySQL) and MongoDB.
- Design and implement QA automation scripts and frameworks.
- (Optional but desirable) Integrate and utilize AI/ML tools, packages, or APIs.

Must-Have Skills:
- Proficiency in Python (3.x) and modern frameworks (Django/FastAPI/Flask).
- Web scraping experience with BeautifulSoup, Scrapy, or equivalent.
- Hands-on automation using Selenium.
- Strong database expertise in SQL and MongoDB.
- Solid understanding of testing principles, automation techniques, and version control (Git).

Nice-to-Have:
- Exposure to AI or machine learning libraries (e.g. TensorFlow, PyTorch, OpenAI APIs).
- Familiarity with CI/CD pipelines (Jenkins, GitHub Actions) and containerization (Docker).

Why This Role is Great:
- Balanced challenge across backend development, QA automation, and data work.
- Opportunity to explore AI/ML tools in real-world projects.
- Engage in end-to-end ownership across backend, QA, and deployment pipelines.

Job Location: Gurugram
Work Model: On-site
Budget: 15-18 LPA
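The responsibilities above call for web scraping with BeautifulSoup or Scrapy. As a minimal sketch of the BeautifulSoup approach, parsing an inline HTML snippet rather than a live site (the markup and class names here are invented for illustration):

```python
# Minimal BeautifulSoup sketch: extract element text by CSS selector.
# The HTML and the "title" class are illustrative, not from a real page.
from bs4 import BeautifulSoup

HTML = """
<div class="listing">
  <h2 class="title">Python Engineer</h2>
  <h2 class="title">QA Automation Lead</h2>
</div>
"""

def extract_titles(html: str) -> list[str]:
    """Return the text of every <h2 class="title"> element."""
    soup = BeautifulSoup(html, "html.parser")
    return [h2.get_text(strip=True) for h2 in soup.select("h2.title")]

print(extract_titles(HTML))  # → ['Python Engineer', 'QA Automation Lead']
```

In a real scraper the HTML would come from an HTTP response, and the selectors would match the target site's actual markup.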
Posted 2 weeks ago
3.0 - 8.0 years
6 - 15 Lacs
Bengaluru
Remote
Role & Responsibilities:
As a Data Engineer focused on web crawling and platform data acquisition, you will design, develop, and maintain large-scale web scraping pipelines to extract valuable platform data. You will be responsible for implementing scalable and resilient data extraction solutions, ensuring seamless data retrieval while working with proxy management, anti-bot bypass techniques, and data parsing. Optimizing scraping workflows for performance, reliability, and efficiency will be a key part of your role. Additionally, you will ensure that all extracted data maintains high quality and integrity.

Preferred Candidate Profile:
We are seeking candidates with:
- Strong experience in Python and web scraping frameworks such as Scrapy, Selenium, Playwright, or BeautifulSoup.
- Knowledge of distributed web crawling architectures and job scheduling.
- Familiarity with headless browsers, CAPTCHA-solving techniques, and proxy management to handle dynamic web challenges.
- Experience with data storage solutions, including SQL and cloud storage.
- Understanding of big data technologies like Spark and Kafka (a plus).
- Strong debugging skills to adapt to website structure changes and blockers.
- A proactive, problem-solving mindset and the ability to work effectively in a team-driven environment.
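The proxy management mentioned above often starts with simple round-robin rotation. A small sketch of that pattern, with placeholder proxy URLs (real pipelines would load these from a paid pool or config):

```python
import itertools

# Sketch of round-robin proxy rotation for a crawler.
# The proxy URLs below are placeholders, not real endpoints.
PROXIES = [
    "http://proxy-a.example:8080",
    "http://proxy-b.example:8080",
    "http://proxy-c.example:8080",
]

def proxy_cycle(proxies):
    """Yield requests-style proxies dicts, cycling forever."""
    for url in itertools.cycle(proxies):
        yield {"http": url, "https": url}

rotation = proxy_cycle(PROXIES)
# Each outgoing request would take the next dict, e.g.:
#   requests.get(target_url, proxies=next(rotation), timeout=10)
print(next(rotation)["http"])  # → http://proxy-a.example:8080
```

Production crawlers layer health checks and ban detection on top of this, retiring proxies that start failing.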
Posted 2 weeks ago
0.0 - 4.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
As a Web Scraping & Data Automation Intern at D-Vivid Consultant, you will play a crucial role in developing, maintaining, and optimizing scripts to collect data from various platforms such as Instagram, Facebook, Reddit, and more. Your primary responsibilities will include utilizing automation tools and libraries like Make.com (Integromat), Python (BeautifulSoup, Scrapy, Selenium), and browser automation frameworks to ensure efficient data collection.

You will be responsible for cleaning and organizing the extracted data to facilitate marketing intelligence and lead generation campaigns. Additionally, you will conduct research to enhance scraping practices and handle dynamic website content effectively. Collaboration with the marketing and tech teams will be essential to identify data requirements and enhance output quality. Furthermore, you will be expected to maintain meticulous documentation of workflows, scripts, and scraping logs to ensure transparency and compliance with legal, ethical, and privacy standards in data scraping activities.

The ideal candidate for this role should possess proficiency in at least one scraping library/tool, a basic understanding of Python or JavaScript, and hands-on experience with automation platforms like Make.com. A strong grasp of HTML, CSS, and JavaScript for DOM navigation is crucial, along with a passion for data-driven marketing and social media platforms. Problem-solving skills, attention to detail, and the ability to work independently while managing timelines effectively are key attributes for success in this role.

Desirable skills include experience in scraping or analyzing data from social media platforms; familiarity with proxy management, headless browsers, and anti-bot detection strategies; and knowledge of data handling libraries like Pandas or NumPy. Prior internship or project experience in automation or data scraping would be advantageous.
As part of the internship at D-Vivid Consultant, you will benefit from flexible working hours, a remote-friendly experience, an Internship Certificate upon successful completion, a Letter of Recommendation (LOR) from leadership, and the potential for a Pre-Placement Offer (PPO) based on performance. Additionally, you will receive mentorship from industry leaders and gain exposure to real-world automation projects, enhancing your professional growth and development.
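The cleaning-and-organizing step described above usually means normalising whitespace, dropping blanks, and de-duplicating scraped records. A pure-Python sketch on invented input:

```python
# Sketch of post-scrape cleanup: normalise whitespace, drop empties,
# de-duplicate case-insensitively while preserving order.
# The input records are invented examples.
def clean_records(records):
    seen, cleaned = set(), []
    for rec in records:
        text = " ".join(rec.split())           # collapse runs of whitespace
        if text and text.lower() not in seen:  # skip blanks and duplicates
            seen.add(text.lower())
            cleaned.append(text)
    return cleaned

raw = ["  Acme  Corp ", "acme corp", "", "Beta LLC\n"]
print(clean_records(raw))  # → ['Acme Corp', 'Beta LLC']
```

Real pipelines would add type coercion and schema validation, but the dedupe-and-normalise core looks much like this.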
Posted 2 weeks ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Director Data Science and Principal Data Scientist
Experience range: 10+ years
Type of job: Full time, in office
Location: Bengaluru

Roles and Responsibilities:
- Convert broad vision and concepts into a structured data science roadmap, and guide a team to successfully execute on it.
- Handle end-to-end client AI & analytics programs in a fluid environment. Your role will be a combination of hands-on contribution, technical team management, and client interaction.
- Discover solutions hidden in large datasets and drive business results with data-based insights.
- Contribute to internal product development initiatives related to data science.
- Drive excellent project management required to deliver complex projects, including effort/time estimation.
- Be proactive, with full ownership of the engagement.
- Build scalable, client-engagement-level processes for faster turnaround and higher accuracy.
- Define technology strategy and roadmap for client accounts, and guide implementation of that strategy within projects.
- Manage team members to ensure that the project plan is adhered to over the course of the project.
- Build a trusted advisor relationship with the IT management at clients and internal accounts leadership.

Mandated Skills:
- A B.Tech/M.Tech/MBA from a top-tier institute, preferably in a quantitative subject.
- 10+ years of hands-on experience in applied machine learning, AI, and analytics.
- Experience in scientific programming with scripting languages like Python, R, SQL, NoSQL, and Spark, with ML tools and cloud technology (AWS, Azure, GCP).
- Experience with Python libraries such as NumPy, pandas, scikit-learn, TensorFlow, Scrapy, BERT, etc.
- Strong grasp of the depth and breadth of machine learning, deep learning, data mining, and statistical concepts, and experience in developing models and solutions in these areas.
- Expertise in client engagement, understanding complex problem statements, and offering solutions in domains such as Supply Chain, Manufacturing, CPG, and Marketing.

Desired Skills:
- Deep understanding of ML algorithms for common use cases in both structured and unstructured data ecosystems.
- Comfortable with large-scale data processing and distributed computing.
- Providing required inputs to sales and pre-sales activities.
- A self-starter who can work well with minimal guidance.
- Excellent written and verbal communication skills.

Data Science, AI, ML
Posted 2 weeks ago
4.0 years
3 - 10 Lacs
Mohali
On-site
Job Description:
- Should have 4+ years of hands-on experience in algorithms and implementation of analytics solutions in predictive analytics, text analytics, and image analytics.
- Should have hands-on experience leading a team of data scientists; works closely with the client's technical team to plan, develop, and execute on client requirements, providing technical expertise and project leadership.
- Leads efforts to foster innovative ideas for developing high-impact solutions.
- Evaluates and leads a broad range of forward-looking analytics initiatives, tracks emerging data science trends, and shares knowledge.
- Engages key stakeholders to source, mine, and validate data and findings, and to confirm business logic and assumptions in order to draw conclusions.
- Helps design and develop advanced analytic solutions across functional areas as per requirements/opportunities.

Technical Role and Responsibilities:
- Demonstrated strong capability in statistical/mathematical modelling, machine learning, or artificial intelligence.
- Demonstrated skills in programming for implementation and deployment of algorithms, preferably statistical/ML-based programming in Python.
- Sound experience with traditional as well as modern statistical techniques, including regression, support vector machines, regularization, boosting, random forests, and other ensemble methods.
- Visualization tool experience, preferably with Tableau or Power BI.
- Sound knowledge of ETL practices, preferably Spark in Databricks, and cloud big data technologies like AWS, Google, Microsoft, or Cloudera.
- Communicate complex quantitative analysis as lucid, precise, and actionable insights.
- Develop new practices and methodologies using statistical methods, machine learning, and predictive models under mentorship.
- Carry out statistical and mathematical modelling, solving complex business problems and delivering innovative solutions using state-of-the-art tools and cutting-edge technologies for big data and beyond.
Preferred to have a Bachelor's/Master's in Statistics/Machine Learning/Data Science/Analytics. Should be a Data Science professional with a knack for solving problems using cutting-edge ML/DL techniques and implementing solutions leveraging cloud-based infrastructure. Should be strong in GCP, TensorFlow, NumPy, Pandas, Python, AutoML, BigQuery, machine learning, artificial intelligence, and deep learning.

Preferred Tech Skills: Python, Computer Vision, Machine Learning, RNN, Data Visualization, Natural Language Processing, Voice Modulation, Speech to Text, SpaCy, LSTM, Object Detection, Sklearn, NumPy, NLTK, Matplotlib, Cufflinks, Seaborn, Image Processing, Neural Networks, YOLO, DarkFlow, DarkNet, PyTorch, CNN, TensorFlow, Keras, U-Net, Image Segmentation, MobileNet, OCR, OpenCV, Pandas, Scrapy, BeautifulSoup, LabelImg, Git.

Core Areas: Machine Learning, Deep Learning, Computer Vision, Natural Language Processing, Statistics
Programming Languages: Python
Libraries & Software Packages: TensorFlow, Keras, OpenCV, Pillow, Scikit-Learn, Flask, NumPy, Pandas, Matplotlib, Docker
Cloud Services: Compute Engine, GCP AI Platform, Cloud Storage, GCP AI & ML APIs

Job Types: Full-time, Permanent, Fresher
Pay: ₹30,000.00 - ₹90,000.00 per month
Education: Bachelor's (Preferred)
Experience: AI/Machine Learning: 4 years (Preferred)
Work Location: In person
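The techniques listed above include boosting, random forests, and other ensemble methods. As a toy illustration of the core ensemble idea (combining several models' predictions by majority vote), with plain labels standing in for the outputs of trained models:

```python
from collections import Counter

# Toy illustration of the voting-ensemble idea behind random forests:
# combine several models' predictions by majority vote. The label
# lists below stand in for real model outputs.
def majority_vote(predictions):
    """Return the most common label among the individual predictions."""
    return Counter(predictions).most_common(1)[0][0]

model_outputs = ["spam", "ham", "spam"]
print(majority_vote(model_outputs))  # → spam
```

A real random forest additionally trains each tree on a bootstrap sample and a random feature subset; the voting step at prediction time is the part sketched here.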
Posted 2 weeks ago
4.0 years
3 - 9 Lacs
Noida
On-site
Job Title: Python Developer – Data Extraction (Web Scraping)
Experience: 4–5 Years
Location: Concrete Software Solutions Pvt. Ltd., 9C, Techzone-4, Greater Noida West
Job Type: Full-Time

Company Overview
Concrete Software Solutions Pvt. Ltd. is a leading IT services and product development company, providing innovative digital solutions across domains. We are currently expanding our data engineering team and looking for an experienced Python Developer with expertise in web scraping and data extraction, especially from grocery and e-commerce websites.

Job Summary
We are seeking a highly skilled and motivated Python Developer who specializes in data extraction and web scraping. The ideal candidate should have 4–5 years of experience in developing scalable scraping solutions, parsing complex HTML/JavaScript-heavy websites, and structuring data into usable formats.

Key Responsibilities:
- Develop and maintain Python scripts for automated data extraction from grocery and e-commerce websites.
- Handle dynamic content scraping (AJAX, JavaScript-rendered pages) using tools like Selenium, Playwright, or Puppeteer.
- Parse and clean extracted data into structured formats (JSON, CSV, XML, databases).
- Build and optimize scrapers to avoid IP blocks and handle anti-scraping mechanisms (e.g., CAPTCHA, rate limiting).
- Schedule scraping jobs and monitor performance and data accuracy.
- Collaborate with data analysts and product teams to understand data requirements and deliver high-quality outputs.
- Ensure data pipeline integrity and implement error handling, retries, and logging mechanisms.

Required Skills:
- Strong experience in Python, especially libraries like requests, BeautifulSoup, Selenium, Scrapy, or Playwright.
- Expertise in web scraping, particularly from grocery, retail, or price comparison websites.
- Familiarity with browser automation tools and handling JavaScript-heavy websites.
- Solid understanding of HTML, CSS, XPath, JSON, and APIs.
- Experience working with databases (MySQL, MongoDB, or PostgreSQL).
- Familiarity with task schedulers (like Celery or cron) and logging frameworks.
- Version control using Git.
- Basic knowledge of cloud platforms (AWS, GCP, or Azure) is a plus.

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience working in a data-centric or e-commerce environment.
- Good communication skills and the ability to work in a team environment.
- Problem-solving mindset and keen attention to detail.

What We Offer:
- Competitive salary and benefits
- Exposure to real-world projects in data analytics and automation
- Friendly and collaborative work culture
- Opportunity to grow with a dynamic and expanding organization

Salary Range: 25K to 75K per month
To Apply: Send your resume to hr@cssinfotech.in with the subject line: Python Developer – Data Extraction

Job Types: Full-time, Permanent
Pay: ₹25,000.00 - ₹75,000.00 per month
Benefits: Flexible schedule, Health insurance, Provident Fund
Location Type: In-person
Schedule: Day shift
Experience: Python: 4 years (Required)
Location: Noida, Uttar Pradesh (Required)
Work Location: In person
Speak with the employer: +91 9582102222
Expected Start Date: 21/07/2025
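The error handling, retries, and logging called for above are commonly combined in a retry wrapper with exponential backoff. A minimal sketch, where `fetch` is any callable that may raise (in practice it would wrap an HTTP request):

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("scraper")

# Sketch of the retry/backoff/logging pattern for scraping jobs.
def with_retries(fetch, attempts=3, backoff=0.01):
    """Call fetch() up to `attempts` times, backing off exponentially."""
    for attempt in range(1, attempts + 1):
        try:
            return fetch()
        except Exception as exc:
            log.warning("attempt %d failed: %s", attempt, exc)
            if attempt == attempts:
                raise                      # out of attempts: surface the error
            time.sleep(backoff * 2 ** (attempt - 1))

# Demo with a stub that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("timeout")
    return "payload"

print(with_retries(flaky))  # → payload
```

The short default backoff keeps the demo fast; a real scraper would use seconds-scale delays plus jitter to avoid thundering-herd retries.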
Posted 2 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
About Us
At Zyte, we eat data for breakfast, and you can eat your breakfast anywhere and work for Zyte. Founded in 2010, we are a globally distributed team of over 240 Zytans working from over 28 countries, on a mission to enable our customers to extract the data they need to continue to innovate and grow their businesses. We believe that all businesses deserve a smooth pathway to data.

For more than a decade, Zyte has led the way in building powerful, easy-to-use tools to collect, format, and deliver web data quickly, dependably, and at scale. Today, the data we extract helps thousands of organizations make smarter business decisions, secure competitive advantage, and drive sustainable growth, and over 3,000 companies and 1 million developers rely on our tools and services to get the data they need from the web.

About The Job:
As a Developer Support Engineer, you'll be the go-to expert for helping customers integrate with our Web Scraping API (Zyte API) and Cloud Spider Deployment Platform (Scrapy Cloud). You'll troubleshoot API issues, resolve scraping blocks, help deploy and debug crawlers in the cloud, and assist with usage and billing questions — whatever it takes to keep our customers' businesses going!

We are big fans of Continuous Improvement and use metrics to measure and improve our processes. Whenever possible we suggest improvements to our products and write our own tools in order to give the best possible service to our customers.

About you:
You are extremely well organised and self-motivated, which is essential because we're a remote team. You are a creative problem solver with a think-outside-the-box, can-do attitude, and you have a passion for great customer service.
Roles & Responsibilities:
- Debug and resolve API errors, failed requests, and integration issues
- Investigate website blocking behavior (e.g., bot detection, captchas, IP bans)
- Recommend and adjust scraping settings like browser fingerprinting, header tuning, proxies, and rendering modes
- Assist with custom setups and edge-case data extraction requests
- Support customers in deploying and managing spiders on our cloud platform
- Analyze logs and runtime behavior of spiders to diagnose failures or inefficiencies
- Help customers retrieve structured data using AI extraction and quick fixes
- Assist users with billing questions, plan upgrades, and usage diagnostics
- Write clear and concise explanations, technical walkthroughs, and documentation
- Collaborate with engineering/product teams to communicate bugs or suggest improvements based on user feedback
- Maintain a strong customer focus with a mindset for preventing recurrence of issues, and be an advocate for customers to get the best value possible
- Demonstrate leadership and the ability to work independently to resolve complex technical issues
- Maintain technical documentation, troubleshooting guides, and AI bots
- Provide assistance to internal groups at Zyte to troubleshoot issues and make configuration changes
- Collaborate effectively within the team and with other teams to constantly improve processes and tools for greater efficiency and better customer satisfaction
- Be available to participate in the weekend shift (approximately one weekend every month) for additional compensation

Requirements
- 3+ years of support or equivalent experience in a customer-facing role
- Good understanding of HTTP, browser behavior, the browser stack, headless browsers, web scraping techniques, and anti-bot mechanisms
- Strong grasp of Python, to be able to write and debug code
- Familiarity with additional languages such as JavaScript, Node.js, TypeScript, Java, .NET/C#, or Golang preferred
- Experience debugging REST APIs (using Postman, curl, or directly in code)
- Experience with web scraping tools or libraries (e.g., Puppeteer, Playwright, Selenium, BeautifulSoup, Scrapy)
- Good understanding of web applications, client utilities, the browser stack, headless browsers, and developer tools in browsers
- Familiarity with tools such as Wireshark, tcpdump, Burp Suite, etc. to intercept and debug network traffic
- Understanding of browser engines, browser fingerprinting, and ad-blocker mechanisms
- Comfortable with the Linux/UNIX or Mac terminal command line for efficient scripting and automation
- Excellent verbal and written English skills, and the ability to articulate a complex system or problem for different audiences
- Strong team player with good analytical and technical writing skills
- Ability to multi-task and manage multiple priorities and commitments

Benefits
By joining the Zyte team, you will:
- Become part of a self-motivated, progressive, multi-cultural team
- Have the freedom & flexibility to work remotely
- Get the chance to work with cutting-edge open source technologies and tools
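REST API debugging, as described above, often boils down to triaging a response by status code and headers. A small sketch of such a triage helper; it works on plain values so it can be exercised without a live endpoint (in practice these would come from a requests response or curl output):

```python
# Sketch of a response-triage helper for API debugging.
# Status codes and header names follow standard HTTP semantics;
# the messages are illustrative.
def triage(status, headers, body):
    """Return a short diagnostic string for an HTTP response."""
    if status == 429:
        return f"rate limited; Retry-After={headers.get('Retry-After', '?')}"
    if status >= 500:
        return "server error; retry with backoff"
    if status >= 400:
        return f"client error {status}: check auth/params ({body[:60]})"
    return "ok"

print(triage(429, {"Retry-After": "30"}, ""))  # → rate limited; Retry-After=30
```

With requests, the same helper could be called as `triage(resp.status_code, resp.headers, resp.text)`.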
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
eGrove Systems is looking for a Senior Python Backend Developer to join its team of experts.

Skill: Senior Python Backend Developer
Exp: 5+ years
NP: Immediate to 15 days
Location: Chennai/Madurai
Interested candidates can send their resume to annie@egrovesys.com

Required Skills:
- 5+ years of strong experience in Python and 2 years in the Django web framework.
- Experience or knowledge in implementing various design patterns.
- Good understanding of the MVC framework and object-oriented programming.
- Experience in PostgreSQL/MySQL and MongoDB.
- Good knowledge of different frameworks, packages, and libraries: Django/Flask, Django ORM, unit testing, NumPy, Pandas, Scrapy, etc.
- Experience developing in a Linux environment, with Git and Agile methodology.
- Good to have knowledge of any one of the JavaScript frameworks: jQuery, Angular, ReactJS.
- Good to have experience in implementing charts and graphs using various libraries.
- Good to have experience in multi-threading and REST API management.

About the Company
eGrove Systems is a leading IT solutions provider specializing in eCommerce, enterprise application development, AI-driven solutions, digital marketing, and IT consulting services. Established in 2008, we are headquartered in East Brunswick, New Jersey, with a global presence. Our expertise includes custom software development, mobile app solutions, DevOps, cloud services, AI chatbots, SEO automation tools, and workforce learning systems. We focus on delivering scalable, secure, and innovative technology solutions to enterprises, startups, and government agencies. At eGrove Systems, we foster a dynamic and collaborative work culture driven by innovation, continuous learning, and teamwork. We provide our employees with cutting-edge technologies, professional growth opportunities, and a supportive work environment to thrive in their careers.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
As a skilled Web Scraping Data Analyst, you will be responsible for collecting, cleaning, and analyzing data from various online sources. Your expertise in Python-based scraping frameworks, data transformation, and experience with proxy/VPN rotation and IP management will be crucial in building data pipelines that support our analytics and business intelligence initiatives.

Your key responsibilities will include designing, developing, and maintaining robust web scraping scripts using tools like Python, BeautifulSoup, Scrapy, Selenium, etc. You will also implement IP rotation, proxy management, and anti-bot evasion techniques; deploy scraping tools on cloud-based or edge servers; and monitor scraping jobs for uptime and efficiency. Additionally, you will parse and structure unstructured or semi-structured web data into clean, usable datasets, collaborate with data analysts and data engineers to integrate web-sourced data into internal databases and reporting systems, conduct exploratory data analysis (EDA), and ensure compliance with website scraping policies, robots.txt, and relevant data privacy regulations.

To excel in this role, you should have proficiency in Python and experience with libraries like Requests, BeautifulSoup, Scrapy, and Pandas. Also valuable are knowledge of proxy/VPN usage, IP rotation, and web traffic routing tools (e.g., Smartproxy, BrightData, Tor); familiarity with cloud platforms (AWS, Azure, or GCP) and Linux-based environments; experience deploying scraping scripts on edge servers or containerized environments (e.g., Docker); a solid understanding of HTML, CSS, JSON, and browser dev tools for DOM inspection; a strong analytical mindset with experience in data cleansing, transformation, and visualization; good knowledge of SQL and basic data querying; and the ability to handle large volumes of data and build efficient data pipelines.
Preferred qualifications for this role include experience with headless browsers like Puppeteer or Playwright, familiarity with scheduling tools like Airflow or cron, a background in data analytics or reporting using tools like Tableau, Power BI, or Jupyter Notebooks, and knowledge of anti-captcha solutions and browser automation challenges. This is a full-time position with the work location being in person.
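The robots.txt compliance mentioned above can be checked programmatically with Python's standard-library `urllib.robotparser`. A sketch that parses an inline rule set (a real crawler would fetch the site's actual `/robots.txt` first; the site name and rules here are invented):

```python
from urllib.robotparser import RobotFileParser

# Compliance sketch: check URLs against robots.txt rules before scraping.
# The rules and domain below are illustrative.
RULES = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(RULES.splitlines())

print(rp.can_fetch("*", "https://shop.example/private/x"))  # → False
print(rp.can_fetch("*", "https://shop.example/products"))   # → True
```

For a live site, `rp.set_url("https://shop.example/robots.txt"); rp.read()` would replace the inline `parse` call.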
Posted 2 weeks ago
4.0 years
0 Lacs
Mohali district, India
On-site
We’re seeking a highly skilled Python Web Scraping Engineer with 3–4 years of hands-on experience in building robust data scrapers for a wide range of websites — from static pages to dynamic, JS-heavy platforms. The ideal candidate should have a deep understanding of Python-based scraping frameworks, web technologies, and anti-scraping defenses.

Key Responsibilities:
- Design and build scalable scraping scripts using Python (Scrapy, BeautifulSoup, Selenium, Playwright)
- Extract structured and unstructured data from eCommerce, fashion, marketplace, and social media websites
- Handle dynamic content, lazy loading, and login-based content scraping
- Process and clean scraped data for AI models and data analytics pipelines
- Collaborate with AI/ML engineers to deliver enriched datasets for training and automation
- Bypass anti-scraping techniques using rotating proxies, headless browsers, CAPTCHA solvers, etc.
- Implement scraping best practices to maintain anonymity and minimize IP bans
- Store scraped data in JSON, CSV, MySQL, or MongoDB, or push it to APIs
- Schedule scraping tasks using cron jobs, Airflow, or similar tools
- Monitor data quality and update scripts when site structures change
- Maintain scraping logs and update stakeholders with progress reports

Required Skills & Qualifications:
- 3–4 years of experience in web scraping with Python
- Proficiency in Scrapy, Selenium, Playwright, and BeautifulSoup
- Experience with CAPTCHA-solving tools, proxy rotation, and IP management
- Familiarity with REST APIs, JSON, and XML
- Experience storing data in databases (MongoDB, PostgreSQL, MySQL, etc.)
- Knowledge of version control (Git); basic DevOps is a plus

Preferred (Nice to Have):
- Experience in AI/ML projects where scraped data supports model training
- Exposure to cloud environments (AWS, GCP, Azure)
- Prior experience scraping at scale or working with large datasets
- Familiarity with text/image data extraction (OCR, NLP preprocessing)
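Storing scraped data in JSON and CSV, as the responsibilities above describe, is a standard-library exercise. A sketch writing invented rows to in-memory buffers (real code would open files or a database client instead):

```python
import csv
import io
import json

# Sketch of persisting scraped rows to CSV and JSON.
# The rows are invented; buffers stand in for files or a DB.
rows = [
    {"name": "Widget", "price": 199},
    {"name": "Gadget", "price": 349},
]

# CSV: DictWriter maps each row dict onto the declared columns.
csv_buf = io.StringIO()
writer = csv.DictWriter(csv_buf, fieldnames=["name", "price"])
writer.writeheader()
writer.writerows(rows)

# JSON: a single dumps call serialises the whole list.
json_text = json.dumps(rows)

print(csv_buf.getvalue().splitlines()[0])  # → name,price
```

Swapping the buffer for `open("out.csv", "w", newline="")` turns this into a file-backed pipeline step.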
Posted 2 weeks ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
We’re Hiring: Data Scraping Engineer
📍 Location: [Remote/City, Country] | 🕒 Full-Time

Techno Province is looking for a skilled and detail-oriented Data Scraping Engineer to join our growing tech team! You’ll be responsible for designing and developing robust web scraping solutions that help fuel powerful travel insights and automation across global platforms.

🔧 What You’ll Do:
- Build and maintain scalable web scraping tools and crawlers
- Extract data from travel websites, APIs, and unstructured sources
- Clean, structure, and store scraped data in usable formats (JSON, CSV, DB)
- Monitor scraping pipelines and troubleshoot issues (captcha, IP blocks, etc.)
- Work with developers and data teams to integrate scraped data into products

✅ What We’re Looking For:
- 2–4 years of experience in web scraping / data extraction
- Strong Python skills (Scrapy, BeautifulSoup, Selenium, etc.)
- Understanding of HTTP, headers, sessions, proxies, and anti-bot mechanisms
- Familiarity with cloud deployment (AWS/GCP) and database storage
- Bonus: Experience with travel industry data or large-scale crawling systems

🚀 What You’ll Get:
- Opportunity to work on high-impact travel tech projects
- Flexible working hours (remote-friendly)
- Collaborative team environment
- Growth opportunities within a fast-moving company

👋 Think you're a fit? Drop us a message or send your resume to [connect@technoprovince.com] with the subject line: Data Scraping Engineer Application. Let’s build smart systems together.

#TechHiring #DataScraping #PythonJobs #TravelTech #JobOpening #HiringNow #TechnoProvince
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
As a skilled Web Scraping Data Analyst, your primary responsibility will involve collecting, cleaning, and analyzing data from various online sources. You will leverage your expertise in Python-based scraping frameworks to design, develop, and maintain robust web scraping scripts using tools such as Python, BeautifulSoup, Scrapy, Selenium, and more. Additionally, you will be tasked with implementing IP rotation, proxy management, and anti-bot evasion techniques to ensure efficient data collection.

Your role will be instrumental in constructing data pipelines that drive our analytics and business intelligence initiatives. Collaboration will be a key aspect of your work as you engage with data analysts and data engineers to integrate web-sourced data into internal databases and reporting systems. Furthermore, you will be involved in conducting exploratory data analysis (EDA) to derive valuable insights from the scraped data. It will be essential to adhere to website scraping policies, robots.txt guidelines, and relevant data privacy regulations to ensure compliance.

To excel in this role, you should possess proficiency in Python and have experience with libraries like Requests, BeautifulSoup, Scrapy, and Pandas. Knowledge of proxy/VPN usage, IP rotation, and web traffic routing tools will be crucial for effective data collection. Familiarity with cloud platforms such as AWS, Azure, or GCP, as well as Linux-based environments, will be advantageous. Experience in deploying scraping scripts on edge servers or containerized environments and a solid understanding of HTML, CSS, JSON, and browser dev tools are also desirable skills. A strong analytical mindset coupled with experience in data cleansing, transformation, and visualization will be beneficial in handling large volumes of data and building efficient data pipelines. Proficiency in SQL and basic data querying will be necessary for data manipulation tasks.
Preferred qualifications include experience with headless browsers like Puppeteer or Playwright, familiarity with scheduling tools like Airflow or Cron, and a background in data analytics or reporting using tools like Tableau, Power BI, or Jupyter Notebooks. This full-time role requires an in-person work location.
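The rotation-plus-extraction workflow this listing describes can be sketched with the standard library alone. This is a minimal illustration under stated assumptions: the user-agent and proxy pools, the page markup, and the `h2.title` selector are all hypothetical, and in practice BeautifulSoup or Scrapy would replace `html.parser`:

```python
import itertools
from html.parser import HTMLParser

# Hypothetical pools -- a real project would load these from config.
USER_AGENTS = ["Mozilla/5.0 (X11; Linux x86_64)", "Mozilla/5.0 (Windows NT 10.0)"]
PROXIES = ["http://proxy-a.example:8080", "http://proxy-b.example:8080"]

_identity = itertools.cycle(zip(USER_AGENTS, PROXIES))

def next_identity():
    """Return the next (user_agent, proxy) pair in round-robin order."""
    return next(_identity)

class TitleParser(HTMLParser):
    """Collect the text of every <h2 class="title"> element."""
    def __init__(self):
        super().__init__()
        self.titles, self._grab = [], False

    def handle_starttag(self, tag, attrs):
        if tag == "h2" and ("class", "title") in attrs:
            self._grab = True

    def handle_data(self, data):
        if self._grab:
            self.titles.append(data.strip())
            self._grab = False

page = '<div><h2 class="title">Job A</h2><h2 class="title">Job B</h2></div>'
parser = TitleParser()
parser.feed(page)
print(parser.titles)       # extracted fields, ready for the pipeline
print(next_identity()[0])  # user agent to attach to the next request
```

Each outgoing request would pick up a fresh `next_identity()` pair, which is the core of the IP-rotation and anti-bot evasion techniques mentioned above.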
Posted 2 weeks ago
3.0 years
0 Lacs
Panchkula, India
On-site
📢 We're Hiring: Full Stack Developer & Python Web Scraper
📍 Location: Panchkula (Full-Time, On-Site)
📅 Experience Required: Minimum 3+ Years
We're on the lookout for a skilled Full Stack Developer who also brings strong expertise in Python-based web scraping. If you have experience with or an understanding of affiliate marketing, that's a major advantage.
👨‍💻 What you'll be doing:
Building and managing scalable scraping systems for large-scale data extraction
Developing full-stack web applications (frontend + backend)
Integrating affiliate marketing tools, APIs, and tracking systems
Creating dashboards and tools to analyze scraped and affiliate performance data
Working closely with the team to turn data into actionable insights
✅ Ideal skill set:
Proficiency in Python scraping tools like Scrapy, BeautifulSoup, Selenium
Strong backend experience with Python frameworks (Django, Flask)
Frontend experience with React, Vue, or similar
Knowledge of affiliate platforms and tracking systems (like Impact, CJ, Awin, etc.)
Experience working with APIs, databases, and cloud deployment
✨ Bonus if you've built systems that power digital marketing or affiliate campaigns.
If this sounds like you (or someone you know), feel free to DM me or send your CV to admin@chevronmedia.in. Let's build something impactful together.
#Hiring #FullStackDeveloper #WebScraping #PythonDeveloper #AffiliateMarketing #jobsinpanchkula #TechHiring
Posted 2 weeks ago
6.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
6+ years of strong hands-on experience in any of Scrapy/Flask, Python, Django.
• Experience with RDBMS – PostgreSQL, MySQL.
• Experience with HTML5, CSS, JavaScript, jQuery
• Experience with frontend frameworks like ReactJS/Angular
• Experience with the Linux platform
• Understanding of deployment architecture
• Must have experience with Docker
• Experience in Kubernetes will be a plus
• Experience in CI/CD; hands-on experience with Jenkins and Git workflow will be a plus
• Experience with cloud services: AWS/Azure/GCP/DigitalOcean
• Experience with source control management tools (Git preferred)
• Excellent communication, interpersonal, and presentation skills.
• Positive approach, self-motivated, and well organized.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Python Backend Developer, you will play a crucial role in executing end-to-end application deliveries and ensuring their successful deployment to production while maintaining high quality standards. Your passion for building flexible and scalable solutions, coupled with meticulous attention to detail and ability to evaluate solutions effectively, will be key in this role. In this position, you will be responsible for designing and developing major software components, systems, and features for the Next Gen Rule Engine & Risk Platform for Digital Lending. Additionally, you will guide junior team members, create interfaces, integrate with other applications and databases, as well as handle debugging and troubleshooting of any issues that may arise. From a technical perspective, you are expected to have strong Python programming skills and a solid understanding of developing applications using Flask or Django. Expertise in developing REST framework applications utilizing both SQL and NoSQL databases is essential. Experience in web Restful APIs and microservices development, along with deploying applications on cloud platforms like AWS, will be highly beneficial. Proficiency in using Python Data Libraries such as Numpy and Pandas is desired, as well as any experience with message queues like Redis or Kafka. Knowledge of fundamental front-end languages like HTML, CSS, and JavaScript, along with familiarity with frontend frameworks, will be advantageous. Experience in web scraping using tools like Scrapy or Selenium would also be a valuable asset. On the non-technical side, strong logical and analytical skills, along with effective communication abilities, are important aspects of this role. Additionally, you will be responsible for handling and managing a small-sized team, showcasing your leadership and team management skills. 
This role presents an exciting opportunity for a motivated individual who is eager to contribute to the development of cutting-edge solutions in the field of digital lending. If you are looking to leverage your Python development expertise and take on a challenging yet rewarding role, this position could be the perfect fit for you.
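The rule-engine responsibility described above can be sketched in plain Python. This is a hedged illustration only: the rule names, application fields, and thresholds below are invented for the example and are not the actual platform's lending logic.

```python
# A minimal sketch of a rule engine evaluating a loan application.
# Rule names, fields, and thresholds are illustrative assumptions.

RULES = [
    ("min_age", lambda app: app["age"] >= 21),
    ("min_income", lambda app: app["monthly_income"] >= 25_000),
    ("dti_cap", lambda app: app["debt"] / app["monthly_income"] <= 0.4),
]

def evaluate(application):
    """Run every rule; return (approved, list of failed rule names)."""
    failed = [name for name, rule in RULES if not rule(application)]
    return (not failed, failed)

app = {"age": 30, "monthly_income": 50_000, "debt": 10_000}
print(evaluate(app))  # (True, [])
```

Keeping rules as named predicates in a list makes them easy to extend, audit, and expose over a REST API, which matches the flexible-and-scalable emphasis in the listing.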
Posted 2 weeks ago
0.0 - 5.0 years
4 - 9 Lacs
Chennai
Remote
Coordinating with development teams to determine application requirements. Writing scalable code using the Python programming language. Testing and debugging applications. Developing back-end components.
Required Candidate profile
Knowledge of Python and related frameworks including Django and Flask. A deep understanding of multi-process architecture and the threading limitations of Python.
Perks and benefits
Flexible Work Arrangements.
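The "threading limitations of Python" the profile refers to are usually CPython's Global Interpreter Lock (GIL): threads interleave rather than execute Python bytecode in parallel, so thread pools suit I/O-bound work while CPU-bound work is typically pushed to multiprocessing. A minimal sketch of the thread-pool pattern for I/O-style tasks, with an illustrative stand-in for the blocking call:

```python
import concurrent.futures

def fetch(url):
    # Stand-in for a blocking network call; a real worker would do I/O here,
    # during which the GIL is released and other threads make progress.
    return f"payload from {url}"

urls = [f"https://example.com/page/{i}" for i in range(3)]
with concurrent.futures.ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(fetch, urls))
print(results)
```

For CPU-bound workloads the same code shape works with `ProcessPoolExecutor`, which sidesteps the GIL at the cost of inter-process serialization.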
Posted 2 weeks ago
0.0 - 1.0 years
0 Lacs
Mumbai Suburban
Work from Office
Job Description of Data Scraper
1. Develop and maintain automated web scraping scripts to extract data from multiple sources (websites, APIs, databases). Clean, structure, and store scraped data in a structured format (CSV, JSON, SQL, or cloud databases).
2. Monitor scraping scripts to ensure reliability and prevent website blocks using proxies, rotating user-agents, and CAPTCHA-solving techniques. Integrate scraped data into CRM, dashboards, or analytics platforms.
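The clean-structure-store step above can be sketched with the standard library. The raw records and field names here are invented for illustration; real scraped data would come from the scripts the listing describes.

```python
import csv, io, json

# Raw records as a scraper might emit them -- field names are illustrative.
raw = [
    {"name": "  Acme Corp ", "price": "1,299"},
    {"name": "Globex", "price": "899"},
]

def clean(record):
    """Normalise whitespace and coerce the price string to an integer."""
    return {
        "name": record["name"].strip(),
        "price": int(record["price"].replace(",", "")),
    }

rows = [clean(r) for r in raw]

# Structured storage: JSON for downstream APIs, CSV for spreadsheets.
as_json = json.dumps(rows)
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "price"])
writer.writeheader()
writer.writerows(rows)
print(as_json)
print(buf.getvalue())
```

The same `rows` list could just as easily be bulk-inserted into SQL or a cloud database, which is why the cleaning step is kept separate from the storage step.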
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
delhi
On-site
As a Python Developer at Innefu Lab, you will play a crucial role in the software development life cycle, contributing from requirements analysis to deployment. Working in collaboration with diverse teams, you will design and implement solutions that align with client requirements and industry standards. Your responsibilities encompass various key areas: Software Development: You will be responsible for creating, testing, and deploying high-quality Python applications and scripts. Code Optimization: Your role involves crafting efficient, reusable, and modular code while enhancing existing codebases for optimal performance. Database Integration: You will integrate Python applications with databases to ensure data integrity and efficient data retrieval. API Development: Designing and implementing RESTful APIs to enable seamless communication between different systems. Collaboration: Working closely with UI/UX designers, backend developers, and stakeholders to ensure effective integration of Python components. Testing and Debugging: Thoroughly testing applications, identifying and rectifying bugs, and ensuring software reliability. Documentation: Creating and maintaining comprehensive technical documentation for code, APIs, and system architecture. Continuous Learning: Staying updated on industry trends, best practices, and emerging technologies related to Python development. 
Required Skills:
- Proficient in Python, Django, Flask
- Strong knowledge of Regular Expressions, Pandas, NumPy
- Excellent expertise in Web Crawling and Web Scraping
- Experience with scraping modules like Selenium, Scrapy, Beautiful Soup, or urllib
- Familiarity with text processing, Elasticsearch, and graph-based databases such as Neo4j (optional)
- Proficient in data mining, Natural Language Processing (NLP), and Optical Character Recognition (OCR)
- Basic understanding of databases
- Strong troubleshooting and debugging capabilities
- Effective interpersonal, verbal, and written communication skills
- Ability to extract data from structured and unstructured sources, analyze text, images, and videos, and utilize NLP frameworks for data enrichment
- Skilled in collecting and extracting intelligence from data, utilizing regular expressions, and extracting information from RDBMS databases
- Experience in web scraping frameworks like Scrapy for data extraction from websites

Join us at Innefu Lab, where innovative offerings and cutting-edge technologies converge to deliver exceptional security solutions. Be part of our dynamic team driving towards excellence and growth in the cybersecurity domain.
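The regular-expression skill the listing emphasises, applied to extracting structure from unstructured text, can be sketched as follows. The sample text is invented, and the patterns are deliberately simplified examples, not production-grade validators:

```python
import re

# Illustrative unstructured text; a crawler would supply real documents.
text = "Contact ops@example.com or sales@example.org before 2024-06-30."

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")   # simplified email pattern
DATE = re.compile(r"\d{4}-\d{2}-\d{2}")          # ISO-style dates only

emails = EMAIL.findall(text)
dates = DATE.findall(text)
print(emails)  # ['ops@example.com', 'sales@example.org']
print(dates)   # ['2024-06-30']
```

The same pattern-based extraction generalises to the intelligence-gathering work described above: compile a pattern per entity type, then run `findall` over each scraped document.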
Posted 3 weeks ago
2.0 years
3 Lacs
Mohali
On-site
Job Title: Python Developer (2+ Years Experience)
Company: Baseline IT Development Pvt. Ltd.
Location: Mohali
Contact: 9815404007
Key Responsibilities:
Develop and maintain web applications using Python, Django, and Odoo
Design and implement RESTful APIs
Perform data extraction and automation using web scraping tools
Debug and optimize code for performance and scalability
Collaborate with front-end developers and project managers to meet project goals
Write clean, well-documented, and testable code
Required Skills:
Minimum 2 years of experience in Python development
Proficiency in the Django framework
Hands-on experience with Odoo ERP
Expertise in web scraping using libraries like BeautifulSoup, Scrapy, or Selenium
Good understanding of databases (PostgreSQL/MySQL)
Familiarity with Git version control
Excellent problem-solving and communication skills
Preferred Qualifications:
Bachelor's degree in Computer Science, IT, or related field
Experience with API integrations
Knowledge of Linux server environments
Perks & Benefits:
Friendly and collaborative work environment
Career growth opportunities
5-day working culture
Job Type: Full-time
Pay: Up to ₹30,000.00 per month
Location Type: In-person
Schedule: Morning shift
Work Location: In person
Speak with the employer +91 9888122266
Posted 3 weeks ago
4.0 years
3 - 10 Lacs
Mohali
On-site
Job Description:
Should have 4+ years of hands-on experience in algorithms and implementation of analytics solutions in predictive analytics, text analytics, and image analytics.
Should have hands-on experience leading a team of data scientists, working closely with the client's technical team to plan, develop, and execute on client requirements, providing technical expertise and project leadership.
Leads efforts to foster innovative ideas for developing high-impact solutions.
Evaluates and leads a broad range of forward-looking analytics initiatives, tracks emerging data science trends, and supports knowledge sharing.
Engages key stakeholders to source, mine, and validate data and findings, and to confirm business logic and assumptions in order to draw conclusions.
Helps design and develop advanced analytics solutions across functional areas as per requirements/opportunities.
Technical Role and Responsibilities:
Demonstrated strong capability in statistical/mathematical modelling, machine learning, or artificial intelligence.
Demonstrated skills in programming for implementation and deployment of algorithms, preferably in statistical/ML-oriented programming languages such as Python.
Sound experience with traditional as well as modern statistical techniques, including Regression, Support Vector Machines, Regularization, Boosting, Random Forests, and other Ensemble Methods.
Visualization tool experience, preferably with Tableau or Power BI.
Sound knowledge of ETL practices, preferably Spark in Databricks, and cloud big data technologies like AWS, Google, Microsoft, or Cloudera.
Communicates complex quantitative analysis as lucid, precise, clear, and actionable insight.
Develops new practices and methodologies using statistical methods, machine learning, and predictive models under mentorship.
Carries out statistical and mathematical modelling, solving complex business problems and delivering innovative solutions using state-of-the-art tools and cutting-edge technologies for big data and beyond.
Preferred: Bachelor's/Master's in Statistics/Machine Learning/Data Science/Analytics.
Should be a Data Science professional with a knack for solving problems using cutting-edge ML/DL techniques and implementing solutions leveraging cloud-based infrastructure.
Should be strong in GCP, TensorFlow, NumPy, Pandas, Python, AutoML, BigQuery, machine learning, artificial intelligence, and deep learning.
Exposure to the below skills:
Preferred Tech Skills: Python, Computer Vision, Machine Learning, RNN, Data Visualization, Natural Language Processing, Voice Modulation, Speech-to-Text, spaCy, LSTM, Object Detection, sklearn, NumPy, NLTK, Matplotlib, cufflinks, seaborn, Image Processing, Neural Networks, YOLO, DarkFlow, DarkNet, PyTorch, CNN, TensorFlow, Keras, U-Net, Image Segmentation, ModeNet, OCR, OpenCV, Pandas, Scrapy, BeautifulSoup, LabelImg, Git.
Machine Learning, Deep Learning, Computer Vision, Natural Language Processing, Statistics
Programming Languages: Python
Libraries & Software Packages: TensorFlow, Keras, OpenCV, Pillow, Scikit-Learn, Flask, NumPy, Pandas, Matplotlib, Docker
Cloud Services: Compute Engine, GCP AI Platform, Cloud Storage, GCP AI & ML APIs
Job Types: Full-time, Permanent, Fresher
Pay: ₹30,000.00 - ₹90,000.00 per month
Education: Bachelor's (Preferred)
Experience: AI/Machine learning: 4 years (Preferred)
Work Location: In person
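Of the statistical techniques listed above, the simplest is ordinary least squares regression, which has a closed form for the one-variable case. A toy stdlib-only illustration (real work would use scikit-learn or statsmodels; the data points are invented):

```python
# Closed-form simple linear regression: fit y = m*x + b by minimising
# squared error. Toy data chosen to lie exactly on a line.

def fit_line(xs, ys):
    """Return (slope, intercept) for the least-squares line."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]          # exactly y = 2x + 1
print(fit_line(xs, ys))    # (2.0, 1.0)
```

Regularization, boosting, and the ensemble methods in the listing all build on this same fit-by-minimising-error idea with different loss terms and model families.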
Posted 3 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Festivals From India is helping ASIGN hire a Data Engineer to join their team. They are looking for an experienced Data Engineer with a strong grasp of ELT architecture to help them build and maintain robust data pipelines. This is a hands-on role for someone passionate about structured data, automation, and scalable infrastructure. The ideal candidate will be responsible for sourcing, ingesting, transforming, and storing data, and for making data accessible and reliable for data analysis, machine learning, and reporting. You will play a key role in maintaining and evolving the data architecture and ensuring that data flows efficiently and securely.
Please note: The vetting process for this role comprises 2-3 rounds of interviews and may be followed by a brief assignment. Festivals From India is hiring for this role on behalf of ASIGN. This is an on-site, full-time position based in Chennai. The salary band for this role is available upon request.
Essential Requirements:
Minimum 5 years of hands-on experience in data engineering.
Solid understanding of and experience with ELT pipelines and modern data stack tools.
Practical knowledge of one or more orchestrators (Dagster, Airflow, Prefect, etc.).
Proficiency in Python and SQL.
Experience working with APIs and data integration from multiple sources.
Familiarity with one or more cloud data warehouses (e.g., Snowflake, BigQuery, Redshift).
Strong problem-solving and debugging skills.
Essential Qualifications:
Bachelor's/Master's degree in Computer Science, Engineering, Statistics, or a related field
Proven experience (5+ years) in data engineering, data integration, and data management
Hands-on experience with data sourcing tools and frameworks (e.g. Scrapy, BeautifulSoup, Selenium, Playwright)
Proficiency in Python and SQL for data manipulation and pipeline development
Experience with cloud-based data platforms (AWS, Azure, or GCP) and data warehouse tools (e.g. Redshift, BigQuery, Snowflake)
Familiarity with workflow orchestration tools (e.g. Airflow, Prefect, Dagster)
Strong understanding of relational and non-relational databases (PostgreSQL, MongoDB, etc.)
Solid understanding of data modeling, ETL best practices, and data governance principles
Systems knowledge and experience working with Docker
Strong and creative problem-solving skills and the ability to think critically about data engineering solutions
Effective communication and collaboration skills
Ability to work independently and as part of a team in a fast-paced, dynamic environment
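The orchestration idea behind the tools named above (Airflow, Prefect, Dagster) is a dependency graph of tasks executed in topological order. A minimal stdlib sketch using `graphlib`, with illustrative task names standing in for real extract/load/transform steps:

```python
from graphlib import TopologicalSorter

log = []

def extract():
    log.append("extract")

def load():
    log.append("load")

def transform():
    log.append("transform")

# Each task maps to the set of tasks it depends on, mirroring how
# orchestrators declare upstream dependencies.
pipeline = {extract: set(), load: {extract}, transform: {load}}

for task in TopologicalSorter(pipeline).static_order():
    task()

print(log)  # ['extract', 'load', 'transform']
```

Real orchestrators add scheduling, retries, and observability on top, but the dependency-resolution core is the same.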
Posted 3 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Lead Django Developer eGrove Systems is seeking a highly skilled and experienced Lead Django Developer to join our team of experts in Chennai or Madurai. This is an excellent opportunity for a seasoned developer to take on a leadership role, driving the development of innovative web applications and contributing to our dynamic and collaborative work culture. About EGrove Systems Established in 2008, eGrove Systems is a leading IT solutions provider with headquarters in East Brunswick, New Jersey, and a global presence. We specialize in eCommerce, enterprise application development, AI-driven solutions, digital marketing, and IT consulting services. Our expertise spans custom software development, mobile app solutions, DevOps, cloud services, AI chatbots, SEO automation tools, and workforce learning systems. We are committed to delivering scalable, secure, and innovative technology solutions to enterprises, startups, and government agencies. At eGrove Systems, we foster a dynamic and collaborative work culture driven by innovation, continuous learning, and teamwork, providing our employees with cutting-edge technologies, professional growth opportunities, and a supportive work environment. Responsibilities Lead the design, development, and deployment of robust and scalable web applications using Django. Architect and implement solutions adhering to best practices and design patterns. Collaborate with cross-functional teams to define, design, and ship new features. Ensure the performance, quality, and responsiveness of applications. Mentor junior developers and contribute to their professional growth. Participate in code reviews to maintain high code quality and standards. Stay up-to-date with emerging technologies and industry trends. Required Skills & Experience 5+ years of strong experience in Python, with at least 2 years specifically in the Django Web framework. Proven experience or knowledge in implementing various Design Patterns. 
Good understanding of the MVC framework and Object-Oriented Programming (OOP) principles. Hands-on experience with PostgreSQL (PGSQL) or MySQL, and MongoDB. Solid knowledge of different frameworks, packages, and libraries such as Django/Flask, Django ORM, Unit Test, NumPy, Pandas, Scrapy, etc. Experience developing in a Linux environment, utilizing Git for version control, and working within an Agile methodology.
Good To Have Skills
Knowledge in any one of the JavaScript frameworks: jQuery, Angular, or ReactJS.
Experience in implementing charts and graphs using various libraries.
Experience in Multi-Threading and REST API management.
Experience: 5+ Years
Notice Period: Immediate to 15 Days
Location: Chennai / Madurai (ref:hirist.tech)
Posted 3 weeks ago
2.0 - 6.0 years
0 Lacs
maharashtra
On-site
The position of Data Scrapper + QA Tester in Malad, Mumbai requires a skilled and proactive individual to join the team. The primary responsibilities include designing, managing, and implementing data-scraping tools to meet project requirements and performing Quality Assurance (QA) testing to ensure data accuracy and system reliability.

As the Data Scrapper + QA Tester, you will be responsible for developing customized data-scraping tools based on short-notice project requirements, scraping and compiling datasets from various global sources, and staying updated with the latest scraping tools and technologies to enhance efficiency. You will also need to identify and resolve challenges in data generation, optimize scraping processes, and conduct thorough QA testing to ensure data accuracy, consistency, and completeness. Collaboration with cross-functional teams to understand project goals, refine scraping and QA processes, and provide detailed documentation of tools developed, challenges encountered, and solutions implemented is essential.

The ideal candidate should have proven experience in designing and implementing data-scraping tools, proficiency in programming languages commonly used for web scraping, ability to handle large datasets efficiently, and strong problem-solving skills. Preferred qualifications include experience with database management systems, familiarity with APIs and web scraping using API integrations, knowledge of data protection regulations and ethical scraping practices, and exposure to machine learning techniques for data refinement. If you are a problem-solver with expertise in data scraping and QA testing, and thrive in a fast-paced environment, we encourage you to apply for this position.
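The QA side of this role (checking scraped data for accuracy, consistency, and completeness) can be sketched as automated record validation. The schema and rules below are illustrative assumptions, not the team's actual checks:

```python
# Minimal QA checks for scraped records: required fields present,
# price numeric, URL absolute. Schema and rules are hypothetical.

REQUIRED = {"url", "title", "price"}

def validate(record):
    """Return a list of QA failures for one scraped record."""
    errors = []
    missing = REQUIRED - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if "price" in record and not isinstance(record["price"], (int, float)):
        errors.append("price is not numeric")
    if "url" in record and not record["url"].startswith("http"):
        errors.append("url is not absolute")
    return errors

good = {"url": "https://example.com/p/1", "title": "Widget", "price": 9.99}
bad = {"url": "/p/2", "title": "Gadget"}
print(validate(good))  # []
print(validate(bad))
```

Running checks like these over every scraped batch turns "data accuracy and completeness" from a manual review into a repeatable, automatable gate.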
Posted 3 weeks ago
2.0 - 3.0 years
3 - 7 Lacs
Pune
Work from Office
1. Develop and maintain a service that extracts website data using scrapers and APIs
2. Extract structured / unstructured data
3. Manipulate data through text processing, image processing, regular expressions, etc.
Free meal
Health insurance
Accidental insurance
Provident fund
Posted 3 weeks ago