1.0 - 4.0 years
6 - 12 Lacs
Ahmedabad, Bengaluru
Hybrid
The Software Engineer is primarily responsible for core development of the product using best software development practices. We are looking for a highly competent and self-motivated individual who can write effective, reusable, and modular code along with unit tests, and who can work with minimal supervision. Requirements: MCA or BE/BTech in Computer Science. 1-3 years of experience working in the Microsoft technology stack (C#/.NET Core, SQL Server). Sound knowledge of XPath and the HTML DOM. Ability to develop scripts to extract data from websites and APIs. Experience with Puppeteer or Selenium. Strong grounding in algorithms and data structures. Experience with a NoSQL database such as MongoDB will be an added advantage. Knowledge of Python will be an added advantage. Experience with large-scale distributed applications and familiarity with event-based programming. Knowledge of various cloud services, mainly Azure. Must be familiar with Scrum methodology, CI/CD, Git, branching/merging, and test-driven software development. Candidates who have worked in a product-based company will be preferred. Good verbal and written communication skills.
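As a sketch of the XPath/HTML-DOM skill this posting asks for: extracting fields from a hypothetical, well-formed page fragment with Python's stdlib ElementTree, which supports only a subset of XPath 1.0. Production code in this stack would more likely use lxml in Python or a .NET library such as HtmlAgilityPack.

```python
import xml.etree.ElementTree as ET

# Hypothetical product-page fragment; real pages are messier and usually
# need a tolerant HTML parser rather than strict XML.
HTML = """
<html>
  <body>
    <div class="product"><span class="name">Widget A</span><span class="price">199</span></div>
    <div class="product"><span class="name">Widget B</span><span class="price">249</span></div>
  </body>
</html>
"""

def extract_products(doc: str):
    """Extract (name, price) pairs using ElementTree's XPath subset."""
    root = ET.fromstring(doc)
    products = []
    for div in root.findall(".//div[@class='product']"):
        name = div.find("span[@class='name']").text
        price = int(div.find("span[@class='price']").text)
        products.append((name, price))
    return products

print(extract_products(HTML))
```

The same `.//div[@class='product']` expression works unchanged in full XPath engines such as lxml's `xpath()`.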
Posted 3 weeks ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
eGrove Systems Pvt Ltd is looking for a Senior Python Developer to join its team of experts. Skill: Senior Python Developer. Exp: 4+ yrs. NP: Immediate to 15 days. Location: Chennai/Madurai. Skills Requirement: Hands-on software development skills and deep technical expertise across the entire software delivery process. A forward-thinking, skilled individual who is structured, organized, and a good communicator, and who writes reusable, testable, and efficient code. Required Skills: 3+ years of strong experience in Python and 2 years in the Django web framework. Experience or knowledge in implementing various design patterns. Good understanding of the MVC framework and object-oriented programming. Experience in PGSQL/MySQL and MongoDB. Good knowledge of different frameworks, packages, and libraries: Django/Flask, Django ORM, unit testing, NumPy, Pandas, Scrapy, etc. Experience developing in a Linux environment, with Git and Agile methodology. Good to have: knowledge of any one of the JavaScript frameworks (jQuery, Angular, ReactJS); experience implementing charts and graphs using various libraries; experience in multi-threading and REST API management. About Company: eGrove Systems is a leading IT solutions provider specializing in eCommerce, enterprise application development, AI-driven solutions, digital marketing, and IT consulting services. Established in 2008, we are headquartered in East Brunswick, New Jersey, with a global presence. Our expertise includes custom software development, mobile app solutions, DevOps, cloud services, AI chatbots, SEO automation tools, and workforce learning systems. We focus on delivering scalable, secure, and innovative technology solutions to enterprises, start-ups, and government agencies. At eGrove Systems, we foster a dynamic and collaborative work culture driven by innovation, continuous learning, and teamwork.
We provide our employees with cutting-edge technologies, professional growth opportunities, and a supportive work environment to thrive in their careers. (ref:hirist.tech)
Posted 3 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
eGrove Systems is looking for a Backend Python Developer to join its team of experts. Skill: Backend Python Developer. Exp: 5+ yrs. NP: Immediate to 15 days. Location: Chennai/Madurai. Interested candidates can send their resume to annie@egrovesys.com. Required Skills: 5+ years of strong experience in Python and 2 years in the Django web framework. Experience or knowledge in implementing various design patterns. Good understanding of the MVC framework and object-oriented programming. Experience in PGSQL/MySQL and MongoDB. Good knowledge of different frameworks, packages, and libraries: Django/Flask, Django ORM, unit testing, NumPy, Pandas, Scrapy, etc. Experience developing in a Linux environment, with Git and Agile methodology. Good to have: knowledge of any one of the JavaScript frameworks (jQuery, Angular, ReactJS); experience implementing charts and graphs using various libraries; experience in multi-threading and REST API management. About Company: eGrove Systems is a leading IT solutions provider specializing in eCommerce, enterprise application development, AI-driven solutions, digital marketing, and IT consulting services. Established in 2008, we are headquartered in East Brunswick, New Jersey, with a global presence. Our expertise includes custom software development, mobile app solutions, DevOps, cloud services, AI chatbots, SEO automation tools, and workforce learning systems. We focus on delivering scalable, secure, and innovative technology solutions to enterprises, startups, and government agencies. At eGrove Systems, we foster a dynamic and collaborative work culture driven by innovation, continuous learning, and teamwork. We provide our employees with cutting-edge technologies, professional growth opportunities, and a supportive work environment to thrive in their careers.
Posted 3 weeks ago
2.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Title: Python Developer – Web Scraping & Automation. Company: Actowiz Solutions. Location: Ahmedabad. Job Type: Full-time. About Us: Actowiz Solutions is a leading provider of data extraction, web scraping, and automation solutions. We empower businesses with actionable insights by delivering clean, structured, and scalable data through cutting-edge technology. Role Overview: We are looking for a highly skilled Python Developer with expertise in web scraping, automation tools, and related frameworks. Key Responsibilities: Design, develop, and maintain scalable web scraping scripts and frameworks. Lead a team of Python developers in project planning, task allocation, and code reviews. Work with tools and libraries such as Scrapy, BeautifulSoup, Selenium, Playwright, Requests, etc. Implement robust error handling, data parsing, and storage mechanisms (JSON, CSV, databases, etc.). Optimize scraping performance and ensure compliance with legal and ethical scraping practices. Research new tools and techniques to improve scraping efficiency and scalability. Requirements: 2+ years of experience in Python development with strong expertise in web scraping. Proficiency in scraping frameworks like Scrapy, Playwright, or Selenium. Deep understanding of HTTP, proxies, user agents, browser automation, and anti-bot measures. Experience with REST APIs, asynchronous programming, and multithreading. Familiarity with databases (SQL/NoSQL) and cloud-based data pipelines. Preferred Qualifications: Knowledge of DevOps tools (Docker, CI/CD) is a plus. Experience with big data platforms or ETL pipelines is advantageous. Contact us: Mobile: 841366964. Email: komal.actowiz@gmail.com. Website: https://www.actowizsolutions.com/career.php
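The error-handling and storage responsibilities above might be sketched like this: defensive parsing of a dirty scraped field, then persistence to both JSON and CSV. The sample records are hypothetical.

```python
import csv
import json
from pathlib import Path

def parse_price(raw):
    """Defensively parse a scraped price string; scraped fields are often dirty."""
    try:
        return float(raw.replace("₹", "").replace(",", "").strip())
    except (AttributeError, ValueError):
        return None  # keep the record, flag the bad field as missing

def store(records, stem="products"):
    """Persist scraped records as both JSON and CSV."""
    Path(f"{stem}.json").write_text(json.dumps(records, ensure_ascii=False, indent=2))
    with open(f"{stem}.csv", "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["name", "price"])
        writer.writeheader()
        writer.writerows(records)

# Hypothetical raw rows as they might come off a page.
raw = [{"name": "Widget", "price": "1,299.00"}, {"name": "Gadget", "price": "N/A"}]
records = [{"name": r["name"], "price": parse_price(r["price"])} for r in raw]
store(records)
```

Keeping bad fields as `None` instead of dropping rows lets downstream QC count failures rather than silently losing data.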
Posted 3 weeks ago
3.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
Your mission: As a Python Developer you will foster technical excellence and innovation by delivering valuable solutions that keep MIC's services competitive and market-relevant. Parse and transform data from various sources (websites, files, APIs) into JSON format and deliver it to cross-functional teams for downstream applications. Develop and maintain data parsing pipelines using Python. Scrape data from structured and unstructured web sources. Store, transform, and manage data, mainly in SQL. Ensure data accuracy, reliability, and timeliness in delivery. Collaborate with internal stakeholders. Write clean, scalable, and well-documented code. Your Skills And Qualifications: 3+ years of hands-on experience in Python development. Bachelor's degree in Computer Engineering, Computer Science, or a related field. Experience with web scraping frameworks (e.g., Scrapy, BeautifulSoup, Selenium, Requests). Strong SQL knowledge and experience with relational databases (e.g., PostgreSQL, MySQL). Familiarity with JSON, REST APIs, data serialization, and Git/GitHub version control workflows. Proficient in Linux/Unix environments or Windows Subsystem for Linux (WSL). Strong problem-solving and analytical skills. Excellent communicator and team player. Ability to work independently and in international teams. Proficient in English (spoken and written). We offer you: A cooperative, appreciative, and respectful culture. An international context with global players. Professional onboarding: personal training plan and mentoring. MIC Academy. Various benefits: bonus payment, annual leave, paid sick leave, provident fund, permanent employment. Join the international MIC team! MIC Office Vadodara (on-site).
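A minimal sketch of the parse-to-JSON-and-store-in-SQL flow described above, using stdlib sqlite3 with hypothetical rows; a production pipeline would target PostgreSQL/MySQL as listed in the posting.

```python
import json
import sqlite3

def deliver(rows):
    """Transform parsed rows to JSON and load them into a relational store.

    Each row is stored both as typed columns (for SQL queries) and as a
    JSON payload (for downstream consumers that expect JSON).
    """
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (name TEXT, price REAL, payload TEXT)")
    for row in rows:
        conn.execute(
            "INSERT INTO items VALUES (?, ?, ?)",
            (row["name"], row["price"], json.dumps(row)),
        )
    conn.commit()
    return conn

# Hypothetical parsed records from a scraper.
conn = deliver([{"name": "a", "price": 1.5}, {"name": "b", "price": 2.0}])
count, total = conn.execute("SELECT COUNT(*), SUM(price) FROM items").fetchone()
```

Storing the raw JSON alongside typed columns is one way to satisfy both "deliver JSON to teams" and "manage data mainly in SQL" with a single table.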
Posted 3 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
eGrove Systems is looking for a Lead Python Developer to join its team of experts. Skill: Lead Python Developer. Exp: 5+ yrs. NP: Immediate to 15 days. Location: Chennai/Madurai. Interested candidates can send their resume to annie@egrovesys.com. Required Skills: 5+ years of strong experience in Python and 2 years in the Django web framework. Experience or knowledge in implementing various design patterns. Good understanding of the MVC framework and object-oriented programming. Experience in PGSQL/MySQL and MongoDB. Good knowledge of different frameworks, packages, and libraries: Django/Flask, Django ORM, unit testing, NumPy, Pandas, Scrapy, etc. Experience developing in a Linux environment, with Git and Agile methodology. Good to have: knowledge of any one of the JavaScript frameworks (jQuery, Angular, ReactJS); experience implementing charts and graphs using various libraries; experience in multi-threading and REST API management. About Company: eGrove Systems is a leading IT solutions provider specializing in eCommerce, enterprise application development, AI-driven solutions, digital marketing, and IT consulting services. Established in 2008, we are headquartered in East Brunswick, New Jersey, with a global presence. Our expertise includes custom software development, mobile app solutions, DevOps, cloud services, AI chatbots, SEO automation tools, and workforce learning systems. We focus on delivering scalable, secure, and innovative technology solutions to enterprises, startups, and government agencies. At eGrove Systems, we foster a dynamic and collaborative work culture driven by innovation, continuous learning, and teamwork. We provide our employees with cutting-edge technologies, professional growth opportunities, and a supportive work environment to thrive in their careers.
Posted 3 weeks ago
7.0 years
0 Lacs
Greater Lucknow Area
On-site
About The Company: Hypersonix.ai is disrupting the e-commerce space with AI, ML, and advanced decision-making capabilities to drive real-time business insights. Built from the ground up using modern technologies, Hypersonix simplifies data consumption for customers across various industry verticals. We are seeking a well-rounded, hands-on product leader to help manage key capabilities and features in our platform. Position Overview: We are seeking a highly skilled Web Scraping Architect to join our team. The successful candidate will be responsible for designing, implementing, and maintaining web scraping processes to gather data from various online sources efficiently and accurately. As a Web Scraping Specialist, you will play a crucial role in collecting data for competitor analysis and other business intelligence purposes. Responsibilities: Scalability/Performance: Lead and provide expertise in scraping e-commerce marketplaces at scale. Data Source Identification: Identify relevant websites and online sources from which data needs to be scraped. Collaborate with the team to understand data requirements and objectives. Web Scraping Design: Develop and implement effective web scraping strategies to extract data from targeted websites. This includes selecting appropriate tools, libraries, or frameworks for the task. Data Extraction: Create and maintain web scraping scripts or programs to extract the required data. Ensure the code is optimized, reliable, and can handle changes in the website's structure. Data Cleansing and Validation: Cleanse and validate the collected data to eliminate errors, inconsistencies, and duplicates. Ensure data integrity and accuracy throughout the process. Monitoring and Maintenance: Continuously monitor and maintain the web scraping processes. Address any issues that arise due to website changes, data format modifications, or anti-scraping mechanisms.
Scalability and Performance: Optimize web scraping procedures for efficiency and scalability, especially when dealing with a large volume of data or multiple data sources. Compliance and Legal Considerations: Stay up-to-date with legal and ethical considerations related to web scraping, including website terms of service, copyright, and privacy regulations. Documentation: Maintain detailed documentation of web scraping processes, data sources, and methodologies. Create clear and concise instructions for others to follow. Collaboration: Collaborate with other teams such as data analysts, developers, and business stakeholders to understand data requirements and deliver insights effectively. Security: Implement security measures to ensure the confidentiality and protection of sensitive data throughout the scraping process. Requirements: Proven experience of 7+ years as a Web Scraping Specialist or similar role, with a track record of successful web scraping projects. Expertise in handling dynamic content, user-agent rotation, bypassing CAPTCHAs, rate limits, and use of proxy services. Knowledge of browser fingerprinting. Leadership experience. Proficiency in programming languages commonly used for web scraping, such as Python, and libraries such as BeautifulSoup, Scrapy, or Selenium. Strong knowledge of HTML, CSS, XPath, and other web technologies relevant to web scraping and coding. Knowledge and experience in best-of-class data storage and retrieval for large volumes of scraped data. Understanding of web scraping best practices, including handling dynamic content, user-agent rotation, and IP address management. Attention to detail and ability to handle and process large volumes of data accurately. Familiarity with data cleansing techniques and data validation processes. Good communication skills and ability to collaborate effectively with cross-functional teams. Knowledge of web scraping ethics, legal considerations, and compliance with website terms of service.
Strong problem-solving skills and adaptability to changing website structures. Preferred Qualifications: Bachelor's degree in Computer Science, Data Science, Information Technology, or related fields. Experience with cloud-based solutions and distributed web scraping systems. Familiarity with APIs and data extraction from non-public sources. Knowledge of machine learning techniques for data extraction and natural language processing is desired but not mandatory. Prior experience in handling large-scale data projects and working with big data frameworks. Understanding of various data formats such as JSON, XML, CSV, etc. Experience with version control systems like Git. Skills: Web Scraping, Python, Selenium, HTML/CSS, XPath, Beautiful Soup, and Scrapy. (ref:hirist.tech)
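The user-agent rotation and proxy usage called for in this role can be sketched as a small request-profile generator. The agent strings and proxy endpoints below are placeholders; a real scraper would plug each profile into its HTTP client per request.

```python
import itertools
import random

# Illustrative user-agent strings (truncated placeholders, not real values).
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) ...",
    "Mozilla/5.0 (X11; Linux x86_64) ...",
]
# Hypothetical proxy endpoints; in practice these come from a proxy service.
PROXIES = ["http://proxy-1:8080", "http://proxy-2:8080"]

_ua_cycle = itertools.cycle(USER_AGENTS)

def next_request_profile():
    """Pick headers and a proxy for the next request; rotating both makes
    traffic look less uniform to anti-bot systems."""
    return {
        "headers": {"User-Agent": next(_ua_cycle), "Accept-Language": "en-US,en;q=0.9"},
        "proxy": random.choice(PROXIES),
    }

profiles = [next_request_profile() for _ in range(4)]
```

Cycling agents deterministically while choosing proxies randomly is one simple policy; production systems often also rotate on failures and track per-proxy health.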
Posted 3 weeks ago
2.0 - 5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Position: Data Analyst with Market Research & Web Scraping Skills Location: Udyog Vihar Phase-1, Gurgaon Experience: 2-5 years in data analysis, preferably within a competitive analysis or market research Salary: Negotiable Industry: Fashion/garment/apparel Education: Bachelor's degree in Data Science, Computer Science, Statistics, Business Analytics, or a related field. Advanced degrees or certifications in data analytics or market research are a plus. Position Overview We are seeking a highly skilled Data Analyst who can combine data analysis expertise with a knack for market research and data scraping. This role requires hands-on experience in gathering and analyzing competitor data, identifying market trends, and extracting data from online sources, especially best-seller lists. You will play a critical role in providing insights into our competitive landscape, product performance, and market trends. Key Responsibilities Data Analysis & Interpretation Analyze large datasets to identify trends, patterns, and insights related to market trends and competitor performance. Conduct quantitative and qualitative analyses to support decision-making in product development and strategy. Develop dashboards, reports, and visualizations that summarize key insights for stakeholders. Market Research Perform in-depth market research to track competitor performance, emerging trends, and customer preferences. Use data-driven approaches to identify potential market opportunities and risks. Compile and present findings on market share, pricing strategies, and customer reviews of competing products. Data Scraping & Automation Design and implement data scraping solutions to gather competitor data from websites, including best-seller lists, product reviews, and pricing information. Ensure data extraction processes comply with legal standards and respect website terms of service. 
Maintain and update scraping scripts to adjust for changing website structures and ensure the continued relevance of data. Database Management & Data Cleaning Create and maintain organized databases with market and competitor data for easy access and retrieval. Perform data cleaning, transformation, and enrichment to ensure accuracy and consistency in datasets. Collaboration & Communication Work closely with cross-functional teams, including marketing, product development, and sales, to align data insights with company objectives. Communicate findings clearly through reports, presentations, and visualizations that drive strategic decisions. Experience: Proven experience with data scraping tools such as BeautifulSoup, Scrapy, or Selenium. Familiarity with web analytics and SEO tools, such as Google Analytics or SEMrush, is a plus. Technical Skills: Proficiency in SQL, Python, or R for data analysis and data manipulation. Experience with data visualization tools like Tableau, Power BI, or D3.js. Familiarity with data extraction libraries (e.g., Beautiful Soup, Scrapy) and knowledge of APIs for web scraping. Analytical Skills: Strong ability to interpret data, draw insights, and make strategic recommendations. Knowledge of statistical analysis techniques to support data-driven insights. Preferred Skills Experience with e-commerce data analysis and knowledge of retail or consumer behaviour analytics. Familiarity with machine learning techniques for data classification, clustering, and prediction (preferred but not required). Understanding of ethical data scraping practices and data privacy laws. 
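The competitor-analysis workflow this role describes (scraped best-seller data summarized for stakeholders) might look like the following in pandas; the rows are hypothetical.

```python
import pandas as pd

# Hypothetical scraped best-seller rows (brand, price, rank on the list).
rows = [
    {"brand": "A", "price": 799, "rank": 1},
    {"brand": "B", "price": 1299, "rank": 2},
    {"brand": "A", "price": 899, "rank": 3},
    {"brand": "C", "price": 499, "rank": 4},
]
df = pd.DataFrame(rows)

# Average price and best (lowest) rank per brand: the kind of competitor
# summary a dashboard or weekly report would surface.
summary = (
    df.groupby("brand")
      .agg(avg_price=("price", "mean"), best_rank=("rank", "min"))
      .sort_values("best_rank")
)
print(summary)
```

From here, the summary frame feeds directly into Tableau/Power BI extracts or a plotting library for the visualizations mentioned above.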
Application Process: Mail an updated resume with current salary to jobs@glansolutions.com. For any inquiries, contact Satish: 8802749743. Website: www.glansolutions.com. Key Skills: data analyst, e-commerce data analyst, market research, web scraping, data interpretation, data science, data engineer. Posted on: 14th Nov, 2024
Posted 3 weeks ago
0 years
0 Lacs
Andhra Pradesh
On-site
Proficiency in Python, especially for data extraction and automation tasks. Strong experience with web scraping frameworks such as Scrapy, BeautifulSoup, or Selenium. Hands-on experience building automated data pipelines using tools like Airflow, Luigi, or custom schedulers. Knowledge of web data collection techniques, including handling pagination, AJAX, JavaScript-rendered content, and rate-limiting. Familiarity with RESTful APIs and techniques for API-based data ingestion. Experience with data storage solutions, such as PostgreSQL, MongoDB, or cloud-based storage (e.g., AWS S3, Google Cloud Storage). Version control proficiency, especially with Git. Ability to write clean, modular, and well-documented code. Strong debugging and problem-solving skills in data acquisition workflows. Nice-to-Haves Experience with cloud platforms (AWS, GCP, or Azure) for deploying and managing data pipelines. Familiarity with containerization tools like Docker. Knowledge of data quality monitoring and validation techniques. Exposure to data transformation tools (e.g., dbt). Understanding of ethical and legal considerations in web scraping. Experience working with CI/CD pipelines for data workflows. Familiarity with data visualization tools (e.g., Tableau, Power BI, or Plotly) for quick insights. Background in data science or analytics to support downstream use cases. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. 
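The pagination and rate-limiting requirements above can be sketched as a small page walker. The fetch function is injected (here a fake three-page source) so the pattern is shown without a live endpoint, and a fixed delay stands in for real rate-limit handling.

```python
import time

def ingest_paginated(fetch_page, delay=0.0):
    """Walk a paginated source, pausing between calls.

    `fetch_page(page)` must return (items, has_next); it is injected so the
    walker is testable without a network connection.
    """
    page, items = 1, []
    while True:
        batch, has_next = fetch_page(page)
        items.extend(batch)
        if not has_next:
            return items
        time.sleep(delay)  # crude rate limit; real code should also honor Retry-After
        page += 1

# Fake three-page source standing in for a real API or site.
DATA = {1: [1, 2], 2: [3, 4], 3: [5]}
result = ingest_paginated(lambda p: (DATA[p], p < 3))
```

In an Airflow or Luigi pipeline, a function like this would be the body of one task, with the schedule and retries handled by the orchestrator rather than the walker itself.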
Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 3 weeks ago
2.0 - 4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
What You Will Do Day-to-day: As a member of the Data Engineering team, you will be responsible for various aspects of data extraction, such as understanding the data requirements of the business group; reverse-engineering the website, its technology, and the data retrieval process; re-engineering by developing web robots to automate the extraction of the data; and building monitoring systems to ensure the integrity and quality of the extracted data. You will also be responsible for managing changes to the website's dynamics and layout to ensure clean downloads, building scraping and parsing systems to transform raw data into a structured form, and offering operations support to ensure high availability and zero data losses. Additionally, you will be involved in other tasks such as storing the extracted data in the recommended databases, building high-performing, scalable data extraction systems, and automating data pipelines. Who We Are Looking For: The ideal candidate should hold the following. Basic Qualifications ─ 2-4 years of experience in website data extraction and scraping ─ Good knowledge of relational databases, writing complex queries in SQL, and dealing with ETL operations on databases ─ Proficiency in Python for performing operations on data ─ Expertise in Python frameworks and libraries like Requests, urllib2, Selenium, Beautiful Soup, and Scrapy ─ A good understanding of HTTP requests and responses, HTML, CSS, XML, JSON, and JavaScript ─ Expertise with debugging tools in Chrome to reverse-engineer website dynamics ─ A good academic background and accomplishments ─ A BCA/MCA/BS/MS degree with a good foundation and practical application of knowledge in data structures and algorithms ─ Problem-solving and analytical skills ─ Good debugging skills
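Since this role centers on reverse-engineering website dynamics with Chrome's debugging tools, one common outcome is discovering the JSON payload behind an XHR call and normalizing it instead of parsing HTML. A sketch with a hypothetical payload:

```python
import json

# Hypothetical raw payload, as might be captured from a site's XHR traffic in
# the Chrome Network tab; many sites ship data as embedded JSON, not HTML.
RAW = (
    '{"data":{"items":['
    '{"sku":"X1","offers":{"price":"499"}},'
    '{"sku":"X2","offers":{"price":"899"}}]}}'
)

def normalize(payload: str):
    """Flatten the nested API response into tabular records."""
    doc = json.loads(payload)
    return [
        {"sku": item["sku"], "price": int(item["offers"]["price"])}
        for item in doc["data"]["items"]
    ]

records = normalize(RAW)
```

Records in this flat shape drop straight into the structured-storage step the posting describes.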
Posted 4 weeks ago
2.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Mumbai, Maharashtra. Work Type: Full Time. We're looking for a Junior Python Developer who is passionate about web scraping and data extraction. If you love automating the web, navigating anti-bot mechanisms, and writing clean, efficient code, this role is for you. Responsibilities: Design and build robust web scraping scripts using Python. Work with tools like Selenium, BeautifulSoup, Scrapy, and Playwright. Handle challenges like dynamic content, captchas, IP blocking, and rate limiting. Ensure data accuracy, structure, and cleanliness during extraction. Optimize scraping scripts for performance and scale. Collaborate with the team to align scraping outputs with project requirements. Requirements: 6 months to 2 years of experience in web scraping using Python. Hands-on with Requests, Selenium, BeautifulSoup, Scrapy, etc. Strong understanding of HTML, the DOM, and browser behavior. Good coding practices and ability to write clean, maintainable code. Strong communication skills and ability to explain scraping strategies clearly. Based in Mumbai and ready to join. Good to Have: Familiarity with headless browsers, proxy handling, and rotating user agents. Experience storing scraped data in JSON, CSV, or databases. Understanding of anti-bot protection techniques and how to bypass them. (ref:hirist.tech)
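For the rate-limiting and IP-blocking challenges this posting mentions, a common first tactic is retrying with exponential backoff. A minimal sketch, with a fake flaky fetch standing in for a real HTTP request:

```python
import time

def with_backoff(func, retries=4, base=0.01):
    """Retry a flaky fetch with exponential backoff, a common first response
    to transient blocks and HTTP 429 responses."""
    for attempt in range(retries):
        try:
            return func()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # out of retries; surface the error
            time.sleep(base * (2 ** attempt))

# Fake fetch that fails twice before succeeding, simulating a transient block.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("blocked")
    return "ok"

result = with_backoff(flaky)
```

Real scrapers usually combine this with jitter and per-status handling (e.g. backing off longer on 429 than on a dropped connection).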
Posted 4 weeks ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
Proficiency in Python, especially for data extraction and automation tasks. Strong experience with web scraping frameworks such as Scrapy, BeautifulSoup, or Selenium. Hands-on experience building automated data pipelines using tools like Airflow, Luigi, or custom schedulers. Knowledge of web data collection techniques, including handling pagination, AJAX, JavaScript-rendered content, and rate-limiting. Familiarity with RESTful APIs and techniques for API-based data ingestion. Experience with data storage solutions, such as PostgreSQL, MongoDB, or cloud-based storage (e.g., AWS S3, Google Cloud Storage). Version control proficiency, especially with Git. Ability to write clean, modular, and well-documented code. Strong debugging and problem-solving skills in data acquisition workflows. Nice-to-Haves Experience with cloud platforms (AWS, GCP, or Azure) for deploying and managing data pipelines. Familiarity with containerization tools like Docker. Knowledge of data quality monitoring and validation techniques. Exposure to data transformation tools (e.g., dbt). Understanding of ethical and legal considerations in web scraping. Experience working with CI/CD pipelines for data workflows. Familiarity with data visualization tools (e.g., Tableau, Power BI, or Plotly) for quick insights. Background in data science or analytics to support downstream use cases.
Posted 4 weeks ago
1.5 years
2 - 4 Lacs
India
On-site
Job Title: Python Developer - Web Scraper Experience Required: 1.5+ years Location: Ahmedabad Job Description: We are seeking a skilled Python Developer with expertise in web scraping to join our team. The ideal candidate will have hands-on experience in designing, implementing, and maintaining efficient scraping solutions to extract, process, and analyze data from various online sources. Key Responsibilities: Develop and maintain web scraping scripts and tools using Python libraries such as Beautiful Soup, Scrapy, and Selenium. Optimize scraping processes for efficiency and reliability. Handle data extraction from various formats (HTML, JSON, XML, etc.). Identify and resolve issues related to web scraping, such as CAPTCHA handling and IP rotation. Collaborate with teams to define data requirements and deliver actionable insights. Document processes and ensure code quality through testing and debugging. Requirements: 1.5+ years of experience in Python development with a focus on web scraping. Proficiency in Python libraries/tools: Beautiful Soup, Scrapy, Selenium, Requests, etc. Experience with data manipulation and storage tools (e.g., Pandas, SQL, NoSQL databases). Familiarity with handling APIs and parsing complex data structures. Knowledge of version control systems like Git. Strong problem-solving skills and attention to detail. Job Type: Full-time Pay: ₹20,685.30 - ₹35,480.55 per month Schedule: Day shift Monday to Friday Work Location: In person
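Handling extraction from the various formats listed above (HTML, JSON, XML, etc.) often reduces to a small dispatcher that routes each payload to the right parser. A sketch using only stdlib parsers, with hypothetical payloads:

```python
import json
import xml.etree.ElementTree as ET

def extract_names(payload: str, fmt: str):
    """Route a scraped payload to the right parser by declared format."""
    if fmt == "json":
        return [item["name"] for item in json.loads(payload)]
    if fmt == "xml":
        return [el.text for el in ET.fromstring(payload).findall(".//name")]
    raise ValueError(f"unsupported format: {fmt}")

json_names = extract_names('[{"name": "a"}, {"name": "b"}]', "json")
xml_names = extract_names("<items><item><name>a</name></item></items>", "xml")
```

HTML would get a third branch backed by a tolerant parser such as Beautiful Soup, which is omitted here to keep the sketch stdlib-only.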
Posted 4 weeks ago
2.0 - 4.0 years
0 Lacs
India
On-site
Alternative Path is seeking skilled software developers to collaborate on client projects with an asset management firm. In this role, you will collaborate with individuals across various company departments to shape and innovate new products and features for our platform, enhancing existing ones. You will have a large degree of independence and trust, but you won't be isolated; the support of the Engineering team leads, the Product team leads, and every other technology team member is behind you. This is an opportunity to join a team-first meritocracy and help grow an entrepreneurial group inside Alternative Path. You will be asked to contribute, given ownership, and will be expected to make your voice heard. Role Summary: Performing web scraping using various scraping techniques, using Python's Pandas library for data cleaning and manipulation, ingesting the data into a database/warehouse, and scheduling the scrapers using Airflow or other tools. Role Overview: The Web Scraping Team at Alternative Path is seeking a creative and detail-oriented developer to contribute to client projects. The team develops essential applications, datasets, and alerts for various teams within the client's organization, supporting their daily investment decisions. The mission is to maintain operational excellence by delivering high-quality proprietary datasets, timely notifications, and exceptional service. We are seeking someone who is self-motivated and self-sufficient, with a passion for tinkering and a love for automation. In your role, you will: ➢ Collaborate with analysts to understand and anticipate requirements. ➢ Design, implement, and maintain web scrapers for a wide variety of alternative datasets. ➢ Perform data cleaning, exploration, and transformation of scraped data. ➢ Collaborate with cross-functional teams to understand data requirements and implement efficient data processing workflows. ➢ Author QC checks to validate data availability and integrity.
➢ Maintain alerting systems and investigate time-sensitive data incidents to ensure smooth day-to-day operations. ➢ Design and implement products and tools to enhance the web scraping platform. Qualifications: Must have: ➢ Bachelor's/Master's degree in Computer Science or a related field ➢ 2-4 years of software development experience ➢ Strong Python and SQL/database skills ➢ Strong expertise in using the Pandas library (Python) is a must ➢ Experience with web technologies (HTML/JS, APIs, etc.) ➢ Proven work experience with large data sets for data cleaning, transformation, manipulation, and replacement ➢ Excellent verbal and written communication skills ➢ Aptitude for designing infrastructure, data products, and tools for data scientists. Preferred: ➢ Familiarity with scraping and common scraping tools (Selenium, Scrapy, Fiddler, Postman, XPath) ➢ Experience containerizing workloads with Docker (Kubernetes a plus) ➢ Experience with build automation (Jenkins, GitLab CI/CD) ➢ Experience with AWS technologies like S3, RDS, SNS, SQS, Lambda, etc.
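The QC checks mentioned above (validating data availability and integrity after each scrape) can be sketched as a small pandas report; the fields and sample frame are illustrative.

```python
import pandas as pd

def qc_report(df: pd.DataFrame, key: str, required: list):
    """Basic availability/integrity checks run after each scrape:
    total rows, rows with null required fields, and duplicate keys."""
    return {
        "rows": len(df),
        "null_required": int(df[required].isna().any(axis=1).sum()),
        "duplicate_keys": int(df[key].duplicated().sum()),
    }

# Hypothetical scraped frame with one null price and one duplicate id.
df = pd.DataFrame(
    {"id": [1, 2, 2, 4], "price": [10.0, None, 12.0, 9.5], "name": ["a", "b", "c", "d"]}
)
report = qc_report(df, key="id", required=["price", "name"])
```

A report like this is what the alerting systems above would compare against thresholds (e.g. row count dropping versus the previous run) before publishing a dataset.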
Posted 4 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About: A highly motivated and passionate individual who has experience in executing end-to-end application deliveries and bringing them to production with high quality. Passionate about building flexible and scalable solutions with an eye for detail, and able to weigh pros and cons to find the best possible solutions. As a Python Backend Developer, you should be comfortable with Python development frameworks and library usage. Roles & Responsibilities: Design and develop major software components, systems, and features for the Next-Gen Rule Engine & Risk Platform for Digital Lending. Responsibilities also include guiding junior members, creating interfaces, integrating with other applications and databases, and debugging and troubleshooting any issues. Skills Required: Technical: Strong Python programming skill; excellent knowledge of developing applications using Flask/Django is desired. Expertise in developing REST framework applications using both SQL and NoSQL DBs. Good experience in RESTful web APIs / microservices development. Experience deploying applications on any of the cloud platforms like AWS is highly desirable. Experience using Python data libraries (NumPy/Pandas) is desirable. Any experience using message queues like Redis/Kafka is a plus. Any experience with fundamental front-end languages such as HTML, CSS, and JavaScript is a plus. Any experience/knowledge of any of the front-end frameworks is a plus. Experience in web scraping using Scrapy or Selenium is a plus. Non-Technical: Strong logical and analytical skills with good soft communication skills. Experience handling and managing a small-sized team. (ref:hirist.tech)
Posted 4 weeks ago
7.0 years
25 - 30 Lacs
India
On-site
About The Company
Hypersonix.ai is disrupting the e-commerce space with AI, ML, and advanced decision-making capabilities to drive real-time business insights. Built from the ground up using modern technologies, Hypersonix simplifies data consumption for customers across various industry verticals. We are seeking a well-rounded, hands-on product leader to help manage key capabilities and features in our platform.

Position Overview
We are seeking a highly skilled Web Scraping Architect to join our team. The successful candidate will be responsible for designing, implementing, and maintaining web scraping processes to gather data from various online sources efficiently and accurately. As a Web Scraping Architect, you will play a crucial role in collecting data for competitor analysis and other business intelligence purposes.

Responsibilities
Scalability/Performance: Lead and provide expertise in scraping e-commerce marketplaces at scale.
Data Source Identification: Identify relevant websites and online sources from which data needs to be scraped. Collaborate with the team to understand data requirements and objectives.
Web Scraping Design: Develop and implement effective web scraping strategies to extract data from targeted websites, including selecting appropriate tools, libraries, or frameworks for the task.
Data Extraction: Create and maintain web scraping scripts or programs to extract the required data. Ensure the code is optimized, reliable, and can handle changes in the website's structure.
Data Cleansing and Validation: Cleanse and validate the collected data to eliminate errors, inconsistencies, and duplicates. Ensure data integrity and accuracy throughout the process.
Monitoring and Maintenance: Continuously monitor and maintain the web scraping processes. Address any issues that arise from website changes, data format modifications, or anti-scraping mechanisms.
Scalability and Performance: Optimize web scraping procedures for efficiency and scalability, especially when dealing with large volumes of data or multiple data sources.
Compliance and Legal Considerations: Stay up to date with legal and ethical considerations related to web scraping, including website terms of service, copyright, and privacy regulations.
Documentation: Maintain detailed documentation of web scraping processes, data sources, and methodologies. Create clear and concise instructions for others to follow.
Collaboration: Collaborate with other teams such as data analysts, developers, and business stakeholders to understand data requirements and deliver insights effectively.
Security: Implement security measures to ensure the confidentiality and protection of sensitive data throughout the scraping process.

Requirements
Proven experience of 7+ years as a Web Scraping Specialist or in a similar role, with a track record of successful web scraping projects
Expertise in handling dynamic content, user-agent rotation, bypassing CAPTCHAs, rate limits, and use of proxy services
Knowledge of browser fingerprinting
Leadership experience
Proficiency in programming languages and tools commonly used for web scraping, such as Python, BeautifulSoup, Scrapy, or Selenium
Strong knowledge of HTML, CSS, XPath, and other web technologies relevant to web scraping
Knowledge and experience in best-in-class data storage and retrieval for large volumes of scraped data
Understanding of web scraping best practices, including handling dynamic content, user-agent rotation, and IP address management
Attention to detail and the ability to handle and process large volumes of data accurately
Familiarity with data cleansing techniques and data validation processes
Good communication skills and the ability to collaborate effectively with cross-functional teams
Knowledge of web scraping ethics, legal considerations, and compliance with website terms of service
Strong problem-solving skills and adaptability to changing web environments

Preferred Qualifications
Bachelor's degree in Computer Science, Data Science, Information Technology, or related fields
Experience with cloud-based solutions and distributed web scraping systems
Familiarity with APIs and data extraction from non-public sources
Knowledge of machine learning techniques for data extraction and natural language processing is desired but not mandatory
Prior experience handling large-scale data projects and working with big data frameworks
Understanding of various data formats such as JSON, XML, CSV, etc.
Experience with version control systems like Git

Skills: Web Scraping, Python, Selenium, HTML/CSS, XPath, Beautiful Soup, Scrapy
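User-agent rotation, one of the evasion techniques this role names, can be sketched in a few lines; the agent strings below are illustrative placeholders, and a real crawler would attach these headers to actual HTTP requests:

```python
import itertools

# Pool of user agents to rotate through; these strings are placeholders.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 13_0)",
    "Mozilla/5.0 (X11; Linux x86_64)",
]
ua_pool = itertools.cycle(USER_AGENTS)

def headers_for_next_request() -> dict:
    """Build per-request headers, taking the next agent from the pool."""
    return {"User-Agent": next(ua_pool), "Accept-Language": "en-US,en;q=0.9"}

# Each request gets a different agent; the pool wraps around after three.
headers = [headers_for_next_request() for _ in range(4)]
```

In practice the pool would be larger and often paired with proxy rotation so that neither the agent nor the IP repeats predictably.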
Posted 1 month ago
0.0 - 2.0 years
2 - 3 Lacs
Pune
Work from Office
Job description
KrawlNet Technologies provides services to advertisers & publishers to run affiliate programs effectively. KrawlNet aggregates products from various retailers so that publishers & analytics teams can readily and effectively grow their business. An integral part of our offerings is web-scale crawling and extraction. Our objective is to solve the business problems faced in the industry and provide the associated services of cleansing and normalizing web content.

Responsibility: As a software developer, in this full-time permanent role, you will be responsible for:
Ensuring an uninterrupted flow of data from various sources by crawling the web
Extracting & managing large volumes of structured and unstructured data, with the ability to parse data into a standardized format for ingestion into data sources
Actively participating in troubleshooting, debugging & maintaining broken crawlers
Scraping difficult websites by deploying anti-blocking and anti-captcha tools
Strong data analysis skills working with data quality, data consolidation, and data wrangling
Solid understanding of data structures and algorithms
Complying with coding standards and technical design

Requirements:
Experience with complex crawling such as captcha, reCAPTCHA, and bypassing proxies, etc.
Regular expressions
Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3
Strong fundamental C.S. skills (data structures, algorithms, multi-threading, etc.)
Good communication skills (must)
Experience with web crawler projects is a plus

Required skills: Python, Perl, Scrapy, Selenium, headless browsers, Puppeteer, Node.js, Beautiful Soup, SVN, GitHub, AWS

Desired:
Experience in productionizing machine learning models
Experience with DevOps tools such as Docker, Kubernetes
Familiarity with a big data stack (e.g. Airflow, Spark, Hadoop, MapReduce, Hive, Impala, Kafka, Storm, and equivalent cloud-native services)

Education: B.E / B.Tech / B.Sc.
Experience: 0-2 years
Location: Pune (in-office)
How to Apply: Please email a copy of your CV to hr@krawlnet.com
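The regular-expression parsing this role asks for (turning raw page markup into a standardized format) can be sketched as follows; the HTML snippet and class names are made up for illustration, not taken from any real retailer page:

```python
import re

# Invented markup standing in for a scraped product listing.
html = '''
<div class="item"><span class="name">Lamp</span><span class="price">Rs. 1,299</span></div>
<div class="item"><span class="name">Desk</span><span class="price">Rs. 4,500</span></div>
'''

# Named groups capture the product name and the digits of the price;
# re.S lets `.*?` span line breaks between the two spans.
pattern = re.compile(
    r'class="name">(?P<name>[^<]+)</span>.*?class="price">Rs\.\s*(?P<price>[\d,]+)',
    re.S,
)

# Normalize each match into a plain record with an integer price.
records = [
    {"name": m["name"], "price": int(m["price"].replace(",", ""))}
    for m in pattern.finditer(html)
]
```

For production crawlers an HTML parser (Beautiful Soup, lxml) is usually safer than raw regexes, but targeted patterns like this are common for quick extraction and post-processing.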
Posted 1 month ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
eGrove Systems is looking for a Django Developer to join its team of experts.

Skill: Django Developer
Exp: 4+ Yrs
NP: Immediate to 15 Days
Location: Chennai/Madurai

Interested candidates can send your resume to annie@egrovesys.com

Required Skills:
· 3+ years of strong experience in Python & 2 years in the Django web framework.
· Experience or knowledge in implementing various design patterns.
· Good understanding of the MVC framework & object-oriented programming.
· Experience in PGSQL / MySQL and MongoDB.
· Good knowledge of different frameworks, packages & libraries: Django/Flask, Django ORM, Unit Test, NumPy, Pandas, Scrapy, etc.
· Experience developing in a Linux environment, Git & Agile methodology.
· Good to have knowledge of any one of the JavaScript frameworks: jQuery, Angular, ReactJS.
· Good to have experience in implementing charts and graphs using various libraries.
· Good to have experience in multi-threading and REST API management.

About Company
eGrove Systems is a leading IT solutions provider specializing in eCommerce, enterprise application development, AI-driven solutions, digital marketing, and IT consulting services. Established in 2008, we are headquartered in East Brunswick, New Jersey, with a global presence. Our expertise includes custom software development, mobile app solutions, DevOps, cloud services, AI chatbots, SEO automation tools, and workforce learning systems. We focus on delivering scalable, secure, and innovative technology solutions to enterprises, startups, and government agencies. At eGrove Systems, we foster a dynamic and collaborative work culture driven by innovation, continuous learning, and teamwork. We provide our employees with cutting-edge technologies, professional growth opportunities, and a supportive work environment to thrive in their careers.
Posted 1 month ago
1.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Summary:
We’re looking for a passionate and driven Machine Learning Engineer with 1 year of hands-on experience working with Large Language Models (LLMs), Natural Language Processing (NLP), Python, and web scraping. The ideal candidate should have practical exposure to real-world NLP tasks and be comfortable fine-tuning models and building end-to-end pipelines.

✅ Key Responsibilities:
Work on fine-tuning and integrating LLMs for various use cases.
Build and maintain NLP pipelines for text classification, summarization, or Q&A systems.
Perform web scraping to gather structured and unstructured data using Python libraries like BeautifulSoup, Scrapy, or Selenium.
Collaborate with the team to deploy and test ML models in production environments.
Continuously evaluate model performance and optimize based on feedback.

🧰 Requirements:
1 year of hands-on experience in Machine Learning, specifically in NLP and LLMs.
Strong programming skills in Python.
Experience with web scraping tools and libraries.
Familiarity with ML libraries such as Hugging Face Transformers, spaCy, NLTK, or Scikit-learn.
Basic understanding of model evaluation and tuning techniques.
Good problem-solving skills and the ability to work independently.

💡 Good to Have:
Experience with prompt engineering or RAG-based architectures.
Knowledge of API integration for AI/LLM services (e.g., OpenAI, Cohere, etc.).
Posted 1 month ago
1.0 - 2.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Bloom AI is a modern intelligence firm that accelerates decision-making through AI-driven synthesized intelligence. We empower enterprises to unlock the value of data with human-like synthesis and decision intelligence at scale. Our proprietary tools and solutions are trusted by investment managers, insurance, private equity, and Fortune 1000 companies for more informed, efficient, and productive business practices. Bloom AI has offices in Raleigh (U.S.) and New Delhi (India).

Responsibilities:
Design, develop, and maintain scalable Python-based applications with a focus on data scraping, data ingestion, and API integration.
Build and manage web scrapers that are robust, fault-tolerant, and adaptable to changing website structures.
Develop and integrate RESTful APIs to facilitate data exchange between internal systems and external services.
Work with AWS services to deploy and scale scraping and data processing pipelines.
Monitor scraper performance, implement logging and alerting, and ensure compliance with relevant data handling policies.
Provide code documentation and other inputs to technical documents.
Collaborate with cross-functional teams to define project requirements and scope.

Requirements:
1-2 years of relevant experience.
Strong experience with Python and libraries such as requests, BeautifulSoup, Scrapy, Selenium, or Puppeteer for web scraping.
Proven experience in designing and consuming RESTful APIs.
Familiarity with Docker and CI/CD pipelines for automated testing and deployment.
Understanding of version control systems, preferably Git.
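Consuming a RESTful API for ingestion, as described above, usually means following pagination cursors until the source is exhausted. A minimal sketch, with `fake_get_page` standing in for a real HTTP call (e.g. via requests) and an invented endpoint shape:

```python
# Canned pages simulating a cursor-paginated API response.
PAGES = {
    None: {"items": ["a", "b"], "next": "p2"},
    "p2": {"items": ["c"], "next": "p3"},
    "p3": {"items": ["d", "e"], "next": None},
}

def fake_get_page(cursor=None):
    """Pretend API call returning one page and a cursor to the next."""
    return PAGES[cursor]

def fetch_all(get_page):
    """Follow `next` cursors until the API signals the last page."""
    items, cursor = [], None
    while True:
        page = get_page(cursor)
        items.extend(page["items"])
        cursor = page["next"]
        if cursor is None:
            return items

all_items = fetch_all(fake_get_page)
```

Separating the fetch function from the cursor-following loop also makes the loop trivially testable without network access.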
Posted 1 month ago
0.0 - 3.0 years
0 - 0 Lacs
Bengaluru, Karnataka
Remote
About Sellmark
Sellmark is committed to creating brands that foster memories and traditions by producing industry-leading outdoor lifestyle products. We promote a healthy outdoor lifestyle and drive innovation through positive leadership, strong ethics, and unwavering dedication. Our team-oriented culture encourages self-growth, mutual respect, and a passion for excellence, both at work and beyond. We seek individuals who bring passion to everything they do, instill confidence, trust, and respect, and inspire success while building strong relationships. If you're looking for a dynamic, professional, and supportive team, we'd love to have you join us.

Job Summary
We are seeking a hands-on Web Scraping & Automation Engineer to join our India-based analytics team. You'll build proprietary scrapers to extract business data from public data sources to support US market lead generation. Over time, your skills will also be used in marketing, supply chain, new product development, and more, per the needs of the organization. This is a fast-paced, outcome-oriented role with direct business impact and a focus on innovation and reliability.

Responsibilities
- Develop modular and scalable web scraping tools (Python preferred) for structured data collection
- Target platforms: maps (Google, Apple, Bing), government websites, review websites (Yelp), independent retailer / competitor websites, e-commerce (Amazon, Scheels, Academy, eBay), etc.
- Integrate proxy rotation, captcha handling, and anti-blocking techniques
- Export clean, structured data (CSV, JSON) for enrichment and analysis
- Collaborate with the Data Analyst for QA and reporting
- Optimize scrapers for speed, efficiency, and long-term maintainability
- Provide weekly progress reports and error logs to stakeholders

Qualifications
- Bachelor's degree or diploma in any discipline; Data Science, Statistics, or Economics preferred
- 1–3 years of hands-on experience in scraping (BeautifulSoup, Selenium, Playwright, Scrapy, etc.)
- Familiarity with anti-bot strategies (user agents, proxies, time delays)
- Comfort with Git, API requests, and basic Linux environments
- Ability to troubleshoot and adapt to dynamic site structures
- Experience scraping map-based platforms
- Exposure to scheduling tools (Airflow, CRON jobs)
- Interest in U.S. retail/distribution or tactical gear sectors
- High attention to detail, consistency, and ability to meet deadlines
- Excellent written and verbal communication skills

Work Environment & Physical Requirements
While performing the duties of this job, the employee may be required to sit or stand for extended periods of time. The employee may be required to bend, twist, reach, push, pull, and operate office machinery. Must be able to lift up to fifty (50) pounds. Specific work assignments may change without notice. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions of the job.

This role is based out of our Bangalore office and follows a standard Monday to Friday, 10:30 AM to 7:30 PM schedule. We do not offer hybrid or remote-first arrangements. We value the energy and collaboration that come from working together in person, and believe it plays a key role in building team culture, sparking creativity, and maintaining focus. Schedules may vary minimally depending on business needs and may occasionally require flexibility outside normal business hours.

Benefits
Competitive salary based on experience.
Growth path within Sellmark's global Analytics, Insights and Data function.
Wellness and development-focused company culture.
Paid time off and holiday policy aligned with local regulations.
Tools and support to succeed in a fast-paced, data-focused environment.

Disclaimer
The above information is intended to describe the general nature and level of work being performed. It is not intended to be an exhaustive list of responsibilities, duties, or skills required.
Requirements for this job may be subject to change to meet business needs.

Sellmark Corporation is an Equal Opportunity Employer. We do not discriminate based on race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, genetic information, veteran status, or any other legally protected status. We are committed to fostering an inclusive and diverse workplace where all individuals feel valued and respected.

Job Type: Full-time
Pay: ₹40,000.00 - ₹50,000.00 per month
Benefits: Health insurance, Life insurance, Provident Fund
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: Bangalore, Karnataka: Reliably commute or planning to relocate before starting work (Preferred)
Application Question(s): What is the earliest date that you are able to begin working in the office?
Experience: Web scraping: 3 years (Preferred)
Work Location: In person
Posted 1 month ago
1.0 years
0 Lacs
India
Remote
About Us:
Upscrape is a fast-growing data automation and web scraping company building advanced scraping systems, custom data pipelines, and API-driven solutions for enterprise clients. We work on complex real-world challenges that require precision, scale, and expertise. As we continue to grow, we are looking to bring on an experienced developer to join our core technical team.

Position Overview:
We are hiring a full-time Python developer with strong experience in web scraping, browser automation, and backend API development. The ideal candidate has previously built production-level scraping systems, understands anti-bot protections, and can independently manage end-to-end data extraction workflows. This is a highly focused technical role, ideal for someone who enjoys solving real-world scraping challenges and working on meaningful projects that deliver immediate impact.

Key Responsibilities:
Build and maintain robust web scraping pipelines for dynamic, heavily protected websites.
Develop backend APIs to serve and manage scraped data.
Handle browser-based scraping using tools such as Playwright, Selenium, or Puppeteer.
Implement advanced proxy management, IP rotation, and anti-blocking mechanisms.
Ensure efficient error handling, retries, and system stability at scale.
Collaborate closely with the founder and technical team to deliver client projects on time.

Required Experience & Skills:
1+ years of hands-on scraping experience.
Python (Playwright, Selenium, Requests, Async/Aiohttp, Scrapy, etc.).
Experience bypassing anti-bot protections (Cloudflare, captchas, WAFs, bot detection).
Proxy management at scale (residential, rotating proxies, IP pools).
REST API development (Flask / FastAPI preferred).
Database experience (MongoDB, PostgreSQL).
Version control (Git), Docker, and Linux-based environments.
Strong debugging and problem-solving ability.
Clear, consistent communication.

Bonus (Nice to Have):
Experience with AI-powered data enrichment (LLMs, OCR, GPT-4 integrations).
Familiarity with large-scale scraping architectures (millions of records).
Previous work in SaaS, APIs, or productized data services.

The Right Fit:
We are looking for a developer who is:
Self-driven - takes full ownership of tasks.
Technically strong - has delivered real-world scraping solutions.
Highly responsive - available for fast-paced collaboration across time zones.
Outcome-focused - understands that clean, working systems matter more than theory.

What We Offer:
Remote full-time position.
Stable long-term role with growth potential.
Direct, efficient communication; no corporate bureaucracy.
Work on meaningful projects with direct client impact.
Competitive compensation based on skill and experience.

How to Apply (Important Filter):
In your application, please include:
Links or code samples of scraping projects you've built
Which tools and libraries you are most comfortable with
A short explanation of how you approach extracting data from modern, highly dynamic websites that require advanced automation and protection handling.
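The "efficient error handling, retries, and system stability" responsibility above usually boils down to retry with exponential backoff. A minimal sketch, where `flaky_fetch` simulates a fetch that fails twice before succeeding; a real implementation would wrap an actual HTTP call instead:

```python
import random
import time

def fetch_with_retries(fetch, url, max_attempts=4, base_delay=0.01):
    """Retry `fetch` with exponential backoff and jitter on connection errors."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch(url)
        except ConnectionError:
            if attempt == max_attempts:
                raise
            # Back off exponentially; jitter avoids synchronized retry bursts.
            time.sleep(base_delay * 2 ** (attempt - 1) * random.uniform(0.5, 1.5))

calls = {"count": 0}

def flaky_fetch(url):
    """Simulated fetch: raises twice, then returns a page."""
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("simulated block")
    return f"<html>{url}</html>"

result = fetch_with_retries(flaky_fetch, "https://example.com")
```

Production systems typically also cap total elapsed time and distinguish retryable errors (timeouts, 429s) from permanent ones (404s).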
Posted 1 month ago
4.0 - 9.0 years
14 - 22 Lacs
Pune
Work from Office
Responsibilities:
* Design, develop, test and maintain scalable Python applications using Scrapy, Selenium and Requests.
* Implement anti-bot systems and data pipeline solutions with Airflow and Kafka.

Share CV at recruitment@fortitudecareer.com
Flexi working | Work from home
Posted 1 month ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Data Engineer
Location: Chennai, India
Experience: 5+ years
Work Mode: Full-time (9am-6:30pm), in-office (Monday to Friday)
Department: Asign Data Sciences

About Us
At Asign, we are revolutionizing the art sector with our innovative digital solutions. We are a passionate and dynamic startup dedicated to enhancing the art experience through technology. Join us in creating cutting-edge products that empower artists and art enthusiasts worldwide.

Role Overview
We are looking for an experienced Data Engineer with a strong grasp of ELT architecture and hands-on experience building and maintaining robust data pipelines. This is a hands-on role for someone passionate about structured data, automation, and scalable infrastructure. The ideal candidate will be responsible for sourcing, ingesting, transforming, and storing data, and for making it accessible and reliable for data analysis, machine learning, and reporting. You will play a key role in maintaining and evolving our data architecture and ensuring that our data flows efficiently and securely.

Key Responsibilities
● Design, develop, and maintain efficient and scalable ELT data pipelines.
● Work closely with the data science and backend teams to understand data needs and transform raw inputs into structured datasets.
● Integrate multiple data sources, including APIs, web pages, spreadsheets, and databases, into a central warehouse.
● Monitor, test, and continuously improve data flows for reliability and performance.
● Create documentation and establish best practices for data governance, lineage, and quality.
● Collaborate with product and tech teams to plan data models that support business and AI/ML applications.

Required Skills
● Minimum 5 years of hands-on experience in data engineering.
● Solid understanding and experience with ELT pipelines and modern data stack tools.
● Practical knowledge of one or more orchestrators (Dagster, Airflow, Prefect, etc.).
● Proficiency in Python and SQL.
● Experience working with APIs and data integration from multiple sources.
● Familiarity with one or more cloud data warehouses (e.g., Snowflake, BigQuery, Redshift).
● Strong problem-solving and debugging skills.

Qualifications
Must-have:
● Bachelor's/Master's degree in Computer Science, Engineering, Statistics, or a related field
● Proven experience (5+ years) in data engineering, data integration, and data management
● Hands-on experience with data sourcing tools and frameworks (e.g. Scrapy, Beautiful Soup, Selenium, Playwright)
● Proficiency in Python and SQL for data manipulation and pipeline development
● Experience with cloud-based data platforms (AWS, Azure, or GCP) and data warehouse tools (e.g. Redshift, BigQuery, Snowflake)
● Familiarity with workflow orchestration tools (e.g. Airflow, Prefect, Dagster)
● Strong understanding of relational and non-relational databases (PostgreSQL, MongoDB, etc.)
● Solid understanding of data modeling, ELT/ETL best practices, and data governance principles
● Systems knowledge and experience working with Docker
● Strong and creative problem-solving skills and the ability to think critically about data engineering solutions
● Effective communication and collaboration skills
● Ability to work independently and as part of a team in a fast-paced, dynamic environment

Good-to-have:
● Experience working with APIs and third-party data sources
● Familiarity with version control (Git) and CI/CD processes
● Exposure to basic machine learning concepts and working with data science teams
● Experience handling large datasets and working with distributed data systems

Why Join Us?
● Innovative Environment: Be part of a forward-thinking team that is dedicated to pushing the boundaries of art and technology.
● Career Growth: Opportunities for professional development and career advancement.
● Creative Freedom: Work in a role that values creativity and encourages new ideas.
● Company Culture: Enjoy a dynamic, inclusive, and supportive work environment.
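The ELT pattern this role centres on (load raw data first, transform inside the store with SQL) can be sketched with an in-memory SQLite database standing in for a warehouse; the table and column names are invented for illustration:

```python
import sqlite3

# Raw rows as they might arrive from an extract step: text-typed amounts
# and a duplicate from a re-fetch.
raw_rows = [
    ("2024-01-05", "painting", "12000"),
    ("2024-01-05", "painting", "12000"),  # duplicate from a re-fetch
    ("2024-01-06", "sculpture", "8500"),
]

conn = sqlite3.connect(":memory:")

# Load: land the raw data as-is, untyped and unclean.
conn.execute("CREATE TABLE raw_sales (day TEXT, category TEXT, amount TEXT)")
conn.executemany("INSERT INTO raw_sales VALUES (?, ?, ?)", raw_rows)

# Transform in-warehouse: deduplicate and cast text amounts to numbers.
conn.execute("""
    CREATE TABLE sales AS
    SELECT DISTINCT day, category, CAST(amount AS REAL) AS amount
    FROM raw_sales
""")
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
row_count = conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]
```

In a real stack the load target would be Snowflake, BigQuery, or Redshift and the transform would be managed by an orchestrator, but the load-then-transform ordering is the same.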
Posted 1 month ago
8.0 - 10.0 years
5 - 8 Lacs
Kolkata
Work from Office
Note: Please don't apply if you do not have at least 5 years of Scrapy experience.

Location: KOLKATA

We are seeking a highly experienced Web Scraping Expert (Python) specialising in Scrapy-based web scraping and large-scale data extraction. This role is focused on building and optimizing web crawlers, handling anti-scraping measures, and ensuring efficient data pipelines for structured data collection. The ideal candidate will have 6+ years of hands-on experience developing Scrapy-based scraping solutions, implementing advanced evasion techniques, and managing high-volume web data extraction. You will collaborate with a cross-functional team to design, implement, and optimize scalable scraping systems that deliver high-quality, structured data for critical business needs.

Key Responsibilities

Scrapy-based Web Scraping Development
Develop and maintain scalable web crawlers using Scrapy to extract structured data from diverse sources.
Optimize Scrapy spiders for efficiency, reliability, and speed while minimizing detection risks.
Handle dynamic content using middlewares, browser-based scraping (Playwright/Selenium), and API integrations.
Implement proxy rotation, user-agent switching, and CAPTCHA-solving techniques to bypass anti-bot measures.

Advanced Anti-Scraping Evasion Techniques
Utilize AI-driven approaches to adapt to bot detection and prevent blocks.
Implement headless browser automation and request-mimicking strategies to mimic human behavior.

Data Processing & Pipeline Management
Extract, clean, and structure large-scale web data into structured formats like JSON, CSV, and databases.
Optimize Scrapy pipelines for high-speed data processing and storage in MongoDB, PostgreSQL, or cloud storage (AWS S3).

Code Quality & Performance Optimization
Write clean, well-structured, and maintainable Python code for scraping solutions.
Implement automated testing for data accuracy and scraper reliability.
Continuously improve crawler efficiency by minimizing IP bans, request delays, and resource consumption.

Required Skills and Experience

Technical Expertise
5+ years of professional experience in Python development with a focus on web scraping.
Proficiency in Scrapy-based scraping.
Strong understanding of HTML, CSS, JavaScript, and browser behavior.
Experience with Docker is a plus.
Expertise in handling APIs (RESTful and GraphQL) for data extraction.
Proficiency in database systems like MongoDB and PostgreSQL.
Strong knowledge of version control systems like Git and collaboration platforms like GitHub.

Key Attributes
Strong problem-solving and analytical skills, with a focus on efficient solutions for complex scraping challenges.
Excellent communication skills, both written and verbal.
A passion for data and a keen eye for detail.

Why Join Us?
Work on cutting-edge scraping technologies and AI-driven solutions.
Collaborate with a team of talented professionals in a growth-driven environment.
Opportunity to influence the development of data-driven business strategies through advanced scraping techniques.
Competitive compensation and benefits.
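The Scrapy pipelines mentioned above are plain Python classes whose `process_item` method Scrapy calls on each scraped item. A minimal sketch of one that validates and normalizes a price field, invoked directly here so it runs without Scrapy installed; the item shape is invented for illustration:

```python
class PriceValidationPipeline:
    """Scrapy-style item pipeline: validate and normalize before storage."""

    def process_item(self, item, spider=None):
        price = item.get("price", "").strip().lstrip("$")
        if not price:
            # In a real Scrapy project this would raise
            # scrapy.exceptions.DropItem to discard the item.
            raise ValueError(f"missing price: {item!r}")
        item["price"] = float(price)
        return item

pipeline = PriceValidationPipeline()
cleaned_item = pipeline.process_item({"title": "Widget", "price": "$19.99"})
```

In an actual project the class would be registered under `ITEM_PIPELINES` in `settings.py`, and Scrapy would chain it with storage pipelines (e.g. MongoDB or S3 writers) in priority order.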
Posted 1 month ago