2.0 - 6.0 years
0 - 0 Lacs
lucknow, uttar pradesh
On-site
QDesk DSS is a dynamic and growing Data Management and Offshore Staffing service provider based in Lucknow. We are seeking a talented and motivated Full Stack Developer - Android to join our team and contribute to the development of web applications and automation tools for internal company use.

Key Responsibilities:
- Full Stack Development: Design, develop, and maintain robust web applications using Python. Develop and maintain Android applications using Kotlin/Java. Implement and maintain scalable backend services using Node.js, Python, or Java. Work with databases such as MongoDB, Firebase, MySQL, or PostgreSQL for data storage and management. Collaborate with cross-functional teams to define, design, and ship new features.
- Web Data Crawling and Analysis: Create bots to crawl web servers and gather relevant data for analysis. Develop algorithms for data processing and analysis.
- Database Management: Design and implement database solutions to store and retrieve data efficiently. Optimize database queries for improved performance.
- User Interface (UI) Design: Create user-friendly, visually appealing interfaces. Ensure responsive design for seamless user experiences across devices.
- Testing and Debugging: Conduct thorough testing of applications to identify and fix bugs. Collaborate with quality assurance teams to ensure high-quality deliverables.
- Documentation: Maintain detailed documentation for code, algorithms, and processes. Provide training and support to internal teams as needed.

Qualifications:
- Proven experience as a Full Stack Developer or in a similar role.
- Strong proficiency in Python and its web frameworks (Django, Flask, etc.).
- Experience in web data crawling using tools like Scrapy or Beautiful Soup.
- Familiarity with front-end technologies such as HTML, CSS, and JavaScript.
- Knowledge of database systems (e.g., MySQL, PostgreSQL) and ORM frameworks.
- Understanding of version control systems (e.g., Git).
- Ability to work independently and as part of a collaborative team.

Education and Experience: Bachelor's degree in Computer Science, Engineering, or a related field; 2-3 years of relevant work experience.

Additional Skills (Preferred): Experience with cloud platforms (e.g., AWS, Azure); knowledge of containerization (e.g., Docker); familiarity with task automation tools (e.g., Celery); strong knowledge of Kotlin/Java; experience with the Android SDK and Jetpack components; understanding of MVVM architecture and clean-code principles; proficiency in RESTful APIs and JSON; knowledge of SQLite/Room; familiarity with Google Play Store deployment.

Opportunity for Growth: Opportunity to grow into a senior developer role. We are committed to the professional development of our team members and provide a supportive environment for advancing your career.

How to Apply: Interested candidates are invited to submit their resume, a cover letter, and a portfolio by email to qdeskhrd@gmail.com. Please include "Full Stack Developer Android Application" in the subject line.

Note: This is a full-time position based in Lucknow, Uttar Pradesh. Working hours are 11 am to 8 pm, Monday to Saturday. Salary: Rs. 20,000 to 25,000 per month. QDesk DSS is an equal opportunity employer; we encourage candidates from all backgrounds to apply.

Job Type: Full-time. Benefits: Paid time off. Schedule: Day shift / Night shift. Education: Bachelor's (Preferred). Experience: Angular: 1 year (Preferred); total work: 1 year (Preferred); Java: 1 year (Required). Work Location: In person.
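Postings like this lean on Scrapy or Beautiful Soup for the crawling work. As a rough illustration of what that looks like in practice, here is a minimal Beautiful Soup sketch; the HTML fragment and class names are invented for the example:

```python
from bs4 import BeautifulSoup

HTML = """
<div class="job"><h2>Backend Developer</h2><span class="loc">Lucknow</span></div>
<div class="job"><h2>Android Developer</h2><span class="loc">Remote</span></div>
"""

def extract_jobs(html: str) -> list[dict]:
    """Parse job cards out of an HTML fragment into plain dicts."""
    soup = BeautifulSoup(html, "html.parser")
    return [
        {"title": card.h2.get_text(strip=True),
         "location": card.find("span", class_="loc").get_text(strip=True)}
        for card in soup.find_all("div", class_="job")
    ]

print(extract_jobs(HTML))
```

In a real crawler the HTML would come from an HTTP client rather than a string literal, but the parsing step is the same.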
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
surat, gujarat
On-site
TransForm Solutions is a trailblazer in the business process management and IT-enabled services industry, known for delivering top-notch solutions that drive business efficiency and growth. With a focus on innovation and excellence, we empower businesses to transform their operations and achieve their full potential. As we continue to expand, we're currently seeking a dynamic Senior Web Data Scraping Engineer to join our team and help us harness the power of data. Your primary mission will involve developing cutting-edge solutions by designing, developing, and maintaining robust web scraping solutions that extract large datasets from various websites to fuel our data-driven initiatives. You will be expected to leverage your advanced Python skills to implement and optimize sophisticated scraping scripts and tools. Additionally, you will utilize industry-leading tools such as BeautifulSoup, Scrapy, Selenium, and other scraping frameworks to collect and process data efficiently. Moreover, you will innovate with AI by using ChatGPT prompt skills to automate and enhance data extraction processes, pushing the boundaries of what's possible. In this role, you will be responsible for optimizing data management by cleaning, organizing, and storing extracted data in structured formats for seamless analysis and usage. Ensuring peak performance, you will optimize scraping scripts for efficiency, scalability, and reliability. Demonstrating autonomy, you will manage tasks and deadlines independently, showcasing your ability to deliver high-quality work autonomously. Collaboration with other team members will be essential to understand data requirements and deliver actionable insights. You will also troubleshoot with precision to identify and resolve data scraping issues, ensuring data accuracy and completeness. Thorough documentation of scraping processes, scripts, and tools used will be crucial for transparency and knowledge sharing. 
To be successful in this role, you should possess a minimum of 3 years of experience in web data scraping, with a strong focus on handling large datasets. Advanced skills in Python programming, particularly in the context of web scraping, are essential. In-depth knowledge and experience with BeautifulSoup, Scrapy, Selenium, and other relevant scraping tools are required. Proficiency in using ChatGPT prompts to enhance and automate scraping processes, strong data management skills, excellent problem-solving abilities, meticulous attention to detail, and effective communication skills are also necessary. Proven independence in working autonomously and managing multiple tasks and deadlines effectively is a key attribute.

Preferred skills include experience with API integration for data extraction, familiarity with cloud platforms such as AWS, Azure, or Google Cloud for data storage and processing, understanding of database management systems and SQL for data storage and retrieval, and proficiency in using version control systems like Git.

In terms of compensation, we offer a competitive base salary commensurate with experience and skills, along with potential performance-based bonuses based on successful project outcomes and contributions.

Joining TransForm Solutions means being part of a forward-thinking team that values innovation, collaboration, and excellence. You'll have the opportunity to work on groundbreaking projects, leverage the latest technologies to transform data into actionable insights, and have your professional growth recognized and rewarded. If you are a top-tier web data scraping engineer passionate about delivering impactful results, we invite you to apply now and become a key player in our journey to harness the power of data to transform businesses.
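The "cleaning, organizing, and storing extracted data in structured formats" duty is often plain standard-library work once the records are in hand. A minimal sketch, with invented field names:

```python
import csv
import io
import json

def normalize(record: dict) -> dict:
    """Trim whitespace and coerce the price field to a number (or None)."""
    out = {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}
    out["price"] = float(out["price"]) if out.get("price") not in (None, "") else None
    return out

def to_csv(records: list[dict]) -> str:
    """Serialize homogeneous dicts to CSV with a stable column order."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=sorted(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

raw = [{"name": "  Widget ", "price": "19.99"}, {"name": "Gadget", "price": ""}]
clean = [normalize(r) for r in raw]
print(json.dumps(clean))
print(to_csv(clean))
```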
Posted 2 weeks ago
1.0 - 5.0 years
0 Lacs
jaipur, rajasthan
On-site
As a Python Developer/Web Scraping Specialist at our company in Jaipur, you will be a crucial member of our technology team. With a minimum of 1 year of experience in Python programming and a deep understanding of web scraping tools and techniques, you will play a vital role in designing and implementing efficient web crawlers to extract structured data from various websites and online platforms. Your responsibilities will include designing, building, and maintaining web scraping scripts and crawlers using Python. You will utilize tools such as BeautifulSoup, Selenium, and Scrapy to extract data from both dynamic and static websites. Furthermore, you will be responsible for cleaning, structuring, and storing extracted data in usable formats such as CSV, JSON, and databases. It will be essential for you to handle data parsing, anti-scraping measures, and ensure compliance with website policies. Monitoring and troubleshooting scraping tasks for performance and reliability will also be part of your day-to-day tasks. Collaboration with team members to understand data requirements and deliver accurate and timely results will be crucial. You will be expected to optimize scraping scripts for speed, reliability, and error handling while also maintaining documentation of scraping processes and codebase. The ideal candidate for this role should possess solid programming skills in Core Python and data manipulation. Strong experience in Web Scraping using tools like BeautifulSoup, Selenium, and Scrapy is a must. Familiarity with HTTP protocols, request headers, cookies, and browser automation will be highly beneficial. An understanding of HTML, CSS, and XPath for parsing and navigating web content is essential, as well as the ability to handle and solve CAPTCHA and anti-bot mechanisms. Experience working with data formats like JSON, XML, and CSV, along with knowledge of version control tools like Git, will be advantageous. 
A Bachelor's degree in Computer Science, IT, or a related field is preferred. Experience with task schedulers (e.g., CRON, Celery) for automated scraping, knowledge of storing data in SQL or NoSQL databases, and familiarity with proxy management and user-agent rotation are desirable qualifications for this position. If you are ready to take on this full-time role in Jaipur and meet the required qualifications and skills, we look forward to receiving your application.
Posted 2 weeks ago
0.0 - 2.0 years
0 - 0 Lacs
mohali, punjab
On-site
Male applicants are preferred. We are looking for an enthusiastic and proactive Python Developer with core Python expertise and hands-on experience in Generative AI (GenAI) to join our development team.

Experience Required: 2-3 years
Mode of Work: On-Site Only (Mohali, Punjab)
Mode of Interview: Face to Face (On-Site)
Contact for Queries: +91-9872993778 (Mon–Fri, 11 AM – 6 PM). Note: This number will be unavailable on weekends and public holidays.

Key Responsibilities:
- Backend Development: Assist in the development of clean, efficient, and scalable Python applications to meet business needs.
- Generative AI Experience: Working knowledge and experience in building applications using GenAI technologies is mandatory.
- API Integration: Support the creation, management, and optimization of RESTful APIs to connect backend and frontend components.
- Collaboration: Work closely with frontend developers to integrate backend services into ReactJS applications, ensuring smooth data flow and functionality.
- Testing and Debugging: Help with debugging, troubleshooting, and optimizing applications for performance and reliability.
- Code Quality: Write readable, maintainable, and well-documented code while following best practices.
- Learning and Development: Continuously enhance your skills by learning new technologies and methodologies.

Required Skills and Experience:
- Problem Solving: Strong analytical skills with an ability to identify and resolve issues effectively. Previous working experience with LLMs and AI agents is a plus.
- Teamwork: Ability to communicate clearly and collaborate well with cross-functional teams.
- Programming Languages: Python (core and advanced), JavaScript, HTML, CSS
- Frameworks: Django, Flask, FastAPI, LangChain
- Libraries & Tools: Pandas, NumPy, Selenium, Scrapy, BeautifulSoup, Git, Postman, OpenAI API, REST APIs
- Databases: MySQL, PostgreSQL, SQLite
- Cloud & Deployment: Hands-on experience with AWS services (EC2, S3, etc.); building and managing cloud-based scalable applications
- Automation: Familiarity with Retrieval-Augmented Generation (RAG) architecture; automation of workflows and intelligent systems using Python.

Preferred Qualifications: A degree in Computer Science, Software Engineering, or a related field (or equivalent practical experience).

Job Types: Full-time, Permanent
Pay: ₹35,000.00 - ₹40,000.00 per month
Experience: Python: 2 years (Preferred)
Work Location: In person
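The Retrieval-Augmented Generation (RAG) architecture mentioned above prepends retrieved context to an LLM prompt. A toy, dependency-free sketch of the retrieval half using bag-of-words cosine similarity (real systems use embedding models; the documents here are invented):

```python
import math
from collections import Counter

DOCS = [
    "Refunds are processed within 7 business days.",
    "Our office is located in Mohali, Punjab.",
    "Support is available Monday to Friday.",
]

def vectorize(text: str) -> Counter:
    """Very crude term-frequency vector: lowercase, whitespace-split."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document most similar to the query (the 'R' in RAG)."""
    return max(docs, key=lambda d: cosine(vectorize(query), vectorize(d)))

context = retrieve("when are refunds processed", DOCS)
print(context)  # this passage would be prepended to the LLM prompt
```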
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
maharashtra
On-site
You will be responsible for conducting in-depth online research to identify potential leads in specific geographies, utilizing advanced web scraping tools and techniques to extract accurate contact and business data from various sources. It will be your duty to validate and verify the collected data to ensure its quality and relevance. Additionally, you will maintain and manage a structured database of leads for outreach and tracking purposes. Collaboration with the sales and marketing teams to provide a consistent flow of high-quality leads is a key aspect of this role. Staying informed about industry trends, tools, and best practices in data scraping and lead generation is essential.

To excel in this position, you should have proven experience in data scraping and lead generation, particularly in international markets, with a preference for the UK. Proficiency in web scraping tools and methods such as Python/BeautifulSoup, Scrapy, Octoparse, or similar is required. Strong attention to detail, organizational skills, and a focus on data accuracy are crucial. The ability to manage time efficiently and handle multiple tasks, along with excellent communication and coordination skills, will be beneficial. Immediate availability or a short notice period is preferred for this role.
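Validating and deduplicating collected leads, as described above, typically reduces to a few lines of Python. A sketch with an intentionally simple email check (the regex is illustrative, not a full RFC 5322 validator):

```python
import re

EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

def clean_leads(leads: list[dict]) -> list[dict]:
    """Keep leads with a plausible email, deduplicated case-insensitively."""
    seen, out = set(), []
    for lead in leads:
        email = lead.get("email", "").strip().lower()
        if EMAIL_RE.match(email) and email not in seen:
            seen.add(email)
            out.append({**lead, "email": email})
    return out

raw = [
    {"company": "Acme Ltd", "email": "info@acme.co.uk "},
    {"company": "Acme Ltd", "email": "INFO@ACME.CO.UK"},
    {"company": "Foo", "email": "not-an-email"},
]
print(clean_leads(raw))
```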
Posted 2 weeks ago
1.0 years
0 - 0 Lacs
puducherry
Remote
Freelance Python Developer (Web Scraping Specialist)

Job Title: Freelance Python Developer – Web Scraping
Location Preference: Pondicherry / Cuddalore / Villupuram (Remote also considered)
Experience Required: Minimum 1 year in Python development
Pay: ₹150 – ₹200 per hour

Responsibilities: Develop and maintain Python-based web scraping solutions. Work with libraries and tools like Playwright, Scrapy, BeautifulSoup, and Selenium. Extract, clean, and structure large volumes of data from websites. Debug and optimize scraping scripts for performance and accuracy. Deliver projects on time in a freelancing environment.

Requirements: Minimum 1 year of experience in Python development. Strong hands-on knowledge of web scraping. Expertise in Playwright, Scrapy, BeautifulSoup, and Selenium. Good understanding of databases. Ability to work independently and meet deadlines.

Job Type: Freelance
Experience: Python: 1 year (Required)
Work Location: Remote
Posted 2 weeks ago
2.0 years
8 - 18 Lacs
delhi
On-site
Job description: We’re looking for a hands-on Data Engineer to manage and scale our data scraping pipelines across 60+ websites. The job involves handling OCR-processed PDFs, ensuring data quality, and building robust, self-healing workflows that fuel AI-driven insights.

You’ll Work On: Managing and optimizing Airflow scraping DAGs. Implementing validation checks, retry logic, and error alerts. Cleaning and normalizing OCR text (Tesseract / AWS Textract). Handling deduplication, formatting, and missing data. Maintaining MySQL/PostgreSQL data integrity. Collaborating with ML engineers on downstream pipelines.

What You Bring: 2–5 years of hands-on experience in Python data engineering. Experience with Airflow, Pandas, and OCR tools. Solid SQL skills and schema design (MySQL/PostgreSQL). Comfort with CSVs and building ETL pipelines.

Required: 1. Scrapy or Selenium experience 2. CAPTCHA handling 3. Experience with PyMuPDF and regex 4. AWS S3 5. LangChain, LLMs, FastAPI 6. Streamlit 7. Matplotlib

Job Type: Full-time. Schedule: Day shift. Pay: ₹70,000.00 - ₹150,000.00 per month.

Application Question(s): 1. Total years of experience in web scraping / data extraction 2. Have you worked with large-scale data pipelines? 3. Are you proficient in writing complex regex patterns for data extraction and cleaning? 4. Have you implemented or managed data pipelines using tools like Apache Airflow? 5. Years of experience with PDF parsing and OCR tools (e.g., Tesseract, Google Document AI, AWS Textract) 6. Years of experience handling complex PDF tables with merged rows, rotated layouts, or inconsistent formatting 7. Are you willing to relocate to Delhi if selected? 8. Current CTC 9. Expected CTC

Work Location: In person
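The "validation checks, retry logic & error alerts" duty can be sketched in plain Python, independent of Airflow (where the same idea maps onto task `retries` and failure callbacks); all names below are illustrative:

```python
import time

def run_with_retries(task, validate, retries: int = 3, delay: float = 0.0, alert=print):
    """Run a scraping task, validate its output, retry on failure, alert each miss."""
    last_err = None
    for attempt in range(1, retries + 1):
        try:
            result = task()
            if not validate(result):
                raise ValueError("validation failed: empty or malformed batch")
            return result
        except Exception as err:
            last_err = err
            alert(f"attempt {attempt}/{retries} failed: {err}")
            time.sleep(delay)  # back-off between attempts
    raise RuntimeError(f"task failed after {retries} attempts") from last_err

# Simulated flaky scrape: empty on the first call, data on the second.
calls = {"n": 0}
def flaky_scrape():
    calls["n"] += 1
    return [] if calls["n"] < 2 else [{"title": "Row 1"}]

rows = run_with_retries(flaky_scrape, validate=lambda r: len(r) > 0)
print(rows)
```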
Posted 2 weeks ago
18.0 - 22.0 years
0 Lacs
karnataka
On-site
As a professional responsible for handling end-to-end client AI & analytics programs in a dynamic environment, your role will encompass a diverse range of tasks including hands-on contribution, technical team management, and client interaction. You should possess an M.Tech/PhD from a reputable institute, preferably in a quantitative subject, along with over 18 years of practical experience in applied Machine Learning, AI, and analytics.

Proficiency in scientific programming using scripting languages like Python, R, SQL, NoSQL, and Spark, coupled with expertise in ML tools and cloud technology (AWS, Azure, GCP), is essential. Familiarity with Python libraries such as NumPy, pandas, scikit-learn, TensorFlow, Scrapy, BERT, etc., is also required. A solid understanding of machine learning, deep learning, data mining, and statistical concepts is crucial, along with a proven track record of developing models and solutions in these domains.

Your role will also involve engaging with clients, comprehending complex problem statements, and proposing solutions in areas like Supply Chain, Manufacturing, CPG, and Marketing, among others. Additionally, desirable skills include a deep knowledge of ML algorithms for common use cases in structured and unstructured data ecosystems, comfort with large-scale data processing and distributed computing, and providing necessary inputs to sales and pre-sales activities. Being a self-starter who can thrive with minimal guidance and possessing excellent written and verbal communication skills are also highly advantageous. (ref:hirist.tech)
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
As an AI/ML Data Analyst at our company, you will have the opportunity to develop and deploy machine learning models, optimize data pipelines, and drive AI-driven innovation. Your expertise in machine learning, artificial intelligence, automation, and data engineering will be crucial for this role. You will work with structured and unstructured data, utilizing Node.js, Python, SQL, and data scraping techniques to extract and preprocess data efficiently.

Your responsibilities will include developing and deploying machine learning models for predictive analytics and automation, building and optimizing data pipelines using SQL and Python, and implementing data scraping techniques to extract data from various sources. You will collaborate with data engineers and product teams to integrate AI/ML solutions into existing systems, optimize model performance, and stay updated with the latest advancements in AI, ML, and data engineering.

To succeed in this role, you will need a Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field, along with 3-6 years of relevant experience in a SaaS or IT product development company. Proficiency in data analysis using Node.js and Python, experience with AI-based Python notebooks, data interrogation techniques, and database technologies are essential. Hands-on experience in building and deploying ML models based on classification algorithms, knowledge of cloud platforms, and strong problem-solving skills are also required.

Preferred skills include basic knowledge of data warehousing; experience with LLMs, Generative AI, or Transformer-based architectures; exposure to distributed computing and cloud infrastructure; and contributions to open-source AI projects, research publications, or Kaggle competitions.

Please note that this job description is not exhaustive, and duties, responsibilities, and activities may change with or without notice.
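For the classification-algorithm requirement, here is a deliberately dependency-free nearest-centroid sketch; production work would use scikit-learn or similar, and the data and labels are invented:

```python
import math
from collections import defaultdict

def fit_centroids(X, y):
    """Average the feature vectors of each class into one centroid per label."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for row, label in zip(X, y):
        sums[label] = list(row) if sums[label] is None else [
            a + b for a, b in zip(sums[label], row)
        ]
        counts[label] += 1
    return {label: [v / counts[label] for v in vec] for label, vec in sums.items()}

def predict(centroids, row):
    """Classify a row by its nearest class centroid (Euclidean distance)."""
    return min(centroids, key=lambda c: math.dist(row, centroids[c]))

X = [[1.0, 1.2], [0.9, 1.0], [4.0, 4.1], [4.2, 3.9]]
y = ["churn", "churn", "retain", "retain"]
centroids = fit_centroids(X, y)
print(predict(centroids, [1.1, 0.8]))  # "churn"
print(predict(centroids, [4.0, 4.0]))  # "retain"
```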
Posted 2 weeks ago
3.0 years
0 Lacs
mumbai, maharashtra, india
On-site
Job Overview We are seeking six highly motivated and detail-oriented professionals to join our Data & Insights team. As a Data Processor / Scraper / Collector, you will play a pivotal role in gathering, cleaning, and organizing entertainment-related data from various sources. Your work will directly support strategic decision-making across content development, marketing, and audience engagement. Industry: Entertainment/ Hospitality Experience Required: Minimum 3 years Key Responsibilities Collect and process structured and unstructured data from websites, APIs, and internal databases Scrape entertainment-related data (e.g., movie listings, showtimes, ratings, social media trends) using automated tools or scripts Clean, validate, and organize datasets for analysis and reporting Maintain and update Excel-based dashboards and trackers for internal teams Collaborate with content, marketing, and analytics teams to ensure data relevance and accuracy Monitor data pipelines and troubleshoot inconsistencies or errors Ensure compliance with data privacy and copyright regulations Required Skills & Qualifications Minimum 3 years of experience in data collection, scraping, or processing Strong proficiency in Microsoft Excel (pivot tables, formulas, data validation, etc.) Familiarity with web scraping tools (e.g., BeautifulSoup, Scrapy, Selenium) or willingness to learn Experience working with entertainment data (e.g., OTT platforms, box office, music, events) preferred Basic understanding of data cleaning and transformation techniques Ability to work independently and manage multiple data sources Strong attention to detail and commitment to data accuracy Preferred Extras Knowledge of Python or R for data manipulation Experience with Google Sheets, Power BI, or Tableau Understanding of SEO, social media analytics, or audience behaviour metrics Prior work with entertainment platforms, production houses, or media agencies
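Cleaning, validating, and deduplicating scraped entertainment records, as described above, might look like this minimal sketch; the field names and validation rules are assumptions:

```python
REQUIRED = ("title", "showtime", "rating")

def clean_listings(rows: list[dict]) -> list[dict]:
    """Drop incomplete rows, normalize titles, and deduplicate on (title, showtime)."""
    seen, out = set(), []
    for row in rows:
        if any(not row.get(f) for f in REQUIRED):
            continue                      # validation: every required field present
        key = (row["title"].strip().lower(), row["showtime"])
        if key in seen:
            continue                      # dedup across overlapping sources
        seen.add(key)
        out.append({**row, "title": row["title"].strip()})
    return out

raw = [
    {"title": " Dune ", "showtime": "18:30", "rating": "8.1"},
    {"title": "dune",   "showtime": "18:30", "rating": "8.1"},   # duplicate
    {"title": "Unknown", "showtime": "", "rating": "7.0"},        # incomplete
]
print(clean_listings(raw))
```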
Posted 2 weeks ago
0 years
0 Lacs
india
Remote
Job Title: Python Developer (Hybrid Intelligence)
Location: Remote (Work From Home)
Job Type: Full Time
Compensation Range: 5-8 LPA
Working: 5.5 days (Saturday half day)
Website: https://algo-one.com

About the role
We’re not just hiring developers — we’re hiring pioneers. The Hybrid Intelligence Developer role is for those who believe that the best code is written when human creativity meets AI speed. You’ll be building full-stack applications at lightning pace, using AI coding assistants as your co-pilot while you focus on solving problems, designing systems, and shipping features that matter. If you’re the kind of person who experiments with prompts, optimizes workflows with AI, and believes coding will never be the same again, you’ll fit right in.

Link to full job description: https://algo-one.com/uploads/jd_algo_one_hid.pdf

What You'll Do
- Engineer with impact: Design and implement efficient, reusable, and reliable Python code that powers real-world solutions.
- Ship excellence: Deliver high-quality software that doesn't just meet deadlines -- it exceeds expectations.
- Collaborate & innovate: Work hand-in-hand with designers, product owners, and fellow developers to bring bold ideas to life.
- Problem-solve like a pro: Debug, troubleshoot, and optimize systems until they're lightning-fast and rock-solid.
- Test beyond limits: Use rigorous testing and debugging to ensure reliability at scale.
- Elevate the team: Participate in code reviews, share feedback, and help raise the bar for everyone.
- Stay ahead of the curve: Continuously learn and apply the latest trends and best practices in Python and software development.
- Move fast, stay agile: Join Agile/Scrum ceremonies, contribute to planning, and help deliver with speed and precision.
- Improve relentlessly: Refactor, optimize, and enhance existing applications so they get better every day.
- Document like a pro: Write clear, thorough technical specs so your work is understood and reusable.
- Be a creator: Don't just follow requirements -- bring your own ideas to engineer smarter infrastructure and better products.
- Keep the edge: Stay plugged into industry trends so our products (and you!) remain ahead of the competition.

What We're Looking For
We’re open to all levels of talent — whether you’re just starting or a seasoned professional, what matters most is your curiosity, adaptability, and drive to build with AI. Here’s what will help you thrive in this role:
- Python Wizardry: Strong proficiency in Python, with hands-on experience in multithreading, concurrency, and web frameworks (Django, Flask, FastAPI, etc.).
- Modern Dev Practices: Familiarity with Git/GitLab, CI/CD pipelines, and containerization (Docker, Kubernetes).
- Systems Know-How: Comfort with caching systems (Redis), message queues (Kafka, Celery), and asynchronous tools (Asyncio, WebSockets).
- AI & ML Enthusiasm: Understanding of NLP concepts and libraries (NLTK, spaCy). Exposure to Large Language Models (LLMs), prompt engineering, and frameworks like LangChain. Bonus if you’ve built machine learning solutions or experimented with AI coding assistants (Copilot, ChatGPT, Tabnine, Ghostwriter).
- Data Handling Skills: Knowledge of relational databases (MySQL, PostgreSQL) and ORM tools (SQLAlchemy). Experience with web scraping (Beautiful Soup, Scrapy, Selenium) is a plus.
- Full Stack Flair: While Python is your core, familiarity with front-end basics (HTML, CSS, JS) is always welcome.
- Engineering Mindset: Strong analytical skills, logical problem-solving, and a meticulous approach to writing clean, maintainable code.
- Collaboration & Growth: A team player who communicates clearly, shares ideas, and loves learning new tools and technologies.
- The Right Attitude: Resilient, patient, and persistent — you don’t give up easily, and when you fail, you fail forward.

Join our team and contribute to building cutting-edge solutions that make a real difference! We want to hear from you if you have a passion for this role and thrive in a collaborative environment. Apply now!
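The asynchronous-tools requirement (Asyncio) is easiest to see in a tiny example: asyncio.gather runs coroutines concurrently, so total latency tracks the slowest task rather than the sum. The fetches below are simulated with sleeps; real code would use an async HTTP client such as aiohttp or httpx:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Stand-in for an HTTP call; the delay simulates network latency.
    await asyncio.sleep(delay)
    return f"{name}: done"

async def main() -> list[str]:
    # gather() runs the coroutines concurrently and preserves argument order.
    return await asyncio.gather(
        fetch("listings", 0.02), fetch("ratings", 0.01), fetch("reviews", 0.03)
    )

results = asyncio.run(main())
print(results)
```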
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
As a Python Developer & Web Crawling professional at Global Checks Pvt Ltd, you will collaborate with cross-functional teams to define, design, and implement new features. Your role will involve ensuring the performance, quality, and responsiveness of web crawling systems. You will be responsible for identifying and correcting bottlenecks, fixing bugs in web crawling processes, and helping maintain code quality, organization, and automation. It is essential to stay up-to-date with the latest industry trends and technologies. With 3+ years of experience in Web Scraping or Crawling through frameworks like Scrapy, Selenium, or other related libraries, you should be an expert in the latest version of Python. Your expertise will be crucial in fetching data from multiple online sources, cleansing it, and building APIs on top of it. A good understanding of data structure and algorithms, as well as knowledge in bypassing Bot Detection techniques, is required. Experience in Web RestFul APIs/Microservices Development is preferred. In this role, you will need to think deeply about developing large-scale scraping tools, including data integrity, health, and monitoring systems. It is important to develop a deep understanding of vast data sources on the web and know how, when, and which data to scrape, parse, and store. Working with SQL and NoSQL databases to store raw data, developing frameworks for automating and maintaining a constant flow of data from multiple sources, and having knowledge of distributed technologies are essential. You should possess a strong passion for coding, take quality, security, and performance seriously, and be able to work independently with little supervision to research and test innovative solutions. Collaboration with other engineers and cross-teams is necessary, and excellent communication skills are required to present effectively to both business and technical stakeholders. 
Joining Global Checks Pvt Ltd offers various perks and benefits, including learning directly from industry experts with over 20 years of experience, the opportunity to work in a dynamic and collaborative environment, professional development and growth opportunities, exposure to world-class management practices, and insurance benefits (medical and accidental) for all employees. Additionally, there are multi-level rewards programs for all employees.

This role provides a challenging yet rewarding opportunity to contribute to a cutting-edge web crawling system and work with a talented team to drive innovation and excellence in data scraping and API development.
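Deciding "how, when, and which data to scrape" starts with honoring robots.txt rules and crawl delays. A standard-library sketch (the rules file and crawler name are invented):

```python
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Crawl-delay: 5
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())  # in practice, fetched from /robots.txt

def allowed(path: str, agent: str = "my-crawler") -> bool:
    """Check a path against the site's robots.txt before queueing it."""
    return rp.can_fetch(agent, f"https://example.com{path}")

print(allowed("/products/1"))      # True
print(allowed("/private/x"))       # False
print(rp.crawl_delay("my-crawler"))  # seconds to wait between requests
```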
Posted 3 weeks ago
4.0 - 8.0 years
0 Lacs
ahmedabad, gujarat
On-site
As a Lead Data Scraping Engineer, you will be a vital part of our team in Ahmedabad/WFO, bringing your senior-level expertise with over 4 years in the IT scraping industry. Your role will involve not only hands-on data scraping using tools like Scrapy and Python libraries but also leading and mentoring a team of 5+ developers to deliver successful projects within set timelines. Your primary responsibilities will include designing and implementing scalable data scraping solutions, managing project timelines, and ensuring the team's success through effective leadership. You will be expected to utilize advanced scraping techniques, reverse engineering, and automation to overcome scraping restrictions and automate interactions effectively. In addition to overseeing proxies, IP rotation, and SSL unpinning for efficient scraping, you will also be responsible for maintaining API integrations, data pipelines, and ensuring code quality through version control, error handling, and documentation. Collaboration with cross-functional teams, performance monitoring, and providing solutions under high-pressure environments will also be key aspects of your role. To excel in this position, you should have a minimum of 4 years of experience in data scraping, with at least 2 years of leadership experience managing a team of developers. Proficiency in tools like Scrapy, Threading, requests, and web automation, along with advanced Python skills, is essential. Additionally, strong technical skills in captcha solving, blocking handling, source reverse engineering, and API management are required. Your leadership skills should include basic project management, moderate documentation, team handling, pressure management, flexibility, adaptability, and high accountability. Experience with Linux, as well as knowledge of tools like Appium, Fiddler, and Burp Suite, would be considered a plus. 
If you are passionate about data scraping, possess strong technical expertise, and thrive in a leadership role, we encourage you to apply and be part of our dynamic team.
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
You will be responsible for developing cutting-edge solutions by designing, developing, and maintaining robust web scraping solutions to extract large datasets from various websites, supporting our data-driven initiatives. Your role involves mastering Python programming to implement and optimize sophisticated scraping scripts and tools. You will be leveraging industry-leading tools such as BeautifulSoup, Scrapy, Selenium, and other scraping frameworks to efficiently collect and process data. Additionally, you will innovate with AI technologies like ChatGPT prompt skills to automate and enhance data extraction processes, pushing the boundaries of what is possible. Your duties will also include optimizing data management by cleaning, organizing, and storing extracted data in structured formats for seamless analysis and usage. Ensuring peak performance of scraping scripts for efficiency, scalability, and reliability will be crucial. You will troubleshoot data scraping issues with precision, guaranteeing data accuracy and completeness. It is essential to maintain clear and comprehensive documentation of scraping processes, scripts, and tools for transparency and knowledge sharing purposes. To qualify for this position, you must have a minimum of 5 years of experience in web data scraping, with a strong emphasis on handling large datasets. Advanced skills in Python programming, particularly in the context of web scraping, are required. Proficiency in tools such as BeautifulSoup, Scrapy, Selenium, and other relevant scraping tools is necessary. Strong skills in data cleaning, organization, and storage, along with excellent problem-solving and analytical skills to tackle complex scraping challenges, are essential. Meticulous attention to detail is crucial for ensuring data accuracy and completeness. The ability to work independently, manage multiple tasks, and meet deadlines effectively is also a key requirement. 
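The listing above centers on extracting structured data from web pages with tools like BeautifulSoup. As a minimal, dependency-free sketch of that extraction pattern, the following uses only the standard library's `html.parser` (the HTML snippet and link paths are invented for illustration; a real BeautifulSoup or Scrapy spider would fetch live pages instead):

```python
from html.parser import HTMLParser

# Minimal link/title extractor using only the standard library.
# The HTML snippet and hrefs below are made up for illustration.
class TitleLinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []          # collected (text, href) pairs
        self._href = None
        self._buf = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._buf = []

    def handle_data(self, data):
        if self._href is not None:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._buf).strip(), self._href))
            self._href = None

html_page = """
<ul>
  <li><a href="/item/1">First item</a></li>
  <li><a href="/item/2">Second item</a></li>
</ul>
"""

parser = TitleLinkExtractor()
parser.feed(html_page)
print(parser.links)  # [('First item', '/item/1'), ('Second item', '/item/2')]
```

The same select-and-collect shape carries over to BeautifulSoup's `find_all("a")` or Scrapy's CSS/XPath selectors; only the selection API changes.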
Preferred skills for this role include experience with API integration for data extraction, familiarity with cloud platforms like AWS, Azure, or Google Cloud for data storage and processing, understanding of database management systems and SQL for data storage and retrieval, and proficiency in using version control systems like Git.
Posted 3 weeks ago
2.0 - 6.0 years
6 - 13 Lacs
gurugram
Work from Office
Senior Engineer - VBA Python: Elevate Your Impact Through Innovation & Learning Evalueserve is a global leader in delivering innovative and sustainable solutions to a diverse range of clients, including over 30% of Fortune 500 companies. With a presence in more than 45 countries across five continents, we excel in leveraging state-of-the-art technology, artificial intelligence, and unparalleled subject matter expertise to elevate our clients' business impact and strategic decision-making. Our team of over 4,500 talented professionals operates in countries such as India, China, Chile, Romania, the US, and Canada. Our global network also extends to emerging markets like Colombia, the Middle East, and the rest of Asia-Pacific. Recognized by Great Place to Work in India, Chile, Romania, the US, and the UK in 2022, we offer a dynamic, growth-oriented, and meritocracy-based culture that prioritizes continuous learning, skill development, and work-life balance. About Corporate and Investment Banking & Investment Research (CIB & IR) As a global leader in knowledge processes, research, and analytics, you'll be working with a team that specializes in global market research, working with the top-rated investment research organizations, bulge bracket investment banks, and leading asset managers. We cater to 8 of the top 10 global banks, working alongside their product and sector teams, supporting them on deal origination, execution, valuation, and transaction advisory-related projects.
What you will be doing at Evalueserve: Develop automated solutions using VBA, Python, MS Access, and Power BI. Handle end-to-end projects on an individual basis. Run POCs on new tech stacks and integrate them into applications at a functional level. Design small as well as large applications at an enterprise level. Understand business requirements and translate them into an application. Handle multiple projects at a time. Plan projects with complete details such as efforts, timelines, and wireframes. Work under tight timelines. Assist in project management of micro-automation projects. Create tools, templates, SOP/training manuals, process documents, etc. Coordinate with business teams, project owners, and other team members to understand requirements and deliver the projects with a strong focus on quality. What you'll need to have: Graduate or Postgraduate with 2-6 years of relevant experience in automation and process improvement. Proven experience working across desktop and web-based applications using Python, with strong skills in VBA. Expert-level experience in developing crawlers or web scrapers using Python and its libraries (Selenium, Beautiful Soup, Requests, Pandas, NumPy, Scrapy). Advanced proficiency in VBA and experience in automation using Microsoft Office (Excel, Access, PowerPoint, and Word). Strong hands-on experience in Advanced Excel, VBA, and SQL. Knowledge of additional tools such as Power BI, Power Apps, Power Automate, and Tableau will be an added advantage. Good knowledge of SDLCs. Follow us on https://www.linkedin.com/company/evalueserve/ Click here to learn more about what our Leaders are saying about achievements: AI-powered supply chain optimization solution built on Google Cloud. How Evalueserve is now leveraging NVIDIA NIM to enhance our AI and digital transformation solutions and to accelerate AI Capabilities.
Know more about how Evalueserve has climbed 16 places on the 50 Best Firms for Data Scientists in 2024! Want to learn more about our culture and what it's like to work with us? Write to us at: careers@evalueserve.com Disclaimer: The following job description serves as an informative reference for the tasks you may be required to perform. However, it does not constitute an integral component of your employment agreement and is subject to periodic modifications to align with evolving circumstances. Please Note: We appreciate the accuracy and authenticity of the information you provide, as it plays a key role in your candidacy. As part of the Background Verification Process, we verify your employment, education, and personal details. Please ensure all information is factual and submitted on time. For any assistance, your TA SPOC is available to support you.
Posted 3 weeks ago
2.0 years
9 - 12 Lacs
gurgaon
On-site
Job Title: Python Developer Experience: 2 - 4 years Mode: 3-month contract + ext. Location: Gurgaon Job Summary: We are looking for a Python Specialist with hands-on knowledge of developing automation tools and an analytical mindset. Responsibilities: Develop Quick Process Automations (QPAs) using one of the following platforms - VBA, MS Office pack, Java, SQL, Python. Create appropriate project documentation and train the operations team on how to use the developed tool. Follow the standard team coding structure and maintain already developed solutions. Proactively identify workarounds, fix bugs, implement changes, and come up with out-of-the-box solutions. Manage project delivery throughout the entire lifecycle: analyze, plan, design, build, test, deploy, maintain. Self-driven, able to work independently and take a project to completion with minimum supervision. Self-learner and self-starter; reach out to colleagues when you cannot find a solution. Understand the requirements of the operations team and build the tool accordingly. Qualifications: Minimum qualifications: BE/B.Tech, BCA. Knowledge of one or more of the following computer languages is required: Python: o Should have demonstrable, practical knowledge of data/file processing libraries such as Scrapy, BeautifulSoup, NumPy, Pandas, Matplotlib, Bokeh (optional) o Should have knowledge of setting up a Python environment and all the software that's needed VBA: Use Visual Basic in Microsoft Applications, including Excel, Access, Word, and PowerPoint Advanced Excel skills (i.e. complex formulas) Able to debug/code functions/macros using VBA Ability to normalize complex data/define referential integrity in Access Ability to programmatically manipulate worksheet and cell properties using VBA Hands-on Core Java (1.7 or higher) Strong knowledge of Java design patterns and their implementation.
JDBC with (Oracle / SQL Server / MySQL) Webservice (REST / SOAP) Data parsing API (PDF, Excel, CSV, XML) Web application development with the MVC design pattern Hands-on web framework experience (Struts, Spring, Hibernate, etc.) DOS/UNIX Scripting: Should have sound knowledge of file processing on basic OS platforms - how to manipulate files/folders, how to connect to systems, etc. Database and SQL: Sound knowledge of relational databases; ability to define and create a basic database; fundamentals of loading, ETL. Experience in developing automated solutions on SAP and Oracle systems. Knowledge of backend services and integrations. Understanding of the cloud infrastructure environment. Software development experience. Preferred qualifications: Knowledge of the OM domain; Machine Learning understanding. Excellent communication/interpersonal skills. Working knowledge of Java/VBA would be an add-on. Job Type: Contractual / Temporary Contract length: 3 months Pay: ₹80,000.00 - ₹100,000.00 per month Application Question(s): Do you have 2+ years of experience in Python? Work Location: In person
Posted 3 weeks ago
2.0 - 4.0 years
0 Lacs
india
On-site
Alternative Path is seeking skilled software developers to collaborate on client projects with an asset management firm. In this role, you will collaborate with individuals across various company departments to shape and innovate new products and features for our platform, enhancing existing ones. You will have a large degree of independence and trust, but you won't be isolated; the support of the Engineering team leads, the Product team leads, and every other technology team member is behind you. This is an opportunity to join a team-first meritocracy and help grow an entrepreneurial group inside Alternative Path. You will be asked to contribute, given ownership, and will be expected to make your voice heard. Role Summary: Perform web scraping using various scraping techniques, use Python's Pandas library for data cleaning and manipulation, ingest the data into a database/warehouse, and schedule the scrapers using Airflow or other tools. Role Overview The Web Scraping Team at Alternative Path is seeking a creative and detail-oriented developer to contribute to client projects. The team develops essential applications, datasets, and alerts for various teams within the client's organization, supporting their daily investment decisions. The mission is to maintain operational excellence by delivering high-quality proprietary datasets, timely notifications, and exceptional service. We are seeking someone who is self-motivated and self-sufficient, with a passion for tinkering and a love of automation. In your role, you will: ➢ Collaborate with analysts to understand and anticipate requirements. ➢ Design, implement, and maintain web scrapers for a wide variety of alternative datasets. ➢ Perform data cleaning, exploration, transformation, etc. of scraped data. ➢ Collaborate with cross-functional teams to understand data requirements and implement efficient data processing workflows. ➢ Author QC checks to validate data availability and integrity.
➢ Maintain alerting systems and investigate time-sensitive data incidents to ensure smooth day-to-day operations. ➢ Design and implement products and tools to enhance the web scraping platform. Qualifications Must have ➢ Bachelor's/Master's degree in computer science or any related field ➢ 2-4 years of software development experience ➢ Strong Python and SQL/database skills ➢ Strong expertise in using the Pandas library (Python) is a must ➢ Experience with web technologies (HTML/JS, APIs, etc.) ➢ Proven work experience in working with large datasets for data cleaning, transformation, manipulation, and replacements ➢ Excellent verbal and written communication skills ➢ Aptitude for designing infrastructure, data products, and tools for data scientists Preferred ➢ Familiarity with scraping and common scraping tools (Selenium, Scrapy, Fiddler, Postman, XPath) ➢ Experience containerizing workloads with Docker (Kubernetes a plus) ➢ Experience with build automation (Jenkins, GitLab CI/CD) ➢ Experience with AWS technologies like S3, RDS, SNS, SQS, Lambda, etc.
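The "QC checks to validate data availability and integrity" mentioned in this listing can be sketched in a dependency-free way. The field names below (`ticker`, `scraped_at`) are illustrative placeholders, not a real client schema; a production version would typically run inside Pandas or an Airflow task:

```python
# Sketch of a scraped-data QC pass: reject rows missing required fields
# and drop exact duplicate observations. Field names are illustrative.
REQUIRED = ("ticker", "price", "scraped_at")

def qc_rows(rows):
    seen, clean, rejected = set(), [], []
    for row in rows:
        if any(row.get(f) in (None, "") for f in REQUIRED):
            rejected.append(row)        # missing a required field
            continue
        key = (row["ticker"], row["scraped_at"])
        if key in seen:
            continue                    # exact duplicate observation
        seen.add(key)
        clean.append(row)
    return clean, rejected

rows = [
    {"ticker": "ABC", "price": 10.5, "scraped_at": "2024-01-01"},
    {"ticker": "ABC", "price": 10.5, "scraped_at": "2024-01-01"},  # duplicate
    {"ticker": "XYZ", "price": None, "scraped_at": "2024-01-01"},  # missing price
]
clean, rejected = qc_rows(rows)
print(len(clean), len(rejected))  # 1 1
```

Separating "rejected" rows from silently dropped duplicates is what makes the check alertable: a spike in rejections can page the on-call developer, which is the alerting workflow the listing describes.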
Posted 3 weeks ago
0.0 years
0 - 0 Lacs
mumbai, maharashtra
On-site
Data Scraping, MongoDB, Solr / ElasticSearch We are seeking a skilled Python Developer with strong experience in web/data scraping and working knowledge of MongoDB, Solr, and/or ElasticSearch. You will be responsible for developing, maintaining, and optimizing scalable scraping scripts to collect structured and unstructured data, efficiently manage it in MongoDB, and index it for search and retrieval using Solr or ElasticSearch. Design and develop robust web scraping solutions using Python (e.g., Scrapy, BeautifulSoup, Selenium, etc.). Extract and process large volumes of data from websites, APIs, and other digital sources. Ensure scraping mechanisms are efficient, resilient to site changes, and compliant with best practices. Store, retrieve, and manage scraped data efficiently in MongoDB databases. Index, manage, and optimize data search capabilities using Solr or ElasticSearch. Build data validation, cleaning, and transformation pipelines. Handle challenges like CAPTCHA solving, IP blocking, and dynamic content rendering. Monitor scraping jobs and troubleshoot errors and bottlenecks. Optimize scraping speed, search indexing, storage efficiency, and system scalability. Collaborate with product managers to define data requirements. Job Type: Full-time Pay: ₹20,000.00 - ₹25,000.00 per month Application Question(s): How soon can you join? Location: Mumbai, Maharashtra (Required) Work Location: In person
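The store-then-index flow this listing describes (scrape, dedupe into MongoDB, index into Solr/ElasticSearch) can be sketched with in-memory stand-ins so the pipeline shape is runnable anywhere. The dicts below stand in for a MongoDB collection and a search index; the URLs and fields are invented for illustration:

```python
import hashlib

# Hedged sketch of the store-then-index flow: MongoDB and ElasticSearch are
# replaced with in-memory dicts so the pipeline shape is runnable anywhere.
raw_docs = [
    {"url": "https://example.com/a", "title": "  Widget A  ", "body": "cheap widget"},
    {"url": "https://example.com/a", "title": "Widget A", "body": "cheap widget"},
]

store = {}   # stand-in for a MongoDB collection keyed by _id
index = {}   # stand-in for an inverted index: token -> set of _ids

for doc in raw_docs:
    _id = hashlib.sha1(doc["url"].encode()).hexdigest()   # dedupe on URL
    cleaned = {"_id": _id, "title": doc["title"].strip(), "body": doc["body"]}
    store[_id] = cleaned                                   # upsert, like replace_one
    for token in (cleaned["title"] + " " + cleaned["body"]).lower().split():
        index.setdefault(token, set()).add(_id)

print(len(store))   # 1 -- the duplicate URL collapsed into one document
```

With real backends, `store[_id] = cleaned` would become a pymongo upsert and the token loop would be replaced by an ElasticSearch or Solr index request; the URL-hash `_id` is one common way to make re-scrapes idempotent.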
Posted 3 weeks ago
3.0 years
0 Lacs
mumbai, maharashtra, india
On-site
Job Overview We are seeking six highly motivated and detail-oriented professionals to join our Data & Insights team. As a Data Processor / Scraper / Collector, you will play a pivotal role in gathering, cleaning, and organizing entertainment-related data from various sources. Your work will directly support strategic decision-making across content development, marketing, and audience engagement. Industry: Entertainment/ Hospitality Experience Required: Minimum 3 years Key Responsibilities Collect and process structured and unstructured data from websites, APIs, and internal databases Scrape entertainment-related data (e.g., movie listings, showtimes, ratings, social media trends) using automated tools or scripts Clean, validate, and organize datasets for analysis and reporting Maintain and update Excel-based dashboards and trackers for internal teams Collaborate with content, marketing, and analytics teams to ensure data relevance and accuracy Monitor data pipelines and troubleshoot inconsistencies or errors Ensure compliance with data privacy and copyright regulations Required Skills & Qualifications Minimum 3 years of experience in data collection, scraping, or processing Strong proficiency in Microsoft Excel (pivot tables, formulas, data validation, etc.) Familiarity with web scraping tools (e.g., BeautifulSoup, Scrapy, Selenium) or willingness to learn Experience working with entertainment data (e.g., OTT platforms, box office, music, events) preferred Basic understanding of data cleaning and transformation techniques Ability to work independently and manage multiple data sources Strong attention to detail and commitment to data accuracy Preferred Extras Knowledge of Python or R for data manipulation Experience with Google Sheets, Power BI, or Tableau Understanding of SEO, social media analytics, or audience behaviour metrics Prior work with entertainment platforms, production houses, or media agencies
Posted 3 weeks ago
12.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Roles and Responsibilities: Convert broad vision and concepts into a structured data science roadmap, and guide a team to successfully execute on it. Handle end-to-end client AI & analytics programs in a fluid environment. Your role will be a combination of hands-on contribution, technical team management, and client interaction. Proven ability to discover solutions hidden in large datasets and to drive business results with data-based insights. Contribute to internal product development initiatives related to data science. Drive the excellent project management required to deliver complex projects, including effort/time estimation. Be proactive, with full ownership of the engagement. Build scalable client-engagement-level processes for faster turnaround & higher accuracy. Define the technology strategy and roadmap for client accounts, and guide implementation of that strategy within projects. Manage the team members to ensure that the project plan is adhered to over the course of the project. Build a trusted advisor relationship with the IT management at clients and internal accounts leadership. Mandated Skills: A B-Tech/M-Tech/MBA from a top-tier institute, preferably in a quantitative subject. 15+ years of overall experience with 12+ years of hands-on experience in applied Machine Learning, AI, and analytics. Experience in scientific programming in scripting languages like Python, R, SQL, NoSQL, Spark with ML tools & cloud technology (AWS, Azure, GCP). Experience in Python libraries such as numpy, pandas, scikit-learn, tensorflow, scrapy, BERT, etc. Strong grasp of the depth and breadth of machine learning, deep learning, data mining, and statistical concepts, and experience in developing models and solutions in these areas. Expertise in client engagement, understanding complex problem statements, and offering solutions in the domains of Supply Chain, Manufacturing, CPG, Marketing, etc.
Desired Skills: Deep understanding of ML algorithms for common use cases in both structured and unstructured data ecosystems. Comfortable with large-scale data processing and distributed computing. Providing required inputs to sales and pre-sales activities. A self-starter who can work well with minimal guidance. Excellent written and verbal communication skills. AI, ML
Posted 3 weeks ago
3.0 - 8.0 years
5 - 15 Lacs
bengaluru
Remote
Role & responsibilities As a Data Engineer focused on web crawling and platform data acquisition, you will design, develop, and maintain large-scale web scraping pipelines to extract valuable platform data. You will be responsible for implementing scalable and resilient data extraction solutions, ensuring seamless data retrieval while working with proxy management, anti-bot bypass techniques, and data parsing. Optimizing scraping workflows for performance, reliability, and efficiency will be a key part of your role. Additionally, you will ensure that all extracted data maintains high quality and integrity. Preferred candidate profile We are seeking candidates with: Strong experience in Python and web scraping frameworks such as Scrapy, Selenium, Playwright, or BeautifulSoup. Knowledge of distributed web crawling architectures and job scheduling. Familiarity with headless browsers, CAPTCHA-solving techniques, and proxy management to handle dynamic web challenges. Experience with data storage solutions, including SQL and cloud storage. Understanding of big data technologies like Spark and Kafka (a plus). Strong debugging skills to adapt to website structure changes and blockers. A proactive, problem-solving mindset and the ability to work effectively in a team-driven environment.
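The proxy management this role calls for usually boils down to rotating through a pool and retrying on blocks. A minimal sketch, assuming a round-robin pool and an injected fetch function (the proxy addresses and the fake fetcher below are placeholders, not real endpoints):

```python
import itertools

# Sketch of round-robin proxy rotation with bounded retries; the proxy
# addresses and the fetch function are placeholders, not real endpoints.
PROXIES = ["proxy-a:8080", "proxy-b:8080", "proxy-c:8080"]

def fetch_with_rotation(url, fetch, max_attempts=3):
    """Try each proxy in turn until one fetch succeeds or attempts run out."""
    rotation = itertools.cycle(PROXIES)
    last_error = None
    for _ in range(max_attempts):
        proxy = next(rotation)
        try:
            return fetch(url, proxy)
        except ConnectionError as exc:
            last_error = exc            # blocked or unreachable: rotate on
    raise last_error

# Fake fetcher for the sketch: only proxy-b "works", so the second
# attempt succeeds after the first proxy is rejected.
def fake_fetch(url, proxy):
    if proxy != "proxy-b:8080":
        raise ConnectionError(f"{proxy} blocked")
    return f"200 OK via {proxy}"

print(fetch_with_rotation("https://example.com", fake_fetch))
```

In a real crawler the injected `fetch` would wrap a `requests` or Playwright call with the proxy set; injecting it keeps the rotation logic testable without network access, and a backoff delay between attempts is the usual next refinement.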
Posted 3 weeks ago
1.0 - 3.0 years
0 Lacs
delhi
On-site
Job Type: Full-Time Experience Level: 1–3 years preferred (open to strong freshers with demonstrable skills) About the Role: We are seeking a Database Executive with strong skills in web scraping, data extraction, and data organization to support our data acquisition efforts. This role involves identifying data sources, automating data collection, and organizing data into structured formats that support business insights, lead generation, or content operations. You will work closely with internal teams (marketing, sales, analytics, etc.) to build robust datasets from public and semi-structured online sources, ensuring accuracy, relevance, and compliance with data practices. Key Responsibilities: Data Acquisition & Scraping: Identify relevant websites, directories, and platforms to scrape or extract data from. Develop web scraping scripts using tools such as Python (BeautifulSoup, Scrapy, Selenium) or other automated extraction tools. Monitor websites for structural changes or anti-scraping measures and adapt scripts accordingly. Work with APIs when available to pull data more efficiently and reliably. Data Cleaning & Structuring: Clean and normalize raw data to ensure it is free from duplicates, errors, and inconsistencies. Convert scraped data into structured formats (CSV, Excel, JSON, SQL) for easy integration into databases or CRM systems. Tag, categorize, and format data to align with internal taxonomy or business use cases. Database Management: Create and manage databases using platforms like MySQL, PostgreSQL, MongoDB, or Google Sheets/Excel for lighter use cases. Ensure data is stored in a secure, scalable, and searchable format. Regularly update and maintain datasets to reflect new additions, changes, or deletions. Collaboration & Reporting: Work with the sales, marketing, or analytics teams to understand data requirements and deliver relevant datasets. Document scraping logic, data sources, and scripts for reusability and transparency.
Provide timely reports on data coverage, quality, and update frequency. Required Skills & Qualifications: Proficiency in Python for scripting and automation (Beautiful Soup, Scrapy, Requests, Selenium, Pandas, etc.) Solid understanding of HTML, CSS, XPath, JSON, and Regex for data parsing. Experience with SQL and managing relational databases. Familiarity with tools like Excel, Google Sheets, and data handling libraries. Understanding of ethical web scraping practices, bot detection, CAPTCHA, and rate limiting. Ability to work independently, manage deadlines, and prioritize tasks across multiple projects. Attention to detail in maintaining clean, accurate, and well-structured datasets. Preferred Qualifications: Experience using APIs (REST, GraphQL) for data extraction. Knowledge of cloud databases or data warehouses (e.g., AWS RDS, Google BigQuery). Exposure to CRM platforms (e.g., HubSpot, Salesforce) for lead or contact data syncing. Ability to visualize data using BI tools (Tableau, Google Data Studio, etc.) is a plus. Prior experience in domains like real estate, e-commerce, market research, or lead generation. Tools You Might Use: Python: Beautiful Soup, Scrapy, Selenium SQL: MySQL, PostgreSQL Data Tools: Excel, Google Sheets, Pandas APIs: REST, GraphQL Others: Postman, Zapier, Airflow (for automation)
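Converting scraped records into the structured formats this listing mentions (CSV, JSON) needs nothing beyond the standard library. A small sketch with invented lead-generation fields, writing to an in-memory buffer instead of a file:

```python
import csv, io, json

# Sketch of normalizing scraped records into structured exports (CSV and
# JSON). The records and field names are illustrative, not real leads.
records = [
    {"company": "Acme Corp", "city": "Delhi", "phone": "011-1234"},
    {"company": "Beta Ltd", "city": "Mumbai", "phone": "022-5678"},
]

# JSON export
json_blob = json.dumps(records, indent=2)

# CSV export (in-memory buffer; swap io.StringIO for open("leads.csv", "w"))
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["company", "city", "phone"])
writer.writeheader()
writer.writerows(records)
csv_blob = buf.getvalue()

print(csv_blob.splitlines()[0])   # company,city,phone
```

Declaring `fieldnames` explicitly pins the column order so downstream CRM imports stay stable even if individual scraped dicts gain extra keys (which `DictWriter` would otherwise reject).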
Posted 3 weeks ago
0 years
2 - 3 Lacs
mohali
On-site
About Zapbuild: "Zapbuild builds future-ready technology solutions for the transportation and logistics industry. We are deeply invested in helping the Transportation & Logistics industry and its players move forward with adaptive and innovative solutions, in order to thrive in rapidly transforming supply chains." Job Requirements: 1. Proficiency in Python and the Django REST framework. 2. Hands-on experience with MySQL, including schema design, query optimization, and performance tuning. 3. Experience in web scraping using Python tools like BeautifulSoup, Scrapy, or Selenium. 4. Docker experience for containerizing applications and managing environments. 5. Front-end development skills with JavaScript, HTML, and basic CSS. 6. Experience building and consuming RESTful APIs. 7. Knowledge of version control using Git. 8. Experience with Pandas/NumPy. Qualification: 1. Fresher to 3 months of experience 2. Willing to work from office 3. Comfortable with 9:30 am to 6:30 pm timings Job Types: Full-time, Permanent, Fresher Pay: ₹20,000.00 - ₹25,000.00 per month Benefits: Health insurance Provident Fund Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required) Work Location: In person
Posted 3 weeks ago
2.0 - 4.0 years
2 - 4 Lacs
himatnagar
Work from Office
Responsibilities: * Design, develop & maintain web scrapers using Python & Scrapy * Ensure data accuracy & compliance with website terms * Collaborate with cross-functional teams on project requirements Work from home
Posted 3 weeks ago