0 years
1 - 1 Lacs
India
On-site
Job Title: Python Web Scraper & UI/UX Designer
Location: Chennai
Employment Type: Full-time

About the Role: We are building lean, fast, and user-focused applications. This role blends two key capabilities: Python-based web scraping and UI/UX design. The ideal candidate can both automate the extraction of web data and design the user interface through which that data is presented. This is an application-facing role, meaning the scraping logic you build and the designs you create will be integrated directly into the products we ship. You will work closely with both developers and product owners to ensure seamless user experiences backed by live or regularly updated data.

Responsibilities:
Web Scraping & Automation:
- Build and maintain scalable web scrapers using tools such as BeautifulSoup, Scrapy, Selenium, or Playwright.
- Handle dynamic content, logins, pagination, rate limiting, and CAPTCHAs as needed.
- Deliver structured, cleaned data outputs in formats ready for integration (e.g., JSON, CSV, or direct database inputs).
- Integrate scraped data into application pipelines, ensuring relevance and reliability.
- Monitor and troubleshoot scraping scripts for accuracy and performance.
UI/UX Design:
- Translate business and user requirements into wireframes, mockups, and UI prototypes.
- Design clear, intuitive user flows and interfaces for web or mobile screens.
- Use tools such as Figma or Adobe XD to create design systems, components, and layouts.
- Work with developers to ensure accurate implementation of design intent.
- Iterate designs based on feedback, usability testing, and evolving product needs.

Required Skills:
- Strong hands-on experience with Python for web scraping and data automation.
- Proficiency with tools like BeautifulSoup, Selenium, or Scrapy.
- Solid understanding of HTML, CSS, and browser DOM structures.
- Proficiency in Figma or Adobe XD for wireframing and visual design.
- Good sense of visual aesthetics, typography, and user-centered design principles.
- Strong debugging, documentation, and version control skills.

Added Advantage (Nice to Have):
- Experience integrating scraped data into frontend or backend systems.
- Familiarity with Flutter/Dart or any frontend framework.
- Exposure to Firebase, REST APIs, or cloud deployments (AWS, GCP, etc.).
- Knowledge of basic frontend code (HTML, CSS, JS) for design-to-dev handoff.
- Understanding of database formats like Firestore, MongoDB, or SQL.

Eligibility:
- Bachelor's degree in Computer Science, Information Technology, Design, or a related field.
- Freshers with completed personal or academic projects are welcome to apply.
- Must demonstrate scraping and UI/UX skills via portfolio or GitHub.

Why Join Us:
- Build real products that integrate live data and elegant interfaces.
- Work in a collaborative, fast-paced team with direct impact on user experience.
- Opportunity to work across multiple modules — data, design, and deployment.
- Flexible environment with a focus on innovation, autonomy, and ownership.

Job Types: Full-time, Permanent, Fresher
Pay: ₹10,000.00 - ₹15,000.00 per month
Benefits: Health insurance, Life insurance
Schedule: Monday to Friday
Supplemental Pay: Performance bonus, Yearly bonus
Ability to commute/relocate: Siruseri, Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Required)
Education: Bachelor's (Required)
Willingness to travel: 50% (Preferred)
Work Location: In person
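For a concrete flavor of the scraping half of this role, here is a minimal, hedged sketch using requests and BeautifulSoup that emits integration-ready JSON; the URL and CSS selector are placeholders, not a real target:

import json

import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/listings", timeout=20)  # placeholder URL
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
items = [
    {"title": a.get_text(strip=True), "url": a["href"]}
    for a in soup.select("a.listing-link")  # placeholder selector
]

# Structured output ready to hand to an application pipeline.
print(json.dumps(items, ensure_ascii=False, indent=2))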
Posted 10 hours ago
1.5 years
0 Lacs
India
Remote
Urgent Opening: Web Scraping – Data Crawling, AI/ML
Location: Permanent Work From Home
Job Type: Full-Time | Permanent
Experience: 1.5+ Years (Preferred)

About the Role
We are looking for a skilled and experienced Python Developer with strong expertise in data crawling, web scraping, AI/ML, and CAPTCHA solving techniques. The ideal candidate is passionate about automation, data pipelines, and problem-solving with a deep understanding of the web ecosystem. This is a permanent remote opportunity, ideal for professionals looking to work in a flexible and innovative environment while delivering high-quality solutions in data acquisition and intelligent automation.

Key Responsibilities
- Design and implement scalable data crawling/scraping solutions using Python.
- Develop tools to bypass or solve CAPTCHAs (e.g., reCAPTCHA, hCaptcha) using AI/ML or third-party APIs.
- Write efficient and robust data extraction and parsing logic for large-scale web data.
- Build and maintain AI/ML models for tasks such as image recognition, pattern detection, and anomaly detection.
- Optimize crawling infrastructure for speed, reliability, and anti-blocking strategies (rotating proxies, headless browsers, etc.).
- Integrate with APIs and databases to store, manage, and process scraped data.
- Monitor and troubleshoot scraping systems and adapt to changes in target websites.
- Collaborate with the team to define requirements, plan deliverables, and implement best practices.

Required Skills & Qualifications
- 1.5+ years of hands-on experience with Python in web scraping/data crawling.
- Strong experience with Scrapy and Selenium.
- Deep understanding of CAPTCHA types and proven experience in solving or bypassing them.
- Proficient in AI/ML frameworks: TensorFlow, PyTorch, scikit-learn, or OpenCV.
- Experience with OCR tools (Tesseract, EasyOCR) and image pre-processing techniques.
- Familiarity with anti-bot techniques, headless browsers, and proxy rotation.
- Solid understanding of HTML, CSS, JavaScript, HTTP protocols, and website structure.
- Strong problem-solving skills and attention to detail.

Perks & Benefits
- Permanent Work from Home
- Flexible work hours
- Competitive salary based on experience
- Opportunities for skill development and upskilling
- Performance-based incentives

How to Apply
Interested candidates can email their updated resume and portfolio (if any) to jyoti@transformez.in with the subject line: Python Developer – Data Crawling & AI.
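As a quick illustration of the anti-blocking strategies named above (rotating proxies and randomized user agents), here is a minimal, hedged sketch; the proxy addresses and target URL are placeholders, not real infrastructure:

import random

import requests

# Hypothetical proxy pool and user agents; substitute a real, authorized service.
PROXIES = ["http://proxy1.example.com:8080", "http://proxy2.example.com:8080"]
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (X11; Linux x86_64)",
]

def fetch(url: str) -> requests.Response:
    # Pick a fresh proxy/user-agent pair per request to spread load.
    proxy = random.choice(PROXIES)
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    return requests.get(url, headers=headers,
                        proxies={"http": proxy, "https": proxy},
                        timeout=20)

if __name__ == "__main__":
    resp = fetch("https://example.com/listings")  # placeholder target
    print(resp.status_code, len(resp.text))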
Posted 15 hours ago
0.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka
On-site
About the Role:
Grade Level (for internal use): 08
Job Title: Associate Data Engineer
Location: Bangalore (Hybrid)

The Team: The Automotive Insights - Supply Chain and Technology and IMR department at S&P Global is dedicated to delivering critical intelligence and comprehensive analysis of the automotive industry's supply chain and technology. Our team provides actionable insights and data-driven solutions that empower clients to navigate the complexities of the automotive ecosystem, from manufacturing and logistics to technological innovations and market dynamics. We collaborate closely with industry stakeholders to ensure our research supports strategic decision-making and drives growth within the automotive sector. Join us to be at the forefront of transforming the automotive landscape with cutting-edge insights and expertise.

Responsibilities and Impact:
- Develop and maintain automated data pipelines to extract, transform, and load data from diverse online sources, ensuring high data quality.
- Build, optimize, and document web scraping tools using Python and related libraries to support ongoing research and analytics.
- Implement DevOps practices for deploying, monitoring, and maintaining machine learning workflows in production environments.
- Collaborate with data scientists and analysts to deliver reliable, well-structured data for analytics and modeling.
- Perform data quality checks, troubleshoot pipeline issues, and ensure alignment with internal taxonomies and standards.
- Stay current with advancements in data engineering, DevOps, and web scraping technologies, contributing to team knowledge and best practices.

What We’re Looking For:
Basic Required Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 1 to 3 years of hands-on experience in data engineering, including web scraping and ETL pipeline development using Python.
- Proficiency with Python programming and libraries such as Pandas, BeautifulSoup, Selenium, or Scrapy.
- Exposure to implementing and maintaining DevOps workflows, including model deployment and monitoring.
- Familiarity with containerization technologies (e.g., Docker) and CI/CD pipelines for data and ML workflows.
- Familiarity with cloud platforms (preferably AWS).

Key Soft Skills:
- Strong analytical and problem-solving skills, with attention to detail.
- Excellent communication and collaboration abilities for effective teamwork.
- Ability to work independently and manage multiple priorities.
- Curiosity and a proactive approach to learning and applying new technologies.

About S&P Global Mobility
At S&P Global Mobility, we provide invaluable insights derived from unmatched automotive data, enabling our customers to anticipate change and make decisions with conviction. Our expertise helps them to optimize their businesses, reach the right consumers, and shape the future of mobility. We open the door to automotive innovation, revealing the buying patterns of today and helping customers plan for the emerging technologies of tomorrow. For more information, visit www.spglobal.com/mobility.

What’s In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow.
At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH203 - Entry Professional (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 317963 Posted On: 2025-08-01 Location: Bangalore, Karnataka, India
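To ground the data-pipeline responsibilities listed in this posting, here is a hedged extract-transform-load sketch in Python; the endpoint, field names, and SQLite table are illustrative assumptions, not S&P Global systems:

import sqlite3

import pandas as pd
import requests

# Extract: pull rows from a hypothetical JSON endpoint.
rows = requests.get("https://example.com/api/vehicles", timeout=30).json()

# Transform: normalize into a typed frame and drop incomplete or duplicate records.
df = pd.DataFrame(rows)
df["model"] = df["model"].str.strip()
df = df.dropna(subset=["model", "year"]).drop_duplicates(subset=["model", "year"])

# Load: write to a local SQLite table (a stand-in for a real warehouse).
with sqlite3.connect("automotive.db") as conn:
    df.to_sql("vehicles", conn, if_exists="append", index=False)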
Posted 18 hours ago
2.0 - 6.0 years
0 Lacs
hyderabad, telangana
On-site
As a member of the Data Engineering team, your primary responsibility will be to handle various aspects of data extraction. This includes understanding the data requirements of the business group, reverse-engineering the website and its technology, developing web robots to automate data extraction, and building monitoring systems to ensure data integrity. You will also play a key role in managing changes to website dynamics and layout for clean downloads, creating scraping and parsing systems to structure raw data, and providing operations support for high availability and zero data losses. Moreover, you will be involved in storing extracted data in recommended databases, constructing scalable data extraction systems, and automating data pipelines.

The ideal candidate for this role should possess the following qualifications:
- 2-4 years of experience in website data extraction and scraping.
- Proficiency in relational databases, including writing complex SQL queries and ETL operations.
- Strong command of Python for data operations.
- Expertise in Python libraries such as Requests, urllib2, Selenium, Beautiful Soup, and Scrapy.
- Familiarity with HTTP requests and responses, HTML, CSS, XML, JSON, and JavaScript.
- Skill in using debugging tools in Chrome for reverse-engineering website dynamics.
- Academic background in BCA/MCA/BS/MS with a solid foundation in data structures and algorithms.
- Strong problem-solving, analytical, and debugging skills.

If you meet these qualifications and are enthusiastic about working in a dynamic data engineering environment, we encourage you to apply for this position.
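Reverse-engineering a site in Chrome DevTools often ends with calling the JSON endpoint the page itself uses; a minimal, hedged sketch of that pattern (the endpoint, parameters, and field names are hypothetical):

import requests

# Endpoint discovered in the browser's Network tab (hypothetical example).
API = "https://example.com/api/search"

session = requests.Session()
session.headers.update({"User-Agent": "Mozilla/5.0", "Accept": "application/json"})

resp = session.get(API, params={"q": "laptops", "page": 1}, timeout=20)
resp.raise_for_status()

for row in resp.json().get("results", []):
    # Field names depend on the site's actual payload.
    print(row.get("title"), row.get("price"))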
Posted 20 hours ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
You will be responsible for developing cutting-edge solutions by designing, developing, and maintaining robust web scraping solutions to extract large datasets from various websites, supporting our data-driven initiatives. Your role will involve mastering Python programming to implement and optimize sophisticated scraping scripts and tools. You will leverage industry-leading tools such as BeautifulSoup, Scrapy, Selenium, and other scraping frameworks to efficiently collect and process data. Additionally, you will innovate with AI technologies like ChatGPT to automate and enhance data extraction processes, pushing the boundaries of what is possible. It will be crucial to optimize data management by cleaning, organizing, and storing extracted data in structured formats for seamless analysis and usage. Ensuring peak performance by optimizing scraping scripts for efficiency, scalability, and reliability will also be part of your responsibilities. You will troubleshoot data scraping issues with precision to maintain data accuracy and completeness, along with maintaining clear and comprehensive documentation of scraping processes, scripts, and tools for transparency and knowledge sharing.

As a qualified candidate, you should have a minimum of 5 years of experience in web data scraping, with a strong emphasis on handling large datasets. Advanced skills in Python programming, especially in the context of web scraping, are essential for this role. You are expected to have in-depth knowledge and experience with tools such as BeautifulSoup, Scrapy, Selenium, and other relevant scraping tools. Strong expertise in data cleaning, organization, and storage, along with excellent problem-solving and analytical skills to address complex scraping challenges, will be required. Meticulous attention to detail is crucial to ensure data accuracy and completeness, and the ability to work independently, manage multiple tasks, and meet deadlines effectively is essential.

Preferred skills for this role include experience with API integration for data extraction, familiarity with cloud platforms like AWS, Azure, or Google Cloud for data storage and processing, understanding of database management systems and SQL for data storage and retrieval, and proficiency in using version control systems like Git.
Posted 21 hours ago
0 years
0 Lacs
India
Remote
About Rivan Solutions
Rivan Solutions is a web scraping and web designing company. We are based in Hyderabad, but we serve clients worldwide.

About the work from home job/internship
Selected intern's day-to-day responsibilities include:
1. Scraping websites using Python and BeautifulSoup/Scrapy or Selenium
2. Delivering the web scraping script within 6 to 7 hours
3. Working with Python and Beautiful Soup to scrape and automate websites
4. Storing data in SQL and NoSQL databases
5. Attending meetings regularly and delivering work on time

Preferred
Candidates who graduated in or before 2025 are preferred.

Skill(s) required: JSON, Python, SQL

Who can apply
Only those candidates can apply who:
1. are available for the work from home job/internship
2. can start the work from home job/internship between 3rd Aug 2025 and 3rd Oct 2025
3. are available for a duration of 6 months
4. have relevant skills and interests
* Women wanting to start/restart their career can also apply.

Perks: Certificate, Letter of recommendation, Flexible work hours
Job Types: Full-time, Internship
Contract length: 6 months
Pay: ₹1,000.00 per month
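Storing scraped rows in both a relational and a NoSQL store (point 4 above) can be as simple as the following hedged sketch; the database names and fields are illustrative, and a local MongoDB instance is assumed:

import sqlite3

from pymongo import MongoClient

rows = [{"title": "Example item", "url": "https://example.com/1"}]  # placeholder scrape output

# Relational: SQLite stands in for any SQL database here.
with sqlite3.connect("scrape.db") as conn:
    conn.execute("CREATE TABLE IF NOT EXISTS items (title TEXT, url TEXT UNIQUE)")
    conn.executemany("INSERT OR IGNORE INTO items VALUES (:title, :url)", rows)

# NoSQL: assumes a MongoDB server is reachable locally.
client = MongoClient("mongodb://localhost:27017")
client["scrape"]["items"].insert_many(rows)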
Posted 1 day ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Responsibilities
- Develop Cutting-Edge Solutions: Design, develop, and maintain robust web scraping solutions that extract large datasets from various websites, fueling our data-driven initiatives.
- Master Python Programming: Utilize advanced Python skills to implement and optimize sophisticated scraping scripts and tools.
- Leverage Advanced Tools: Employ industry-leading tools such as BeautifulSoup, Scrapy, Selenium, and other scraping frameworks to collect and process data efficiently.
- Innovate with AI: Use ChatGPT prompt skills to automate and enhance data extraction processes, pushing the boundaries of what's possible.
- Optimize Data Management: Clean, organize, and store extracted data in structured formats for seamless analysis and usage.
- Ensure Peak Performance: Optimize scraping scripts for performance, scalability, and reliability, ensuring top-notch efficiency.
- Troubleshoot with Precision: Identify and resolve data scraping issues, ensuring the data's accuracy and completeness.
- Document Thoroughly: Maintain clear and comprehensive documentation of scraping processes, scripts, and tools used for transparency and knowledge sharing.

Qualifications
- Experience: Minimum of 5 years in web data scraping, with a strong focus on handling large datasets.
- Python Expertise: Advanced skills in Python programming, particularly in the context of web scraping.
- Tool Proficiency: In-depth knowledge and experience with BeautifulSoup, Scrapy, Selenium, and other relevant scraping tools.
- Data Management: Strong skills in data cleaning, organization, and storage.
- Analytical Acumen: Excellent problem-solving and analytical skills to tackle complex scraping challenges.
- Detail-Oriented: Meticulous attention to detail to ensure data accuracy and completeness.
- Independence: Proven ability to work independently, managing multiple tasks and deadlines effectively.

Preferred Skills
- API Integration: Experience with integrating APIs for data extraction.
- Cloud Platforms: Familiarity with cloud platforms such as AWS, Azure, or Google Cloud for data storage and processing.
- Database Knowledge: Understanding of database management systems and SQL for data storage and retrieval.
- Version Control: Proficiency in using version control systems like Git.
(ref:hirist.tech)
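To illustrate the data-management step above, a minimal cleaning pass with pandas; the column names and sample rows are assumptions for the example:

import pandas as pd

raw_rows = [
    {"title": "  Widget A ", "url": "https://example.com/a", "price": "19.99"},
    {"title": "Widget A", "url": "https://example.com/a", "price": "19.99"},  # duplicate
    {"title": None, "url": "https://example.com/b", "price": "5"},
]

df = pd.DataFrame(raw_rows)
df["title"] = df["title"].str.strip()             # normalize whitespace
df["price"] = pd.to_numeric(df["price"], errors="coerce")
df = df.dropna(subset=["title"])                  # drop incomplete records
df = df.drop_duplicates(subset=["url"])           # dedupe on the stable key
df.to_csv("clean_items.csv", index=False)         # structured output for analysis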
Posted 1 day ago
2.0 years
0 Lacs
Rajkot, Gujarat, India
On-site
Setting up LAMP environments and hosting and configuring PHP projects. Troubleshooting network, process, and disk related issues. Installation and configuration of Samba and NFS. Setting up routers/modems and Linux firewalls (iptables). Configuring MySQL replication and clusters. Installation and configuration of a PXE server for installing Linux over the network. Technical writing skills, producing clear and unambiguous technical documentation and user stories. Writing shell scripts to automate tasks. Developing spiders using the Scrapy framework (Python) for crawling and scraping the web. Troubleshooting issues in PHP scripts and supporting web developers to develop websites. Experience of performance tuning on Apache server. Experience of testing server configuration and websites using different testing tools, e.g., ab and siege.

Position: 01
Required Experience: 6 months - 2 years
Technical Skills: Installation and configuration of Linux servers; troubleshooting network, process, and disk related issues.
Apply Now
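Since the role calls for Scrapy spiders, here is a small self-contained example against Scrapy's public practice site (quotes.toscrape.com); real targets and selectors will of course differ:

import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # Yield one structured item per quote block on the page.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # Follow pagination until the site runs out of pages.
        next_page = response.css("li.next a::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)

# Run with: scrapy runspider quotes_spider.py -O quotes.json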
Posted 2 days ago
0 years
0 Lacs
India
Remote
Data Science Intern
Location: Remote
Duration: 2-6 months
Type: Unpaid Internship

About Collegepur
Collegepur is an innovative platform dedicated to providing students with comprehensive information about colleges, career opportunities, and educational resources. We are building a dynamic team of talented individuals passionate about data-driven decision-making.

Job Summary
We are seeking a highly motivated Data Science Intern to join our team. This role involves working on data collection, web scraping, analysis, visualization, and machine learning to derive meaningful insights that enhance our platform’s functionality and user experience.

Responsibilities:
- Web Scraping: Collect and extract data from websites using tools like BeautifulSoup, Scrapy, or Selenium.
- Data Preprocessing: Clean, transform, and structure raw data for analysis.
- Exploratory Data Analysis (EDA): Identify trends and insights from collected data.
- Machine Learning: Develop predictive models for data-driven decision-making.
- Data Visualization: Create dashboards and reports using tools like Matplotlib, Seaborn, Power BI, or Tableau.
- Database Management: Work with structured and unstructured data, ensuring quality and consistency.
- Collaboration: Work with cross-functional teams to integrate data solutions into our platform.
- Documentation: Maintain records of methodologies, findings, and workflows.

Requirements:
- Currently pursuing or recently completed a degree in Data Science, Computer Science, Statistics, Mathematics, or a related field.
- Experience in web scraping using BeautifulSoup, Scrapy, or Selenium.
- Proficiency in Python/R and libraries like Pandas, NumPy, Scikit-learn, TensorFlow, or PyTorch.
- Familiarity with SQL and database management.
- Strong understanding of data visualization tools.
- Knowledge of APIs and cloud platforms (AWS, GCP, or Azure) is a plus.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.

Perks and Benefits:
- Remote work with flexible hours.
- Certificate of completion and Letter of Recommendation.
- Performance-based LinkedIn recommendations.
- Opportunity to work on real-world projects and enhance your portfolio.

If you are passionate about data science and web scraping and eager to gain hands-on experience, we encourage you to apply! (recruitment@collegepur.com)
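For a taste of the EDA and visualization work described, a hedged sketch; the CSV file and column names are stand-ins for whatever the scraping step actually produces:

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical dataset produced by the scraping step.
df = pd.read_csv("colleges.csv")

# Quick EDA: where are most listed colleges located?
top_states = df["state"].value_counts().head(10)
print(top_states)

# Simple visualization for a report or dashboard.
top_states.plot(kind="bar", title="Top 10 states by college count")
plt.tight_layout()
plt.savefig("top_states.png")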
Posted 2 days ago
3.0 years
0 Lacs
India
Remote
Job Title: AI Engineer – Web Crawling & Field Data Extraction
Location: Remote
Department: Engineering / Data Science
Experience Level: Mid to Senior
Employment Type: Contract to Hire

About the Role:
We are looking for a skilled AI Engineer with strong experience in web crawling, data parsing, and AI/ML-driven information extraction to join our team. You will be responsible for developing systems that automatically crawl websites, extract structured and unstructured data, and intelligently map the extracted content to predefined fields for business use. This role combines practical web scraping, NLP techniques, and AI model integration to automate workflows that involve large-scale content ingestion.

Key Responsibilities:
- Design and develop automated web crawlers and scrapers to extract information from various websites and online resources.
- Implement robust and scalable data extraction pipelines that convert semi-structured/unstructured data into structured field-level data.
- Use Natural Language Processing (NLP) and ML models to intelligently interpret and map extracted content to specific form fields or schemas.
- Build systems that can handle dynamic web content, CAPTCHAs, JavaScript-rendered pages, and anti-bot mechanisms.
- Collaborate with frontend/backend teams to integrate extracted data into user-facing applications.
- Monitor crawler performance, ensure compliance with legal/data policies, and manage scheduling, deduplication, and logging.
- Optimize crawling strategies using AI/heuristics for prioritization, entity recognition, and data validation.
- Create tools for auto-filling forms or generating structured records from crawled data.

Required Skills and Qualifications:
- Bachelor’s or Master’s degree in Computer Science, AI/ML, Data Science, or a related field.
- 3+ years of hands-on experience with web scraping frameworks (e.g., Scrapy, Puppeteer, Playwright, Selenium).
- Proficiency in Python, with experience in BeautifulSoup, lxml, requests, aiohttp, or similar libraries.
- Experience with NLP libraries (e.g., spaCy, NLTK, Hugging Face Transformers) to parse and map extracted data.
- Familiarity with ML-based data classification, extraction, and field mapping.
- Knowledge of structured data formats (JSON, XML, CSV) and RESTful APIs.
- Experience handling anti-scraping techniques and rate-limiting controls.
- Strong problem-solving skills, clean coding practices, and the ability to work independently.

Nice-to-Have:
- Experience with AI form understanding (e.g., LayoutLM, DocAI, OCR).
- Familiarity with Large Language Models (LLMs) for intelligent data labeling or validation.
- Exposure to data pipelines, ETL frameworks, or orchestration tools (Airflow, Prefect).
- Understanding of data privacy, compliance, and ethical crawling standards.

Why Join Us?
- Work on cutting-edge AI applications in real-world automation.
- Be part of a fast-growing and collaborative team.
- Opportunity to lead and shape intelligent data ingestion solutions from the ground up.
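For a concrete sense of the NLP-driven field mapping this role describes, a minimal sketch using spaCy's small English model; the target schema and sample sentence are made-up examples:

import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

text = "Acme Corp opened a new office in Berlin on 12 May 2024."
doc = nlp(text)

# Map recognized entity labels onto a predefined record schema.
schema = {"organization": None, "location": None, "date": None}
label_to_field = {"ORG": "organization", "GPE": "location", "DATE": "date"}

for ent in doc.ents:
    field = label_to_field.get(ent.label_)
    if field and schema[field] is None:
        schema[field] = ent.text

# Expected shape (exact values depend on the model):
# {'organization': 'Acme Corp', 'location': 'Berlin', 'date': '12 May 2024'}
print(schema)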
Posted 2 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Software Engineer - Content Parsing

The Opportunity
We're looking for a talented and detail-oriented Software Engineer - Content Parsing to join our dynamic team. In this role, you'll be crucial in extracting, categorizing, and structuring vast amounts of content from various web sources. You'll leverage your expertise in Python and related parsing technologies to build robust, scalable, and highly accurate parsing solutions. This position requires a strong focus on data quality, comprehensive testing, and the ability to implement effective alerting and notification systems.

Responsibilities:
- Design, develop, and maintain robust and scalable HTML parsing solutions to extract diverse web content.
- Implement advanced content categorization logic to accurately classify and tag extracted data based on predefined schemas and business rules, incorporating AI/ML techniques where applicable.
- Develop and integrate alerting and notification systems to monitor parsing performance, identify anomalies, and report on data quality issues.
- Write comprehensive unit, integration, and end-to-end test cases to ensure the accuracy, reliability, and robustness of parsing logic, covering all boundary conditions and edge cases.
- Optimize parsing performance and efficiency to handle large volumes of data.
- Troubleshoot and resolve parsing issues, adapting to changes in website structures and content formats.
- Contribute to the continuous improvement of our parsing infrastructure and methodologies, including the research and adoption of new AI-driven parsing techniques.
- Manage and deploy parsing solutions in a Linux environment.
- Collaborate with DevOps engineers to improve the scaling, deployment, and operational efficiency of parsing solutions.
- This role requires occasional weekend work, as content changes are typically deployed on weekends, necessitating monitoring and immediate adjustments.

Qualifications:
- Bachelor's degree in Computer Science or a closely related technical field is required.
- Experience in software development with a strong focus on data extraction and parsing.
- Proficiency in Python and its ecosystem, particularly with libraries for web scraping and parsing (e.g., Beautiful Soup, lxml, Scrapy, Playwright, Selenium).
- Demonstrated experience in parsing complex and unstructured HTML content into structured data formats.
- Understanding and practical experience with content categorization techniques (e.g., keyword extraction, rule-based classification, basic NLP concepts).
- Proven ability to design and implement effective alerting and notification systems (e.g., integrating with Slack, PagerDuty, email, custom dashboards).
- Attention to detail and strong unit testing skills, with a meticulous approach to covering all boundary conditions, error cases, and edge scenarios.
- Experience working in a Linux environment, including shell scripting and command-line tools.
- Familiarity with data storage solutions (e.g., SQL databases) and data serialization formats (e.g., JSON, XML).
- Experience with version control systems (e.g., Git).
- Excellent problem-solving skills.
- Strong communication and collaboration abilities.
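A hedged sketch of the categorization-plus-alerting combination described above; the rules, category names, and webhook URL are illustrative assumptions (Slack incoming webhooks accept a simple JSON "text" payload):

import requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

# Simple rule-based classifier over extracted page text.
RULES = {
    "pricing": ["price", "discount", "sale"],
    "news": ["announces", "launches", "release"],
}

def categorize(text: str) -> str:
    lowered = text.lower()
    for category, keywords in RULES.items():
        if any(kw in lowered for kw in keywords):
            return category
    return "uncategorized"

def alert(message: str) -> None:
    # Notify the team when parsing quality drops or anomalies appear.
    requests.post(SLACK_WEBHOOK, json={"text": message}, timeout=10)

docs = ["Acme announces a new parser", "Big sale: 20% off widgets"]
labels = [categorize(d) for d in docs]
if labels.count("uncategorized") / len(labels) > 0.5:
    alert("Parsing alert: over half of today's documents are uncategorized.")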
Posted 2 days ago
1.0 - 3.0 years
0 Lacs
Manali, Himachal Pradesh, India
On-site
Responsibilities
- Design and build the core backend architecture using Python, Node.js, or similar stacks.
- Create and manage scalable databases (PostgreSQL, MongoDB, etc.).
- Develop scraping engines to gather artist, curator, and event data.
- Integrate secure contract logic and payment flows using APIs or Web3 tools.
- Work closely with the product and frontend teams to co-create seamless user experiences.
- Take full ownership of APIs, hosting, and technical decisions.

Requirements
- 1 to 3 years of backend or data engineering experience, or strong independent projects.
- Good grip on the MERN stack and the Next.js framework.
- Confidence with JavaScript and frameworks like Flask, Django, Node, and Express.
- Strong grasp of data structures, version control (Git/GitHub), and agile development.
- Experience with REST APIs, GraphQL, and database design.
- Familiarity with SQL and NoSQL (PostgreSQL, MySQL, MongoDB, etc.).
- Data science toolkit including Python, R, machine learning/AI, and deep learning.
- Web scraping toolkit including libraries like Beautiful Soup, Scrapy, Selenium, etc.
- Basic understanding of working with APIs.
- Understanding of cloud platforms (AWS, GCP, Azure) is a plus.
- Fast problem-solving and debugging mindset.
- Basic understanding of Blockchain technology.

This job was posted by Saksham Singhal from Dream Stage.
Posted 3 days ago
3.0 - 5.0 years
9 - 11 Lacs
Pune
Work from Office
Hiring Senior Data Engineer for an AI-native startup. Work on scalable data pipelines, LLM workflows, web scraping (Scrapy, lxml), Pandas, APIs, and Django. Strong in Python, data quality, mentoring, and large-scale systems. Health insurance
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
surat, gujarat
On-site
TransForm Solutions is a trailblazer in the business process management and IT-enabled services industry, known for delivering top-notch solutions that drive business efficiency and growth. With a focus on innovation and excellence, the company empowers businesses to transform their operations and achieve their full potential. As the company continues to expand, they are looking for a dynamic Senior Web Data Scraping Engineer to join their team and help harness the power of data. Your mission in this role will involve developing cutting-edge solutions by designing, developing, and maintaining robust web scraping solutions that extract large datasets from various websites to fuel data-driven initiatives. You will need to master Python programming skills to implement and optimize sophisticated scraping scripts and tools. Utilizing industry-leading tools such as BeautifulSoup, Scrapy, Selenium, and other scraping frameworks will be essential for collecting and processing data efficiently. Additionally, you will be required to innovate with AI, using ChatGPT prompt skills to automate and enhance data extraction processes. Data management will be a key aspect of your responsibilities, involving cleaning, organizing, and storing extracted data in structured formats for seamless analysis and usage. Ensuring peak performance by optimizing scraping scripts for efficiency, scalability, and reliability will be crucial. You will also need to work independently, managing tasks and deadlines with minimal supervision, while demonstrating the ability to collaborate effectively with team members to understand data requirements and deliver actionable insights. Troubleshooting data scraping issues with precision to ensure data accuracy and completeness, as well as maintaining clear and comprehensive documentation of scraping processes, scripts, and tools used for transparency and knowledge sharing, will be part of your daily tasks. In terms of qualifications, the ideal candidate should have a minimum of 3 years of experience in web data scraping with a strong focus on handling large datasets. Advanced skills in Python programming, proficiency in relevant scraping tools such as BeautifulSoup, Scrapy, Selenium, and ChatGPT prompts, as well as strong data management and analytical skills, are required. Attention to detail, effective communication, and the ability to work independently are also essential qualities. Preferred skills include experience with API integration for data extraction, familiarity with cloud platforms like AWS, Azure, or Google Cloud for data storage and processing, understanding of database management systems and SQL, and proficiency in using version control systems like Git. In terms of compensation, the company offers a competitive base salary based on experience and skills, along with potential performance-based bonuses tied to successful project outcomes and contributions. Joining TransForm Solutions means being part of a forward-thinking team that values innovation, collaboration, and excellence. You will have the opportunity to work on groundbreaking projects, leveraging the latest technologies to transform data into actionable insights. The company is committed to professional growth and provides an environment where skills and expertise are recognized and rewarded. 
If you are a top-tier web data scraping engineer passionate about pushing the envelope and delivering impactful results, TransForm Solutions invites you to apply and be a key player in their journey to harness the power of data to transform businesses.
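One way to read the "innovate with AI" responsibility in this posting is LLM-assisted field extraction from scraped HTML; a hedged sketch against the OpenAI chat completions REST endpoint (the model name and HTML snippet are placeholders, and the model's reply must be validated before use):

import json
import os

import requests

html_snippet = "<div class='p'><h2>Widget A</h2><span>₹499 · In stock</span></div>"

prompt = (
    "Extract product name, price, and availability from this HTML. "
    "Reply with JSON only, using keys name, price, availability.\n" + html_snippet
)

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"model": "gpt-4o-mini", "messages": [{"role": "user", "content": prompt}]},
    timeout=60,
)
reply = resp.json()["choices"][0]["message"]["content"]

# LLM output is not guaranteed to be valid JSON; validate before trusting it.
try:
    fields = json.loads(reply)
except json.JSONDecodeError:
    fields = None
print(fields)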
Posted 3 days ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
The role involves both coding skills and broader engineering concepts like system design, API integration, and building complete workflows across multiple technology layers (frontend, backend, database, AI, and infrastructure). The responsibilities go beyond just writing code to include designing solutions and integrating various technologies into a cohesive system.

Preferred Skills:
- Experience with AI frameworks including LLMs (ChatGPT, Claude, etc.), TensorFlow, PyTorch, or Hugging Face
- Familiarity with API development and integration, particularly for data extraction and processing
- Good experience with web scraping (Selenium, Scrapy, Playwright)
- Understanding of modern web development with React.js for frontend applications
- Experience with backend frameworks like FastAPI or Node.js
- Knowledge of database technologies (PostgreSQL, Elasticsearch, Pinecone, or similar vector databases)
- Understanding of cloud infrastructure on Microsoft Azure, AWS, or GCP
- Experience building AI pipelines and workflows that connect multiple systems
- Knowledge of media metrics, PR reporting, and data visualization
- Strong system design thinking to create efficient AI-driven workflows
- Prior project experience with automation tools or AI assistants
- Ability to prototype quickly while maintaining code quality and documentation

This is a 3-month on-site internship role, so the selected candidate will be working from our office in DN Nagar, Andheri West, Mumbai.
Posted 4 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
eGrove Systems is looking for a Lead Python Software Engineer to join its team of experts.

Skill: Lead Python Software Engineer
Exp: 5+ years
NP: Immediate to 15 days
Location: Chennai/Madurai
Interested candidates can send their resume to annie@egrovesys.com

Required Skills:
- 5+ years of strong experience in Python and 2 years in the Django web framework.
- Experience or knowledge in implementing various design patterns.
- Good understanding of the MVC framework and object-oriented programming.
- Experience in PGSQL/MySQL and MongoDB.
- Good knowledge of frameworks, packages, and libraries such as Django/Flask, Django ORM, Unit Test, NumPy, Pandas, Scrapy, etc.
- Experience developing in a Linux environment, Git, and Agile methodology.
- Good to have knowledge in any one of the JavaScript frameworks: jQuery, Angular, ReactJS.
- Good to have experience in implementing charts and graphs using various libraries.
- Good to have experience in multi-threading and REST API management.

About Company
eGrove Systems is a leading IT solutions provider specializing in eCommerce, enterprise application development, AI-driven solutions, digital marketing, and IT consulting services. Established in 2008, we are headquartered in East Brunswick, New Jersey, with a global presence. Our expertise includes custom software development, mobile app solutions, DevOps, cloud services, AI chatbots, SEO automation tools, and workforce learning systems. We focus on delivering scalable, secure, and innovative technology solutions to enterprises, start-ups, and government agencies. At eGrove Systems, we foster a dynamic and collaborative work culture driven by innovation, continuous learning, and teamwork. We provide our employees with cutting-edge technologies, professional growth opportunities, and a supportive work environment to thrive in their careers.
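For a minimal flavor of the web-framework side of this stack, a tiny REST endpoint in Flask (one of the frameworks the posting lists); the in-memory data is a stand-in for rows that would normally come from PGSQL/MySQL via an ORM:

from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for database-backed records.
PRODUCTS = [
    {"id": 1, "name": "Widget A", "price": 499},
    {"id": 2, "name": "Widget B", "price": 799},
]

@app.route("/api/products")
def list_products():
    return jsonify(PRODUCTS)

@app.route("/api/products/<int:product_id>")
def get_product(product_id: int):
    match = next((p for p in PRODUCTS if p["id"] == product_id), None)
    if match is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(match)

if __name__ == "__main__":
    app.run(debug=True)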
Posted 4 days ago
2.0 years
4 Lacs
Chennai
On-site
We are hiring a tech-savvy and creative Social Media Handler with strong expertise in AI-powered content creation, web scraping, and automation of scraper workflows. You will be responsible for managing social media presence while automating content intelligence and trend tracking through custom scraping solutions. This is a hybrid role requiring both creative content skills and technical automation proficiency.

Key Responsibilities:
1) Social Media Management
- Plan and execute content calendars across platforms: Instagram, Facebook, YouTube, LinkedIn, and X.
- Create high-performing, audience-specific content using AI tools (ChatGPT, Midjourney, Canva AI, etc.).
- Engage with followers, track trends, and implement growth strategies.
2) AI Content Creation
- Use generative AI to write captions, articles, and hashtags.
- Generate AI-powered images, carousels, infographics, and reels.
- Repurpose long-form content into short-form video or visual content using tools like Descript or Lumen5.
3) Web Scraping & Automation
- Design and build automated web scrapers to extract data from websites, directories, competitor pages, and trending content sources.
- Schedule scraping jobs and set up automated pipelines using: Python (BeautifulSoup, Scrapy, Selenium, Playwright); task schedulers (Airflow, Cron, or Python scripts); cloud scraping or headless browsers.
- Parse and clean data for insight generation (topics, hashtags, keywords, sentiment, etc.).
- Store and organize scraped data in spreadsheets or databases for content inspiration and strategy.

Required Skills & Experience:
1) 2-5 years of relevant work experience in social media, content creation, or web scraping.
2) Proficiency in AI tools: Text (ChatGPT, Jasper, Copy.ai); Image (Midjourney, DALL·E, Adobe Firefly); Video (Pictory, Descript, Lumen5).
3) Strong Python skills for web scraping (Scrapy, BeautifulSoup, Selenium) and automation scripting.
4) Knowledge of data handling using Pandas, CSV, JSON, Google Sheets, or databases.
5) Familiarity with social media scheduling tools (Meta Business Suite, Buffer, Hootsuite).
6) Ability to work independently and stay updated on digital trends and platform changes.

Educational Qualification
- Degree in Marketing, Media, Computer Science, or Data Science preferred.
- Skills-based hiring encouraged – real-world experience matters more than formal education.

Work Location: Chennai (In-office role)
Salary: Commensurate with experience + performance bonus

Bonus Skills (Nice to Have):
1) Knowledge of website development (HTML, CSS, JS, WordPress/Webflow).
2) SEO and content analytics.
3) Basic video editing and animation (CapCut, After Effects).
4) Experience with automation platforms like Zapier, n8n, or Make.com.

To Apply: Please email your resume, portfolio, and sample projects to:
Job Type: Full-time
Pay: From ₹40,000.00 per month
Work Location: In person
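One way to realize the "schedule scraping jobs" bullet is the third-party schedule library; a hedged sketch (the target URL and hashtag pattern are placeholders, and in production a cron entry or Airflow DAG would typically play this role):

import re
import time

import requests
import schedule  # pip install schedule

def scrape_trending():
    # Placeholder target; a real job would hit pages you are authorized to scrape.
    html = requests.get("https://example.com/trending", timeout=20).text
    hashtags = re.findall(r"#\w+", html)
    print(f"Found {len(hashtags)} hashtags, sample: {hashtags[:5]}")

# Run every six hours, similar to a cron schedule like `0 */6 * * *`.
schedule.every(6).hours.do(scrape_trending)

while True:
    schedule.run_pending()
    time.sleep(60)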
Posted 6 days ago
5.0 years
0 Lacs
India
Remote
About EQL Global
EQL is redefining equity research with AI-powered tools that behave like a digital analyst. Our platform automates report writing, Q&A, document parsing and scenario modelling for analysts, brokers and investors. With tier-one financial institutions already on board, we're scaling fast and solving high-value problems in one of the world's most knowledge-intensive industries.

Role Overview
Join our distributed engineering team as a Backend & DevOps Engineer who bridges software development, cloud operations and information security. You will architect and maintain backend services, automate infrastructure, embed security controls, and drive our ISO 27001 compliance program.

Key Responsibilities
- Architect and implement RESTful APIs, microservices and real-time WebSocket endpoints in Python
- Automate infrastructure provisioning on AWS and Azure via Terraform or CloudFormation
- Containerize applications with Docker and orchestrate them on Kubernetes (AKS/EKS)
- Design, deploy and maintain CI/CD pipelines using Git, GitHub Actions or Jenkins
- Develop robust web-scraping scripts and automation tools with Python (Scrapy, Selenium)
- Configure, monitor and optimize MongoDB and relational databases (PostgreSQL/MySQL)
- Integrate security-by-design practices: encryption, authentication (OAuth2/JWT), vulnerability scanning
- Collaborate with security and compliance teams to implement ISO 27001 controls and documentation
- Lead risk assessments, incident response drills and audit preparations for certification
- Continuously measure system performance and reliability, troubleshooting production issues

Required Qualifications
- 5+ years in backend development and DevOps, ideally in remote teams
- Proven expertise in AWS (EC2, S3, Lambda) and Azure (VMs, Functions, AKS)
- Strong Python skills, including web scraping and automation frameworks
- Hands-on experience with Terraform, CloudFormation or Ansible for IaC
- Deep knowledge of containerization and Kubernetes operations
- Practical experience managing MongoDB and SQL databases
- Thorough understanding of ISO 27001 standards, ISMS implementation and audit requirements
- Familiarity with CI/CD tools, Git workflows and automated testing
- Excellent communication skills and ability to work asynchronously across time zones

Preferred (Bonus) Skills
- Experience in additional languages: Node.js, Go, Java.
- ISO 27001 Lead Implementer/Auditor certification (or in progress)
- Experience with service meshes (Istio), serverless architectures and message brokers (Kafka)
- Background in regulated environments (healthcare, finance) or enterprise compliance
- Contributions to open-source security or DevOps projects

What We Offer
- It is a 2-year fixed contract position offering a starting basic salary between ₹5,00,000 and ₹7,00,000 per annum, with a 15% increment every six months throughout the contract period, providing a structured and performance-linked compensation plan to support your growth and contribution.
- Fully remote, flexible work environment.
- Opportunities for global collaboration and professional growth.
- Access to cutting-edge tools, training, and conferences.
- Inclusive culture committed to diversity and equity
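To illustrate the OAuth2/JWT authentication item in a few lines, a hedged sketch with the PyJWT library; the secret and claims are placeholders, and a real deployment would use rotated keys behind a full OAuth2 flow:

import datetime

import jwt  # pip install PyJWT

SECRET = "change-me"  # placeholder; load from a secrets manager in production

def issue_token(subject: str) -> str:
    # Short-lived token with standard subject and expiry claims.
    payload = {
        "sub": subject,
        "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=1),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on bad tokens.
    return jwt.decode(token, SECRET, algorithms=["HS256"])

token = issue_token("analyst@example.com")
print(verify_token(token)["sub"])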
Posted 1 week ago
5.0 years
8 - 15 Lacs
Chennai
On-site
eGrove Systems is looking for a Senior Python Developer to join its team of experts.

Skill: Senior Python Developer
Exp: 5+ years
NP: Immediate to 15 days
Location: Chennai/Madurai
Interested candidates can send their resume to annie@egrovesys.com

Required Skills:
- 5+ years of strong experience in Python and 2 years in the Django web framework.
- Experience or knowledge in implementing various design patterns.
- Good understanding of the MVC framework and object-oriented programming.
- Experience in PGSQL/MySQL and MongoDB.
- Good knowledge of frameworks, packages, and libraries such as Django/Flask, Django ORM, Unit Test, NumPy, Pandas, Scrapy, etc.
- Experience developing in a Linux environment, Git, and Agile methodology.
- Good to have knowledge in any one of the JavaScript frameworks: jQuery, Angular, ReactJS.
- Good to have experience in implementing charts and graphs using various libraries.
- Good to have experience in multi-threading and REST API management.

About Company
eGrove Systems is a leading IT solutions provider specializing in eCommerce, enterprise application development, AI-driven solutions, digital marketing, and IT consulting services. Established in 2008, we are headquartered in East Brunswick, New Jersey, with a global presence. Our expertise includes custom software development, mobile app solutions, DevOps, cloud services, AI chatbots, SEO automation tools, and workforce learning systems. We focus on delivering scalable, secure, and innovative technology solutions to enterprises, startups, and government agencies. At eGrove Systems, we foster a dynamic and collaborative work culture driven by innovation, continuous learning, and teamwork. We provide our employees with cutting-edge technologies, professional growth opportunities, and a supportive work environment to thrive in their careers.

Job Type: Full-time
Pay: ₹800,000.00 - ₹1,500,000.00 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Web Scraper Developer (C2H) with 6-7 years of experience, you will develop web scrapers that interact with web pages. Your role will involve designing and implementing web scraping solutions to extract data from various web sources. You should possess strong programming skills in languages commonly used for web scraping, such as Python or JavaScript, and be familiar with web scraping libraries and frameworks like Beautiful Soup, Selenium, and Scrapy. You will automate data collection processes, ensure data quality through cleansing and validation, and prepare and transform data for reporting, analytics, and integration into other systems. As part of your responsibilities, you will collaborate with cross-functional teams to understand data requirements and develop effective solutions. Documenting processes and creating technical documentation for implemented solutions will be a critical aspect of your role, and you will provide knowledge transfer to team members and stakeholders regarding automated processes and tools. In this remote position, you will play a vital role in enhancing data extraction processes and contributing to the overall efficiency of web scraping activities.
Posted 1 week ago
0 years
0 Lacs
Mohali district, India
On-site
We are seeking a highly skilled AI/ML Web Scraping Specialist to join our data engineering and analytics team. The ideal candidate will have hands-on experience in building robust web scraping pipelines focused on Instagram and other social media platforms (e.g., Facebook, TikTok, YouTube, X/Twitter). The role involves designing scalable scraping architectures, solving anti-bot challenges, and applying machine learning to classify, enrich, or analyze social media content.

Key Responsibilities
- Design and implement advanced web scraping tools, scripts, and pipelines using Python (BeautifulSoup, Scrapy, Selenium, Playwright, etc.).
- Build robust scrapers for social media platforms like Instagram, TikTok, X (Twitter), Facebook, and LinkedIn, bypassing rate limits and anti-bot mechanisms (Cloudflare, reCAPTCHA, etc.).
- Leverage APIs (when available) and reverse-engineer web/mobile requests to extract structured data.
- Develop and train ML models for tasks such as content categorization, influencer classification, sentiment analysis, engagement prediction, etc.
- Automate scraping workflows, schedule jobs (using Airflow/Cron), and store data in NoSQL or relational databases.
- Maintain and optimize scraping performance, and handle edge cases or UI changes in target platforms.
- Work with large-scale data pipelines, and ensure clean, deduplicated, and enriched datasets.
- Collaborate with product, marketing, and data science teams to provide actionable insights from social media data.

Required Skills and Qualifications
- Strong programming skills in Python with proven experience using Scrapy, Selenium, Playwright, or Puppeteer.
- Deep knowledge of HTTP, HTML DOM traversal, JavaScript rendering, proxies, user agents, and browser automation.
- Solid understanding of Instagram’s data structures, public endpoints, GraphQL queries, and security challenges.
- Familiarity with anti-bot bypass techniques: rotating proxies, CAPTCHA solving (2Captcha, AntiCaptcha), session management.
- Hands-on experience in training and deploying ML models (NLP, classification, clustering) using scikit-learn, TensorFlow, or PyTorch.
- Experience with MongoDB, PostgreSQL, or Elasticsearch for data storage and retrieval.
- Good understanding of data privacy, legal considerations, and ethical scraping practices.

Preferred Skills
- Experience with cloud platforms (AWS, GCP, Azure) and containerization (Docker/Kubernetes).
- Knowledge of Instagram Business APIs, Facebook Graph API, and TikTok’s unofficial endpoints.
- Prior work on influencer discovery, brand monitoring, or social listening tools.
- Experience in building data dashboards using tools like Streamlit, Power BI, or Tableau.
- Contributions to open-source scraping libraries or ML projects.

Tools & Technologies You Might Use
- Python, Scrapy, Selenium, Playwright, Puppeteer
- Pandas, NumPy, scikit-learn, OpenAI APIs
- PostgreSQL, MongoDB, Redis, Elasticsearch
- AWS Lambda, EC2, S3, Cloud Functions
- Git, Docker, CI/CD pipelines
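For a concrete sense of the "influencer classification" task, a toy scikit-learn text classifier; the captions and labels are fabricated training examples, not real data:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny fabricated training set; a real model would use thousands of labeled posts.
captions = [
    "new leg day routine at the gym", "protein shake recipe after workout",
    "my skincare morning ritual", "best foundation for oily skin",
    "unboxing the latest flagship phone", "hands-on review of the new laptop",
]
labels = ["fitness", "fitness", "beauty", "beauty", "tech", "tech"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # word and bigram features
    LogisticRegression(max_iter=1000),
)
clf.fit(captions, labels)

print(clf.predict(["quick dumbbell workout for beginners"]))  # likely ['fitness']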
Posted 1 week ago
5.0 years
0 Lacs
Delhi
On-site
About Cisive Cisive is a trusted partner for comprehensive, high-risk compliance-driven background screening and workforce monitoring solutions, specializing in highly regulated industries—such as healthcare, financial services, and transportation. We catch what others miss, and we are dedicated to helping our clients effortlessly secure the right talent. As a global leader, Cisive empowers organizations to hire with confidence. Through our PreCheck division, Cisive provides specialized background screening and credentialing solutions tailored for healthcare organizations, ensuring patient and workforce safety. Driver iQ, our transportation-focused division, delivers FMCSA-compliant screening and monitoring solutions that help carriers hire and retain the safest drivers on the road. Unlike traditional background screening providers, Cisive takes a technology-first approach powered by advanced automation, human expertise, and compliance intelligence—all delivered through a scalable platform. Our solutions include continuous workforce monitoring, identity verification, criminal record screening, license monitoring, drug & health screening, and global background checks. Job Summary The Senior Software Developer is responsible for designing and delivering complex, scalable software systems, leading technical initiatives, and mentoring junior developers. This role plays a key part in driving high-impact projects and ensuring the delivery of robust, maintainable solutions. In addition to core development duties, the role works closely with the business to identify opportunities for automation and web scraping to improve operational efficiency. The Senior Software Developer will collaborate with Cisive’s Software Development team and client stakeholders to support, analyze, mine, and report on IT and business data—focusing on optimizing data handling for web scraping processes. This individual will manage and consult on data flowing into and out of Cisive systems, ensuring data integrity, performance, and compliance with operational standards. The role is critical to achieving service excellence and automation across Cisive’s diverse product offerings and will continuously strive to enhance process efficiency and data flow across platforms. 
Duties and Responsibilities
- Lead the design, architecture, and implementation of scalable and maintainable web scraping solutions using the Scrapy framework, integrated with tools such as Kafka, Zookeeper, and Redis.
- Develop and maintain web crawlers to automate data extraction from various sources, ensuring alignment with user and application requirements.
- Research, design, and implement automation strategies across multiple platforms, tools, and technologies to optimize business processes.
- Monitor, troubleshoot, and resolve issues affecting the performance, reliability, and stability of scraping systems and automation tools.
- Serve as a Subject Matter Expert (SME) for automation systems, providing guidance and support to internal teams.
- Analyze and validate extracted data to ensure accuracy, integrity, and compliance with Cisive’s data standards.
- Define, implement, and enforce data requirements, standards, and best practices to ensure consistent and efficient operations.
- Collaborate with stakeholders and end users to define technical requirements, business goals, and alternative solutions for data collection and reporting.
- Create, manage, and document reports, processes, policies, and project plans, including risk assessments and goal tracking.
- Conduct code reviews, enforce coding standards, and provide technical leadership and mentorship to development team members.
- Proactively identify and mitigate technical risks, recommending improvements in technologies, tools, and processes.
- Drive the adoption of modern development tools, frameworks, and best practices.
- Contribute to strategic planning related to automation initiatives and product development.
- Ensure clear, thorough communication and documentation across teams to support knowledge sharing and training.

Minimum Qualifications
- Bachelor’s degree in Computer Science, Software Engineering, or related field.
- 5+ years of professional software development experience.
- Strong proficiency in HTML, XML, XPath, XSLT, and Regular Expressions for data extraction and transformation.
- Hands-on experience with Visual Studio.
- Strong proficiency in Python.
- Some experience with C# .NET.
- Solid experience with MS SQL Server, with strong skills in SQL querying and data analysis.
- Experience with web scraping, particularly using the Scrapy framework integrated with Kafka, Zookeeper, and Redis.
- Experience with .NET automation tools such as Selenium.
- Understanding of CAPTCHA-solving services and working with proxy services.
- Experience working in a Linux environment is a plus.
- Highly self-motivated and detail-oriented, with a proactive, goal-driven mindset.
- Strong team player with dependable work habits and well-developed interpersonal skills.
- Excellent verbal and written communication skills.
- Demonstrates willingness and flexibility to adapt schedule when necessary to meet client needs.
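As a sketch of how a Scrapy deployment can hand items to Kafka, here is a minimal item pipeline using the kafka-python client; the broker address, topic, and settings path are placeholders, and the Zookeeper/Redis pieces mentioned in the posting are omitted for brevity:

import json

from kafka import KafkaProducer  # pip install kafka-python

class KafkaExportPipeline:
    """Scrapy item pipeline that publishes each scraped item to a Kafka topic."""

    def open_spider(self, spider):
        self.producer = KafkaProducer(
            bootstrap_servers="localhost:9092",  # placeholder broker
            value_serializer=lambda v: json.dumps(v).encode("utf-8"),
        )

    def process_item(self, item, spider):
        self.producer.send("scraped-items", dict(item))
        return item

    def close_spider(self, spider):
        self.producer.flush()

# Enable in settings.py (path is illustrative):
# ITEM_PIPELINES = {"myproject.pipelines.KafkaExportPipeline": 300}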
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Python Developer – Web Scraping & Data Processing
About the Role
We are seeking a skilled and detail-oriented Python Developer with hands-on experience in web scraping, document parsing (PDF, HTML, XML), and structured data extraction. You will be part of a core team working on aggregating biomedical content from diverse sources, including grant repositories, scientific journals, conference abstracts, treatment guidelines, and clinical trial databases.
Key Responsibilities
Develop scalable Python scripts to scrape and parse biomedical data from websites, pre-print servers, citation indexes, journals, and treatment guidelines.
Build robust modules for splitting multi-record documents (PDFs, HTML, etc.) into individual content units (a sketch follows this posting).
Implement NLP-based field extraction pipelines using libraries like spaCy, NLTK, or regex for metadata tagging.
Design and automate workflows using schedulers like cron, Celery, or Apache Airflow for periodic scraping and updates.
Store parsed data in relational (PostgreSQL) or NoSQL (MongoDB) databases with efficient schema design.
Ensure robust logging, exception handling, and content quality validation across all processes.
Required Skills and Qualifications
3+ years of hands-on experience in Python, especially for data extraction, transformation, and loading (ETL).
Strong command of web scraping libraries: BeautifulSoup, Scrapy, Selenium, Playwright.
Proficiency in PDF parsing libraries: PyMuPDF, pdfminer.six, pdfplumber.
Experience with HTML/XML parsers: lxml, XPath, html5lib.
Familiarity with regular expressions, NLP, and field extraction techniques.
Working knowledge of SQL and/or NoSQL databases (MySQL, PostgreSQL, MongoDB).
Understanding of API integration (RESTful APIs) for structured data sources.
Experience with task schedulers and workflow orchestrators (cron, Airflow, Celery).
Version control with Git/GitHub and comfort working in collaborative environments.
Good to Have
Exposure to biomedical or healthcare data parsing (e.g., abstracts, clinical trials, drug labels).
Familiarity with cloud environments like AWS (Lambda, S3).
Experience with data validation frameworks and building QA rules.
Understanding of ontologies and taxonomies (e.g., UMLS, MeSH) for content tagging.
Why Join Us
Opportunity to work on cutting-edge biomedical data aggregation for large-scale AI and knowledge graph initiatives.
Collaborative environment with a mission to improve access to, and insights from, scientific literature.
Flexible work arrangements and access to industry-grade tools and infrastructure.
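The "splitting multi-record documents" responsibility above maps naturally onto PyMuPDF, one of the PDF libraries the posting lists. Below is a minimal sketch, assuming a hypothetical "Abstract No." record header and input file; real conference or guideline PDFs would each need their own delimiter heuristics.

import re

import fitz  # PyMuPDF

# Assumed record header; real sources need their own patterns.
RECORD_DELIMITER = re.compile(r"^Abstract No\.\s*\d+", re.MULTILINE)


def split_records(pdf_path: str) -> list[str]:
    # Concatenate the text of all pages, then cut at each record header.
    with fitz.open(pdf_path) as doc:
        full_text = "\n".join(page.get_text() for page in doc)
    starts = [match.start() for match in RECORD_DELIMITER.finditer(full_text)]
    if not starts:
        return [full_text]  # single-record document
    bounds = starts + [len(full_text)]
    return [full_text[a:b].strip() for a, b in zip(bounds, bounds[1:])]


if __name__ == "__main__":
    # Hypothetical input file, for illustration only.
    for i, record in enumerate(split_records("conference_abstracts.pdf"), 1):
        print(f"--- record {i} ---")
        print(record[:200])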
Posted 1 week ago
0 years
0 Lacs
Jabalpur, Madhya Pradesh, India
On-site
Job Overview:
We are looking for a highly skilled Python Developer with strong experience in building both web and mobile app crawlers. The ideal candidate should have in-depth knowledge of Scrapy, Selenium, REST APIs, proxy rotation, and techniques to bypass anti-scraping mechanisms. You'll be responsible for building scalable, stealthy crawlers that extract data from websites and mobile apps (via APIs or by reverse engineering traffic).
Responsibilities:
Develop and maintain web crawlers using Scrapy, Selenium, and custom spiders.
Build and manage mobile app crawlers by intercepting and decoding API requests, using tools like mitmproxy, Charles Proxy, or Burp Suite.
Handle proxy rotation, user-agent spoofing, cookie/session management, and headless browsers (a rotation middleware sketch follows this posting).
Monitor and adapt to changing structures of target websites and apps to maintain scraping accuracy.
Optimize performance, manage rate limits, and avoid detection (CAPTCHAs, IP blocks).
Structure and store extracted data in JSON, CSV, or databases.
Maintain logs, error tracking, and reprocessing pipelines for failed jobs.
Required Skills:
Strong expertise in Python with deep knowledge of Scrapy, Selenium, and requests/BeautifulSoup.
Experience with mobile app traffic analysis using proxies/sniffers.
Understanding of RESTful APIs, HTTP methods, and JSON/XML formats.
Familiarity with proxy services, rotating residential/datacenter IPs, and anti-bot evasion.
Solid grasp of HTML, DOM parsing, and browser automation.
Hands-on experience with Git, the Linux command line, and virtual environments.
Preferred Skills:
Experience with tools like mitmproxy, Charles Proxy, and Burp Suite.
Familiarity with headless browsers such as Puppeteer or Playwright.
Ability to reverse engineer API calls from Android/iOS apps.
Knowledge of Docker, cloud deployment (AWS/GCP), and job schedulers.
Basic understanding of CAPTCHA-solving services (2Captcha, CapMonster, etc.).
Bonus Points:
Experience building crawlers for domains like e-commerce, travel, social media, or financial data.
Experience with CI/CD pipelines for automated crawling workflows.
Knowledge of data cleaning, ETL, or streaming pipelines (Kafka, Airflow, etc.).
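As a minimal sketch of the proxy-rotation and user-agent-spoofing duties above, the Scrapy downloader middleware below picks a fresh proxy and User-Agent for each request. The proxy endpoints and UA strings are placeholder assumptions; in production the pool would come from the rotating residential/datacenter services the posting mentions.

import random

PROXIES = [
    "http://proxy1.example.com:8000",  # placeholder endpoints, not real proxies
    "http://proxy2.example.com:8000",
]
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (X11; Linux x86_64; rv:124.0) Gecko/20100101 Firefox/124.0",
]


class RotatingProxyMiddleware:
    def process_request(self, request, spider):
        # Scrapy's built-in HttpProxyMiddleware routes the request through
        # whatever the 'proxy' meta key holds, so setting it here suffices.
        request.meta["proxy"] = random.choice(PROXIES)
        request.headers["User-Agent"] = random.choice(USER_AGENTS)

Registering it in settings.py, e.g. DOWNLOADER_MIDDLEWARES = {"myproject.middlewares.RotatingProxyMiddleware": 350}, runs it ahead of the built-in proxy middleware at priority 750; the module path here is again an assumption.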
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
eGrove Systems is looking for a Lead Django Backend Developer to join its team of experts.
Skill: Lead Django Backend Developer
Experience: 5+ years
Notice Period: Immediate to 15 days
Location: Chennai/Madurai
Interested candidates can send their resume to annie@egrovesys.com
Required Skills:
5+ years of strong experience in Python and 2+ years in the Django web framework.
Experience or knowledge in implementing various design patterns.
Good understanding of the MVC framework and object-oriented programming.
Experience with PostgreSQL/MySQL and MongoDB.
Good knowledge of frameworks, packages, and libraries such as Django/Flask, the Django ORM, unit testing, NumPy, Pandas, and Scrapy (a short ORM sketch follows this posting).
Experience developing in a Linux environment with Git and Agile methodology.
Good to have: knowledge of at least one JavaScript framework (jQuery, Angular, ReactJS).
Good to have: experience implementing charts and graphs using various libraries.
Good to have: experience with multi-threading and REST API management.
About Company
eGrove Systems is a leading IT solutions provider specializing in eCommerce, enterprise application development, AI-driven solutions, digital marketing, and IT consulting services. Established in 2008, we are headquartered in East Brunswick, New Jersey, with a global presence. Our expertise includes custom software development, mobile app solutions, DevOps, cloud services, AI chatbots, SEO automation tools, and workforce learning systems. We focus on delivering scalable, secure, and innovative technology solutions to enterprises, startups, and government agencies.
At eGrove Systems, we foster a dynamic and collaborative work culture driven by innovation, continuous learning, and teamwork. We provide our employees with cutting-edge technologies, professional growth opportunities, and a supportive work environment to thrive in their careers.
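Since the role centers on the Django ORM, here is a minimal sketch of the model-plus-queryset style it implies; the Job model, its fields, and the query are illustrative assumptions, not an eGrove schema.

from django.db import models


class Job(models.Model):
    # Illustrative fields only.
    title = models.CharField(max_length=200)
    location = models.CharField(max_length=100)
    posted_at = models.DateTimeField(auto_now_add=True)

    class Meta:
        ordering = ["-posted_at"]


# Querysets are composed lazily through the ORM instead of raw SQL:
recent_chennai_jobs = Job.objects.filter(location__icontains="chennai")[:10]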
Posted 1 week ago