0.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
Data Science and AI Developer

**Job Description:** We are seeking a highly skilled and motivated Data Science and AI Developer to join our dynamic team. As a Data Science and AI Developer, you will be responsible for leveraging cutting-edge technologies to develop innovative solutions that drive business insights and enhance decision-making processes.

**Key Responsibilities:**
1. Develop and deploy machine learning models for predictive analytics, classification, clustering, and anomaly detection.
2. Design and implement algorithms for data mining, pattern recognition, and natural language processing.
3. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
4. Utilize advanced statistical techniques to analyze complex datasets and extract actionable insights.
5. Implement scalable data pipelines for data ingestion, preprocessing, feature engineering, and model training.
6. Stay updated with the latest advancements in data science, machine learning, and artificial intelligence research.
7. Optimize model performance and scalability through experimentation and iteration.
8. Communicate findings and results to stakeholders through reports, presentations, and visualizations.
9. Ensure compliance with data privacy regulations and best practices in data handling and security.
10. Mentor junior team members and provide technical guidance and support.

**Requirements:**
1. Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or a related field.
2. Proven experience in developing and deploying machine learning models in production environments.
3. Proficiency in programming languages such as Python, R, or Scala, with strong software engineering skills.
4. Hands-on experience with machine learning libraries/frameworks such as TensorFlow, PyTorch, Scikit-learn, or Spark MLlib.
5. Solid understanding of data structures, algorithms, and computer science fundamentals.
6. Excellent problem-solving skills and the ability to think creatively to overcome challenges.
7. Strong communication and interpersonal skills, with the ability to work effectively in a collaborative team environment.
8. Certification in Data Science, Machine Learning, or Artificial Intelligence (e.g., Coursera, edX, Udacity, etc.).
9. Experience with cloud platforms such as AWS, Azure, or Google Cloud is a plus.
10. Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) is an advantage.

**Technical Stack:**
- Data Manipulation and Analysis: NumPy, Pandas
- Data Visualization: Matplotlib, Seaborn, Power BI
- Machine Learning Libraries: Scikit-learn, TensorFlow, Keras
- Statistical Analysis: SciPy
- Web Scraping: Scrapy
- IDE: PyCharm, Google Colab

**Web Development Skills:**
- HTML/CSS/JavaScript/React JS: proficiency in these core web development technologies is a must.
- Python Django expertise: in-depth knowledge of e-commerce functionality or deep Python Django knowledge.
- Theming: proven experience in designing and implementing custom themes for Python websites.
- Responsive design: strong understanding of responsive design principles and the ability to create visually appealing and user-friendly interfaces for various devices.
- Problem solving: excellent problem-solving skills with the ability to troubleshoot and resolve issues independently.
- Collaboration: ability to work closely with cross-functional teams, including marketing and design, to bring creative visions to life.
- Interns must know how to connect the front end to data science components, and vice versa.

**Benefits:**
- Competitive salary package
- Flexible working hours
- Opportunities for career growth and professional development
- Dynamic and innovative work environment

Job Type: Full-time
Pay: ₹8,000.00 - ₹12,000.00 per month
Schedule: Day shift
Ability to commute/relocate: Thiruvananthapuram, Kerala: Reliably commute or planning to relocate before starting work (Preferred)
Work Location: In person
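Among the responsibilities above, anomaly detection is the easiest to sketch with the listed stack (NumPy). This is a minimal illustration, not part of the posting; the threshold and sensor-style readings are made-up values:

```python
import numpy as np

def zscore_anomalies(values, threshold=3.0):
    """Flag points whose absolute z-score exceeds the threshold."""
    arr = np.asarray(values, dtype=float)
    mean, std = arr.mean(), arr.std()
    if std == 0:  # constant series: nothing can be anomalous
        return np.zeros(arr.shape, dtype=bool)
    z = np.abs((arr - mean) / std)
    return z > threshold

# Hypothetical readings: the last value is an obvious outlier.
readings = [10.1, 9.8, 10.0, 10.2, 9.9, 55.0]
mask = zscore_anomalies(readings, threshold=2.0)
```

A looser threshold flags more points; production systems would typically use rolling statistics or a trained model instead of a global mean.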
Posted 3 months ago
5.0 years
0 Lacs
Greater Delhi Area
Remote
ABOUT THE PYTHON DATA ENGINEER ROLE: We are looking for a skilled Python Data Engineer to join our team and work on building high-performance applications and scalable data solutions. In this role, you will be responsible for designing, developing, and maintaining robust Python-based applications, optimizing data pipelines, and integrating various APIs and databases. This is more than just a coding role: it requires strategic thinking, creativity, and a passion for data-driven decision-making to drive results and innovation.

KEY RESPONSIBILITIES:
- Develop, test, and maintain efficient Python applications.
- Design, develop, and maintain ETL pipelines for efficient data extraction, transformation, and loading.
- Implement and integrate APIs, web scraping techniques, and database queries to extract data from various sources.
- Design and implement algorithms for data processing, transformation, and analysis.
- Write optimized SQL queries and work with relational databases to manage and analyse large datasets.
- Collaborate with cross-functional teams to understand technical requirements and deliver high-quality solutions.
- Ensure code quality, performance, and scalability through best practices and code reviews.
- Stay updated with the latest advancements in Python, data engineering, and backend development.

REQUIRED QUALIFICATIONS:
- Bachelor’s/Master’s degree in Computer Science, Engineering, or a related field.
- 3–5+ years of hands-on experience as a Data Engineer using Python.
- Proficiency in Python frameworks and libraries such as Pandas, NumPy, and Scrapy.
- Experience with data visualization tools such as Power BI and Tableau.
- Strong understanding of relational databases and SQL.
- Experience working with cloud platforms such as AWS.
- Strong problem-solving skills with an analytical mindset.
- Excellent communication skills and the ability to work in a collaborative team environment.

WHY JOIN US?
- Highly inclusive and collaborative culture built on mutual respect.
- Focus on core values, initiative, leadership, and adaptability.
- Strong emphasis on personal and professional development.
- Flexibility to work remotely and/or hybrid indefinitely.

ABOUT WIN: Founded in 1993, WIN is a highly innovative proptech company revolutionizing the real estate industry with cutting-edge software platforms and products. With the stability and reputation of a 30-year legacy paired with the curiosity and agility of a start-up, we’ve been recognized as an Entrepreneur 500 company, one of the Fastest Growing Companies, and the Most Innovative Home Services Company.

OUR CULTURE: Our colleagues are driven by curiosity, tinkering, and a desire to make an impact. They enjoy a culture of high energy and collaboration where we listen to each other with empathy, experience personal and professional growth, and celebrate small victories and big accomplishments. Learn more about our company and culture: https://www.linkedin.com/company/winhomeinspection/life
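The ETL responsibilities above (extract, transform, load with Pandas) can be sketched in a few lines. This is an illustrative example only; the raw records, field names, and the in-memory "load" step are assumptions, not part of the role description:

```python
import pandas as pd

# Hypothetical raw records standing in for an API or CSV extract.
raw = [
    {"order_id": 1, "city": " delhi ", "amount": "1200"},
    {"order_id": 2, "city": "Mumbai", "amount": "800"},
    {"order_id": 3, "city": "delhi", "amount": "450"},
]

def transform(records):
    df = pd.DataFrame(records)
    df["city"] = df["city"].str.strip().str.title()  # normalize text fields
    df["amount"] = pd.to_numeric(df["amount"])       # enforce numeric types
    return df

def load_summary(df):
    # Aggregate for downstream reporting; a real "load" would write to SQL.
    return df.groupby("city")["amount"].sum().to_dict()

summary = load_summary(transform(raw))
# summary -> {"Delhi": 1650, "Mumbai": 800}
```

Real pipelines add the same steps: type enforcement and text normalization in the transform stage, then a write to a relational store.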
Posted 3 months ago
5.0 years
0 Lacs
New Delhi, Delhi, India
On-site
About the Role: We are looking for a hands-on Data Engineer to join our team and take full ownership of scraping pipelines and data quality. You'll be working on data from 60+ websites involving PDFs, processed via OCR and stored in MySQL/PostgreSQL. You’ll build robust, self-healing pipelines and fix common data issues (missing fields, duplication, formatting errors).

Responsibilities:
- Own and optimize Airflow scraping DAGs for 60+ sites
- Implement validation checks, retry logic, and error alerts
- Build pre-processing routines to clean OCR'd text
- Create data normalization and deduplication workflows
- Maintain data integrity across MySQL and PostgreSQL
- Collaborate with the ML team for downstream AI use cases

Requirements:
- 2–5 years of experience in Python-based data engineering
- Experience with Airflow, Pandas, and OCR (Tesseract or AWS Textract)
- Solid SQL and schema design skills (MySQL/PostgreSQL)
- Familiarity with CSV processing and data pipelines
- Bonus: experience with scraping using Scrapy or Selenium

Location: Delhi (in-office only)
Salary Range: 50-80k/month
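The OCR-cleanup and deduplication workflows this posting describes often reduce to text normalization plus a seen-set. A minimal standard-library sketch; the record fields are hypothetical:

```python
import re
import unicodedata

def normalize(text):
    """Clean OCR output: unicode-normalize, collapse whitespace, lowercase."""
    text = unicodedata.normalize("NFKC", text)
    text = re.sub(r"\s+", " ", text).strip()
    return text.lower()

def dedupe(records, key="title"):
    """Keep only the first record for each normalized key value."""
    seen, unique = set(), []
    for rec in records:
        k = normalize(rec[key])
        if k not in seen:
            seen.add(k)
            unique.append(rec)
    return unique

# Hypothetical rows: the first two are OCR variants of the same document.
rows = [
    {"title": "Annual  Report\n2023"},
    {"title": "annual report 2023"},
    {"title": "Quarterly Update"},
]
clean = dedupe(rows)
```

In a pipeline this would run as a pre-processing task before the database load; fuzzier duplicates need similarity scoring rather than exact key matching.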
Posted 3 months ago
3.0 years
0 Lacs
India
Remote
Job Title: AI/ML + Backend Developer (SEO Automation & Technical Implementation)
Location: Remote (APAC preferred) | Full-Time or Contract

🔹 About the Role
We’re looking for a technical powerhouse with 3-7 years of experience who blends backend engineering, AI/ML experience, and hands-on SEO implementation skills. This hybrid role will support our mission to scale intelligent SEO operations and automate key parts of our publishing workflows. You’ll build custom tools and systems that integrate machine learning, backend development, and SEO performance logic, from programmatic content generation and internal linking engines to technical audits, schema injection, and Google Search Console automations.

🔹 What You'll Be Doing

🔧 Backend & Automation Development
- Build internal tools and APIs using Python or Node.js
- Automate content workflows (meta/gen content, redirects, schema, etc.)
- Integrate third-party APIs (GSC, Ahrefs, OpenAI, Gemini, Google Sheets)

🧠 AI/ML Workflows
- Apply NLP models for entity recognition, summarization, and topic clustering
- Deploy and manage ML inference pipelines
- Work with LLMs to scale content enhancements (FAQs, headlines, refresh logic)

⚙️ SEO Automation & Technical Implementation
- Run and implement technical SEO audits (crawl issues, sitemaps, indexing, Core Web Vitals)
- Automate internal linking, canonical tags, redirects, and structured data
- Use tools like Screaming Frog CLI, the GSC API, and Cloudflare for scalable SEO execution

📈 Performance Monitoring
- Set up dashboards to monitor SEO KPIs and anomaly detection
- Build alerting systems for performance drops, crawl issues, or deindexed content

🔹 Key Skills Required
Languages & Tools:
- Python (FastAPI, Pandas, Scrapy, etc.) and/or Node.js
- Databases (PostgreSQL, MongoDB, Redis)
- Docker, GitHub Actions, cloud (GCP/AWS preferred)
- GSC API, Screaming Frog CLI, Google Sheets API
- OpenAI/Gemini API, LangChain or similar frameworks

SEO Knowledge:
- Strong understanding of on-page and technical SEO
- Experience with programmatic content, schema markup, and CWV improvements
- Familiarity with common issues like crawl depth, duplication, orphan pages, and indexability

🔹 Nice to Have
- Experience with content/media/publishing websites
- Familiarity with CI/CD and working in async product teams
- Exposure to headless CMS or WordPress API integrations
- Past experience automating large-scale content or SEO systems

🔹 What You'll Get
- The chance to work on large-scale content automation and modern SEO problems
- High autonomy, technical ownership, and visibility in decision-making
- Flexible remote work and performance-based incentives
- Direct collaboration with SEO strategy and editorial stakeholders
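Of the duties above, schema injection is the most self-contained to illustrate. The sketch below builds schema.org FAQPage structured data as JSON-LD using only the standard library; the question/answer pair is invented for the example:

```python
import json

def faq_jsonld(pairs):
    """Build schema.org FAQPage structured data for injection into a page."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }

data = faq_jsonld([("What is SEO?", "Search engine optimization.")])
# The JSON-LD is injected into the page head inside a script tag.
snippet = ('<script type="application/ld+json">'
           + json.dumps(data) + "</script>")
```

A programmatic pipeline would generate these pairs from page content (e.g., via an LLM) and write the snippet into the template at publish time.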
Posted 3 months ago
5.0 - 7.0 years
6 - 8 Lacs
Kolkata
Remote
Note: Please don't apply if you do not have at least 3 years of Scrapy experience.

We are seeking a highly experienced Web Scraping Expert specialising in Scrapy-based web scraping and large-scale data extraction. This role is focused on building and optimizing web crawlers, handling anti-scraping measures, and ensuring efficient data pipelines for structured data collection. The ideal candidate will have 5+ years of hands-on experience developing Scrapy-based scraping solutions, implementing advanced evasion techniques, and managing high-volume web data extraction. You will collaborate with a cross-functional team to design, implement, and optimize scalable scraping systems that deliver high-quality, structured data for critical business needs.

Key Responsibilities

Scrapy-Based Web Scraping Development
- Develop and maintain scalable web crawlers using Scrapy to extract structured data from diverse sources.
- Optimize Scrapy spiders for efficiency, reliability, and speed while minimizing detection risks.
- Handle dynamic content using middlewares, browser-based scraping (Playwright/Selenium), and API integrations.
- Implement proxy rotation, user-agent switching, and CAPTCHA-solving techniques to bypass anti-bot measures.

Advanced Anti-Scraping Evasion Techniques
- Utilize AI-driven approaches to adapt to bot detection and prevent blocks.
- Implement headless browser automation and request strategies that mimic human behavior.

Data Processing & Pipeline Management
- Extract, clean, and structure large-scale web data into formats like JSON, CSV, and databases.
- Optimize Scrapy pipelines for high-speed data processing and storage in MongoDB, PostgreSQL, or cloud storage (AWS S3).

Code Quality & Performance Optimization
- Write clean, well-structured, and maintainable Python code for scraping solutions.
- Implement automated testing for data accuracy and scraper reliability.
- Continuously improve crawler efficiency by minimizing IP bans, request delays, and resource consumption.

Required Skills and Experience

Technical Expertise
- 5+ years of professional experience in Python development with a focus on web scraping.
- Proficiency in Scrapy-based scraping.
- Strong understanding of HTML, CSS, JavaScript, and browser behavior.
- Experience with Docker is a plus.
- Expertise in handling APIs (RESTful and GraphQL) for data extraction.
- Proficiency in database systems like MongoDB and PostgreSQL.
- Strong knowledge of version control systems like Git and collaboration platforms like GitHub.

Key Attributes
- Strong problem-solving and analytical skills, with a focus on efficient solutions for complex scraping challenges.
- Excellent communication skills, both written and verbal.
- A passion for data and a keen eye for detail.

Why Join Us?
- Work on cutting-edge scraping technologies and AI-driven solutions.
- Collaborate with a team of talented professionals in a growth-driven environment.
- Opportunity to influence the development of data-driven business strategies through advanced scraping techniques.
- Competitive compensation and benefits.
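Proxy rotation and user-agent switching, mentioned in the responsibilities, boil down to swapping request attributes per request. In real Scrapy this logic lives in a downloader middleware's `process_request`; here plain dicts stand in for `scrapy.Request` objects so the sketch stays self-contained, and the proxy pool is hypothetical:

```python
import random

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
    "Mozilla/5.0 (X11; Linux x86_64)",
]
PROXIES = ["http://proxy-a:8080", "http://proxy-b:8080"]  # hypothetical pool

class RotationMiddleware:
    """Sketch of a Scrapy-style downloader middleware: each outgoing
    request gets a random user agent and proxy from the pools."""

    def process_request(self, request, spider=None):
        request["headers"]["User-Agent"] = random.choice(USER_AGENTS)
        request["meta"]["proxy"] = random.choice(PROXIES)
        return None  # let the request continue through the chain

request = {"headers": {}, "meta": {}}
RotationMiddleware().process_request(request)
```

Production versions weight the pools by recent ban rates and retire proxies that start returning CAPTCHAs or 403s.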
Posted 3 months ago
0 years
0 Lacs
India
Remote
Job Title: Data Automation Intern
Location: Remote
Duration: 3-6 months
Type: Unpaid Internship

About CollegePur: CollegePur is a remote educational consultancy dedicated to guiding students through their academic journeys. We focus on providing insightful resources and personalized support to help students make informed decisions about their education and careers.

Overview: We are seeking motivated Python Programming Interns to join our team. In this role, you will develop automated solutions to extract and organize information from public online sources. This experience will provide exposure to real-world data workflows and foundational concepts in data engineering and automation.

Key Responsibilities:
- Develop Python scripts to automate the retrieval of structured and semi-structured data from web and API sources.
- Process, clean, and structure extracted information into usable formats (e.g., CSV, JSON).
- Assist in building simple pipelines for data refresh and scheduled extraction.
- Collaborate with internal teams to understand data needs and deliver high-quality datasets.
- Maintain clear documentation of code, logic, and workflow steps.

Preferred Skills:
- Proficiency in Python programming.
- Familiarity with web interaction tools such as Requests, BeautifulSoup, Selenium, or Scrapy.
- Understanding of HTML/CSS/DOM structures.
- Experience working with pandas for data manipulation and transformation.
- Basic knowledge of REST APIs and JSON data structures.
- Exposure to ETL concepts, especially the extract and transform stages, is a plus.
- Strong analytical and debugging skills.

Eligibility:
- Currently enrolled in a Bachelor's or Master's program in Computer Science, Data Science, or a related field.
- Self-taught developers with Python automation projects or coursework.
- Aspiring data engineers or developers interested in real-world data workflows.
- Individuals who enjoy working with data and turning it into structured, usable information.

Perks:
- Flexible working hours and remote work opportunity.
- Mentorship from experienced developers and analysts.
- Hands-on experience with live data extraction and automation projects.
- Certificate of completion and letter of recommendation.
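The core intern task (retrieve semi-structured data, clean it, and emit CSV/JSON) can be sketched with only the standard library. The payload below is a made-up stand-in for a real API response:

```python
import csv
import io
import json

# Hypothetical semi-structured records, as might come back from a JSON API.
payload = json.loads('[{"college": "ABC Institute", "fees": "85,000"},'
                     ' {"college": "XYZ University", "fees": "1,20,000"}]')

def to_csv(records):
    """Clean the records and flatten them into CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["college", "fees"])
    writer.writeheader()
    for rec in records:
        # Strip thousands separators so the column is numeric downstream.
        rec["fees"] = int(rec["fees"].replace(",", ""))
        writer.writerow(rec)
    return buf.getvalue()

csv_text = to_csv(payload)
```

A scheduled pipeline would wrap this in a fetch step (Requests) and write the CSV to disk or a database on each refresh.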
Posted 3 months ago
1.0 years
0 Lacs
Mohali
On-site
Hiring: Python Developer (Web Scraping) – Mohali
Location: Mohali
Email: hr@baselineitdevelopment.com
Contact: 9888122266
Experience: 1 year

Required Skills:
- Strong knowledge of Python.
- Hands-on experience in web scraping using tools like BeautifulSoup, Scrapy, or Selenium.
- Understanding of APIs and data parsing.
- Familiarity with data cleaning and automation scripts.
- Basic knowledge of databases (MySQL/PostgreSQL) is a plus.
- Good problem-solving skills and the ability to write clean, efficient code.

Roles & Responsibilities:
- Develop and maintain web scraping scripts to extract data from various websites.
- Work with large datasets and automate data collection processes.
- Ensure data accuracy and consistency.
- Collaborate with the team to understand requirements and deliver solutions.

Job Types: Full-time, Permanent
Pay: From ₹20,000.00 per month
Schedule: Day shift, Monday to Friday, morning shift
Supplemental Pay: Performance bonus
Work Location: In person
Posted 3 months ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Be a part of India’s largest and most admired news network! Network18 is India's most diversified media company in the fast-growing media market. The company has a strong heritage and a strong presence in the magazine, television, and internet domains. Our brands like CNBC, Forbes, and Moneycontrol are market leaders in their respective segments. The company has over 7,000 employees across all major cities in India and has consistently managed to stay ahead of the industry's growth curve. Network18 brings together employees from varied backgrounds under one roof, united by the hunger to create immersive content and ideas. We take pride in our people, who we believe are the key to realizing the organization’s potential. We continually strive to enable our employees to realize their own goals by providing opportunities to learn, share, and grow.

Role Overview: We are seeking a passionate and skilled Data Scientist with over a year of experience to join our dynamic team. You will be instrumental in developing and deploying machine learning models, building robust data pipelines, and translating complex data into actionable insights. This role offers the opportunity to work on cutting-edge projects involving NLP, Generative AI, data automation, and cloud technologies to drive business value.

Key Responsibilities:
- Design, develop, and deploy machine learning models, with a strong focus on NLP (including advanced techniques and Generative AI) and other AI applications.
- Build, maintain, and optimize ETL pipelines for automated data ingestion, transformation, and standardization from various sources.
- Work extensively with SQL for data extraction, manipulation, and analysis in environments like BigQuery.
- Develop solutions using Python and relevant data science/ML libraries (Pandas, NumPy, Hugging Face Transformers, etc.).
- Utilize Google Cloud Platform (GCP) services for data storage, processing, and model deployment.
- Create and maintain interactive dashboards and reporting tools (e.g., Power BI) to present insights to stakeholders.
- Apply basic Docker concepts for containerization and deployment of applications.
- Collaborate with cross-functional teams to understand business requirements and deliver data-driven solutions.
- Stay abreast of the latest advancements in AI/ML and NLP best practices.

Required Qualifications & Skills:
- 2+ years of hands-on experience as a Data Scientist or in a similar role.
- Solid understanding of machine learning fundamentals, algorithms, and best practices.
- Proficiency in Python and relevant data science libraries.
- Good SQL skills for complex querying and data manipulation.
- Demonstrable experience with Natural Language Processing (NLP) techniques, including advanced models (e.g., transformers) and familiarity with Generative AI concepts and applications.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.

Preferred Qualifications & Skills:
- Familiarity and hands-on experience with Google Cloud Platform (GCP) services, especially BigQuery, Cloud Functions, and Vertex AI.
- Basic understanding of Docker and containerization for deploying applications.
- Experience with dashboarding tools like Power BI and building web applications with Streamlit.
- Experience with web scraping tools and techniques (e.g., BeautifulSoup, Scrapy, Selenium).
- Knowledge of data warehousing concepts and schema design.
- Experience in designing and building ETL pipelines.

Disclaimer: Please note that Network18 and related group companies do not use the services of vendors or agents for recruitment. Please beware of such agents or vendors offering assistance. Network18 will not be responsible for any losses incurred. We correspond only from our official email address.
Posted 3 months ago
5.0 - 10.0 years
10 - 20 Lacs
Jaipur
Remote
Summary: To enhance user profiling and risk assessment, we are building web crawlers to collect relevant user data from third-party sources, forums, and the dark web. We are seeking a Senior Web Crawler & Data Extraction Engineer to design and implement these data collection solutions.

Job Responsibilities
- Design, develop, and maintain web crawlers and scrapers to extract data from open web sources, forums, marketplaces, and the dark web.
- Implement data extraction pipelines that aggregate, clean, and structure data for fraud detection and risk profiling.
- Use Tor, VPNs, and other anonymization techniques to safely crawl the dark web while avoiding detection.
- Develop real-time monitoring solutions for tracking fraudulent activities, data breaches, and cybercrime discussions.
- Optimize crawling speed and ensure compliance with website terms of service, ethical standards, and legal frameworks.
- Integrate extracted data with fraud detection models, risk scoring algorithms, and cybersecurity intelligence tools.
- Work with data scientists and security analysts to develop threat intelligence dashboards from collected data.
- Implement anti-bot detection evasion techniques and handle CAPTCHAs using AI-driven solvers where necessary.
- Stay updated on OSINT (open-source intelligence) techniques, web scraping best practices, and cybersecurity trends.

Requirements
- 5+ years of experience in web crawling, data scraping, or cybersecurity data extraction.
- Strong proficiency in Python, Scrapy, Selenium, BeautifulSoup, Puppeteer, or similar frameworks.
- Experience working with Tor, proxies, and VPNs for anonymous web scraping.
- Deep understanding of HTTP protocols, web security, and bot detection mechanisms.
- Experience parsing structured and unstructured data from JSON, XML, and web pages.
- Strong knowledge of database management (SQL, NoSQL) for storing large-scale crawled data.
- Familiarity with AI/ML-based fraud detection techniques and data classification methods.
- Experience working with cybersecurity intelligence sources, dark web monitoring, and OSINT tools.
- Ability to implement scalable, distributed web crawling architectures.
- Knowledge of data privacy regulations (GDPR, CCPA) and ethical data collection practices.

Nice to Have
- Experience in fintech, fraud detection, or threat intelligence.
- Knowledge of natural language processing (NLP) for analyzing cybercrime discussions.
- Familiarity with machine learning-driven anomaly detection for fraud prevention.
- Hands-on experience with cloud-based big data solutions (AWS, GCP, Azure, Elasticsearch, Kafka).
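The compliance requirement above (respecting website terms of service) has a concrete standard-library angle: checking robots.txt before crawling. A sketch using `urllib.robotparser` with a hypothetical robots.txt body; in practice you would fetch the file from the target site first:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt body; in production you would download
# https://example.com/robots.txt before scheduling any crawl.
robots_txt = """\
User-agent: *
Disallow: /private/
Crawl-delay: 5
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Gate each URL through the parser before enqueueing it.
allowed = rp.can_fetch("*", "https://example.com/forum/thread/42")
blocked = rp.can_fetch("*", "https://example.com/private/data")
delay = rp.crawl_delay("*")  # seconds to wait between requests
```

A crawler scheduler would drop disallowed URLs and honor the crawl delay as a per-host rate limit.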
Posted 3 months ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role
Grade Level (for internal use): 10

The Team: As a member of the Data Transformation - Cognitive Engineering team, you will work on building and deploying ML-powered products and capabilities to power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead deployment of AI products and pipelines while leading by example in a highly engaging work environment. You will work in a (truly) global team and are encouraged to take thoughtful risks and show self-initiative.

What’s In It For You
- Be a part of a global company and build solutions at enterprise scale
- Lead a highly skilled and technically strong team (including leadership)
- Contribute to solving high-complexity, high-impact problems
- Build production-ready pipelines from ideation to deployment

Responsibilities
- Design, develop, and deploy ML-powered products and pipelines
- Mentor a team of senior and junior data scientists / ML engineers in delivering large-scale projects
- Play a central role in all stages of the AI product development life cycle, including: designing machine learning systems and model scaling strategies; researching and implementing ML and deep learning algorithms for production; running necessary ML tests and benchmarks for model validation; fine-tuning, retraining, and scaling existing model deployments; extending existing ML libraries and writing packages for reproducing components
- Partner with business leaders, domain experts, and end-users to gain business understanding and data understanding, and to collect requirements
- Interpret results and present them to business leaders
- Manage production pipelines for enterprise-scale projects
- Perform code reviews and optimization for your projects and team
- Lead and mentor by example, including project scrums

Technical Requirements
- Proven track record as a senior / lead ML engineer
- Expert proficiency in Python (NumPy, Pandas, spaCy, scikit-learn, PyTorch/TF2, Hugging Face, etc.)
- Excellent exposure to large-scale model deployment strategies and tools
- Excellent knowledge of the ML and deep learning domain
- Solid exposure to information retrieval, web scraping, and data extraction at scale
- Exposure to the following technologies: R-Shiny/Dash/Streamlit, SQL, Docker, Airflow, Redis, Celery, Flask/Django/FastAPI, PySpark, Scrapy
- Experience with SOTA models related to NLP and expertise in text matching techniques, including sentence transformers, word embeddings, and similarity measures
- Open to learning new technologies and programming languages as required
- A Master’s / PhD from a recognized institute in a relevant specialization

Good To Have
- 6-7+ years of relevant experience in ML Engineering
- Prior substantial experience from the economics/financial industry
- Prior work to show on GitHub, Kaggle, StackOverflow, etc.

What’s In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in.
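The text-matching requirement above (similarity measures alongside sentence transformers and word embeddings) ultimately rests on cosine similarity between vectors. A toy bag-of-words version in pure Python, standing in for embedding-based matching:

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two texts using bag-of-words counts;
    real systems would compare dense embedding vectors instead."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

score = cosine_similarity("natural language processing",
                          "language processing pipeline")
```

With sentence transformers the formula is unchanged; only the vectors come from a model rather than from word counts.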
We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you, and your career, need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: We are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 315679
Posted On: 2025-05-20
Location: Gurgaon, Haryana, India
Posted 3 months ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Role Grade Level (for internal use): 10 The Team : As a member of the Data Transformation - Cognitive Engineering team you will work on building and deploying ML powered products and capabilities to power natural language understanding, data extraction, information retrieval and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead deployment of AI products and pipelines while leading-by-example in a highly engaging work environment. You will work in a (truly) global team and encouraged for thoughtful risk-taking and self-initiative. What’s In It For You Be a part of a global company and build solutions at enterprise scale Lead a highly skilled and technically strong team (including leadership) Contribute to solving high complexity, high impact problems Build production ready pipelines from ideation to deployment Responsibilities Design, Develop and Deploy ML powered products and pipelines Mentor a team of Senior and Junior data scientists / ML Engineers in delivering large scale projects Play a central role in all stages of the AI product development life cycle, including: Designing Machine Learning systems and model scaling strategies Research & Implement ML and Deep learning algorithms for production Run necessary ML tests and benchmarks for model validation Fine-tune, retrain and scale existing model deployments Extend existing ML library’s and write packages for reproducing components Partner with business leaders, domain experts, and end-users to gain business understanding, data understanding, and collect requirements Interpret results and present them to business leaders Manage production pipelines for enterprise scale projects Perform code reviews & optimization for your projects and team Lead and mentor by example, including project scrums Technical Requirements Proven track record as a senior / lead ML engineer Expert proficiency in Python (Numpy, Pandas, Spacy, Sklearn, Pytorch/TF2, HuggingFace etc.) 
- Excellent exposure to large-scale model deployment strategies and tools
- Excellent knowledge of the ML and deep learning domain
- Solid exposure to information retrieval, web scraping and data extraction at scale
- Exposure to the following technologies: R-Shiny/Dash/Streamlit, SQL, Docker, Airflow, Redis, Celery, Flask/Django/FastAPI, PySpark, Scrapy
- Experience with SOTA models related to NLP and expertise in text matching techniques, including sentence transformers, word embeddings, and similarity measures
- Open to learning new technologies and programming languages as required
- A Master's / PhD from a recognized institute in a relevant specialization

Good To Have
- 6-7+ years of relevant experience in ML engineering
- Prior substantial experience in the economics/financial industry
- Prior work to show on GitHub, Kaggle, Stack Overflow etc.

Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology - the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in.
We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you - and your career - need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring And Opportunity At S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions.
Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 315679
Posted On: 2025-05-20
Location: Gurgaon, Haryana, India
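The text-matching requirement in this role (sentence transformers, word embeddings, similarity measures) ultimately comes down to comparing embedding vectors, most commonly by cosine similarity. A minimal pure-Python sketch; the toy 4-dimensional vectors below are illustrative stand-ins for real model outputs, which would come from an encoder such as a sentence-transformer:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0 or norm_v == 0:
        return 0.0
    return dot / (norm_u * norm_v)

# Hypothetical 4-dimensional "embeddings" for a query and two candidates.
query = [0.9, 0.1, 0.0, 0.2]
candidates = {
    "annual revenue rose 5%": [0.8, 0.2, 0.1, 0.3],
    "the office cat slept":   [0.0, 0.9, 0.7, 0.1],
}
# Rank candidates by similarity to the query; the semantically closer
# sentence wins because its vector points in a similar direction.
best = max(candidates, key=lambda s: cosine_similarity(query, candidates[s]))
```

The same ranking step underlies sentence-transformer retrieval; only the source of the vectors changes.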
Posted 3 months ago
4.0 years
2 - 8 Lacs
India
On-site
We Are Hiring! Join Our Team as a Senior Python Developer

Role: Senior Python Developer
Work Locations: Teynampet, Chennai / KK Nagar, Madurai
Work from Office: 1pm to 10pm, Monday - Friday
Mode of Interview: In-Person
Experience: 4+ Years

Hands-on software development skills and deep technical expertise across the entire software delivery process. A forward-thinking, skilled individual who is structured, organized, and a good communicator, and writes reusable, testable, and efficient code.

Required Skills:
- 3+ years of strong experience in Python and 2 years in the Django web framework.
- Experience or knowledge in implementing various design patterns.
- Good understanding of the MVC framework and object-oriented programming.
- Experience in PostgreSQL / MySQL and MongoDB.
- Good knowledge of different frameworks, packages and libraries: Django/Flask, Django ORM, unittest, NumPy, Pandas, Scrapy etc.
- Experience developing in a Linux environment, Git and Agile methodology.
- Good to have knowledge of any one of the JavaScript frameworks: jQuery, Angular, ReactJS.
- Good to have experience in implementing charts and graphs using various libraries.
- Good to have experience in multi-threading and REST API management.

Interested candidates can send their resume to dharshanamurthy.v@egrovesys.com or WhatsApp 9342768767.

About Company: eGrove Systems is a leading IT solutions provider specializing in eCommerce, enterprise application development, AI-driven solutions, digital marketing, and IT consulting services. Established in 2008, we are headquartered in East Brunswick, New Jersey, with a global presence. Our expertise includes custom software development, mobile app solutions, DevOps, cloud services, AI chatbots, SEO automation tools, and workforce learning systems.
We focus on delivering scalable, secure, and innovative technology solutions to enterprises, startups, and government agencies. At eGrove Systems, we foster a dynamic and collaborative work culture driven by innovation, continuous learning, and teamwork. We provide our employees with cutting-edge technologies, professional growth opportunities, and a supportive work environment to thrive in their careers.

Job Types: Full-time, Permanent
Pay: ₹267,535.81 - ₹800,000.00 per year
Benefits: Provident Fund
Location Type: In-person
Schedule: Monday to Friday, UK shift
Work Location: In person
Posted 3 months ago
2.0 - 6.0 years
60 - 72 Lacs
Ahmedabad
Work from Office
Responsibilities: Collaborate with cross-functional teams on project requirements & deliverables. Design, develop, test & maintain Python apps using Django/Flask & AWS/Azure Cloud.
Posted 3 months ago
1.0 - 3.0 years
3 - 7 Lacs
Gurugram
Work from Office
We are looking for a Python Developer who has expertise in web scraping and backend development. The ideal candidate should be proficient in Python frameworks, data extraction techniques, and API integration.
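Robust API integration of the kind this role calls for usually wraps calls in retries with exponential backoff. A minimal sketch under assumed names (the function and its defaults are illustrative, not from the posting); it only computes the wait schedule, leaving the actual HTTP call and `time.sleep` to the caller:

```python
import random

def backoff_delays(retries=5, base=0.5, cap=30.0, jitter=False):
    """Yield the wait time (seconds) before each retry, doubling per attempt."""
    for attempt in range(retries):
        delay = min(cap, base * (2 ** attempt))
        if jitter:
            # "Full jitter" spreads retries out so many clients don't hit
            # the API at the same instant after an outage.
            delay = random.uniform(0, delay)
        yield delay

delays = list(backoff_delays(retries=4))  # [0.5, 1.0, 2.0, 4.0]
```

Capping the delay keeps a long retry loop from stalling for minutes, and jitter avoids synchronized retry storms across workers.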
Posted 3 months ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Hiring Web Scraping Interns

Location: Ahmedabad
Internship Duration: 3 to 5 months
Stipend: Based on performance (optional)
Qualification: BCA / B.Sc (Computer Science) / B.Tech / BE (IT/CS); final-year students or recent graduates are preferred

Skills Required:
- Basic knowledge of Python
- Familiarity with web scraping libraries such as BeautifulSoup, Selenium, or Scrapy
- Understanding of HTML and CSS; JavaScript knowledge is a plus
- Ability to work independently and in a team
- Willingness to learn and take ownership of tasks

What You Will Learn:
- Real-world web scraping techniques
- Handling dynamic pages, proxies, captchas, and data cleaning
- Working on client-based projects with proper documentation
- Best practices for storing and managing scraped data
- Exposure to tools and workflows used in live environments
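The core of scraping a static page is parsing HTML and pulling out the elements you need. Libraries like BeautifulSoup make this ergonomic; as a dependency-free sketch of the same idea, Python's standard-library `html.parser` can extract every link from a page (the sample HTML is invented for illustration):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag encountered while parsing."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<ul><li><a href="/item/1">First</a></li><li><a href="/item/2">Second</a></li></ul>'
parser = LinkExtractor()
parser.feed(page)
parser.links  # ['/item/1', '/item/2']
```

BeautifulSoup's `soup.find_all("a")` does the equivalent in one call; dynamic, JavaScript-rendered pages need a browser driver like Selenium instead.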
Posted 3 months ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
We are seeking an experienced and motivated Data Scraper / Lead Generator to join our fast-growing team in Mumbai. The ideal candidate will have a strong background in generating leads through web scraping and online research, specifically targeting Europe, the UK, the USA and other international markets.

Key Responsibilities:
- Conduct in-depth online research to identify potential leads in targeted geographies.
- Use advanced web scraping tools and techniques to extract accurate contact and business data from various sources.
- Validate and verify collected data to ensure quality and relevance.
- Maintain and manage a structured database of leads for outreach and tracking.
- Collaborate closely with the sales and marketing teams to deliver a steady pipeline of high-quality leads.
- Stay up to date with industry trends, tools, and best practices in data scraping and lead generation.

Requirements:
- Proven experience in data scraping and lead generation, especially in international markets (UK preferred).
- Proficiency in web scraping tools and methods (e.g., Python/BeautifulSoup, Scrapy, Octoparse, or similar).
- Strong attention to detail, organizational skills, and data accuracy.
- Ability to manage time efficiently and handle multiple tasks.
- Excellent communication and coordination skills.

Preferred: Immediate availability or short notice period.
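The validate-and-deduplicate step above is where scraped lead lists usually gain or lose their value. A minimal sketch, assuming leads arrive as dicts with an `email` field (the field names, regex, and sample rows are illustrative):

```python
import re

# Deliberately loose pattern: good enough to reject obvious junk,
# not a full RFC 5322 validator.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

def clean_leads(rows):
    """Drop rows without a plausible email and deduplicate case-insensitively."""
    seen, cleaned = set(), []
    for row in rows:
        email = (row.get("email") or "").strip().lower()
        if EMAIL_RE.match(email) and email not in seen:
            seen.add(email)
            cleaned.append({**row, "email": email})
    return cleaned

leads = [
    {"name": "Acme Ltd", "email": "Sales@acme.co.uk"},
    {"name": "Acme Ltd", "email": "sales@acme.co.uk"},   # duplicate (case only)
    {"name": "No Email", "email": ""},                   # invalid, dropped
]
cleaned = clean_leads(leads)  # one row survives
```

Lowercasing before comparison is what catches the case-only duplicate; a production pipeline would typically also verify deliverability via an external service.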
Posted 3 months ago
6.0 - 10.0 years
8 - 13 Lacs
Gurugram
Work from Office
The Team: As a member of the Data Transformation - Cognitive Engineering team you will work on building and deploying ML-powered products and capabilities to power natural language understanding, data extraction, information retrieval and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead deployment of AI products and pipelines while leading by example in a highly engaging work environment. You will work in a (truly) global team and be encouraged to take thoughtful risks and show self-initiative.

What's in it for you:
- Be a part of a global company and build solutions at enterprise scale
- Lead a highly skilled and technically strong team (including leadership)
- Contribute to solving high-complexity, high-impact problems
- Build production-ready pipelines from ideation to deployment

Responsibilities:
- Design, develop and deploy ML-powered products and pipelines
- Mentor a team of senior and junior data scientists / ML engineers in delivering large-scale projects
- Play a central role in all stages of the AI product development life cycle, including designing machine learning systems and model scaling strategies
- Research and implement ML and deep learning algorithms for production
- Run necessary ML tests and benchmarks for model validation
- Fine-tune, retrain and scale existing model deployments
- Extend existing ML libraries and write packages for reproducing components
- Partner with business leaders, domain experts, and end-users to gain business understanding and data understanding, and collect requirements
- Interpret results and present them to business leaders
- Manage production pipelines for enterprise-scale projects
- Perform code reviews and optimization for your projects and team
- Lead and mentor by example, including project scrums

Technical Requirements:
- Proven track record as a senior / lead ML engineer
- Expert proficiency in Python (NumPy, Pandas, spaCy, scikit-learn, PyTorch/TF2, Hugging Face etc.)
- Excellent exposure to large-scale model deployment strategies and tools
- Excellent knowledge of the ML and deep learning domain
- Solid exposure to information retrieval, web scraping and data extraction at scale
- Exposure to the following technologies: R-Shiny/Dash/Streamlit, SQL, Docker, Airflow, Redis, Celery, Flask/Django/FastAPI, PySpark, Scrapy
- Experience with SOTA models related to NLP and expertise in text matching techniques, including sentence transformers, word embeddings, and similarity measures
- Open to learning new technologies and programming languages as required
- A Master's / PhD from a recognized institute in a relevant specialization

Good to have:
- 6-7+ years of relevant experience in ML engineering
- Prior substantial experience in the economics/financial industry
- Prior work to show on GitHub, Kaggle, Stack Overflow etc.
Posted 3 months ago
6.0 - 10.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Overview
We are seeking a skilled Data Engineer to join our team. The successful candidate will be responsible for maintaining and optimizing data pipelines, implementing robust data checks, and ensuring the accuracy and integrity of data flows. This role is critical in supporting data-driven decision-making processes, especially in the context of our insurance-focused business operations.

Key Responsibilities
- Data Collection and Acquisition: source identification, data licensing and compliance, data crawling/collection
- Data Preprocessing and Cleaning: data cleaning, text tokenization, normalization, noise filtering
- Data Transformation and Feature Engineering: text embedding, text augmentation, handling multilingual data
- Data Pipeline Development: scalable pipelines, ETL processes, automation
- Data Storage and Management: data warehousing, database optimization, version control
- Collaboration with Data Scientists and ML Engineers: data accessibility, support for model development, data quality assurance
- Performance Optimization and Scaling: efficient data handling, distributed computing
- Data Security and Privacy: data anonymization, compliance with regulations
- Documentation and Reporting: data pipeline documentation, reporting

Candidate Profile
6-10 years of relevant experience with data engineering tools:
- Data Processing & Storage: Apache Spark, Apache Hadoop, Apache Kafka, Google BigQuery, AWS S3, Databricks
- Machine Learning Frameworks: TensorFlow, PyTorch, Hugging Face Transformers, scikit-learn
- Data Pipelines & Automation: Apache Airflow, Kubeflow, Luigi
- Version Control & Collaboration: Git, DVC (Data Version Control)
- Data Extraction: BeautifulSoup, Scrapy, APIs (RESTful, GraphQL)

What We Offer
EXL Analytics offers an exciting, fast-paced and innovative environment, which brings together a group of sharp and entrepreneurial professionals who are eager to influence business decisions.
From your very first day, you get an opportunity to work closely with highly experienced, world-class analytics consultants. You can expect to learn many aspects of the businesses that our clients engage in. You will also learn effective teamwork and time-management skills - key aspects for personal and professional growth. Analytics requires different skill sets at different levels within the organization. At EXL Analytics, we invest heavily in training you in all aspects of analytics as well as in leading analytical tools and techniques. We provide guidance and coaching to every employee through our mentoring program, wherein every junior-level employee is assigned a senior-level professional as an advisor. The sky is the limit for our team members. The unique experiences gathered at EXL Analytics set the stage for further growth and development in our company and beyond.
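The preprocessing responsibilities listed for this role (text tokenization, normalization, noise filtering) typically chain a few small, deterministic steps. A stdlib-only sketch, assuming the common lowercase/strip-accents/collapse-whitespace convention (a real pipeline would tailor each step to the downstream model):

```python
import re
import unicodedata

def normalize(text):
    """Lowercase, strip accents, and collapse whitespace."""
    # NFKD decomposition separates base characters from their accents,
    # so dropping combining marks turns "é" into "e".
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return re.sub(r"\s+", " ", text.lower()).strip()

def tokenize(text):
    """Split normalized text into alphanumeric word tokens, dropping punctuation."""
    return re.findall(r"[a-z0-9]+", normalize(text))

tokenize("Café   PRICES rose 3.5%!")  # ['cafe', 'prices', 'rose', '3', '5']
```

Note that the naive `[a-z0-9]+` split breaks "3.5" into two tokens; pipelines that care about numbers or multilingual text would use a proper tokenizer (e.g. from spaCy or Hugging Face) instead.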
Posted 3 months ago
5.0 - 10.0 years
8 - 15 Lacs
Ahmedabad
Work from Office
Role & responsibilities
- Develop and implement Python scripts for web scraping using Selenium WebDriver to extract relevant data from client websites.
- Clean, transform, and manipulate extracted data using Python libraries (e.g., Pandas, BeautifulSoup) for schema (structured data) markup implementation.
- Write well-documented, maintainable, and efficient Python code adhering to best practices.
- Collaborate with SEOs and the Director of SEO to understand client requirements and translate them into technical solutions.
- Stay up to date on the latest trends and developments in web scraping, schema (structured data) markup, and SEO best practices.
- Assist with testing and debugging developed scripts to ensure accuracy of schema (structured data) implementation without errors.
- Experience working in automation through AI agents.
- Experience working with machine learning and AI (artificial intelligence) integration using Python.

Preferred candidate profile
- 4-5 years of working experience in Python programming.
- Strong understanding of Python syntax, data structures, iterators, generators, exception handling, file handling, OOP, ORM and object-oriented programming concepts.
- Proficiency in using web scraping libraries like Selenium WebDriver and Beautiful Soup.
- Familiarity with web technologies and frameworks: HTML, CSS, JavaScript, Django or Flask.
- Good knowledge of machine learning and ML frameworks like NumPy, Pandas, Keras, scikit-learn, PyTorch, TensorFlow or Microsoft Azure Machine Learning will be an added advantage.
- Familiarity with development tools like Jupyter Notebook, IDLE, PyCharm or VS Code.
- Familiarity with Scrum methodology, CI/CD, Git, branching/merging and test-driven software development.
- Candidates who have worked in product-based companies will be preferred.
- Excellent analytical and problem-solving skills.
- Ability to work independently and as part of a team.
- Strong communication and collaboration skills.
- A passion for SEO and a desire to learn about schema (structured data) markup.
- Familiarity with cloud platforms (AWS, GCP, Azure DevOps, Azure Blob Storage Explorer).
- Experience with API integration.
- Experience working with AI (artificial intelligence) integration with Python to automate SEO tasks with Google Gemini, GenAI (generative AI) and ChatGPT-4.
- Good verbal and written communication skills.
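The schema (structured data) markup this role centers on is most often emitted as a JSON-LD `<script>` block built from scraped page data. A minimal sketch, assuming a schema.org Product with a single Offer (the function name and default values are illustrative; real markup would carry many more fields from the page):

```python
import json

def product_jsonld(name, price, currency="INR", availability="InStock"):
    """Build a minimal schema.org Product snippet as embeddable JSON-LD."""
    data = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "offers": {
            "@type": "Offer",
            "price": f"{price:.2f}",
            "priceCurrency": currency,
            "availability": f"https://schema.org/{availability}",
        },
    }
    # Search engines read this block from anywhere in <head> or <body>.
    return '<script type="application/ld+json">' + json.dumps(data) + "</script>"

snippet = product_jsonld("Blue Widget", 499)
```

Generated snippets should be checked with a validator (e.g. Google's Rich Results Test) before deployment, since malformed structured data is silently ignored.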
Posted 3 months ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Senior Python Developer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 3-8 years

About Darwix AI
Darwix AI is one of India's fastest-growing AI startups, transforming enterprise sales with our GenAI-powered conversational intelligence and real-time agent assist suite. Our platform is used by high-growth enterprises across India, MENA, and Southeast Asia to improve sales productivity, personalize customer conversations, and unlock revenue intelligence in real time. We are backed by marquee VCs, 30+ angel investors, and led by alumni from IITs, IIMs, and BITS with deep experience in building and scaling products from India for the world.

Role Overview
As a Senior Python Developer at Darwix AI, you will be at the core of our engineering team, leading the development of scalable, secure, and high-performance backend systems that support AI workflows, real-time data processing, and enterprise-grade integrations. This role requires deep technical expertise in Python, a strong foundation in backend architecture, and the ability to collaborate closely with AI, product, and infrastructure teams. You will take ownership of critical backend modules and shape the engineering culture in a rapidly evolving, high-impact environment.
Key Responsibilities

System Architecture & API Development
- Design, implement, and optimize backend services and microservices using Python frameworks such as FastAPI, Django, or Flask
- Lead the development of scalable RESTful APIs that integrate with frontend, mobile, and AI systems
- Architect low-latency, fault-tolerant services supporting real-time sales analytics and AI inference

Data Pipelines & Integrations
- Build and optimize ETL pipelines to manage structured and unstructured data from internal and third-party sources
- Integrate APIs with CRMs, telephony systems, transcription engines, and enterprise platforms like Salesforce, Zoho, and LeadSquared
- Lead scraping and data ingestion efforts from large-scale, dynamic web sources using Playwright, BeautifulSoup, or Scrapy

AI/ML Enablement
- Work closely with AI engineers to build infrastructure for LLM/RAG pipelines, vector DBs, and real-time AI decisioning
- Implement backend support for prompt orchestration, Langchain flows, and function-calling interfaces
- Support model deployment, inference APIs, and logging/monitoring for large-scale GenAI pipelines

Database & Storage Design
- Optimize database design and queries using MySQL, PostgreSQL, and MongoDB
- Architect and manage Redis and Kafka for caching, queueing, and real-time communication

DevOps & Quality
- Ensure continuous delivery through version control (Git), CI/CD pipelines, testing frameworks, and Docker-based deployments
- Identify and resolve bottlenecks related to performance, memory, or data throughput
- Adhere to best practices in code quality, testing, security, and documentation

Leadership & Collaboration
- Mentor junior developers and participate in code reviews
- Collaborate cross-functionally with product, AI, design, and sales engineering teams
- Contribute to architectural decisions, roadmap planning, and scaling strategies

Qualifications
- 4-8 years of backend development experience in Python, with a deep understanding of object-oriented and functional programming
- Hands-on experience with FastAPI, Django, or Flask in production environments
- Proven experience building scalable microservices, data pipelines, and backend systems that support live applications
- Strong command of REST API architecture, database optimization, and data modeling
- Solid experience working with web scraping tools, automation frameworks, and external API integrations
- Knowledge of AI tools like Langchain, HuggingFace, vector DBs (Pinecone, Weaviate, FAISS), or RAG architectures is a strong plus
- Familiarity with cloud infrastructure (AWS/GCP), Docker, and containerized deployments
- Comfortable working in fast-paced, high-ownership environments with shifting priorities and dynamic problem-solving
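The ETL pipelines this role describes are often structured as composable generator stages, so each step streams rows to the next without materializing intermediate lists. A toy sketch with invented field names; a real pipeline would read from a CRM API or database and write to a warehouse rather than in-memory lists:

```python
def extract(records):
    """E: yield raw rows (a list stands in for an API cursor here)."""
    yield from records

def transform(rows):
    """T: normalize field names and types, silently skipping malformed rows."""
    for row in rows:
        try:
            yield {"lead": row["name"].strip().title(), "score": float(row["score"])}
        except (KeyError, ValueError):
            continue  # a production pipeline would log/route these to a dead-letter queue

def load(rows):
    """L: collect into the destination (here just a list)."""
    return list(rows)

raw = [
    {"name": " riya sharma ", "score": "0.91"},
    {"name": "bad row"},  # missing score -> dropped by transform
]
result = load(transform(extract(raw)))  # [{'lead': 'Riya Sharma', 'score': 0.91}]
```

Because each stage is lazy, the same shape scales from a toy list to millions of rows, and orchestration tools like Airflow can wrap each stage as a task.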
Posted 3 months ago
3.0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Job Description: Python Web Scraping - Chennai/Pune

About The Job
NIQ Digital Shelf is one of Europe's fastest-growing companies in the retail analytics space. Every day, we collect and process over 60 billion data points from web and mobile sources to power real-time market insights. Our tools help major brands and retailers understand what's happening in their market, how they compare to competitors, and what actions to take. We're now a team of 60+ scrapers from over 12 nationalities, working together with engineering, data, product, operations, and customer success. As we scale globally, we're looking for new talent to help push our technology and data collection efforts even further. You'll join an engineering team that values curiosity, autonomy, and the ability to iterate fast. You'll also collaborate with people across the business to make sure the right data ends up in the right hands, in the cleanest, smartest way possible.

What You'll Work On
- Build and maintain efficient web crawlers to extract structured data from websites (e.g. product listings, prices, reviews).
- Write robust data pipelines to parse and clean messy web content.
- Deal with real-world challenges like JavaScript-heavy pages, anti-bot measures, and changing page structures.
- Work closely with product and operations to adjust scraping strategies when sites change or new data needs emerge.

Qualifications
Must have:
- 1-3 years of experience working with Python.
- Comfortable using tools like Scrapy, Python Requests, BeautifulSoup, Playwright/Selenium.
- You understand how to work with HTTP headers, cookies, and session management, and are not afraid of network debugging.
- You adapt quickly and aren't scared of messy problems. When something breaks, your instinct is to figure out why and fix it.
- You enjoy learning, asking questions, and building better tools, not just copying and pasting scripts.
Nice to Have:
Basic exposure to concepts like rotating proxies, user-agent spoofing, or using headless browsers (e.g., with Selenium or Playwright).
Some hands-on practice scraping structured websites using Scrapy or Python Requests with BeautifulSoup.
A basic understanding of HTML structure, XPath, or CSS selectors.

Additional Information
Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms. Recharge and revitalize with the help of wellness plans made for you and your family. Plan your future with financial wellness tools. Stay relevant and upskill yourself with career development opportunities.

Our Benefits
Flexible working environment
Volunteer time off
LinkedIn Learning
Employee Assistance Program (EAP)

About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com. Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us.
We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
Posted 3 months ago
0.0 - 1.0 years
0 Lacs
Jaipur, Rajasthan
On-site
Job Title: Python Developer – Web Scraping Specialist
Experience: 1+ Years
Location: Jaipur (Work from Office)
Job Type: Full-Time

Job Summary:
We are seeking a detail-oriented and skilled Python Developer with expertise in web scraping to join our technology team. The ideal candidate will have at least 1 year of experience in Python programming and hands-on knowledge of web scraping tools and techniques. You will be responsible for designing and implementing efficient, scalable web crawlers to extract structured data from various websites and online platforms.

Key Responsibilities:
Design, build, and maintain web scraping scripts and crawlers using Python.
Utilize tools such as BeautifulSoup, Selenium, and Scrapy to extract data from dynamic and static websites.
Clean, structure, and store extracted data in usable formats (e.g., CSV, JSON, databases).
Handle data parsing and anti-scraping measures, and ensure scraping complies with website policies.
Monitor and troubleshoot scraping tasks for performance and reliability.
Collaborate with team members to understand data requirements and deliver accurate, timely results.
Optimize scraping scripts for speed, reliability, and error handling.
Maintain documentation of scraping processes and the codebase.

Required Skills:
Solid programming skills in core Python and data manipulation.
Strong experience in web scraping using BeautifulSoup, Selenium, and Scrapy.
Familiarity with HTTP protocols, request headers, cookies, and browser automation.
Understanding of HTML, CSS, and XPath for parsing and navigating web content.
Ability to handle CAPTCHAs and anti-bot mechanisms.
Experience with data formats like JSON, XML, and CSV.
Knowledge of version control tools like Git.

Preferred Qualifications:
Bachelor’s degree in Computer Science, IT, or a related field.
Experience with task schedulers (e.g., cron, Celery) for automated scraping.
Knowledge of storing data in SQL or NoSQL databases.
Familiarity with proxy management and user-agent rotation.

Job Type: Full-time
Pay: ₹7,000.00 - ₹35,000.00 per month
Schedule: Day shift
Ability to commute/relocate: Jaipur, Rajasthan: Reliably commute or planning to relocate before starting work (Required)
Education: Bachelor's (Required)
Experience:
Python: 1 year (Required)
BeautifulSoup or Scrapy: 1 year (Required)
Selenium: 1 year (Preferred)
Location: Jaipur, Rajasthan (Preferred)
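The proxy-management and user-agent-rotation skills mentioned above usually amount to cycling through a pool of identities so no single fingerprint dominates the traffic. A minimal sketch, assuming made-up proxy addresses and agent strings; real requests would pass these into Scrapy middleware or a `requests` session.

```python
import itertools

# Hypothetical pools: both the user-agent strings and proxy hosts are
# placeholders, not real infrastructure.
USER_AGENTS = itertools.cycle([
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
])
PROXIES = itertools.cycle([
    "http://proxy-a.example:8080",
    "http://proxy-b.example:8080",
])

def next_request_config(url):
    """Pair each outgoing request with the next UA string and proxy."""
    return {
        "url": url,
        "headers": {"User-Agent": next(USER_AGENTS)},
        "proxy": next(PROXIES),
    }

cfg1 = next_request_config("https://example.com/page/1")
cfg2 = next_request_config("https://example.com/page/2")
print(cfg1["proxy"], cfg2["proxy"])
```

`itertools.cycle` gives simple round-robin rotation; production setups typically add per-proxy health checks and back off from identities that start getting blocked.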
Posted 3 months ago
7.0 - 12.0 years
12 - 22 Lacs
Bengaluru
Remote
Role & responsibilities
As a Data Engineer focused on web crawling and platform data acquisition, you will design, develop, and maintain large-scale web scraping pipelines to extract valuable platform data. You will be responsible for implementing scalable and resilient data extraction solutions, ensuring seamless data retrieval while working with proxy management, anti-bot bypass techniques, and data parsing. Optimizing scraping workflows for performance, reliability, and efficiency will be a key part of your role. Additionally, you will ensure that all extracted data maintains high quality and integrity.

Preferred candidate profile
We are seeking candidates with:
Strong experience in Python and web scraping frameworks such as Scrapy, Selenium, Playwright, or BeautifulSoup.
Knowledge of distributed web crawling architectures and job scheduling.
Familiarity with headless browsers, CAPTCHA-solving techniques, and proxy management to handle dynamic web challenges.
Experience with data storage solutions, including SQL and cloud storage.
Understanding of big data technologies like Spark and Kafka (a plus).
Strong debugging skills to adapt to website structure changes and blockers.
A proactive, problem-solving mindset and the ability to work effectively in a team-driven environment.
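At the heart of the crawling architecture and job scheduling mentioned above sits a URL frontier: a queue of pages to fetch plus a seen-set for de-duplication. A toy single-process sketch over an invented link graph; a distributed crawler would back the frontier with a shared queue (for example Kafka, named as a plus above) rather than an in-memory deque.

```python
from collections import deque

# Made-up link graph standing in for pages discovered while parsing.
LINK_GRAPH = {
    "/": ["/a", "/b"],
    "/a": ["/b", "/c"],
    "/b": ["/"],
    "/c": [],
}

def crawl(seed):
    """Breadth-first crawl: fetch each URL once, in discovery order."""
    seen, order = {seed}, []
    frontier = deque([seed])
    while frontier:
        url = frontier.popleft()
        order.append(url)                # "fetch" the page
        for link in LINK_GRAPH[url]:     # "parse" its outgoing links
            if link not in seen:         # de-duplicate before scheduling
                seen.add(link)
                frontier.append(link)
    return order

print(crawl("/"))  # ['/', '/a', '/b', '/c']
```

The seen-set is what keeps the crawl from looping on cycles like `/ -> /b -> /`; at scale it becomes a shared store (or a Bloom filter) consulted by every worker.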
Posted 3 months ago
0 years
0 Lacs
Mohali
On-site
Job Summary:
We are looking for a passionate and quick-learning Python Developer (Fresher) to join our growing team in Mohali. The ideal candidate should have completed at least one internship and be familiar with Python programming, Odoo ERP, PostgreSQL, and web scraping techniques. This is a great opportunity to gain hands-on experience and grow your skills in a professional and supportive environment.

Key Responsibilities:
Assist in the development and customization of Odoo modules.
Support integration of third-party services with Odoo.
Write clean and efficient Python code for automation and backend logic.
Help create and manage PostgreSQL databases.
Perform basic web scraping using Python tools (e.g., BeautifulSoup, Scrapy).
Participate in testing, debugging, and improving application performance.
Collaborate with senior developers and follow best coding practices.

Required Skills:
Strong understanding of Python programming.
Basic knowledge of Odoo ERP (academic or internship level).
Familiarity with PostgreSQL and database queries.
Exposure to web scraping tools like BeautifulSoup, Scrapy, or Selenium.
Good understanding of REST APIs and data formats (JSON/XML).
Eagerness to learn and grow in a development role.

Eligibility Criteria:
B.Tech/B.E., MCA, BCA, or a related technical degree.
Must have completed at least one internship or project in Python/Odoo/web technologies.
Good communication and teamwork skills.

Perks & Benefits:
Mentorship and learning opportunities.
Hands-on experience with real-world projects.
Positive work environment with growth potential.
5-day work week.

Call us - 9888122266
Job Types: Full-time, Permanent
Pay: From ₹12,000.00 per month
Schedule: Day shift, Monday to Friday, Morning shift
Supplemental Pay: Performance bonus
Work Location: In person
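The scrape-then-store flow this role describes, pulling records with Python and persisting them to a SQL database, can be sketched end to end. The in-memory `sqlite3` database below stands in for PostgreSQL so the example is self-contained, and the rows are invented; with Odoo the same insert would typically go through the ORM or `psycopg2` instead.

```python
import sqlite3

# Made-up records, as if just scraped from a product page.
rows = [("Widget", 9.99), ("Gadget", 24.50)]

# sqlite3 is a stdlib stand-in for PostgreSQL here; only the SQL differs
# slightly (e.g. placeholder style is %s with psycopg2, ? with sqlite3).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (name TEXT, price REAL)")
conn.executemany("INSERT INTO product VALUES (?, ?)", rows)

count, total = conn.execute(
    "SELECT COUNT(*), SUM(price) FROM product"
).fetchone()
print(count, total)
```

Parameterized placeholders (`?`) rather than string formatting are the habit worth building early: they prevent SQL injection when scraped text ends up in queries.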
Posted 3 months ago