
12 Beautiful Soup Jobs

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

2.0 - 5.0 years

1 - 2 Lacs

Bengaluru

Work from Office

Source: Naukri

Build and run scripts to scrape emails, phone numbers, and business data, clean and organize it, analyze insights using Python/Excel, automate workflows, and support lead generation for import-export operations.
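The listing above asks for scripts that pull emails and phone numbers out of raw page text and clean the results. A minimal sketch of that extraction step using only the standard library (the sample text and patterns are illustrative assumptions, not part of the posting):

```python
import re

# Illustrative sample of raw text scraped from a business-directory page.
RAW_TEXT = """
Contact: sales@acme-exports.in, Phone: +91 98765 43210
Support - support@acme-exports.in / 022-4000-1234
"""

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
# Loose pattern for Indian-style phone numbers; real-world formats vary widely.
PHONE_RE = re.compile(r"(?:\+91[\s-]?)?\d{3,5}(?:[\s-]?\d{3,5}){1,2}")

def extract_contacts(text):
    """Return de-duplicated, sorted emails and phone numbers found in the text."""
    return {
        "emails": sorted(set(EMAIL_RE.findall(text))),
        "phones": sorted(set(PHONE_RE.findall(text))),
    }

contacts = extract_contacts(RAW_TEXT)
```

In practice the deduplicated output would then be written to Excel or a database for the lead-generation step the listing mentions.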

Posted 1 week ago

Apply

2.0 - 4.0 years

5 - 7 Lacs

Mumbai

Work from Office


Develop and maintain Python-based applications, with a focus on Flask for web development. Collaborate with cross-functional teams to understand project requirements and translate them into technical solutions.

Responsibilities:
- Design, implement, and maintain data pipelines for collecting, processing, and analysing large datasets.
- Perform exploratory data analysis to identify trends, patterns, and insights.
- Build machine learning models and algorithms to solve business problems and optimize processes.
- Apply machine-learning techniques to analyse and extract insights from large text datasets, including social media data, customer feedback, and user interactions, to inform business decisions and strategy.
- Deploy and monitor data science solutions in production environments.
- Conduct code reviews, testing, and debugging to ensure the quality and reliability of software applications.
- Stay updated with the latest trends and advancements in Python development, data science, and machine learning.

Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- 2+ years of professional experience in Python development and data science.
- Strong proficiency in Python with the Flask framework, and familiarity with relational databases (e.g., MySQL).
- Proficiency in handling and manipulating structured and unstructured data using Python libraries such as Pandas, NumPy, and Beautiful Soup.
- Knowledge of machine learning techniques and libraries (e.g., scikit-learn, TensorFlow).
- Familiarity with creating and managing projects involving language models such as OpenAI's GPT series, including ChatGPT and other prompt-engineering tasks.
- Experience using LLMs to enhance chatbots, virtual assistants, and other conversational AI applications, improving natural language understanding, conversation flow, and response generation.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud Platform.
- Experience with version control systems (e.g., Git).
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration abilities.

Posted 1 week ago

Apply

7.0 - 12.0 years

12 - 22 Lacs

Bengaluru

Remote


Role & Responsibilities: As a Data Engineer focused on web crawling and platform data acquisition, you will:
- Design, develop, and maintain large-scale web scraping pipelines to extract valuable platform data.
- Implement scalable, resilient data extraction solutions, ensuring seamless data retrieval while working with proxy management, anti-bot bypass techniques, and data parsing.
- Optimize scraping workflows for performance, reliability, and efficiency.
- Ensure that all extracted data maintains high quality and integrity.

Preferred Candidate Profile: We are seeking candidates with:
- Strong experience in Python and web scraping frameworks such as Scrapy, Selenium, Playwright, or BeautifulSoup.
- Knowledge of distributed web crawling architectures and job scheduling.
- Familiarity with headless browsers, CAPTCHA-solving techniques, and proxy management to handle dynamic web challenges.
- Experience with data storage solutions, including SQL and cloud storage.
- Understanding of big data technologies like Spark and Kafka (a plus).
- Strong debugging skills to adapt to website structure changes and blockers.
- A proactive, problem-solving mindset and the ability to work effectively in a team-driven environment.
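The proxy-management requirement in this listing usually boils down to rotating outbound identity per request. A toy round-robin rotator as one possible sketch (the proxy addresses and user-agent strings are made-up placeholders; production crawlers typically layer health checks and ban detection on top):

```python
from itertools import cycle

# Placeholder pools -- real deployments load these from a managed proxy service.
PROXIES = ["http://proxy-a:8080", "http://proxy-b:8080", "http://proxy-c:8080"]
USER_AGENTS = ["bot/1.0", "bot/2.0"]

class IdentityRotator:
    """Hands out a (proxy, user-agent) pair per request, round-robin."""

    def __init__(self, proxies, user_agents):
        self._proxies = cycle(proxies)
        self._agents = cycle(user_agents)

    def next_identity(self):
        # Each call advances both pools independently.
        return {"proxy": next(self._proxies),
                "headers": {"User-Agent": next(self._agents)}}

rotator = IdentityRotator(PROXIES, USER_AGENTS)
first = rotator.next_identity()
fourth = None
for _ in range(3):
    fourth = rotator.next_identity()
```

Because the pool sizes differ, the proxy/agent pairings drift over time, which slightly widens the fingerprint space at no extra cost.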

Posted 1 week ago

Apply

2.0 - 3.0 years

6 - 8 Lacs

Noida

Work from Office


About Us: LdotR is an online brand protection service company, offering businesses the right solutions and services to protect, manage, and benefit from their digital assets in the online space. We work across all digital platforms - Domains, Websites, Social Media, Online Marketplaces, and App Stores - to identify, assess, and nullify brand infringements.

About the Role: We are looking for an experienced Data Scraping Specialist to help us extract and structure data from leading social media platforms at scale. The ideal candidate will have hands-on expertise with scraping tools, APIs, and large-scale data processing.

Key Responsibilities:
- Design and develop custom scraping solutions to extract public data from platforms like Instagram, Facebook, X (Twitter), LinkedIn, YouTube, etc.
- Handle large-scale scraping tasks with efficiency and resilience against rate-limiting and platform-specific restrictions.
- Clean, normalize, and structure the scraped data for analysis or downstream applications.
- Maintain scraping scripts to adapt to frequent platform changes.
- Ensure compliance with data protection policies and terms of service.

Required Skills:
- Proficiency in Python and scraping libraries (e.g., Scrapy, BeautifulSoup, Selenium, Playwright).
- Experience with API integration (official or unofficial social media APIs).
- Familiarity with rotating proxies, headless browsers, and CAPTCHA-solving techniques.
- Strong understanding of data structuring formats like JSON, CSV, and databases (MongoDB, PostgreSQL, etc.).
- Experience with cloud-based scraping and storage solutions (AWS/GCP preferred).

Good to Have:
- Knowledge of NLP or data analytics for social media sentiment or trend analysis.
- Understanding of GDPR and CCPA compliance.
- Prior work with third-party scraping platforms or browser automation tools.

What We Offer:
- Opportunity to work on impactful, large-scale data projects.
- Flexible work arrangements.
- Competitive compensation based on experience and delivery.
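The "clean, normalize, and structure" responsibility above is mostly about coercing messy per-platform fields into one schema. A small standard-library sketch (the field names and sample records are invented for illustration, not taken from any real platform API):

```python
import json

# Invented raw records as they might come back from two different platforms.
raw_records = [
    {"user": "  @brandfan ", "likes": "1,204", "platform": "instagram"},
    {"handle": "BrandFan", "favourites": 98, "platform": "x"},
]

def normalize(record):
    """Map platform-specific fields onto a single downstream schema."""
    username = (record.get("user") or record.get("handle") or "")
    username = username.strip().lstrip("@").lower()
    engagement = record.get("likes", record.get("favourites", 0))
    if isinstance(engagement, str):
        # Strip thousands separators from scraped display strings.
        engagement = int(engagement.replace(",", ""))
    return {"username": username, "engagement": engagement,
            "platform": record["platform"]}

clean = [normalize(r) for r in raw_records]
as_json = json.dumps(clean, sort_keys=True)
```

Unifying the identity fields early (as here, where both records resolve to the same username) is what makes cross-platform infringement matching possible downstream.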

Posted 2 weeks ago

Apply

3.0 - 7.0 years

1 - 2 Lacs

Thane, Navi Mumbai, Mumbai (All Areas)

Work from Office


Key Responsibilities:
- Develop and maintain automated web scraping scripts using Python libraries such as Beautiful Soup, Scrapy, and Selenium.
- Optimize scraping pipelines for performance, scalability, and resource efficiency.
- Handle dynamic websites and CAPTCHA-solving, and implement IP rotation techniques for uninterrupted scraping.
- Process and clean raw data, ensuring accuracy and integrity in extracted datasets.
- Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.
- Leverage APIs when web scraping is not feasible, managing authentication and request optimization.
- Document processes, pipelines, and troubleshooting steps for maintainable and reusable scraping solutions.
- Ensure compliance with legal and ethical web scraping practices, implementing security safeguards.

Requirements:
- Education: Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience: 2+ years of Python development experience, with at least 1 year focused on web scraping.
- Technical Skills: Proficiency in Python and libraries like Beautiful Soup, Scrapy, and Selenium. Experience with regular expressions (Regex) for data parsing. Strong knowledge of HTTP protocols, cookies, headers, and user-agent rotation. Familiarity with databases (SQL and NoSQL) for storing scraped data. Hands-on experience with data manipulation libraries such as pandas and NumPy. Experience working with APIs and managing third-party integrations. Familiarity with version control systems like Git.
- Bonus Skills: Knowledge of containerization tools like Docker. Experience with distributed scraping solutions and task queues (e.g., Celery, RabbitMQ). Basic understanding of data visualization tools.
- Non-Technical Skills: Strong analytical and problem-solving skills. Excellent communication and documentation skills. Ability to work independently and collaboratively in a team environment.
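Several listings on this page pair Beautiful Soup with Regex for parsing. A minimal, self-contained example against a hard-coded HTML snippet (the markup and selectors are invented; real pages need selectors matched to their actual structure, and this assumes the `beautifulsoup4` package is installed):

```python
import re
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Invented listing markup standing in for a fetched page.
HTML = """
<ul id="jobs">
  <li class="job"><span class="title">Scraping Engineer</span> 3-7 yrs</li>
  <li class="job"><span class="title">Data Analyst</span> 0-1 yrs</li>
</ul>
"""

soup = BeautifulSoup(HTML, "html.parser")
jobs = []
for li in soup.select("li.job"):
    # Beautiful Soup handles the tree navigation...
    title = li.select_one("span.title").get_text(strip=True)
    # ...while a regex pulls the experience range out of the trailing free text.
    match = re.search(r"(\d+)-(\d+)\s*yrs", li.get_text())
    jobs.append({"title": title,
                 "min_years": int(match.group(1)),
                 "max_years": int(match.group(2))})
```

The division of labor shown here is the common pattern: CSS selectors locate the elements, regexes normalize the semi-structured text inside them.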

Posted 2 weeks ago

Apply

3.0 - 7.0 years

1 - 2 Lacs

Mumbai, Thane, Navi Mumbai

Work from Office


Key Responsibilities:
- Develop and maintain automated web scraping scripts using Python libraries such as BeautifulSoup, Scrapy, and Selenium.
- Optimize scraping pipelines for performance, scalability, and resource efficiency.
- Handle dynamic websites and CAPTCHA-solving, and implement IP rotation techniques for uninterrupted scraping.
- Process and clean raw data, ensuring accuracy and integrity in extracted datasets.
- Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.
- Leverage APIs when web scraping is not feasible, managing authentication and request optimization.
- Document processes, pipelines, and troubleshooting steps for maintainable and reusable scraping solutions.
- Ensure compliance with legal and ethical web scraping practices, implementing security safeguards.

Requirements:
- Education: Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience: 2+ years of Python development experience, with at least 1 year focused on web scraping.
- Technical Skills: Proficiency in Python and libraries like BeautifulSoup, Scrapy, and Selenium. Experience with regular expressions (Regex) for data parsing. Strong knowledge of HTTP protocols, cookies, headers, and user-agent rotation. Familiarity with databases (SQL and NoSQL) for storing scraped data. Hands-on experience with data manipulation libraries such as pandas and NumPy. Experience working with APIs and managing third-party integrations. Familiarity with version control systems like Git.
- Bonus Skills: Knowledge of containerization tools like Docker. Experience with distributed scraping solutions and task queues (e.g., Celery, RabbitMQ). Basic understanding of data visualization tools.
- Non-Technical Skills: Strong analytical and problem-solving skills. Excellent communication and documentation skills. Ability to work independently and collaboratively in a team environment.

Candidates available for face-to-face interviews are preferred.

Posted 3 weeks ago

Apply

0 - 1 years

1 - 3 Lacs

Bengaluru

Work from Office


Title: Python Web Scraping / Web Crawling
Experience Range: 0-1 years
Qualification: BE/MBA
Passout Year: 2023, 2024 only
Walk-in Dates: 2nd, 3rd and 4th April 2025
Timings: 10 AM to 4 PM
Address: Spire Technologies & Solutions Pvt. Ltd., Ajmera Aditya Summit, 2nd Floor, 3D, 7th C Main, 3rd Block, Koramangala, Bangalore 560034.

Job Description: Seeking an experienced Web Crawling Engineer to build and optimize scalable data extraction systems. Must have strong expertise in Python scripting, web crawling, data processing, MongoDB, and Power BI.

Key Responsibilities:
- Develop and maintain crawlers for extracting data from websites, APIs, and complex pages.
- Ensure scripts adapt to website changes over time.
- Process and transform structured/unstructured data into JSON and CSV formats.
- Troubleshoot scraping challenges and optimize performance.
- Apply machine learning techniques, particularly time series analysis, to analyze trends and make predictive insights.
- Work with MongoDB and Elasticsearch for data storage.
- Maintain a strong understanding of HTTP protocols, REST APIs, JavaScript rendering, and browser automation.

Required Skills:
- Python (Scrapy, Selenium, BeautifulSoup, Playwright)
- Regex & shell scripting for data extraction
- NoSQL (MongoDB, Elasticsearch) & API handling
- Data processing & visualization (Power BI)
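The "transform structured/unstructured data into JSON and CSV formats" task in the listing above has a straightforward standard-library shape. A sketch of the CSV half (the records are invented examples of crawl output):

```python
import csv
import io

# Invented crawl results, one dict per fetched page.
records = [
    {"url": "https://example.com/a", "status": 200, "title": "Page A"},
    {"url": "https://example.com/b", "status": 404, "title": "Not Found"},
]

def to_csv(rows):
    """Serialize homogeneous dict records to a CSV string, header first."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

csv_text = to_csv(records)
```

In a real pipeline the same records would typically also be inserted into MongoDB as-is, since they are already JSON-shaped documents.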

Posted 2 months ago

Apply

3 - 7 years

1 - 2 Lacs

Mumbai

Work from Office


Key Responsibilities:
- Develop and maintain automated web scraping scripts using Python libraries such as BeautifulSoup, Scrapy, and Selenium.
- Optimize scraping pipelines for performance, scalability, and resource efficiency.
- Handle dynamic websites and CAPTCHA-solving, and implement IP rotation techniques for uninterrupted scraping.
- Process and clean raw data, ensuring accuracy and integrity in extracted datasets.
- Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.
- Leverage APIs when web scraping is not feasible, managing authentication and request optimization.
- Document processes, pipelines, and troubleshooting steps for maintainable and reusable scraping solutions.
- Ensure compliance with legal and ethical web scraping practices, implementing security safeguards.

Requirements:
- Education: Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience: 2+ years of Python development experience, with at least 1 year focused on web scraping.
- Technical Skills: Proficiency in Python and libraries like BeautifulSoup, Scrapy, and Selenium. Experience with regular expressions (Regex) for data parsing. Strong knowledge of HTTP protocols, cookies, headers, and user-agent rotation. Familiarity with databases (SQL and NoSQL) for storing scraped data. Hands-on experience with data manipulation libraries such as pandas and NumPy. Experience working with APIs and managing third-party integrations. Familiarity with version control systems like Git.
- Bonus Skills: Knowledge of containerization tools like Docker. Experience with distributed scraping solutions and task queues (e.g., Celery, RabbitMQ). Basic understanding of data visualization tools.
- Non-Technical Skills: Strong analytical and problem-solving skills. Excellent communication and documentation skills. Ability to work independently and collaboratively in a team environment.

Candidates available for face-to-face interviews are preferred.

Posted 2 months ago

Apply

2 - 4 years

3 - 7 Lacs

Jaipur

Work from Office


Position Overview: We are seeking a motivated and experienced Web Scraping Developer with 2+ years of hands-on experience to join our team. The ideal candidate will possess strong technical skills in web scraping, data processing, and performance optimization, with expertise in Python, popular scraping frameworks, and large dataset management. You will play a critical role in extracting and processing data from a variety of websites while ensuring compliance with legal and ethical guidelines.

Key Responsibilities:
- Develop and maintain scalable, optimized Python-based web scraping scripts.
- Use web scraping frameworks (Scrapy, Beautiful Soup, Selenium, etc.) to extract data from static and dynamic websites.
- Implement solutions for handling dynamic content using headless browsers like Playwright or Puppeteer.
- Extract and process data efficiently from complex HTML, CSS, JavaScript, and XPath structures.
- Work with large datasets, using tools such as Pandas for data manipulation, cleaning, and processing.
- Ensure proper handling of data export in formats like CSV, JSON, and direct database integration.
- Consume and integrate REST APIs, manage API rate limits, and handle HTTP protocols, cookies, and sessions.
- Manage data storage using relational databases like MySQL, optimising queries and indexing for large datasets.
- Troubleshoot and bypass common anti-scraping techniques such as CAPTCHAs, IP blocking, and user-agent tracking.
- Use tools and techniques like rotating proxies, headless browsers, and CAPTCHA-solving services to mitigate blocking.
- Collaborate with teams using version control systems like Git, perform code reviews, and contribute to collaborative workflows.
- Optimize scraping scripts for performance, including parallel processing or distributed scraping with tools like Celery, Redis, or AWS Lambda.
- Deploy scraping solutions using tools like Docker, AWS, or Google Cloud, and automate scraping tasks with schedulers (e.g., Cron).
- Implement robust error-handling mechanisms and monitor scraping jobs with logging frameworks.
- Stay updated with web scraping trends and ensure that projects comply with web scraping ethics, copyright laws, and website terms of service.

Required Qualifications:
- 2+ years of hands-on experience in web scraping using Python.
- Strong proficiency in Scrapy, Beautiful Soup, Selenium, or similar scraping frameworks.
- Experience with headless browsers like Playwright or Puppeteer for handling complex websites.
- In-depth understanding of HTML, CSS, XPath, and JavaScript for dynamic content interaction.
- Proficiency in data handling with Pandas and experience exporting data in multiple formats (CSV, JSON, databases).
- Strong knowledge of REST APIs and web protocols like HTTP, cookies, and session management.
- Experience managing MySQL databases, query optimization, and indexing for large-scale data.
- Familiarity with anti-scraping techniques and proficiency in bypassing measures like CAPTCHAs and IP blocks.
- Experience with version control tools like Git and familiarity with collaborative workflows and code review processes.
- Hands-on experience in performance optimization, parallel processing, and distributed scraping.
- Knowledge of deploying scraping solutions using Docker, AWS, or Google Cloud.
- Strong problem-solving skills, including error handling and debugging using logging frameworks.
- Awareness of web scraping ethics, copyright laws, and legal compliance with website terms of service.

Preferred Qualifications:
- Familiarity with cloud-based solutions like AWS Lambda, Google Cloud, or Azure for distributed scraping.
- Experience with workflow automation tools and Cron jobs.
- Basic knowledge of frontend development (HTML, CSS, JavaScript) is a plus.
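On the robust error-handling and job-monitoring requirement above: one common pattern is a bounded-retry wrapper that logs each failure. A sketch under simulated conditions (the flaky fetch function is a stand-in; a real job would wrap the actual HTTP call):

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("scraper")

def with_retries(fn, attempts=3):
    """Call fn(); on failure, log a warning and retry up to `attempts` times."""
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception as exc:
            last_error = exc
            log.warning("attempt %d/%d failed: %s", attempt, attempts, exc)
    raise last_error  # exhausted all attempts; surface the last failure

calls = {"n": 0}

def flaky_fetch():
    # Simulated transient failure: errors twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("simulated timeout")
    return "<html>ok</html>"

result = with_retries(flaky_fetch)
```

Production variants usually add exponential backoff between attempts and route the log records to a central monitoring sink rather than stderr.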

Posted 2 months ago

Apply

5 - 10 years

10 - 20 Lacs

Hyderabad

Remote


Job Title: Data Engineer - Medical Data Collection and Aggregation

Job Summary: We are seeking a skilled Data Engineer to join our team, focusing on the collection and aggregation of medical data from diverse sources to fine-tune our models. The ideal candidate will have a strong background in data engineering, with an emphasis on accuracy and reliability in the medical domain.

Key Responsibilities:
- Data Pipeline Development: Design, develop, and maintain scalable data pipelines to collect and process medical data from various sources.
- Data Integration: Aggregate and integrate data from multiple systems, ensuring consistency and quality.
- Collaboration: Work closely with AI/ML engineers and domain experts to understand data requirements and ensure the availability of high-quality data for model fine-tuning.
- Data Quality Assurance: Implement data validation and cleansing procedures to maintain data accuracy and integrity.
- Documentation: Maintain comprehensive documentation of data sources, methodologies, and pipeline processes.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Minimum of 5 years of proven experience as a Data Engineer, preferably in the medical or healthcare domain.
- Strong proficiency in Python and experience with data engineering libraries such as Pandas (data manipulation and analysis), NumPy (numerical computing), SQLAlchemy (database interactions), Apache Airflow (workflow automation), and Beautiful Soup (web scraping).
- Experience with data pipeline and workflow management tools.
- Familiarity with database systems (SQL and NoSQL).
- Understanding of data privacy regulations and best practices in handling sensitive medical data.

Preferred Skills:
- Experience with cloud platforms and services related to data processing.
- Knowledge of machine learning frameworks and model fine-tuning processes.
- Excellent problem-solving skills and attention to detail.

Why Join Us?
- Lead and work on cutting-edge projects in the chemical industry.
- Collaborate with a team of top tech professionals.
- Opportunity for career growth in a dynamic and challenging environment.

How to Apply: Send your updated resume to:
Primary Email: ankitha.reddy@ekshvaku.com
CC: nohita.tammareddy@ekshvaku.com, sreemith.kushal@ekshvaku.com
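The data-quality-assurance responsibility in this listing (validation and cleansing before fine-tuning) can be sketched as a rule-based record filter. The field names and rules below are invented for illustration, not taken from the posting:

```python
def validate_record(record):
    """Return a list of validation errors; an empty list means the record passes."""
    errors = []
    if not record.get("patient_id"):
        errors.append("missing patient_id")
    age = record.get("age")
    if not isinstance(age, int) or not 0 <= age <= 130:
        errors.append("age out of range")
    return errors

def cleanse(records):
    """Split records into (accepted, rejected) based on validation."""
    accepted, rejected = [], []
    for rec in records:
        (rejected if validate_record(rec) else accepted).append(rec)
    return accepted, rejected

good, bad = cleanse([
    {"patient_id": "P001", "age": 42},   # valid
    {"patient_id": "", "age": 42},       # missing identifier
    {"patient_id": "P003", "age": 207},  # implausible age
])
```

Keeping the rejected records (rather than silently dropping them) supports the documentation and audit expectations that come with sensitive medical data.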

Posted 2 months ago

Apply

4 - 8 years

8 - 15 Lacs

Hyderabad

Hybrid


Hello, hope you are doing well. Urgent job openings for Python (SSE - Senior Software Engineer) - (Web Scraping) @ GlobalData (Hyd). Please go through the job description below; if the requirement matches your profile, share your updated resume at m.salim@globaldata.com.

Mention subject line: Applying for Python (SSE - Senior Software Engineer) - (Web Scraping) @ GlobalData (Hyd)

Share your details in the mail:
- Full Name:
- Mobile #:
- Qualification:
- Company Name:
- Designation:
- Total Work Experience (Years):
- Current CTC:
- Expected CTC:
- Notice Period:
- Current Location / willing to relocate to Hyd?:

Office Address: 3rd Floor, Jyoti Pinnacle Building, Opp to Prestige IVY League Appt, Kondapur Road, Hyderabad, Telangana-500081.

Job Description: We are seeking a Senior Software Engineer (Python) with expertise in web scraping to join our team. The ideal candidate will have strong Python development skills, experience in data extraction from various sources, and a deep understanding of web technologies.

Key Responsibilities:
- Develop and maintain scalable web scraping solutions to extract data from various websites.
- Optimize scraping scripts for performance, efficiency, and reliability.
- Ensure compliance with legal and ethical standards for web scraping.
- Work with large datasets, including data cleaning and transformation.
- Collaborate with the team to integrate scraped data into databases or other storage solutions.
- Troubleshoot and resolve scraping-related challenges, including CAPTCHA handling and IP blocking.
- Write well-structured, maintainable, and reusable code.

Requirements:
- 4+ years of Python development experience.
- Strong experience with web scraping frameworks like Scrapy, BeautifulSoup, Selenium, or Playwright.
- Good understanding of HTML, CSS, JavaScript, and browser automation.
- Experience working with APIs (RESTful, GraphQL) and data storage solutions (SQL, NoSQL).
- Familiarity with cloud-based services like AWS, Azure, or GCP is a plus.
- Strong problem-solving skills and ability to work independently.

Thanks & Regards,
Salim (Human Resources)

Posted 2 months ago

Apply

2 - 4 years

4 - 6 Lacs

Mumbai, Goregaon

Work from Office


We are seeking a talented and experienced Python Developer + Data Scientist with a strong background in Flask to join our dynamic team. The ideal candidate will have a passion for leveraging data to drive insights and create impactful solutions, along with proficiency in Python development, particularly with Flask.

Responsibilities:
- Develop and maintain Python-based applications, with a focus on Flask for web development.
- Collaborate with cross-functional teams to understand project requirements and translate them into technical solutions.
- Design, implement, and maintain data pipelines for collecting, processing, and analysing large datasets.
- Perform exploratory data analysis to identify trends, patterns, and insights.
- Build machine learning models and algorithms to solve business problems and optimize processes.
- Apply machine-learning techniques to analyse and extract insights from large text datasets, including social media data, customer feedback, and user interactions, to inform business decisions and strategy.
- Deploy and monitor data science solutions in production environments.
- Conduct code reviews, testing, and debugging to ensure the quality and reliability of software applications.
- Stay updated with the latest trends and advancements in Python development, data science, and machine learning.

Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- 2+ years of professional experience in Python development and data science.
- Strong proficiency in Python with the Flask framework, and familiarity with relational databases (e.g., MySQL).
- Proficiency in handling and manipulating structured and unstructured data using Python libraries such as Pandas, NumPy, and Beautiful Soup.
- Knowledge of machine learning techniques and libraries (e.g., scikit-learn, TensorFlow).
- Familiarity with creating and managing projects involving language models such as OpenAI's GPT series, including ChatGPT and other prompt-engineering tasks.
- Experience using LLMs to enhance chatbots, virtual assistants, and other conversational AI applications, improving natural language understanding, conversation flow, and response generation.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud Platform.
- Experience with version control systems (e.g., Git).
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration abilities.
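The Pandas/NumPy requirement in this listing, applied to the kind of mixed data the role describes: a short cleaning sketch over a tiny invented customer-feedback dataset (assumes pandas is installed; the columns and fill strategy are illustrative choices, not the employer's pipeline):

```python
import pandas as pd

# Invented customer-feedback rows mixing structured and free-text fields.
df = pd.DataFrame({
    "rating": [5, None, 3, 1],
    "comment": ["Great!", "ok", None, "Too slow"],
})

# Typical cleanup: fill missing ratings with the median, drop empty comments.
df["rating"] = df["rating"].fillna(df["rating"].median())
cleaned = df.dropna(subset=["comment"]).reset_index(drop=True)
avg_rating = cleaned["rating"].mean()
```

Exploratory analysis (the next responsibility in the listing) would then operate on `cleaned`, e.g. grouping ratings or tokenizing comments for the text-insight work.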

Posted 2 months ago

Apply