1.0 - 2.0 years
2 - 4 Lacs
Gujarat
Work from Office
Responsibilities & Duties
- Conduct email marketing and LinkedIn marketing for business development.
- Draft and respond to emails professionally on behalf of the company.
- Manage and schedule meetings with potential clients.
- Handle LinkedIn account management for business outreach and networking.
- Perform data scraping for lead generation and reporting.
- Prepare reports related to marketing and lead conversion.
- Maintain effective communication with clients and internal teams.

Skill-set we are looking for
Soft Skills:
- Strong written and verbal communication skills in English to draft proposals and engage effectively with clients.
- Adaptability
- Time management
- Self-starter and collaborative

Technical Skills & Qualifications:
- A bachelor's degree in Computer Science, Marketing, or a related field.
- Email drafting
- LinkedIn pitching
- English communication
- Ability to create professional presentations using PowerPoint, Google Slides, or Canva to pitch ideas and business strategies.
- Comfortable working in tools like Slack, Microsoft Teams, and Zoom for internal communication and virtual meetings.
Posted 3 days ago
5.0 - 10.0 years
10 - 20 Lacs
Jaipur
Remote
Summary
To enhance user profiling and risk assessment, we are building web crawlers to collect relevant user data from third-party sources, forums, and the dark web. We are seeking a Senior Web Crawler & Data Extraction Engineer to design and implement these data collection solutions.

Job Responsibilities
- Design, develop, and maintain web crawlers and scrapers to extract data from open web sources, forums, marketplaces, and the dark web.
- Implement data extraction pipelines that aggregate, clean, and structure data for fraud detection and risk profiling.
- Use Tor, VPNs, and other anonymization techniques to safely crawl the dark web while avoiding detection.
- Develop real-time monitoring solutions for tracking fraudulent activities, data breaches, and cybercrime discussions.
- Optimize crawling speed and ensure compliance with website terms of service, ethical standards, and legal frameworks.
- Integrate extracted data with fraud detection models, risk scoring algorithms, and cybersecurity intelligence tools.
- Work with data scientists and security analysts to develop threat intelligence dashboards from collected data.
- Implement anti-bot detection evasion techniques and handle CAPTCHAs using AI-driven solvers where necessary.
- Stay updated on OSINT (Open-Source Intelligence) techniques, web scraping best practices, and cybersecurity trends.

Requirements
- 5+ years of experience in web crawling, data scraping, or cybersecurity data extraction.
- Strong proficiency in Python, Scrapy, Selenium, BeautifulSoup, Puppeteer, or similar frameworks.
- Experience working with Tor, proxies, and VPNs for anonymous web scraping.
- Deep understanding of HTTP protocols, web security, and bot detection mechanisms.
- Experience parsing structured and unstructured data from JSON, XML, and web pages.
- Strong knowledge of database management (SQL, NoSQL) for storing large-scale crawled data.
- Familiarity with AI/ML-based fraud detection techniques and data classification methods.
- Experience working with cybersecurity intelligence sources, dark web monitoring, and OSINT tools.
- Ability to implement scalable, distributed web crawling architectures.
- Knowledge of data privacy regulations (GDPR, CCPA) and ethical data collection practices.

Nice to Have
- Experience in fintech, fraud detection, or threat intelligence.
- Knowledge of natural language processing (NLP) for analyzing cybercrime discussions.
- Familiarity with machine learning-driven anomaly detection for fraud prevention.
- Hands-on experience with cloud-based big data solutions (AWS, GCP, Azure, Elasticsearch, Kafka).
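At the core of any crawler like the ones this role describes is a crawl frontier: a queue of URLs with deduplication. A minimal dependency-free sketch (the class and example URLs are illustrative, not from the posting):

```python
from collections import deque
from urllib.parse import urljoin, urlparse

class CrawlFrontier:
    """Minimal BFS crawl frontier with URL deduplication."""
    def __init__(self, seeds):
        self.queue = deque(seeds)
        self.seen = set(seeds)

    def add_links(self, base_url, hrefs):
        # Resolve relative links; enqueue only unseen http(s) URLs.
        for href in hrefs:
            url = urljoin(base_url, href)
            if urlparse(url).scheme in ("http", "https") and url not in self.seen:
                self.seen.add(url)
                self.queue.append(url)

    def next_url(self):
        return self.queue.popleft() if self.queue else None

frontier = CrawlFrontier(["https://example.com/"])
# "/a" is deduplicated and the mailto: link is skipped.
frontier.add_links("https://example.com/", ["/a", "/b", "/a", "mailto:x@y"])
```

A production frontier would add per-domain politeness delays and persistence, but the queue-plus-seen-set shape stays the same.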
Posted 6 days ago
3.0 - 6.0 years
5 - 7 Lacs
Bengaluru
Work from Office
Senior Engineer-NLP: Elevate Your Impact Through Learning

Evalueserve is a global leader in delivering innovative and sustainable solutions to a diverse range of clients, including over 30% of Fortune 500 companies. With a presence in more than 45 countries across five continents, we excel in leveraging state-of-the-art technology, artificial intelligence, and unparalleled subject matter expertise to elevate our clients' business impact and strategic decision-making. Our team of over 4,500 talented professionals operates in countries such as India, China, Chile, Romania, the US, and Canada. Our global network also extends to emerging markets like Colombia, the Middle East, and the rest of Asia-Pacific. Recognized by Great Place to Work in India, Chile, Romania, the US, and the UK in 2022, we offer a dynamic, growth-oriented, and meritocracy-based culture that prioritizes continuous learning and skill development, work-life balance, and equal opportunity for all.

About the Innovation and Technology Centre (ITC)
Our Innovation and Technology Centre specializes in creating modular solutions to meet clients' specific needs. As a member of the team, you will have the opportunity to develop and implement digital products, platforms, and tools specifically designed for the research, analytics, and data management domains. We are at the forefront of technological advancements, and our AI efforts rely on our larger platform called AI for Research and Analytics (AIRA).

What you will be doing at Evalueserve
- Building and implementing advanced machine learning (ML) and deep learning (DL) algorithms and models
- Applying different natural language processing (NLP) techniques to problems such as text classification, text summarization, question answering, information retrieval, knowledge extraction, and the design of conversational bots, using traditional and generative AI techniques
- Contributing to the design and development of enterprise-grade generative AI applications, including but not limited to advanced RAG, VDB optimization, LLM evaluation, and fine-tuning
- Designing and developing a practical and analytical approach while maintaining a focus on aspects such as data quality and availability, feasibility, scalability, and turnaround time

What we're looking for
- At least a bachelor's/master's degree in computer science, information systems, statistics, mathematics, or a related field
- Strong understanding of data science processes, such as data investigation, cleaning, minimal viable models, and nuances related to deployment and enhancement
- About 3-8 years of experience in NLP, ML, and DL
- More than one year of experience in generative AI application development at the production level
- Demonstrated ability in developing NLP / ML / DL / generative AI project solutions
- Hands-on experience and deep theoretical expertise in NLU, NLP, NLG, and common NLP algorithms and DL architectures such as Transformers, BERT, word2vec, FastText, and ELMo
- Hands-on experience in building and handling product-level delivery of knowledge graphs, ranking algorithms, recommendation engines, etc.
- In-depth knowledge and experience in handling open-source frameworks such as TensorFlow, PyTorch, and Huggingface Transformers
- Expert-level programming experience in Python / C++
- Familiarity with general software design concepts, product development lifecycles, and ML model deployment best practices
- Experience in analyzing large amounts of user-generated content and processing data in large-scale environments using cloud infrastructure
- Proficiency in scraping data from external sources and developing architectures to store and organize the information for generating insights
- Experience in contributing to open-source software projects
- Experience and tenacity to go beyond available tools / techniques to design solutions in line with product requirements
- Ability to communicate with internal and external stakeholders and convey complex information clearly and concisely
- Understanding of the cultural differences and work styles in a global environment

Want to learn more about our culture and what it's like to work with us? Write to us at: careers@evalueserve.com

Disclaimer: The following job description serves as an informative reference for the tasks you may be required to perform. However, it does not constitute an integral component of your employment agreement and is subject to periodic modifications to align with evolving circumstances.
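As a toy illustration of the text-classification work this role involves: the simplest baseline is nearest-neighbour matching over bag-of-words vectors (far simpler than the Transformer-based methods the posting calls for; the labeled examples below are invented):

```python
import math
from collections import Counter

def cosine_sim(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def classify(text, labeled_examples):
    """Assign the label of the most similar labeled example (1-nearest-neighbour)."""
    return max(labeled_examples, key=lambda ex: cosine_sim(text, ex[0]))[1]

examples = [
    ("refund my payment please", "billing"),
    ("app crashes on login", "bug"),
]
label = classify("payment was charged twice, need refund", examples)
```

Real systems replace the word-count vectors with learned embeddings (word2vec, BERT), but the retrieve-and-compare shape carries over directly to RAG-style pipelines.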
Posted 1 week ago
0.0 - 5.0 years
1 - 3 Lacs
Noida
Hybrid
HEAD FIELD SOLUTIONS PVT. LTD.

Job Description: Email Marketing

Required Skills:
- Candidate should have a minimum of 100-400 email IDs
- Candidate should have experience in a particular industry (e.g., Staffing, Healthcare)
- 0-10 years of experience in an email marketing profile
- Candidate should be comfortable working with target-based jobs (lead generation)
- We work on domain leads
- We work for US clients only

Required Candidate Profile:
- Ability to work in a team and with different personalities
- Ability to work under pressure and to strict deadlines
- Proactive

Benefits:
- Employee-friendly corporate work culture
- Excellent salary structure
- Fixed timings (9:00 AM to 6:00 PM) with Saturday and Sunday off
- Best-in-class infrastructure
- In-house meals available (lunch and tea)
- Strong recognition for our employees, giving them an excellent career path

Company website links: www.headfield.com www.glocalrpo.com
Posted 1 week ago
2.0 - 5.0 years
1 - 2 Lacs
Bengaluru
Work from Office
Build and run scripts to scrape emails, phone numbers, and business data, clean and organize it, analyze insights using Python/Excel, automate workflows, and support lead generation for import-export operations.
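The extraction-and-cleaning step described here can be sketched with Python's standard `re` module alone (the patterns below are simplified assumptions for illustration, not the employer's actual scripts):

```python
import re

# Simplified patterns; real-world email and phone formats vary widely.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s-]{8,}\d")

def extract_contacts(text):
    """Return deduplicated, sorted emails and phone numbers found in raw text."""
    emails = sorted(set(EMAIL_RE.findall(text)))
    phones = sorted(set(p.strip() for p in PHONE_RE.findall(text)))
    return {"emails": emails, "phones": phones}

sample = "Reach us at sales@example.com or +91 98765 43210; backup: ops@example.com"
contacts = extract_contacts(sample)
```

From here, the cleaned records would typically be written to a spreadsheet or CRM for the lead-generation workflow the posting mentions.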
Posted 1 week ago
7.0 - 11.0 years
12 - 19 Lacs
Bengaluru
Work from Office
Responsibilities: As a Data Engineer focused on web crawling and platform data acquisition, you will design, develop, and maintain large-scale web scraping pipelines to extract valuable platform data.

Benefits: Annual bonus, health insurance, provident fund.
Posted 2 weeks ago
2.0 - 7.0 years
3 - 8 Lacs
Noida
Remote
Hi, We are seeking a detail-oriented and analytical Data Scraper to join our team. In this role, you will collect, process, and analyze data to help inform business decisions, uncover trends, and identify opportunities for growth and optimization. Only candidates based in Noida may apply.

Key Responsibilities:
- Collect and clean data from various sources to ensure accuracy and consistency
- Analyze complex datasets to identify trends, patterns, and insights
- Develop dashboards and reports using tools like Tableau, Power BI, or Excel
- Collaborate with cross-functional teams (product, marketing, operations, etc.) to understand data needs and provide actionable insights
- Create and maintain documentation for data models, processes, and reports
- Assist in A/B testing and other statistical analysis to support business decisions
- Monitor performance metrics and make recommendations for improvements
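The A/B-testing support mentioned in this role usually reduces to a two-proportion z-test on conversion rates; a stdlib-only sketch with invented experiment numbers:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic comparing two conversion rates, using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: variant A converts 200/4000, variant B 260/4000.
z = two_proportion_z(200, 4000, 260, 4000)
# |z| > 1.96 suggests a significant difference at the 5% level (two-sided)
```

In practice an analyst would lean on a statistics library for the p-value, but the arithmetic above is what those libraries compute.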
Posted 2 weeks ago
1.0 - 2.0 years
2 - 3 Lacs
Kolkata
Work from Office
We're seeking a sharp Sales Executive with 1-2 years of experience at a digital marketing agency to turn leads into loyal clients. Master outreach, data scraping, and end-to-end sales in a fast-paced, growth-focused agency.
Posted 2 weeks ago
0.0 - 1.0 years
0 Lacs
Kolkata
Remote
Role & Responsibilities:
1. Cold calling
2. Communication skills
3. Should know automation tools
4. Should know Sales Navigator tools
5. Data scraping tools (Apollo, Lusha)

Preferred candidate profile: Candidates with experience in IT sales.
Posted 3 weeks ago
1.0 - 2.0 years
1 - 5 Lacs
Ahmedabad
Work from Office
About the Role
We're looking for a Lead Researcher/Lead Generation Specialist to join our sales team. Your main job will be to find and organize potential leads, ensuring our sales team always has great opportunities to pursue. If you enjoy researching, working with data, and helping teams succeed, this role is for you!

What You Will Do
1. Research and Find Leads: Look for potential customers using online tools like LinkedIn, Google, and other lead generation platforms. Collect important information, such as company name, contact person, email, phone number, and more.
2. Organize Lead Information: Keep all lead details accurate and well-organized in databases or spreadsheets. Regularly update and check the information to make sure it is correct.
3. Qualify Leads: Check if the leads match our ideal customer profile. Rank leads based on their potential to work with us, so the sales team can focus on the best opportunities.
4. Work with the Sales Team: Share your findings with the sales team to help them target the right customers. Understand the team's needs and adjust your research accordingly.
5. Use Tools and Technology: Use tools like Excel, Google Sheets, and CRM software (e.g., HubSpot, Salesforce) to make your work easier. Learn and use new tools to improve the lead generation process.
6. Track and Report Progress: Keep track of how many leads you have found and how well they are performing. Share reports and suggest ways to improve lead generation efforts.

What We Are Looking For
- Someone who loves researching and working with data.
- Basic knowledge of tools like LinkedIn, Excel, and CRMs (e.g., HubSpot, Salesforce).
- Organized, detail-oriented, and able to manage multiple tasks.
- Strong communication skills and a team player.
- Bonus: Experience with B2B sales or lead generation.
Posted 3 weeks ago
2 - 4 years
4 - 4 Lacs
Ahmedabad
Work from Office
1. Data scraping
2. Advanced Excel
3. Communicating with clients, understanding project requirements, and providing training to the team accordingly
4. Strong communication skills (English), both written and verbal

Benefits: Provident fund, annual bonus
Posted 1 month ago
- 2 years
4 - 4 Lacs
Pune
Work from Office
Data scraping from the property mentioned in the case. ETL knowledge will be an added advantage. Basic knowledge of Photoshop. Good knowledge of Excel. Basic understanding of computers and web browsing. Good communication skills.
Posted 1 month ago
3 - 6 years
6 - 10 Lacs
Noida
Work from Office
Python Developer
Location: Sector-1, Noida (Work from Office)
Experience: Minimum 3 years
Education: B.E./B.Tech

Primary Role:
- Perform web scraping and crawling to extract and structure data from various websites.
- Handle data cleaning, transformation, and storage in structured formats.
- Write efficient and scalable Python scripts to manage high-volume data extraction tasks.
- Monitor and manage log files using automation scripts.

Key Skills:
- Proficiency in Python with hands-on experience in web scraping and crawling.
- Strong working knowledge of BeautifulSoup, Selenium, NumPy, Pandas, and Pytest.
- Good understanding of JavaScript, HTML, and SQL (preferably MS SQL).
- Experience with MongoDB is an added advantage.
- Ability to integrate multiple data sources and databases into a single pipeline.
- Solid understanding of: Python threading and multiprocessing; event-driven programming; scalable and modular application design.

Preferred Skills:
- Practical experience in writing and maintaining web crawlers and scrapers.
- Familiarity with anti-bot mechanisms and techniques to bypass them responsibly.
- Exposure to handling large datasets and ensuring data accuracy and completeness.
- Experience with automated testing using Pytest.
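The extract-and-structure work described in roles like this can be sketched with the standard library alone (postings in this set typically use BeautifulSoup; `html.parser` is used here only to keep the sketch dependency-free, and the sample markup is invented):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect (href, link text) pairs from an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        # Only accumulate text while inside an <a> element.
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

page = ('<ul><li><a href="/jobs/1">Python Developer</a></li>'
        '<li><a href="/jobs/2">Data Engineer</a></li></ul>')
parser = LinkExtractor()
parser.feed(page)
```

The extracted pairs would then feed the cleaning and storage stages the posting lists.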
Posted 1 month ago
3 - 7 years
7 - 17 Lacs
Ahmedabad
Work from Office
What we are looking for
The ideal candidate will possess hands-on expertise in designing and deploying advanced web scraping solutions, leveraging Node.js and other technologies. A significant focus will be on overcoming bot detection challenges, building scalable and resilient scraping systems, and ensuring the efficiency and scalability of data acquisition pipelines. This is a highly technical, hands-on role ideal for someone passionate about solving complex scraping and infrastructure challenges.

Things you will be doing
Advanced Web Scraping:
- Develop and maintain high-performance scraping systems using Node.js, Python, or other relevant technologies.
- Handle JavaScript-heavy and asynchronous content using tools like Puppeteer, Playwright, or custom solutions in Node.js.
- Implement advanced bot detection bypass techniques, including: CAPTCHA solving using automation, AI/ML, or third-party services; advanced proxy management and IP rotation strategies; user-agent, cookie, and header spoofing.
- Build robust error-handling mechanisms to adapt to changes in website structures or anti-scraping measures.

Bot Detection and Anti-Scraping Expertise:
- Analyze and reverse-engineer advanced bot detection systems and anti-scraping mechanisms, including rate-limiting, behavioral analysis, and fingerprinting.
- Design and implement techniques to bypass WAFs (Web Application Firewalls) and server-side protections using Node.js libraries and tools.
- Monitor, log, and analyze bot detection patterns to ensure system adaptability.
- Create innovative solutions to blend scraping traffic with legitimate user behavior.

Infrastructure and Networking:
- Architect and maintain scalable infrastructure using containerization tools like Docker and orchestration platforms such as Kubernetes.
- Leverage cloud platforms (AWS, GCP, Azure) for distributed scraping and data acquisition.
- Utilize Node.js and related tools to optimize network configurations for high-throughput scraping, including proxy and load balancer configurations.
- Automate deployment and scaling of scraping systems using CI/CD pipelines.

Performance and Optimization:
- Ensure optimal performance of scraping systems by reducing latency and optimizing resource utilization.
- Develop robust monitoring and logging systems to track and troubleshoot issues in real time.
- Optimize pipelines for scalability, fault tolerance, and high availability.

Compliance and Security:
- Ensure adherence to legal, ethical, and regulatory standards (e.g., GDPR, CCPA) for all scraping activities.
- Safeguard data acquisition systems from detection, blocking, and external threats.
- Respect website terms of service while implementing efficient scraping solutions.

Skills you need in order to succeed in this role
Technical Skills:
- 3+ years of hands-on experience in web scraping or data engineering.
- Expertise in Node.js for building and optimizing scraping systems.
- Deep expertise in handling advanced bot detection systems and anti-scraping mechanisms.
- Strong knowledge of programming languages such as Python and JavaScript.
- Advanced understanding of networking concepts, including HTTP/HTTPS protocols, WebSockets, DNS, and API integrations.
- Experience with containerization tools (Docker) and orchestration platforms (Kubernetes).
- Proficiency in cloud platforms (AWS, GCP, Azure) for scalable data acquisition pipelines.
- Familiarity with tools like Puppeteer, Playwright, Scrapy, or Selenium.

Problem-Solving Expertise:
- Proven ability to reverse-engineer anti-bot measures such as CAPTCHA, IP blocks, and fingerprinting.
- Strong debugging and optimization skills for network and scraping pipelines.
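Proxy and user-agent rotation of the kind this role describes can be sketched as follows. This is a simplified Python illustration (the role itself centers on Node.js, though Python is also listed); the pool entries are made up:

```python
import itertools
import random

# Hypothetical pools; real deployments load these from config or a proxy provider.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/124.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 13_5) Safari/605.1.15",
]
PROXIES = ["http://proxy-a:8080", "http://proxy-b:8080", "http://proxy-c:8080"]

_proxy_cycle = itertools.cycle(PROXIES)
_rng = random.Random(42)  # seeded only to keep the example deterministic

def next_request_profile():
    """Pair the next proxy in round-robin order with a randomly chosen user agent."""
    return {
        "proxy": next(_proxy_cycle),
        "headers": {"User-Agent": _rng.choice(USER_AGENTS)},
    }

profiles = [next_request_profile() for _ in range(4)]
```

Each outgoing request would then use `profile["proxy"]` and `profile["headers"]`; real systems also rotate cookies and TLS fingerprints, which this sketch omits.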
Posted 1 month ago
- 2 years
4 - 5 Lacs
Pune
Work from Office
Data scraping from the property mentioned in the case. ETL knowledge will be an added advantage. Basic knowledge of Photoshop. Good knowledge of Excel. Basic understanding of computers and web browsing. Good communication skills.
Posted 1 month ago
years
3 - 4 Lacs
Pune
Work from Office
Data scraping from the property mentioned in the case. ETL knowledge will be an added advantage. Basic knowledge of Photoshop. Good knowledge of Excel. Basic understanding of computers and web browsing. Good communication skills.

Required candidate profile: Freshers can apply.
Posted 1 month ago
- 1 years
1 - 1 Lacs
Noida
Work from Office
Responsibilities: Manage data, execute email campaigns, scrape leads, oversee social media presence, and generate online leads through mass mailings and social media for international clients. Data extraction, Instagram posting, and LinkedIn posting.
Posted 1 month ago
years
3 - 4 Lacs
Pune
Work from Office
Data scraping from the property mentioned in the case. ETL knowledge will be an added advantage. Basic knowledge of Photoshop. Good knowledge of Excel. Basic understanding of computers and web browsing. Good communication skills.

Required candidate profile: Freshers can apply.
Posted 1 month ago
5 - 7 years
6 - 9 Lacs
Kolkata
Work from Office
Overview
We are seeking a highly experienced Web Scraping Expert (Python) specialising in Scrapy-based web scraping and large-scale data extraction. This role is focused on building and optimizing web crawlers, handling anti-scraping measures, and ensuring efficient data pipelines for structured data collection. The ideal candidate will have 5+ years of hands-on experience developing Scrapy-based scraping solutions, implementing advanced evasion techniques, and managing high-volume web data extraction. You will collaborate with a cross-functional team to design, implement, and optimize scalable scraping systems that deliver high-quality, structured data for critical business needs.

Key Responsibilities
Scrapy-based Web Scraping Development:
- Develop and maintain scalable web crawlers using Scrapy to extract structured data from diverse sources.
- Optimize Scrapy spiders for efficiency, reliability, and speed while minimizing detection risks.
- Handle dynamic content using middlewares, browser-based scraping (Playwright/Selenium), and API integrations.
- Implement proxy rotation, user-agent switching, and CAPTCHA-solving techniques to bypass anti-bot measures.

Advanced Anti-Scraping Evasion Techniques:
- Utilize AI-driven approaches to adapt to bot detection and prevent blocks.
- Implement headless browser automation and request-mimicking strategies to mimic human behavior.

Data Processing & Pipeline Management:
- Extract, clean, and structure large-scale web data into structured formats like JSON, CSV, and databases.
- Optimize Scrapy pipelines for high-speed data processing and storage in MongoDB, PostgreSQL, or cloud storage (AWS S3).

Code Quality & Performance Optimization:
- Write clean, well-structured, and maintainable Python code for scraping solutions.
- Implement automated testing for data accuracy and scraper reliability.
- Continuously improve crawler efficiency by minimizing IP bans, request delays, and resource consumption.

Required Skills and Experience
Technical Expertise:
- 5+ years of professional experience in Python development with a focus on web scraping.
- Proficiency in Scrapy-based scraping.
- Strong understanding of HTML, CSS, JavaScript, and browser behavior.
- Experience with Docker will be a plus.
- Expertise in handling APIs (RESTful and GraphQL) for data extraction.
- Proficiency in database systems like MongoDB and PostgreSQL.
- Strong knowledge of version control systems like Git and collaboration platforms like GitHub.

Key Attributes:
- Strong problem-solving and analytical skills, with a focus on efficient solutions for complex scraping challenges.
- Excellent communication skills, both written and verbal.
- A passion for data and a keen eye for detail.

Why Join Us?
- Work on cutting-edge scraping technologies and AI-driven solutions.
- Collaborate with a team of talented professionals in a growth-driven environment.
- Opportunity to influence the development of data-driven business strategies through advanced scraping techniques.
- Competitive compensation and benefits.
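The reliability goals in postings like this one (minimizing bans, request delays, and wasted retries) usually come down to polite retry logic with backoff. A dependency-free sketch, with a stand-in `fetch` callable in place of a real Scrapy downloader:

```python
import time

def fetch_with_retries(fetch, url, max_attempts=4, base_delay=0.01):
    """Retry a fetch callable with exponential backoff on failure.

    `fetch` is a stand-in for a real downloader; it should return the
    response body on success or raise on a failed/blocked request.
    """
    for attempt in range(max_attempts):
        try:
            return fetch(url)
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...

# Simulated downloader that fails twice, then succeeds.
calls = {"n": 0}
def flaky_fetch(url):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("simulated block")
    return f"<html>body of {url}</html>"

body = fetch_with_retries(flaky_fetch, "https://example.com/item/1")
```

Scrapy's built-in RetryMiddleware and AutoThrottle cover the same ground in production; the sketch shows the mechanism they implement.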
Posted 2 months ago
2 - 4 years
3 - 7 Lacs
Jaipur
Work from Office
Position Overview:
We are seeking a motivated and experienced Web Scraping Developer with 2+ years of hands-on experience to join our team. The ideal candidate will possess strong technical skills in web scraping, data processing, and performance optimization, with expertise in Python, popular scraping frameworks, and large dataset management. You will play a critical role in extracting and processing data from a variety of websites while ensuring compliance with legal and ethical guidelines.

Key Responsibilities:
- Develop and maintain scalable and optimized Python-based web scraping scripts.
- Use web scraping frameworks (Scrapy, Beautiful Soup, Selenium, etc.) to extract data from static and dynamic websites.
- Implement solutions for handling dynamic content using headless browsers like Playwright or Puppeteer.
- Extract and process data efficiently from complex HTML, CSS, JavaScript, and XPath structures.
- Work with large datasets, using tools such as Pandas for data manipulation, cleaning, and processing.
- Ensure proper handling of data export in formats like CSV, JSON, and direct database integration.
- Consume and integrate REST APIs, manage API rate limits, and handle HTTP protocols, cookies, and sessions.
- Manage data storage using relational databases like MySQL, optimising queries and indexing for large datasets.
- Troubleshoot and bypass common anti-scraping techniques such as CAPTCHAs, IP blocking, and user-agent tracking.
- Use tools and techniques like rotating proxies, headless browsers, and CAPTCHA-solving services to mitigate blocking.
- Collaborate with teams using version control systems like Git, perform code reviews, and contribute to collaborative workflows.
- Optimize scraping scripts for performance, including parallel processing or distributed scraping with tools like Celery, Redis, or AWS Lambda.
- Deploy scraping solutions using tools like Docker, AWS, or Google Cloud, and automate scraping tasks with schedulers (e.g., Cron).
- Implement robust error-handling mechanisms and monitor scraping jobs with logging frameworks.
- Stay updated with web scraping trends and ensure that projects comply with web scraping ethics, copyright laws, and website terms of service.

Required Qualifications:
- 2+ years of hands-on experience in web scraping using Python.
- Strong proficiency in Scrapy, Beautiful Soup, Selenium, or similar scraping frameworks.
- Experience with headless browsers like Playwright or Puppeteer for handling complex websites.
- In-depth understanding of HTML, CSS, XPath, and JavaScript for dynamic content interaction.
- Proficiency in data handling with Pandas and experience in exporting data in multiple formats (CSV, JSON, databases).
- Strong knowledge of REST APIs and working with web protocols like HTTP, cookies, and session management.
- Experience in managing MySQL databases, query optimization, and indexing for large-scale data.
- Familiarity with anti-scraping techniques and proficiency in bypassing measures like CAPTCHAs and IP blocks.
- Experience with version control tools like Git and familiarity with collaborative workflows and code review processes.
- Hands-on experience in performance optimization, parallel processing, and distributed scraping.
- Knowledge of deploying scraping solutions using Docker, AWS, or Google Cloud.
- Strong problem-solving skills, including error-handling and debugging using logging frameworks.
- Awareness of web scraping ethics, copyright laws, and legal compliance with website terms of service.

Preferred Qualifications:
- Familiarity with cloud-based solutions like AWS Lambda, Google Cloud, or Azure for distributed scraping.
- Experience with workflow automation tools and Cron jobs.
- Basic knowledge of frontend development (HTML, CSS, JavaScript) is a plus.
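The clean-and-export step this posting describes (CSV, JSON, database-ready records) can be sketched with the standard library alone. The posting mentions Pandas; `csv` and `json` are used here only to keep the sketch dependency-free, and the record fields are invented:

```python
import csv
import io
import json

# Invented records standing in for scraped rows.
records = [
    {"title": "Widget A", "price": "199", "in_stock": "yes"},
    {"title": "Widget B", "price": "249", "in_stock": "no"},
]

def clean(record):
    """Normalize scraped string fields into typed values."""
    return {
        "title": record["title"].strip(),
        "price": int(record["price"]),
        "in_stock": record["in_stock"] == "yes",
    }

cleaned = [clean(r) for r in records]

# JSON export.
json_blob = json.dumps(cleaned)

# CSV export (to an in-memory buffer here; a file path in practice).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["title", "price", "in_stock"])
writer.writeheader()
writer.writerows(cleaned)
csv_blob = buf.getvalue()
```

With Pandas the same pipeline collapses to `pd.DataFrame(records)` plus `.to_csv()` / `.to_json()`, but the typing decisions in `clean` still have to be made somewhere.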
Posted 2 months ago
0 - 2 years
0 - 2 Lacs
Ghaziabad
Work from Office
UiPath Trainee & Junior Developer

About the Role: We are looking for a UiPath Trainee & Junior Developer to join our automation team. This role is ideal for individuals passionate about Robotic Process Automation (RPA) and eager to build automation solutions using UiPath.

Key Responsibilities:
- Learn and develop automation workflows using UiPath.
- Assist in designing, developing, testing, and deploying RPA solutions.
- Collaborate with business analysts and senior developers to understand automation requirements.
- Debug and optimize workflows for performance and scalability.
- Maintain proper documentation for developed bots and processes.
- Stay updated with UiPath best practices and industry trends.

Requirements:
For Trainee Role:
- Basic understanding of UiPath and RPA concepts.
- Knowledge of any programming language (Python, C#, Java, or VB).
- Strong analytical and problem-solving skills.
- Willingness to learn and adapt to automation technologies.

For Junior Developer Role:
- 6 months to 2 years of experience in UiPath development.
- Hands-on experience with UiPath Studio, Orchestrator, and Robots.
- Understanding of selectors, data scraping, and workflow automation.
- Ability to integrate UiPath with APIs and databases.
- Familiarity with version control (Git) and debugging RPA processes.

Preferred Skills:
- Certification as a UiPath RPA Developer is a plus.
- Experience with SQL, REST APIs, and web automation.
- Exposure to Agile methodologies and CI/CD processes.
Posted 2 months ago
10 - 12 years
30 Lacs
Ahmedabad, Hyderabad, Kolkata
Work from Office
Dear Candidate,
We are looking for a skilled Python Software Developer to design, develop, and optimize high-performance applications. You will work on building scalable back-end services, APIs, and data-driven applications while ensuring code quality and efficiency. If you are passionate about Python development, cloud technologies, and automation, we'd love to hear from you!

Key Responsibilities:
- Develop, test, and maintain backend applications and RESTful APIs using Python.
- Work with Django, Flask, or FastAPI frameworks to build scalable web applications.
- Optimize performance, security, and scalability of applications.
- Integrate third-party services, APIs, and data pipelines.
- Work with SQL and NoSQL databases (PostgreSQL, MySQL, MongoDB, Redis).
- Implement authentication and authorization mechanisms (JWT, OAuth, LDAP).
- Write clean, maintainable, and well-documented code following best practices.
- Conduct unit testing and test-driven development (TDD) using PyTest or Unittest.
- Implement CI/CD pipelines and automation for efficient deployments.
- Collaborate with DevOps teams to deploy applications on AWS, Azure, or GCP.
- Stay updated with Python libraries, frameworks, and industry trends.

Required Skills & Qualifications:
- Strong proficiency in Python and object-oriented programming (OOP).
- Experience with Django, Flask, or FastAPI for web application development.
- Familiarity with SQL and NoSQL databases (PostgreSQL, MySQL, MongoDB, Redis).
- Knowledge of RESTful APIs, GraphQL, and WebSockets.
- Experience with Docker and Kubernetes for containerized deployments.
- Hands-on experience with cloud platforms (AWS, GCP, or Azure).
- Strong understanding of data structures, algorithms, and system design.
- Experience with Celery, RabbitMQ, or Kafka for background task processing.
- Proficiency in Git and version control workflows (GitHub, GitLab, Bitbucket).
- Familiarity with CI/CD pipelines and automation (Jenkins, GitHub Actions, Azure DevOps).
- Strong problem-solving, debugging, and analytical skills.

Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent communication skills to work with cross-functional teams.
- Ability to work independently and as part of a team.
- Detail-oriented with a focus on delivering high-quality solutions.

Note: If you are interested, please share your updated resume and suggest the best number and time to connect with you. If your resume is shortlisted, one of the HRs from my team will contact you as soon as possible.

Srinivasa Reddy Kandi
Delivery Manager
Integra Technologies
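JWT authentication, listed among the responsibilities above, reduces to an HMAC-signed token. A minimal HS256 sketch using only the standard library (production code should use a maintained library such as PyJWT; the secret and claims here are invented):

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT spec requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: bytes) -> str:
    """Build a JWT in the form header.payload.signature with HS256."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

token = sign_jwt({"sub": "user-123", "role": "admin"}, b"demo-secret")
```

This sketch omits the `exp`/`iat` claim checks and algorithm validation that a real library performs, which is precisely why a maintained library is the right choice in production.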
Posted 2 months ago
3 - 7 years
9 - 15 Lacs
Kolkata
Work from Office
Areas of Responsibilities
- Develop and implement RPA solutions using Microsoft Power Automate to streamline and automate business workflows.
- Design and build complex automation workflows that integrate with multiple applications and systems such as Microsoft 365, SharePoint, Dynamics, and third-party tools.
- Collaborate with stakeholders to understand current business processes, identify automation opportunities, and gather automation requirements.
- Create and maintain process documentation (e.g., process maps, technical specs) to ensure transparency and traceability in automation workflows.
- Test and debug RPA workflows to ensure accurate and efficient execution, and troubleshoot any issues that arise during development or after deployment.
- Monitor, maintain, and optimize existing automation workflows to ensure long-term performance and scalability.
- Support and train internal teams in using Power Automate and other Microsoft Power Platform tools, ensuring effective knowledge transfer.
- Stay updated with the latest features and updates in Microsoft Power Automate and Power Platform, continuously improving automation strategies.
- Work closely with IT and security teams to ensure that all automated workflows comply with organizational policies and security standards.

Required Skill Set
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3 - 5 years of hands-on experience with Microsoft Power Automate, specifically in creating RPA workflows.
- Solid understanding of RPA best practices and experience implementing RPA in a business setting.
- Experience integrating Power Automate with other Microsoft services such as PowerApps, Power BI, SharePoint, and Microsoft Teams.
- Proficiency in using AI Builder, Power Virtual Agents, and other Power Platform tools to enhance automation capabilities.
- Strong problem-solving skills with the ability to identify and implement solutions to complex business challenges.
- Familiarity with REST APIs and other integration mechanisms to connect Power Automate to external systems and services.
- Ability to create and manage Power Automate workflows using both cloud flows and desktop flows (for desktop automation).
- Excellent communication skills and the ability to work collaboratively with stakeholders across departments.

Preferred Qualifications
- Microsoft certifications in Power Automate or Power Platform are highly desirable.
- Experience with RPA tools like UiPath, Automation Anywhere, or Blue Prism is a plus.
- Basic understanding of PowerShell scripting or other scripting languages.
- Experience with Azure Automation or Azure Logic Apps.
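The REST-API integration point in this role usually means calling, or being called by, a cloud flow's "When an HTTP request is received" trigger. A minimal Python sketch of preparing such a call — the flow URL, payload fields, and values below are illustrative placeholders, not a real endpoint:

```python
import json
from urllib import request

# Hypothetical HTTP-trigger URL of a Power Automate cloud flow; real flows
# expose a similarly shaped Logic Apps URL that includes a signature parameter.
FLOW_URL = ("https://prod-00.westus.logic.azure.com:443/workflows/abc123"
            "/triggers/manual/paths/invoke?api-version=2016-06-01")

def build_trigger_payload(employee: str, approver: str, amount: float) -> dict:
    """Assemble the JSON body the flow's HTTP trigger expects
    (the schema itself is defined inside the flow)."""
    return {"employee": employee, "approver": approver, "amount": amount}

def trigger_flow(payload: dict) -> request.Request:
    """Prepare a POST request that would start a flow run."""
    return request.Request(
        FLOW_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

payload = build_trigger_payload("asha@example.com", "lead@example.com", 1250.0)
req = trigger_flow(payload)
# request.urlopen(req) would actually start the flow; omitted here because
# the URL above is only a placeholder.
print(req.get_method(), json.loads(req.data)["amount"])
```

The same shape works in reverse: a flow's HTTP action can POST to any external REST service, which is how Power Automate typically bridges to systems outside the Microsoft stack.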
Posted 2 months ago
2 - 5 years
4 - 7 Lacs
Ahmedabad
Work from Office
Who are we?
MeasureOne, the leading consumer-permissioned data exchange platform, transforms the way in which businesses access and use consumer data. MeasureOne empowers organizations to access a wide range of trusted consumer data while prioritizing privacy and consent. Through MeasureOne's platform, businesses can confidently and reliably integrate and verify consumer information such as income, employment, education, and student enrollment. MeasureOne offers flexible implementation options for businesses to easily leverage consumer-permissioned data, from a developer-friendly API to third-party integrations. MeasureOne is headquartered in San Francisco.

What we are looking for
We are looking to hire an experienced information analyst to process documents with the desired quality and speed. The ideal candidate will work according to published guidelines, resolve customer queries, and collaborate with team members.

Things you will be doing
- Key, scan, and transcribe a high volume of data/documents from a variety of sources as per the published guidelines.
- Create and maintain scripts and programs to gather data from web pages.
- Research, analyze, and sort information as needed.
- Work extensively on different types of Excel spreadsheets simultaneously.
- Generate ad-hoc reports, operation reports, and status reports as and when needed.
- Work extensively on data and web research.
- Identify and provide recommendations for process improvements based on best practices.
- Proactively provide work product updates to a manager, including the timing of deliverables, challenges, and recommended solutions.
- Investigate discrepancies, fill gaps in incomplete records, and resolve other problems.
- Monitor ongoing tasks and highlight areas that need attention.

Skills you need in order to succeed in this role
- Strong organizational and time-management skills.
- Proven experience in web scraping and web crawling.
- Knowledge of data storage and retrieval techniques.
- Strong research skills.
- Strong understanding of HTML, CSS, JavaScript, web protocols, and MS Excel/Google Sheets.
- Works with the team to overcome challenges.
- Strives to deliver a quality product with efficiency.
- Flexibility to work in shifts and on Indian/US holidays, as needed.
- Excellent problem-solving skills and attention to detail.
- Experience in scraping in the education, employment, and income domains is an added advantage.

Qualification
Bachelor's degree in any stream.

Remuneration
Remuneration is no bar for the right candidate.
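The "scripts to gather data from web pages" part of this role often comes down to extracting structured fields from page HTML. A minimal sketch using only Python's standard library — the page snippet and field values below are made up for illustration:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect (href, link text) pairs from an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []      # finished (href, text) pairs
        self._href = None    # href of the <a> tag we are currently inside
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

# Toy page standing in for a fetched document (urllib.request or a scraping
# framework would supply the real HTML in practice).
page = ('<ul><li><a href="/jobs/1">Data Analyst</a></li>'
        '<li><a href="/jobs/2">Web Researcher</a></li></ul>')
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # [('/jobs/1', 'Data Analyst'), ('/jobs/2', 'Web Researcher')]
```

In production such extraction is usually done with dedicated parsers like BeautifulSoup, but the stdlib version above shows the underlying event-driven parsing model.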
Posted 2 months ago
8 - 13 years
15 - 20 Lacs
Hyderabad
Hybrid
Hello,

Hope you are doing well. Urgent job openings for Python Lead (Data Scraping Lead) @ GlobalData (Hyd). The job description is given below; please go through it to understand the requirement. If the requirement matches your profile, share your updated resume at m.salim@globaldata.com.

Mention Subject Line: Applying for Python Lead (Data Scraping Lead) @ GlobalData (Hyd)

Share your details in the mail:
Full Name:
Mobile #:
Qualification:
Company Name:
Designation:
Total Work Experience (Years):
Current CTC:
Expected CTC:
Notice Period:
Current Location / willing to relocate to Hyd?:

Office Address: 3rd Floor, Jyoti Pinnacle Building, Opp. Prestige IVY League Appt, Kondapur Road, Hyderabad, Telangana - 500081.

Job Description:
We are looking for a Data Scraping Lead (Python Lead) to design and oversee large-scale web scraping solutions. The ideal candidate should have extensive experience in Python development, web scraping techniques, and managing data extraction pipelines.

Key Responsibilities:
- Lead the architecture, design, and development of large-scale web scraping projects.
- Optimize scraping strategies for efficiency, accuracy, and scalability.
- Manage a team of developers, provide technical guidance, and review code.
- Handle dynamic content, CAPTCHA solving, and proxy management.
- Develop and maintain data validation and deduplication processes.
- Ensure adherence to best practices, including legal and ethical scraping guidelines.
- Work closely with data engineers and analysts to ensure seamless integration of extracted data.

Requirements:
- 8+ years of experience in Python development with a focus on web scraping.
- Expertise in frameworks like Scrapy, Selenium, Playwright, or Puppeteer.
- Strong knowledge of HTML parsing, JavaScript execution, and browser automation.
- Experience with large-scale data processing; cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes) are an advantage.
- Familiarity with database management systems (MSSQL, MySQL).
- Experience leading teams and mentoring junior developers.
- Excellent problem-solving and analytical skills.

Thanks & Regards,
Salim (Human Resources)
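The validation and deduplication responsibility above can be as simple as normalizing each scraped record and fingerprinting it; a minimal sketch, with field names that are illustrative rather than from any particular pipeline:

```python
import hashlib

def normalize(record: dict) -> dict:
    """Canonicalize a scraped record so trivial variations
    (case, stray whitespace, key order) don't defeat dedup."""
    return {k: str(v).strip().lower() for k, v in sorted(record.items())}

def fingerprint(record: dict) -> str:
    """Stable SHA-256 hash of the normalized record."""
    canon = "|".join(f"{k}={v}" for k, v in normalize(record).items())
    return hashlib.sha256(canon.encode("utf-8")).hexdigest()

def dedupe(records):
    """Keep the first occurrence of each distinct record."""
    seen, unique = set(), []
    for rec in records:
        fp = fingerprint(rec)
        if fp not in seen:
            seen.add(fp)
            unique.append(rec)
    return unique

raw = [
    {"company": "Acme Corp", "title": "Engineer"},
    {"company": "  acme corp ", "title": "ENGINEER"},  # same record, noisy copy
    {"company": "Globex", "title": "Analyst"},
]
print(len(dedupe(raw)))  # 2
```

At crawl scale the `seen` set would typically live in a database or a probabilistic structure like a Bloom filter rather than in memory, but the normalize-then-fingerprint pattern stays the same.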
Posted 2 months ago