3.0 - 7.0 years
0 Lacs
delhi
On-site
As a skilled and motivated Python Developer with 3 to 6 years of experience, you will be joining our dynamic development team. Your primary responsibility will be to build high-quality, scalable back-end systems, integrate APIs, and contribute to all phases of the software development lifecycle. Knowledge of Robotic Process Automation (RPA) and web scraping tools will be beneficial for this role.

Your key responsibilities will include designing, developing, testing, and deploying scalable and secure Python applications. You will work on server-side logic, RESTful APIs, microservices, and database models while adhering to coding best practices. Collaboration with front-end developers, DevOps engineers, and product teams will be essential for successful project outcomes. Troubleshooting, debugging, and upgrading existing applications, as well as participating in code reviews and maintaining code documentation, will also be part of your routine tasks. You will also be expected to optimize applications for performance and scalability, ensure application security and data protection, and design and implement user interfaces using WinForms or WPF.

To excel in this role, you should possess 3-6 years of hands-on experience in Python development. Strong knowledge of Python frameworks like Django, Flask, or FastAPI, experience with RESTful APIs and integrating third-party services, and a good understanding of databases such as PostgreSQL, MySQL, or MongoDB are essential requirements. Familiarity with version control tools like Git and experience with cloud platforms like AWS, Azure, or GCP will be advantageous.

This is a full-time position based on Barakhamba Road, New Delhi, where you will be expected to work in person. If you are excited about this opportunity, feel free to contact the employer at +91 9899129159. The expected start date for this position is 01/08/2025.
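The frameworks named in the posting (Django, Flask, FastAPI) are third-party; as a dependency-free sketch of the REST-plus-database-model pattern it describes, the standard library alone can serve a JSON resource. The `JOBS` table, route, and field names below are invented for illustration:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# In-memory stand-in for a real database model (PostgreSQL/MySQL in the posting).
JOBS = {1: {"id": 1, "title": "Python Developer", "location": "New Delhi"}}

class JobHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route GET /jobs/<id> to a JSON response; anything else is a 404.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "jobs" and parts[1].isdigit():
            job = JOBS.get(int(parts[1]))
            if job is not None:
                body = json.dumps(job).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_error(404)

    def log_message(self, *args):
        # Keep request logging quiet for the sketch.
        pass

def serve_once(port=0):
    """Start the API on a background thread; port=0 picks a free port."""
    server = ThreadingHTTPServer(("127.0.0.1", port), JobHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A real application would swap the dictionary for an ORM model and the handler for a framework view, but the request/response shape is the same.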
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
Alien Technology Transfer empowers top-class innovators to transform visionary product concepts into commercial realities. The company helps companies with concrete growth ambitions to secure funding for their product development through its innovation consulting expertise. With a track record of raising more than $500 million for Small and Medium Enterprises (SMEs) across domains such as agri-tech, artificial intelligence, biotechnology, clean-tech, cyber-security, quantum computing, space, and transport, Alien Technology Transfer is now seeking an Innovation Scout.

As an Innovation Scout, you will be responsible for sourcing and analyzing the highest-quality prospect clients for the business lines among innovative, high-tech, high-impact start-ups and SMEs in Europe and the USA. To excel in this role, you must possess a genuine interest in technology and business, quickly understand complex engineering or medical innovations, be highly analytical and articulate, and have a strong command of English. Additionally, being a team player, well-organized, and eager to take on responsibility are key qualities. Demonstrating an entrepreneurial mindset, high self-motivation, and the ability to work in an ambitious and goal-driven environment are also essential.

Your duties and responsibilities will include:
- Keeping yourself updated in the technological and business field to identify business opportunities and industry/market trends effectively.
- Identifying potential clients from web platforms, online databases, and events.
- Monitoring innovative project financing and fund-raising trends.
- Handling and analyzing databases to deliver appropriate results.
- Evaluating information related to innovative technologies and businesses.
- Maintaining and enriching internal databases for prospects, clients, and public grant awardees.
- Preparing reports on funding trends to define yearly targets.
Job requirements include:
- A Master's degree in Life Science (Bioengineering, Biomedical, Biotechnology, Neurosciences, Biochemistry, Microbiology, etc.).
- Advanced Excel skills and the ability to master professional industry databases.
- Proficiency in using digital resources to uncover leads.
- A self-starting, inquisitive, and pragmatic attitude.
- Fluent English communication skills.
- Knowledge of Python programming for web scraping/extraction mechanisms is a plus.
- Ideas for creating web scrapers for start-up data extraction are an added advantage.

Please note that due to the high volume of applications, individual feedback on application outcomes cannot be provided.
Posted 1 month ago
2.0 - 6.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Be a part of a team that harnesses advanced AI, ML, and big data technologies to develop a cutting-edge healthcare technology platform, delivering innovative business solutions.

Job Title: Data Engineer II / Senior Data Engineer
Job Location: Bengaluru, Pune - India

Job Summary: We are a leading Software as a Service (SaaS) company that specializes in the transformation of data in the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions. We are looking for Software Developers who continually strive to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment.

Responsibilities:
- Design, develop, and maintain robust and scalable ETL/ELT pipelines to ingest and transform large datasets from various sources.
- Optimize and manage databases (SQL/NoSQL) to ensure efficient data storage, retrieval, and manipulation for both structured and unstructured data.
- Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems.
- Implement and maintain data validation and monitoring processes to ensure data accuracy, consistency, and availability.
- Automate repetitive data engineering tasks and optimize data workflows for performance and scalability.
- Work closely with cross-functional teams to understand their data needs and provide solutions that help scale operations.
- Ensure proper documentation of data engineering processes, workflows, and infrastructure for easy maintenance and scalability.
- Design, develop, test, deploy, and maintain large-scale data pipelines using AWS services such as S3, Glue, Lambda, and Step Functions.
- Collaborate with cross-functional teams to gather requirements and design solutions for complex data engineering projects.
- Develop ETL/ELT pipelines using Python scripts and SQL queries to extract insights from structured and unstructured data sources.
- Implement web scraping techniques to collect relevant data from various websites and APIs.
- Ensure high availability of the system by implementing monitoring tools like CloudWatch.

Desired Profile:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 3-5 years of hands-on experience as a Data Engineer or in a related data-driven role.
- Strong experience with ETL tools like Apache Airflow, Talend, or Informatica.
- Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
- Strong proficiency in Python, Scala, or Java for data manipulation and pipeline development.
- Experience with cloud-based platforms (AWS, Google Cloud, Azure) and their data services (e.g., S3, Redshift, BigQuery).
- Familiarity with big data processing frameworks such as Hadoop, Spark, or Flink.
- Experience with data warehousing concepts and building data models (e.g., Snowflake, Redshift).
- Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA).
- Familiarity with version control systems like Git.

HiLabs is an equal opportunity employer (EOE). No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability, or age, nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. HiLabs is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce to support individual growth and superior business results. Thank you for reviewing this opportunity with HiLabs!
If this position appears to be a good fit for your skillset, we welcome your application.
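The extract-transform-load loop at the heart of roles like this can be sketched end to end with the standard library alone; the records, schema, and cleaning rules below are invented for illustration, not drawn from any real pipeline:

```python
import sqlite3

# Raw rows as they might arrive from an upstream source; one charge is malformed.
RAW_ROWS = [
    {"patient_id": "P1", "charge": "1200.50", "state": "ca"},
    {"patient_id": "P2", "charge": "n/a", "state": "NY"},
    {"patient_id": "P3", "charge": "880.00", "state": "tx"},
]

def transform(rows):
    """Drop rows with unparseable charges; normalise state codes to uppercase."""
    clean = []
    for row in rows:
        try:
            charge = float(row["charge"])
        except ValueError:
            continue  # a production pipeline would route this to a dead-letter store
        clean.append((row["patient_id"], charge, row["state"].upper()))
    return clean

def load(rows, conn):
    """Load transformed tuples into the warehouse table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS charges (patient_id TEXT, charge REAL, state TEXT)"
    )
    conn.executemany("INSERT INTO charges VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")  # stand-in for Redshift/Snowflake/Postgres
load(transform(RAW_ROWS), conn)
total = conn.execute("SELECT COUNT(*), SUM(charge) FROM charges").fetchone()
```

The same extract/transform/load split scales up directly: Airflow or Glue orchestrates the steps, and the in-memory SQLite target becomes a real warehouse.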
Posted 1 month ago
0.0 - 4.0 years
0 Lacs
noida, uttar pradesh
On-site
You should have knowledge of web (PHP) development with PHP, CakePHP, MySQL, jQuery, JavaScript, AJAX, Linux, JSON, and XML.

On the back end, you should be proficient in MVC/object-oriented PHP (v5+), web scraping using regular expressions and XPath, developing complex data-driven systems without a framework, secure e-commerce crawler development, adaptive problem-solving, performance optimization techniques, and debugging for issue diagnosis. Unit testing your code with assertions is essential, along with hands-on experience with Linux/UNIX, RESTful paradigms, Apache, and MySQL database efficiency analysis. For front-end development, expertise in jQuery/JavaScript, Ajax scripting and DOM manipulation, and a strong grasp of HTML5 and CSS3 is required. On the database side, you should have a good command of MySQL (v5+) supported by phpMyAdmin, plus an understanding of design patterns, PHP best practices, PHP frameworks like CakePHP, scalability architecture, server performance considerations, and push notification system implementation.

The ideal candidate should hold a BS/MS degree in Computer Science or Engineering; possess development skills in PHP, MySQL, jQuery, JavaScript, AJAX, Linux, JSON, and XML; have knowledge of relational databases, version control tools, web services development, and API development using PHP; and bring a passion for sound design and coding practices, strong verbal and written communication skills, a problem-solving attitude, and consistent work ethics. Must-have skills include PHP, Core PHP, MySQL, cURL, Linux commands, API development using PHP, scripting for process automation, web crawling, and being consistent and dependable.

Interested candidates can share their resumes at vaishali@mastheadtechnologies.com. This is a full-time position with a day shift schedule, requiring in-person work at the designated location. The application deadline is 13/07/2025, and the expected start date is 11/07/2025.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
As a Python Developer specializing in Web Scraping and Data Processing, you will play a crucial role in a core team dedicated to aggregating biomedical content from a variety of sources. With over 3 years of experience, you will be responsible for developing scalable Python scripts to extract and parse biomedical data from diverse web platforms, including grant repositories, scientific journals, conference abstracts, treatment guidelines, and clinical trial databases. Your key responsibilities will include building robust modules to split multi-record documents into manageable content units, implementing NLP-based field extraction pipelines using libraries like spaCy and NLTK, and designing automated data acquisition workflows utilizing tools like cron, Celery, or Apache Airflow. You will ensure efficient storage of parsed data in both relational (PostgreSQL) and NoSQL (MongoDB) databases, with a focus on optimal schema design for performance and scalability. Additionally, you will be responsible for maintaining robust logging, thorough exception handling, and comprehensive content quality validation throughout all data processing and scraping workflows. To excel in this role, you should possess a strong command over web scraping libraries such as BeautifulSoup, Scrapy, Selenium, and Playwright. Proficiency in PDF parsing libraries like PyMuPDF, pdfminer.six, and PDFPlumber, as well as experience with HTML/XML parsers (lxml, XPath, html5lib) are essential. Familiarity with regular expressions, NLP concepts, advanced field extraction techniques, SQL, NoSQL databases, and API integration (RESTful APIs) is also required. Experience with task schedulers, workflow orchestrators, version control using Git/GitHub, and collaborative development environments will be beneficial. 
While not mandatory, exposure to biomedical or healthcare data parsing, cloud environments like AWS, data validation frameworks, and an understanding of ontologies and taxonomies will be advantageous in this role. Join us in this exciting opportunity to contribute to cutting-edge data processing and scraping projects in the field of biomedicine and healthcare.
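The record-splitting and field-extraction work described above can be sketched with only the standard library's `re` module; the record format, separator, field names, and IDs below are invented for illustration, since real grant, journal, and trial sources each need their own parser:

```python
import re

# A toy multi-record document: two records separated by "---".
DOCUMENT = """\
ID: NCT0001
Title: Trial of Drug A in Condition X
Phase: 2
---
ID: NCT0002
Title: Observational Study of Condition Y
Phase: N/A
"""

# One "Field: value" pair per line; MULTILINE anchors ^/$ at line boundaries.
FIELD_RE = re.compile(r"^(ID|Title|Phase):\s*(.+)$", re.MULTILINE)

def split_records(text, sep="---"):
    """Split a multi-record document into manageable content units."""
    return [chunk.strip() for chunk in text.split(sep) if chunk.strip()]

def extract_fields(record):
    """Pull structured fields out of one record unit."""
    return {key: value.strip() for key, value in FIELD_RE.findall(record)}

records = [extract_fields(r) for r in split_records(DOCUMENT)]
```

In a production pipeline, the regex layer is usually just the first pass, with NLP tooling (spaCy, NLTK) and schema validation layered on top before the records reach PostgreSQL or MongoDB.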
Posted 1 month ago
2.0 - 4.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Primary Skills - Data Engineering
Secondary Skills - SQL & Python
Education - Bachelor's or Master's degree in Computer Science, IT, or a related field
Experience Range - 2 to 4 years in data engineering or a similar data-focused role; proficient in PySpark; experience in data/web scraping; experience with ETL tools; expertise in SQL, Python, Scala, or Java; experience with cloud-based platforms (AWS, Google Cloud, Azure); experience with data warehousing concepts; familiar with version control systems like Git
Domain - IT
Start Date - Immediate
Duration of the Project - 6-month contract (extendable)
Shift Timing - Regular shift
CTC - As per industry
Number of Interviews - L1 & L2 client interviews + HR
Location - Bangalore
No. of Positions - 4
Job Description - Design, develop, and maintain scalable ETL/ELT pipelines; maintain web crawlers; manage SQL/NoSQL databases; implement and maintain data engineering processes; provide solutions that help scale operations.
Documents Mandatory - Form 16, salary slips, Aadhaar, PAN card, academic documents, offer letter, and experience letter, all to be submitted after selection.
Note: Immediate joiners are welcome.
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
haryana
On-site
As a Data Analyst with expertise in Market Research and Web Scraping, you will be responsible for analyzing large datasets to uncover trends and insights related to market dynamics and competitor performance. Your role will involve conducting thorough market research to track competitor activities, identify emerging trends, and understand customer preferences. Additionally, you will design and implement data scraping solutions to extract competitor data from various online sources while ensuring compliance with legal standards and website terms of service. Your key responsibilities will include developing dashboards, reports, and visualizations to communicate key insights effectively to stakeholders. You will collaborate with cross-functional teams to align data-driven insights with company objectives and support strategic decision-making in product development and marketing strategies. Furthermore, you will be involved in database management, data cleaning, and maintaining organized databases with accurate and consistent information for easy access and retrieval. To excel in this role, you should have a Bachelor's degree in Data Science, Computer Science, Statistics, Business Analytics, or a related field. Advanced degrees or certifications in data analytics or market research will be advantageous. Proficiency in SQL, Python, or R for data analysis, along with experience in data visualization tools like Tableau, Power BI, or D3.js, is essential. Strong analytical skills, the ability to interpret data effectively, and knowledge of statistical analysis techniques are key requirements for this position. Experience with data scraping tools such as BeautifulSoup, Scrapy, or Selenium, as well as familiarity with web analytics and SEO tools like Google Analytics or SEMrush, will be beneficial. 
Preferred skills include experience with e-commerce data analysis, knowledge of retail or consumer behavior analytics, and an understanding of machine learning techniques for data classification and prediction. Ethical data scraping practices and adherence to data privacy laws are essential considerations for this role.

If you meet these qualifications and are excited about the opportunity to work in a dynamic environment where your analytical skills and market research expertise will be valued, we encourage you to apply by sending your updated resume along with your current salary details to jobs@glansolutions.com. For any inquiries, feel free to contact Satish at 8802749743 or visit our website at www.glansolutions.com to explore more job opportunities. Join us at Glan Solutions and leverage your data analysis skills to drive strategic decisions and contribute to our success in the fashion/garment/apparel industry!

Note: This job was posted on 14th November 2024.
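As a toy illustration of the competitor-analysis step, turning scraped listings into a per-product price summary needs nothing beyond the standard library; the competitors, products, and prices below are made up:

```python
from statistics import mean

# Rows as a scraper might emit them: one price observation per competitor/product.
scraped = [
    {"competitor": "A", "product": "tee", "price": 499},
    {"competitor": "B", "product": "tee", "price": 549},
    {"competitor": "A", "product": "jeans", "price": 1299},
    {"competitor": "B", "product": "jeans", "price": 1199},
]

def average_price_by_product(rows):
    """Group price observations by product and average them."""
    by_product = {}
    for row in rows:
        by_product.setdefault(row["product"], []).append(row["price"])
    return {product: mean(prices) for product, prices in by_product.items()}

summary = average_price_by_product(scraped)
```

In practice this aggregation would run in SQL, pandas, or R before feeding a Tableau or Power BI dashboard, but the grouping logic is the same.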
Posted 1 month ago
6.0 - 11.0 years
0 - 1 Lacs
Bengaluru
Remote
We're seeking a talented and driven Python Web Scraping & Automation Engineer to develop scalable solutions for data extraction and automation. The ideal candidate will have hands-on experience with Python-based scraping tools and frameworks, as well as a solid background in working with SQL databases. In this role, you'll be responsible for building efficient web crawlers, automating data pipelines, creating RESTful APIs, and supporting backend development efforts as needed.

Key Responsibilities:
- Build and maintain efficient web scrapers using tools like Scrapy, BeautifulSoup, Selenium, and Requests.
- Automate data collection, processing, and storage with clean, reusable, and well-documented code.
- Develop backend services and RESTful APIs using frameworks such as Flask, FastAPI, or Django.
- Design and manage SQL databases (MySQL, PostgreSQL) to support data storage, access, and analytics.
- Implement proxy rotation, session handling, and CAPTCHA bypass mechanisms for reliable data scraping.
- Monitor and optimize scraper and API performance to ensure scalability, speed, and reliability.
- Collaborate closely with data analysts and developers to build and deliver end-to-end data solutions.
- Follow ethical and legal best practices for web scraping and data handling.
- Diagnose and fix issues across web crawlers, backend services, and data pipelines.

Preferred candidate profile: Must have excellent experience in Python web scraping and SQL Server. This is a work-from-home job.
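One simple shape the "proxy rotation" requirement can take is a round-robin rotator that retires proxies after repeated failures. The addresses below are placeholders, and a real scraper would feed the chosen proxy into its Requests or Scrapy session; this sketch keeps only the rotation logic:

```python
import itertools

class ProxyRotator:
    """Round-robin proxy selection with a per-proxy failure budget."""

    def __init__(self, proxies, max_failures=3):
        self._cycle = itertools.cycle(proxies)
        self._failures = {p: 0 for p in proxies}
        self._max_failures = max_failures

    def next_proxy(self):
        # Walk the cycle at most once; skip proxies that have failed too often.
        # Return None (i.e. direct connection) if every proxy is exhausted.
        for _ in range(len(self._failures)):
            proxy = next(self._cycle)
            if self._failures[proxy] < self._max_failures:
                return proxy
        return None

    def report_failure(self, proxy):
        """Record a failed request so the proxy is eventually retired."""
        self._failures[proxy] += 1

rotator = ProxyRotator(["10.0.0.1:8080", "10.0.0.2:8080"])
```

Session handling and CAPTCHA workflows layer on top of this: each proxy would typically own a session object carrying its cookies and headers.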
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
You are an experienced Senior Automation Tester - Network Security with 5+ years of industry experience and hands-on experience in end-to-end solution testing in the security SD-WAN area. A strong understanding of network security concepts, protocols, and technologies is a key asset for this role. Your QA experience includes working with VPN technologies such as IKEv2, IKEv1, IPsec, and SSL/TLS, as well as SD-WAN technologies and solutions. You are proficient in handling network devices, L2/L3 protocols, and traffic generation tools like Ixia and Spirent. Knowledge of next-generation network security standards, including Post-Quantum Cryptography, and best practices is essential.

Proficiency in Python and its standard libraries is required. You should have experience developing APIs in Python that handle scaling, which involves infrastructure automation work. Experience with automation tools and frameworks like Selenium and REST API testing is also necessary, along with a solid understanding of RESTful APIs, web scraping, and automation of web-based systems. Familiarity with version control systems like Git and experience with CI/CD tools such as Jenkins and GitHub Actions will be beneficial. You should have experience working with key stakeholders throughout the software life cycle of a project.

A motivated self-starter with strong communication and organizational skills, you have a proven ability to deliver superior products in cross-functional team settings under aggressive schedules, along with excellent leadership and problem-solving skills. You are experienced in system design and debugging, and capable of designing, building, and debugging large-scale distributed systems.
Posted 1 month ago
3.0 - 5.0 years
3 - 5 Lacs
Ahmedabad
Work from Office
We are seeking a skilled Python Developer to join our team. The ideal candidate will be responsible for working with existing APIs or developing new APIs based on our requirements. You should have a strong foundation in Python and experience with RESTful services and cloud infrastructure.

Requirements:
- Strong understanding of Python
- Experience with RESTful services and cloud infrastructure
- Ability to develop microservices/functions
- Familiarity with libraries such as Pandas, NumPy, Matplotlib & Seaborn, Scikit-learn, Flask, Django, Requests, FastAPI, and TensorFlow & PyTorch
- Basic understanding of SQL and databases
- Ability to write clean, maintainable code
- Experience deploying applications at scale in production environments
- Experience with web scraping using tools like BeautifulSoup, Scrapy, or Selenium
- Knowledge of equities, futures, or options microstructure is a plus
- Experience with data visualization and dashboard building is a plus

Why Join Us?
- Opportunity to work on high-impact real-world projects
- Exposure to cutting-edge technologies and financial datasets
- A collaborative, supportive, and learning-focused team culture
- 5-day work week (Monday to Friday)
Posted 1 month ago
6.0 - 12.0 years
0 Lacs
pune, maharashtra
On-site
About Calfus: Calfus is a Silicon Valley-headquartered software engineering and platforms company with a vision deeply rooted in the Olympic motto "Citius, Altius, Fortius - Communiter". At Calfus, we aim to inspire our team to rise faster, higher, and stronger while fostering a collaborative environment to build software at speed and scale. Our primary focus is on creating engineered digital solutions that drive positive impact on business outcomes. Upholding principles of #Equity and #Diversity, we strive to create a diverse ecosystem that extends to the broader society. Join us at #Calfus and embark on an extraordinary journey with us!

Position Overview: As a Data Engineer specializing in BI Analytics & DWH, you will be instrumental in crafting and implementing robust business intelligence solutions that empower our organization to make informed, data-driven decisions. Leveraging your expertise in Power BI, Tableau, and ETL processes, you will be responsible for developing scalable architectures and interactive visualizations. This role necessitates a strategic mindset, strong technical acumen, and effective collaboration with stakeholders across all levels.

Key Responsibilities:
- BI Architecture & DWH Solution Design: Develop and design scalable BI analytical & DWH solutions aligned with business requirements, utilizing tools like Power BI and Tableau.
- Data Integration: Supervise ETL processes through SSIS to ensure efficient data extraction, transformation, and loading into data warehouses.
- Data Modelling: Establish and maintain data models that support analytical reporting and data visualization initiatives.
- Database Management: Employ SQL for crafting intricate queries, stored procedures, and managing data transformations via joins and cursors.
- Visualization Development: Spearhead the design of interactive dashboards and reports in Power BI and Tableau while adhering to best practices in data visualization.
- Collaboration: Engage closely with stakeholders to gather requirements and translate them into technical specifications and architecture designs.
- Performance Optimization: Analyze and optimize BI solutions for enhanced performance, scalability, and reliability.
- Data Governance: Implement data quality and governance best practices to ensure accurate reporting and compliance.
- Team Leadership: Mentor and guide junior BI developers and analysts to cultivate a culture of continuous learning and improvement.
- Azure Databricks: Utilize Azure Databricks for data processing and analytics to seamlessly integrate with existing BI solutions.

Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field.
- 6-12 years of experience in BI architecture and development, with a strong emphasis on Power BI and Tableau.
- Proficiency in ETL processes and tools, particularly SSIS; strong command of SQL Server, encompassing advanced query writing and database management.
- Proficiency in exploratory data analysis using Python.
- Familiarity with the CRISP-DM model.
- Ability to work with various data models and databases like Snowflake, Postgres, Redshift, and MongoDB.
- Experience with visualization tools such as Power BI, QuickSight, Plotly, and Dash.
- Strong programming foundation in Python for data manipulation, analysis, serialization, database interaction, data pipelines and ETL tools, cloud services, and more.
- Familiarity with the Azure SDK is a plus.
- Experience with code quality management, version control, collaboration in data engineering projects, and interaction with REST APIs and web scraping tasks is advantageous.

Calfus Inc. is an Equal Opportunity Employer.
Posted 1 month ago
3.0 - 8.0 years
6 - 15 Lacs
Bengaluru
Remote
Role & responsibilities: As a Data Engineer focused on web crawling and platform data acquisition, you will design, develop, and maintain large-scale web scraping pipelines to extract valuable platform data. You will be responsible for implementing scalable and resilient data extraction solutions, ensuring seamless data retrieval while working with proxy management, anti-bot bypass techniques, and data parsing. Optimizing scraping workflows for performance, reliability, and efficiency will be a key part of your role. Additionally, you will ensure that all extracted data maintains high quality and integrity.

Preferred candidate profile: We are seeking candidates with:
- Strong experience in Python and web scraping frameworks such as Scrapy, Selenium, Playwright, or BeautifulSoup.
- Knowledge of distributed web crawling architectures and job scheduling.
- Familiarity with headless browsers, CAPTCHA-solving techniques, and proxy management to handle dynamic web challenges.
- Experience with data storage solutions, including SQL and cloud storage.
- Understanding of big data technologies like Spark and Kafka (a plus).
- Strong debugging skills to adapt to website structure changes and blockers.
- A proactive, problem-solving mindset and the ability to work effectively in a team-driven environment.
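At the core of any distributed crawling architecture sits a deduplicating frontier queue. The sketch below keeps it single-process and replaces network fetches with a hard-coded link map so only the traversal logic is shown; all URLs are examples:

```python
from collections import deque
from urllib.parse import urljoin, urldefrag

# Stand-in for fetched pages: URL -> list of raw hrefs found on that page.
LINKS = {
    "https://example.com/": ["/a", "/b#frag", "https://example.com/a"],
    "https://example.com/a": ["/"],
    "https://example.com/b": [],
}

def crawl(start):
    """Breadth-first crawl with URL normalisation and dedup."""
    seen, order = set(), []
    frontier = deque([start])
    while frontier:
        url = frontier.popleft()
        if url in seen:
            continue
        seen.add(url)
        order.append(url)
        for href in LINKS.get(url, []):
            # Resolve relative links and strip fragments before dedup.
            absolute, _ = urldefrag(urljoin(url, href))
            if absolute not in seen:
                frontier.append(absolute)
    return order

visited = crawl("https://example.com/")
```

Scaling this out usually means moving `seen` into a shared store, sharding the frontier by domain for politeness, and swapping the link map for real fetches through a headless browser or HTTP client.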
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
punjab
On-site
As a Python Developer at F33 Baseline IT Development Pvt. Ltd. in Mohali, you will be responsible for developing and maintaining web applications using Python, Django, and Odoo. Your key responsibilities will include designing and implementing RESTful APIs, performing data extraction and automation through web scraping tools, debugging and optimizing code for performance and scalability, collaborating with front-end developers and project managers, and writing clean, well-documented, and testable code.

To excel in this role, you should have a minimum of 2 years of experience in Python development, proficiency in the Django framework, hands-on experience with Odoo ERP, expertise in web scraping using libraries like BeautifulSoup, Scrapy, or Selenium, a good understanding of databases such as PostgreSQL and MySQL, familiarity with Git version control, and excellent problem-solving and communication skills. Preferred qualifications for this position include a Bachelor's degree in Computer Science, IT, or a related field, experience with API integrations, and knowledge of Linux server environments.

At F33 Baseline IT Development Pvt. Ltd., you will enjoy a friendly and collaborative work environment, career growth opportunities, and a 5-day working culture. This is a full-time position requiring in-person work during morning shifts. If you are interested in this opportunity, please speak with the employer at +91 9888122266.
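The scraping libraries named here (BeautifulSoup, Scrapy, Selenium) are third-party; as a dependency-free sketch of the same extraction idea, the standard library's `HTMLParser` can already pull links out of fetched markup. The sample HTML below is invented:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href attributes from <a> tags as the parser streams the markup."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

sample = '<ul><li><a href="/jobs/1">Dev</a></li><li><a href="/jobs/2">QA</a></li></ul>'
links = extract_links(sample)
```

BeautifulSoup wraps this kind of event-driven parsing in a friendlier tree API, and Selenium adds a real browser for JavaScript-rendered pages; the underlying task is the same.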
Posted 2 months ago
2.0 - 6.0 years
0 Lacs
noida, uttar pradesh
On-site
The position of Software Engineer - Automation at Wildnet Technologies requires a smart and enthusiastic individual with 0-2 years of experience in automation, web scraping, or bot development. You will be involved in AI-driven projects, utilizing tools such as Puppeteer or Selenium, and primarily coding in JavaScript/Node.js. This role offers the opportunity to work on modern web protocols, browser automation, and the development of scalable systems within a fast-paced, tech-focused environment.

Your responsibilities will include designing, developing, and maintaining automation scripts and web scraping tools. You will be responsible for building intelligent bots using frameworks like Puppeteer or Selenium, and working with HTTP protocols, cookies, headers, and session management to mimic user behavior effectively. Collaboration with the AI/ML team to integrate automation into smart workflows and writing clean, scalable code in Node.js and JavaScript will also be key aspects of the role. Additionally, you will optimize automation systems for performance, reliability, and scalability.

To excel in this role, you should possess 0-2 years of relevant experience in automation, scraping, or bot development. A strong understanding of HTTP protocols, cookies, headers, and web session mechanics is essential, along with hands-on experience with the Puppeteer or Selenium automation frameworks. Proficiency in JavaScript and Node.js is required, as well as exposure to AI frameworks or tools like ChatGPT, Copilot, or other LLM-based systems. The ability to adapt quickly and contribute to fast-paced, evolving projects is also crucial. Experience with browser emulation, proxies, and anti-bot bypass strategies would be advantageous, as would familiarity with cloud services and deployment tools.

Wildnet Technologies offers a dynamic work environment with ongoing training, career advancement, and leadership development opportunities.
As an established industry leader in digital marketing and IT services, you will have the opportunity to work on diverse projects with leading global brands across industries. The company is recognized for fostering a flexible, positive, and people-first work culture, with comprehensive insurance and wellness support for employees and their families. Flexible working hours, a 5-day work week, and a generous leave policy ensure a healthy work-life balance for employees.
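The session mechanics this listing describes (cookies, headers, user-agent handling) are language-agnostic even though the role codes in Node.js. As a minimal sketch in Python, assuming a hypothetical user-agent pool and header set, building headers that mimic a browser session might look like:

```python
import itertools

# Hypothetical user-agent pool; a real deployment would load a larger,
# regularly refreshed list.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/124.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 13_5) Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64) Firefox/126.0",
]
_ua_cycle = itertools.cycle(USER_AGENTS)

def build_session_headers(cookies: dict) -> dict:
    """Assemble request headers that mimic a real browser session:
    a rotated User-Agent plus cookies carried over from the previous
    response, serialized into a Cookie header."""
    headers = {
        "User-Agent": next(_ua_cycle),
        "Accept": "text/html,application/xhtml+xml",
        "Accept-Language": "en-US,en;q=0.9",
    }
    if cookies:
        headers["Cookie"] = "; ".join(f"{k}={v}" for k, v in cookies.items())
    return headers

h1 = build_session_headers({"sessionid": "abc123"})
h2 = build_session_headers({"sessionid": "abc123"})
# Successive calls rotate the User-Agent while keeping the session cookie.
```

The same pattern maps directly onto Puppeteer's `page.setExtraHTTPHeaders` in the Node.js stack the role actually uses.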
Posted 2 months ago
6.0 - 9.0 years
10 - 20 Lacs
Noida
Hybrid
Company Overview BOLD is an established and fast-growing product company that transforms work lives. Since 2005, we've helped more than 10,000,000 folks from all over America (and beyond!) reach higher and do better. A career at BOLD promises great challenges, opportunities, culture and the environment. With our headquarters in Puerto Rico and offices in San Francisco and India, we're a global organization on a path to change the career industry. Position Overview BOLD is seeking a highly skilled professional to spearhead the development of cutting-edge browser automation technology for our Expert Apply product. You will play a key role in designing scalable automation frameworks, tackling challenges in bot detection, and optimizing system performance. You'll also be responsible for building and monitoring metrics to ensure system reliability and robustness. If you are passionate about large-scale automation and system reliability, we want to hear from you. Role & responsibilities Design and architect scalable and robust enterprise-level automation applications using Python. Develop applications that run on PODs (Kubernetes), ensuring high availability and reliability. Debug complex issues in applications and devise solutions that enhance stability and performance. Identify performance bottlenecks within applications through profiling and metrics analysis. Optimize existing code to improve performance and efficiency, ensuring the system can handle high traffic loads. Utilize automation frameworks and tools such as Playwright, Chromium, and stealth browsers for web automation tasks. Implement message handling to facilitate communication between different services. Develop web scraping solutions to gather and process data from various online sources. Analyze and troubleshoot software issues, providing timely resolutions to ensure system reliability. Collaborate with cross-functional teams to understand user requirements and translate them into technical specifications.
Review and enhance code quality through thorough testing and code reviews. Stay updated with industry trends and emerging technologies, integrating best practices into the development process. Document architecture, design choices, and implementation details for future reference and knowledge sharing. Ensure compliance with security and data privacy standards throughout the application lifecycle. Preferred candidate profile Strong programming skills in Python, including expertise in string manipulation and regular expressions to effectively handle and process text data during web scraping and automation tasks. Deep understanding of OOP principles, including encapsulation, inheritance, and polymorphism, to design robust and maintainable software systems. Knowledge of common design patterns (e.g., Singleton, Factory, Observer) to enhance system design, improve code reusability, and implement best practices in software architecture. Solid foundation in algorithms (sorting, searching, parsing) and data structures (lists, dictionaries, trees) to solve complex problems efficiently and effectively during software development. Good understanding of how modern browsers function, including rendering engines, JavaScript engines, HTTP protocols, and browser APIs. Experience optimizing scraping strategies based on browser behavior and performance. Experience with caching technologies (e.g., Redis, in-memory caching). Experience with messaging protocols (e.g., Azure Service Bus, Kafka, RabbitMQ). Working knowledge and proven experience in containerization using Docker. Understanding of DevOps practices and CI/CD pipelines. Excellent communication skills and the ability to collaborate across time zones. Excellent analytical and problem-solving skills. Knowledge of cloud computing, Amazon Web Services or Microsoft Azure.
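As a rough illustration of the in-memory caching the profile mentions (a stand-in for Redis-style caching, not BOLD's actual implementation; the TTL and keys are invented):

```python
import time

class TTLCache:
    """Minimal in-memory cache with per-entry expiry. Not thread-safe;
    a production version would add locking and bounded eviction."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires = entry
        if time.monotonic() >= expires:
            del self._store[key]  # lazy eviction on read
            return default
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("profile:42", {"name": "Ada"})
fresh = cache.get("profile:42")   # hit while the entry is live
time.sleep(0.06)
stale = cache.get("profile:42")   # expired, so the default (None) comes back
```

Caching scraped-page fragments this way is one common lever for the "high traffic loads" optimization work the role describes.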
Posted 2 months ago
2.0 - 4.0 years
12 - 15 Lacs
Pune
Work from Office
Lead and scale Django backend features, mentor 2 juniors, manage deployments, and ensure best practices. Expert in Django, PostgreSQL, Celery, Redis, Docker, CI/CD, and vector DBs. Own architecture, code quality, and production stability.
Posted 2 months ago
4.0 - 8.0 years
0 Lacs
haryana
On-site
As an experienced automation engineer, you will be responsible for designing, building, and maintaining sophisticated web scraping systems and autonomous application agents. Your role will involve combining technical expertise in web automation with strategic thinking to develop tools that efficiently collect data and interact with web applications. Your key responsibilities will include designing and developing robust web scraping architectures capable of collecting structured data from complex websites, including those with anti-bot measures. You will also build intelligent agents that can autonomously navigate job application workflows, including form completion and document submission. Implementing sophisticated rate limiting, proxying, and browser fingerprinting techniques to ensure reliable operation will be crucial. Additionally, you will create data processing pipelines to clean, normalize, and enrich collected information, develop monitoring systems to detect and adapt to website changes, and build user-friendly interfaces for controlling automation systems. It is essential to stay current with evolving anti-automation technologies and develop countermeasures. To excel in this role, you should have 4+ years of experience with web scraping and automation technologies, strong programming skills in Python and JavaScript, and advanced experience with browser automation libraries such as Puppeteer, Playwright, or Selenium. Expertise in parsing HTML/DOM structures and handling dynamic content (AJAX, SPAs), experience implementing IP rotation, user-agent cycling, and browser fingerprinting solutions, and knowledge of data storage solutions for organizing and querying large datasets are required. Familiarity with headless browser environments, containerization, and deploying automation scripts in AWS Lambda is also necessary.
Preferred qualifications include experience with proxy management services and residential IP networks, knowledge of machine learning techniques for solving CAPTCHAs and pattern recognition, experience with natural language processing for enhancing form completion logic, familiarity with ethical and legal considerations surrounding web automation, experience developing against enterprise authentication systems, and understanding of job application workflows and ATS (Applicant Tracking Systems). In return, we offer a competitive salary and benefits package, a flexible work environment, cutting-edge technical challenges, and the opportunity to shape the development of next-generation automation systems.
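A minimal sketch of the IP-rotation idea described above, with a hypothetical proxy pool and a simple failure counter (a real system would integrate a managed proxy service and health checks):

```python
import itertools

# Hypothetical proxy pool; real systems would source these from a
# managed proxy or residential-IP service.
PROXIES = [
    "http://10.0.0.1:8080",
    "http://10.0.0.2:8080",
    "http://10.0.0.3:8080",
]

def make_proxy_picker(proxies):
    """Round-robin proxy rotation with a failure count: proxies that
    error out `max_failures` times or more are skipped on later picks."""
    pool = itertools.cycle(proxies)
    failures = {p: 0 for p in proxies}

    def pick(max_failures=3):
        for _ in range(len(proxies)):
            proxy = next(pool)
            if failures[proxy] < max_failures:
                return proxy
        raise RuntimeError("all proxies exhausted")

    def report_failure(proxy):
        failures[proxy] += 1

    return pick, report_failure

pick, report_failure = make_proxy_picker(PROXIES)
first = pick()
report_failure(first)  # one transient failure; proxy stays in rotation
second = pick()        # rotation moves on to the next proxy
```

User-agent cycling follows the same round-robin shape, usually combined with per-proxy rate limiting.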
Posted 2 months ago
6.0 - 12.0 years
0 Lacs
pune, maharashtra
On-site
Calfus is a Silicon Valley headquartered software engineering and platforms company that seeks to inspire its team to rise faster, higher, stronger, and work together to build software at speed and scale. The company's core focus lies in creating engineered digital solutions that bring about a tangible and positive impact on business outcomes while standing for #Equity and #Diversity in its ecosystem and society at large. As a Data Engineer specializing in BI Analytics & DWH at Calfus, you will play a pivotal role in designing and implementing comprehensive business intelligence solutions that empower the organization to make data-driven decisions. Leveraging expertise in Power BI, Tableau, and ETL processes, you will create scalable architectures and interactive visualizations. This position requires a strategic thinker with strong technical skills and the ability to collaborate effectively with stakeholders at all levels. Key Responsibilities: - BI Architecture & DWH Solution Design: Develop and design scalable BI Analytical & DWH Solution that meets business requirements, leveraging tools such as Power BI and Tableau. - Data Integration: Oversee the ETL processes using SSIS to ensure efficient data extraction, transformation, and loading into data warehouses. - Data Modelling: Create and maintain data models that support analytical reporting and data visualization initiatives. - Database Management: Utilize SQL to write complex queries, stored procedures, and manage data transformations using joins and cursors. - Visualization Development: Lead the design of interactive dashboards and reports in Power BI and Tableau, adhering to best practices in data visualization. - Collaboration: Work closely with stakeholders to gather requirements and translate them into technical specifications and architecture designs. - Performance Optimization: Analyse and optimize BI solutions for performance, scalability, and reliability. 
- Data Governance: Implement best practices for data quality and governance to ensure accurate reporting and compliance. - Team Leadership: Mentor and guide junior BI developers and analysts, fostering a culture of continuous learning and improvement. - Azure Databricks: Leverage Azure Databricks for data processing and analytics, ensuring seamless integration with existing BI solutions. Qualifications: - Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field. - 6-12 years of experience in BI architecture and development, with a strong focus on Power BI and Tableau. - Proven experience with ETL processes and tools, especially SSIS. - Strong proficiency in SQL Server, including advanced query writing and database management. - Exploratory data analysis with Python. - Familiarity with the CRISP-DM model. - Ability to work with different data models. - Familiarity with databases like Snowflake, Postgres, Redshift & MongoDB. - Experience with visualization tools such as Power BI, QuickSight, Plotly, and/or Dash. - Strong programming foundation with Python for data manipulation and analysis using Pandas, NumPy, PySpark, data serialization & formats like JSON, CSV, Parquet & Pickle, database interaction, data pipeline and ETL tools, cloud services & tools, and code quality and management using version control. - Ability to interact with REST APIs and perform web scraping tasks is a plus. Calfus Inc. is an Equal Opportunity Employer.
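As an illustrative example of the kind of cleansing a transform step applies before loading a warehouse (the sample data and rules are invented; an SSIS pipeline would express this declaratively):

```python
import csv
import io

# Toy raw extract: inconsistent case, stray whitespace, thousands
# separators, and missing values.
RAW = """region,revenue,units
 North,"1,200.50",10
South,,5
north,300.25,
"""

def normalize(row):
    """One transform step: trim whitespace, canonicalize case,
    coerce numerics, and default missing values."""
    return {
        "region": row["region"].strip().title(),
        "revenue": float(row["revenue"].replace(",", "")) if row["revenue"].strip() else 0.0,
        "units": int(row["units"]) if row["units"].strip() else 0,
    }

reader = csv.DictReader(io.StringIO(RAW))
cleaned = [normalize(r) for r in reader]
# cleaned rows now have consistent region names, floats, and ints,
# ready to load into a warehouse table.
```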
Posted 2 months ago
0.0 - 5.0 years
4 - 9 Lacs
Chennai
Remote
Coordinating with development teams to determine application requirements. Writing scalable code using the Python programming language. Testing and debugging applications. Developing back-end components. Required Candidate profile Knowledge of Python and related frameworks including Django and Flask. A deep understanding of multi-process architecture and the threading limitations of Python. Perks and benefits Flexible Work Arrangements.
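On the threading limitations mentioned in the candidate profile: Python's GIL serializes CPU-bound bytecode, but threads still overlap blocking I/O, which is why thread pools suit I/O-heavy back-end work while CPU-heavy work needs multiple processes. A small sketch:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io_call(i):
    """Stand-in for a blocking network or database call."""
    time.sleep(0.05)
    return i * 2

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fake_io_call, range(4)))
elapsed = time.monotonic() - start
# The four 50 ms waits overlap across threads, so wall time is roughly
# 50 ms rather than the 200 ms a serial loop would take. For CPU-bound
# work the GIL prevents this overlap; use multiprocessing instead.
```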
Posted 2 months ago
0.0 - 1.0 years
0 Lacs
Mumbai Suburban
Work from Office
Job Description of Data Scraper: 1. Develop and maintain automated web scraping scripts to extract data from multiple sources (websites, APIs, databases). Clean, structure, and store scraped data in a structured format (CSV, JSON, SQL, or cloud databases). 2. Monitor scraping scripts to ensure reliability and prevent website blocks using proxies, rotating user-agents, and CAPTCHA-solving techniques. Integrate scraped data into CRM, dashboards, or analytics platforms.
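A minimal sketch of point 1 using only the standard library (the sample HTML and parser logic are illustrative; a production scraper needs site-specific extraction plus the blocking countermeasures from point 2):

```python
import json
from html.parser import HTMLParser

SAMPLE_HTML = """
<ul>
  <li class="job"><a href="/jobs/1">Python Developer</a></li>
  <li class="job"><a href="/jobs/2">Data Scraper</a></li>
</ul>
"""

class JobListParser(HTMLParser):
    """Collect (url, title) pairs from anchor tags — a stand-in for
    the site-specific extraction logic a real scraper would need."""
    def __init__(self):
        super().__init__()
        self.jobs = []
        self._href = None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        if self._href and data.strip():
            self.jobs.append({"url": self._href, "title": data.strip()})
            self._href = None

parser = JobListParser()
parser.feed(SAMPLE_HTML)
as_json = json.dumps(parser.jobs)  # structured output, ready for storage
```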
Posted 2 months ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
You will be joining Servient as a motivated engineer to contribute to the forward direction of the Systems/DevOps team. In this role, you will focus on implementing eDiscovery software solutions, working across all levels of the technology stack. A proactive and research-driven attitude, coupled with problem-solving skills, will be beneficial for this position as it revolves around enhancing existing features and introducing new services. Collaboration with a geographically dispersed team will be a key aspect of this role. As a self-starter, you should be comfortable working independently while effectively coordinating with team members to ensure project deadlines are met. As a Java Full Stack Developer at Servient, you will collaborate closely with the Solutions Architect and Technical Lead to create Java-based services. Your responsibilities will include designing, implementing, and testing code in line with product development standards and overall strategic objectives. You will be involved in developing programs for data extraction from websites, public datasets, and subscription services, with proficiency in XML, JSON (structured), and unstructured data parsing. Additionally, you will manipulate and store data efficiently and conduct data analysis to support informed business decisions. 
Key Skills and Experience Required: - Proficiency in Java and web APIs (Elasticsearch) - Strong background in database management - Experience with Java/J2EE, web development, and Elasticsearch - In-depth knowledge of database systems and authentication - Familiarity with JSP, Servlets, Spring, Hibernate or EJB3, REST API, and Struts - Desirable experience with AJAX, web scraping, text mining, and machine learning - Ability to work as an individual contributor Academic Qualifications: - Bachelor's or Master's degree in Computer Science Experience Level: - Minimum of 4 years of relevant experience Join Servient's innovative team and contribute to the continuous improvement of software solutions while collaborating with a diverse group of professionals.
Posted 2 months ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
As a Manufacturing Data Analyst at our high-growth company, you will be responsible for understanding manufacturing data from clients, devising strategies to address pain points, and enhancing business value. Your role will involve data exploration, analysis, and automation of data cleaning tasks. You will collaborate closely with the data science and product teams to derive valuable insights from client data. You should hold a Bachelor's or Master's degree in Engineering, Mathematics, or Mechanical/Aerospace Engineering, and possess 4-6 years of experience working with renowned manufacturing/aerospace/automotive/consumer product companies. Proficiency in working with spreadsheets and processing large volumes of data is essential, along with experience in Python (Pandas)/R. In this role, you will be expected to understand and define relationships among entities in manufacturing data, interpret technical statements, and engage in tasks ranging from understanding business objectives to data manipulation and web scraping solutions. Collaboration with stakeholders such as machine learning engineers, data scientists, data engineers, and product managers will be a key aspect of your responsibilities. We are looking for individuals who are entrepreneurial, driven, and eager to contribute to a fast-growing SaaS company. You should have excellent written and verbal communication skills, expertise in natural language processing (NLP) techniques for text and sentiment analysis, and ideally, experience working with startups. Joining our interdisciplinary team, you will have the opportunity to work with leaders from Palantir, McKinsey, GM, and Ford. We value collaboration, diversity of thinking, and setting high standards to deliver exceptional results in a startup environment.
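A tiny illustration of the kind of roll-up such exploratory analysis typically starts with, shown with the standard library (the records are invented; in pandas this is a one-line `groupby`):

```python
from collections import defaultdict

# Hypothetical manufacturing records: machine, shift, defect count.
RECORDS = [
    {"machine": "CNC-1", "shift": "A", "defects": 3},
    {"machine": "CNC-1", "shift": "B", "defects": 5},
    {"machine": "CNC-2", "shift": "A", "defects": 1},
    {"machine": "CNC-1", "shift": "A", "defects": 2},
]

def defects_by_machine(records):
    """Aggregate defect counts per machine — the exploratory roll-up
    that precedes deeper analysis (in pandas:
    df.groupby('machine')['defects'].sum())."""
    totals = defaultdict(int)
    for r in records:
        totals[r["machine"]] += r["defects"]
    return dict(totals)

summary = defects_by_machine(RECORDS)
```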
Posted 2 months ago
4.0 - 8.0 years
0 Lacs
delhi
On-site
As a Python Developer at Innefu Lab, you will play a crucial role in the software development life cycle, contributing from requirements analysis to deployment. Working in collaboration with diverse teams, you will design and implement solutions that align with client requirements and industry standards. Your responsibilities encompass various key areas: Software Development: You will be responsible for creating, testing, and deploying high-quality Python applications and scripts. Code Optimization: Your role involves crafting efficient, reusable, and modular code while enhancing existing codebases for optimal performance. Database Integration: You will integrate Python applications with databases to ensure data integrity and efficient data retrieval. API Development: Designing and implementing RESTful APIs to enable seamless communication between different systems. Collaboration: Working closely with UI/UX designers, backend developers, and stakeholders to ensure effective integration of Python components. Testing and Debugging: Thoroughly testing applications, identifying and rectifying bugs, and ensuring software reliability. Documentation: Creating and maintaining comprehensive technical documentation for code, APIs, and system architecture. Continuous Learning: Staying updated on industry trends, best practices, and emerging technologies related to Python development. 
Required Skills: - Proficient in Python, Django, Flask - Strong knowledge of Regular Expressions, Pandas, NumPy - Excellent expertise in Web Crawling and Web Scraping - Experience with scraping modules like Selenium, Scrapy, Beautiful Soup, or urllib - Familiarity with text processing, Elasticsearch, and Graph-Based Databases such as Neo4j (optional) - Proficient in data mining, Natural Language Processing (NLP), and Optical Character Recognition (OCR) - Basic understanding of databases - Strong troubleshooting and debugging capabilities - Effective interpersonal, verbal, and written communication skills - Ability to extract data from structured and unstructured sources, analyze text, images, and videos, and utilize NLP frameworks for data enrichment - Skilled in collecting and extracting intelligence from data, utilizing regular expressions, and extracting information from RDBMS databases - Experience in web scraping frameworks like Scrapy for data extraction from websites Join us at Innefu Lab, where innovative offerings and cutting-edge technologies converge to deliver exceptional security solutions. Be part of our dynamic team driving towards excellence and growth in the cybersecurity domain.
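As a small sketch of the regex-driven extraction these requirements describe (the patterns and sample text are illustrative, not Innefu's actual rules; production pipelines maintain far more robust, source-specific expressions):

```python
import re

TEXT = """Contact: jane.doe@example.com, backup admin@example.org.
Case refs: FIR-2021/0043 and FIR-2022/0191."""

# Illustrative patterns for pulling structured fields out of
# unstructured text.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.\w+")
CASE_RE = re.compile(r"FIR-\d{4}/\d{4}")

emails = EMAIL_RE.findall(TEXT)  # extracted contact addresses
cases = CASE_RE.findall(TEXT)    # extracted case identifiers
```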
Posted 2 months ago
10.0 - 15.0 years
32 - 37 Lacs
Bengaluru
Work from Office
SDWAN Security (IPsec/IKEv2) - Senior Software Test Development Engineer - IP Routing, SDWAN, Security, VPN technologies, L2/L3 (8 years to 11 years) Meet the Team To collaborate effectively with the Cisco Routing Security Automation and Test team on VPN and SD-WAN solutions, and delve into security protocols like IKEv2 and IPsec, here are some key insights: VPN and SD-WAN Solutions Cisco Catalyst SD-WAN: This platform offers high-performance connectivity and robust security, optimizing application performance and providing multicloud and SaaS optimization. Integration with Skyhigh Security Service Edge: Enhances network performance and cybersecurity measures, providing flexibility and reliable performance. Security Protocols IKE and IPsec: These protocols are crucial for secure, authenticated key exchange and establishing Security Associations (SAs). Post Quantum Cryptography Quantum-Safe Solutions: Cisco is actively working on quantum-safe cryptography solutions to safeguard transport security protocols. Quantum Key Distribution (QKD): Cisco has tested interoperability with several QKD vendors, providing integration with Cisco routers. Your Impact Key Responsibilities Solution Validation: Test, validate, and automate advanced network solutions that integrate seamlessly with a broad spectrum of technologies. Security Protocols: Test and automate security protocols to safeguard data transmission and network integrity. Quantum-Safe Encryption: Work to validate and automate quantum-safe encryption methods to protect against Harvest Now, Decrypt Later (HNDL) attacks. Impact and Growth Industry Leadership: Position yourself at the forefront of network security innovation, contributing to Cisco's leadership in quantum-safe technologies. Qualifications Educational Background: 10-15 years of industry experience. Hands-on experience with end-to-end solution testing in the Security SDWAN area. Strong understanding of network security concepts, protocols, and technologies.
QA experience with VPN technologies (e.g., IKEv2, IKEv1, IPsec, SSL/TLS). QA experience with SD-WAN technologies and solutions. Hands-on experience with network devices, L2/L3 protocols, and traffic generation tools (Ixia, Spirent). Knowledge of next-generation network security standards (e.g., Post-Quantum Cryptography) and best practices. Proficient in Python and standard libraries. Experience with automation tools and frameworks (e.g., Selenium, REST API). Solid understanding of RESTful APIs, web scraping, or automation of web-based systems. Familiarity with version control systems (e.g., Git). Experience with CI/CD tools (e.g., Jenkins, GitHub Actions) is a plus. Working experience with key stakeholders across the entire software life cycle of the project. Motivated self-starter with good communication, with demonstrated ability to deliver superior products in a cross-functional team environment under aggressive schedules. Soft Skills: Motivated self-starter with strong communication and organizational skills. Proven ability to develop and deliver superior products in cross-functional team settings under tight schedules. Excellent leadership, problem-solving, and communication skills. System Design and Debugging: Experienced in designing, building, and debugging large-scale distributed systems.
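Much of the test automation this role describes reduces to polling state that converges asynchronously (a tunnel coming up, an IPsec SA installing). A generic sketch of such a helper, with illustrative names rather than any Cisco API:

```python
import time

def wait_until(condition, timeout=2.0, interval=0.05):
    """Poll `condition` until it returns truthy or the timeout elapses —
    a common primitive in network test automation, where device or API
    state (e.g., an IPsec SA coming up) is not instantly available."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Simulated device state that becomes ready after a few polls.
state = {"polls": 0}
def tunnel_up():
    state["polls"] += 1
    return state["polls"] >= 3

ok = wait_until(tunnel_up)
```

Real suites wrap device CLI or REST queries in the `condition` callable and tune `timeout`/`interval` per protocol.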
Posted 2 months ago