5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
Role Overview: As a Senior Data Engineer at DATAECONOMY, you will lead the end-to-end development of complex models for compliance and supervision. Your deep expertise in cloud-based infrastructure, ETL pipeline development, and the financial domain will be crucial in creating robust, scalable, and efficient solutions.
Key Responsibilities:
- Lead the development of advanced models using AWS services such as EMR, Glue, and Glue Notebooks.
- Design, build, and optimize scalable cloud infrastructure solutions.
- Create, manage, and optimize ETL pipelines using PySpark for large-scale data processing.
- Build and maintain CI/CD pipelines for deploying and maintaining cloud-based applications.
- Perform detailed data analysis and deliver actionable insights to stakeholders.
- Work closely with cross-functional teams to understand requirements, present solutions, and ensure alignment with business goals.
- Operate effectively in agile or hybrid-agile environments, delivering high-quality results within tight deadlines.
- Enhance and expand existing frameworks and capabilities to support evolving business needs.
- Create clear documentation and present technical solutions to both technical and non-technical audiences.
Qualifications Required:
- 5+ years of experience with Python programming.
- 5+ years of experience in cloud infrastructure, particularly AWS.
- 3+ years of experience with PySpark, including usage with EMR or Glue Notebooks.
- 3+ years of experience with Apache Airflow for workflow orchestration.
- Solid experience with data analysis in fast-paced environments.
- Strong understanding of capital markets, financial systems, or prior experience in the financial domain is a must.
- Proficiency with cloud-native technologies and frameworks.
- Familiarity with CI/CD practices and tools such as Jenkins, GitLab CI/CD, or AWS CodePipeline.
- Experience with notebooks (e.g., Jupyter, Glue Notebooks) for interactive development.
- Excellent problem-solving skills and the ability to handle complex technical challenges.
- Strong communication and interpersonal skills for collaborating across teams and presenting solutions to diverse audiences.
- Ability to thrive in a fast-paced, dynamic environment.
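The responsibilities above center on PySpark ETL for compliance data. As a rough illustration of the filter-normalize-aggregate shape such a pipeline takes, here is a minimal, framework-agnostic Python sketch (the record fields `account` and `amount` are hypothetical; a real Glue/EMR job would express the same steps as PySpark DataFrame operations):

```python
from collections import defaultdict

def transform(records):
    """Clean raw records and aggregate amounts per account.

    Mirrors the filter -> normalize -> groupBy/sum shape of a typical
    compliance pipeline, expressed with plain Python for brevity.
    """
    totals = defaultdict(float)
    for rec in records:
        # Drop rows missing the fields a supervision model depends on.
        if not rec.get("account") or rec.get("amount") is None:
            continue
        # Normalize identifiers before grouping.
        account = rec["account"].strip().upper()
        totals[account] += float(rec["amount"])
    return dict(totals)

raw = [
    {"account": " acc-1 ", "amount": "100.5"},
    {"account": "ACC-1", "amount": 50},
    {"account": None, "amount": 10},      # filtered out
    {"account": "acc-2", "amount": "7"},
]
print(transform(raw))  # {'ACC-1': 150.5, 'ACC-2': 7.0}
```

In PySpark the same logic would be a `filter` followed by `withColumn` normalization and a `groupBy("account").sum("amount")`, run as a Glue job.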
Posted 11 hours ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Node.js Developer at Capgemini, you will play a crucial role in developing server-side applications and APIs. Your responsibilities will include:
- Designing and deploying cloud-native solutions using AWS services such as Lambda, API Gateway, S3, DynamoDB, EC2, and CloudFormation.
- Implementing and managing CI/CD pipelines for automated deployments.
- Optimizing application performance to ensure scalability and reliability.
- Collaborating with front-end developers, DevOps, and product teams to deliver end-to-end solutions.
- Monitoring and troubleshooting production issues in cloud environments.
- Writing clean, maintainable, and well-documented code.
To excel in this role, you should have the following qualifications:
- Strong proficiency in Node.js and JavaScript/TypeScript.
- Hands-on experience with AWS services (Lambda, API Gateway, S3, DynamoDB, etc.).
- Experience with serverless architecture and event-driven programming.
- Familiarity with Infrastructure as Code (IaC) tools like CloudFormation or Terraform.
- Knowledge of RESTful APIs, authentication (OAuth, JWT), and microservices.
- Experience with CI/CD tools (e.g., GitHub Actions, Jenkins, AWS CodePipeline).
- Understanding of logging, monitoring, and alerting tools (e.g., CloudWatch, ELK Stack).
Capgemini offers a range of career paths and internal opportunities, allowing you to shape your career with personalized guidance from leaders. You will also benefit from comprehensive wellness benefits, access to a digital learning platform with 250,000+ courses, and the opportunity to work for a global business and technology transformation partner. Capgemini is a responsible and diverse group with a heritage of over 55 years and a presence in more than 50 countries. Trusted by clients to unlock the value of technology, Capgemini delivers end-to-end services and solutions leveraging strengths in AI, generative AI, cloud, and data, combined with deep industry expertise and a strong partner ecosystem.
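Serverless responsibilities like these revolve around the Lambda plus API Gateway request/response contract. Here is a minimal sketch (shown in Python for brevity; the proxy-integration event and response shapes are the same for a Node.js runtime, and the `name` query parameter is an invented example):

```python
import json

def handler(event, context=None):
    """Minimal API Gateway (proxy integration) style handler.

    API Gateway passes the HTTP request as a dict and expects a
    statusCode/headers/body structure back, regardless of runtime.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

resp = handler({"queryStringParameters": {"name": "capgemini"}})
print(resp["statusCode"], resp["body"])
```

The guard `event.get(...) or {}` matters in practice: API Gateway sends `null` for `queryStringParameters` when no query string is present.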
Posted 3 days ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
As a Senior Data Scientist at our client's organization, you will architect agentic AI solutions and oversee the entire ML lifecycle, from proof of concept to production. You will play a key role in operationalizing large language models, designing multi-agent AI systems for cybersecurity tasks, and implementing MLOps best practices.
Key Responsibilities:
- Operationalize large language models and agentic workflows (LangChain, LangGraph, LlamaIndex, Crew.AI) to automate security decision-making and threat response.
- Design, deploy, and maintain multi-agent AI systems for log analysis, anomaly detection, and incident response.
- Build proof-of-concept GenAI solutions and evolve them into production-ready components on AWS (Bedrock, SageMaker, Lambda, EKS/ECS) using reusable best practices.
- Implement CI/CD pipelines for model training, validation, and deployment with GitHub Actions, Jenkins, and AWS CodePipeline.
- Manage model versioning with MLflow and DVC, and set up automated testing, rollback procedures, and retraining workflows.
- Automate cloud infrastructure provisioning with Terraform, and develop REST APIs and microservices containerized with Docker and Kubernetes.
- Monitor models and infrastructure through CloudWatch, Prometheus, and Grafana; analyze performance and optimize for cost and SLA compliance.
- Collaborate with data scientists, application developers, and security analysts to integrate agentic AI into existing security workflows.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, AI, or a related quantitative discipline.
- 4+ years of software development experience, including 3+ years building and deploying LLM-based/agentic AI architectures.
- In-depth knowledge of generative AI fundamentals (LLMs, embeddings, vector databases, prompt engineering, RAG).
- Hands-on experience with LangChain, LangGraph, LlamaIndex, Crew.AI, or equivalent agentic frameworks.
- Strong proficiency in Python and production-grade coding for data pipelines and AI workflows.
- Deep MLOps knowledge: CI/CD for ML, model monitoring, automated retraining, and production-quality best practices.
- Extensive AWS experience with Bedrock, SageMaker, Lambda, EKS/ECS, and S3 (Athena, Glue, Snowflake preferred).
- Infrastructure as Code skills with Terraform.
- Experience building REST APIs and microservices, and containerization with Docker and Kubernetes.
- Solid data science fundamentals: feature engineering, model evaluation, data ingestion.
- Understanding of cybersecurity principles, SIEM data, and incident response.
- Excellent communication skills for both technical and non-technical audiences.
Preferred Qualifications:
- AWS certifications (Solutions Architect, Developer Associate).
- Experience with Model Context Protocol (MCP) and RAG integrations.
- Familiarity with workflow orchestration tools (Apache Airflow).
- Experience with time series analysis, anomaly detection, and machine learning.
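The drift-detection pipelines mentioned above can be grounded with a standard metric. Below is a small sketch of the Population Stability Index over pre-binned histograms (the bin counts and the 0.1/0.25 rule-of-thumb thresholds are illustrative assumptions, not from the posting):

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions.

    A common data-drift signal: PSI below ~0.1 is usually treated as
    stable, above ~0.25 as significant drift warranting retraining.
    """
    e_total, a_total = sum(expected), sum(actual)
    score = 0.0
    for e, a in zip(expected, actual):
        # Clamp with eps so empty bins don't blow up the log term.
        e_pct = max(e / e_total, eps)
        a_pct = max(a / a_total, eps)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score

baseline = [100, 200, 400, 200, 100]   # training-time histogram
today    = [ 80, 150, 300, 250, 220]   # production histogram
print(round(psi(baseline, today), 4))
```

A monitoring job would compute this per feature on a schedule and raise an alert (or trigger retraining) when the score crosses the chosen threshold.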
Posted 5 days ago
2.0 - 6.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Backend Developer, you will design, build, and maintain high-quality, scalable backend applications using Python, with a focus on robust functionality and performance across all aspects of application development.

Your primary responsibilities will include designing and optimizing APIs, implementing asynchronous programming for efficient task handling, managing databases effectively, integrating cloud services for deployment and monitoring, working with CI/CD pipelines for continuous deployment, and ensuring code quality through testing, debugging, and version-control practices.

To excel in this role, you will need a Bachelor's degree in Computer Science, Software Engineering, or a related field, along with 2-4 years of experience in backend development with Python. Strong knowledge of web frameworks, databases, and API development is essential. Key skills include core Python concepts and object-oriented programming principles; web frameworks such as FastAPI, Sanic, and Django; SQL management and optimization; AWS services; RESTful APIs; asynchronous programming; version control with Git; CI/CD pipeline setup; testing with PyTest and UnitTest; and debugging tools such as pdb and Sentry.

You are also encouraged to stay current with new technologies and trends in Python development, cloud computing, and backend best practices. If you are passionate about backend development, have a strong foundation in Python and related technologies, and enjoy working collaboratively in a dynamic development environment, this opportunity with Twenty Point Nine Five Ventures could be the perfect fit for you.
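Since asynchronous task handling is called out as a core skill, here is a minimal asyncio sketch of the pattern (the `fetch` coroutine is a stand-in; a real backend would await database or HTTP calls instead of `sleep`):

```python
import asyncio

async def fetch(resource, delay):
    # Stand-in for a real I/O call (DB query, HTTP request).
    await asyncio.sleep(delay)
    return f"{resource}:done"

async def main():
    # gather() runs the awaitables concurrently, so total wall time is
    # bounded by the slowest call rather than the sum of all calls.
    results = await asyncio.gather(
        fetch("users", 0.02),
        fetch("orders", 0.01),
    )
    return results

print(asyncio.run(main()))  # ['users:done', 'orders:done']
```

This is the same concurrency model FastAPI and Sanic endpoints build on: declare the handler `async` and await I/O instead of blocking the worker.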
Posted 6 days ago
6.0 - 10.0 years
0 Lacs
tamil nadu
On-site
As a Cloud + DevOps Engineer with 6-10 years of experience, your primary responsibility will be automating deployments, managing CI/CD pipelines, and monitoring system health across cloud environments. You will leverage modern DevOps tools and practices to ensure scalable, secure, and efficient infrastructure and application delivery.

In this role, you will automate deployment processes and infrastructure provisioning; optimize CI/CD pipelines using tools like Jenkins, GitHub Actions, and AWS CodePipeline; monitor system health and performance using CloudWatch and Prometheus; implement Infrastructure as Code (IaC) using Terraform and AWS CloudFormation; containerize applications using Docker; and orchestrate them with Kubernetes. Collaborating with development and operations teams to ensure seamless integration and delivery, and applying security best practices across cloud and container environments, will also be crucial aspects of the job.

The ideal candidate has strong experience with cloud platforms (preferably AWS) and DevOps practices; hands-on expertise with CI/CD tools such as Jenkins, GitHub Actions, and CodePipeline; proficiency in Infrastructure as Code using Terraform and CloudFormation; experience with monitoring tools such as CloudWatch and Prometheus; a solid understanding of containerization and orchestration using Docker and Kubernetes; knowledge of security best practices in cloud and DevOps environments; and excellent problem-solving and collaboration skills.

Working at Capgemini, you will enjoy flexible work arrangements and a supportive work-life balance, an inclusive and collaborative culture with opportunities for growth, access to cutting-edge technologies and certifications, and the chance to work on impactful cloud and DevOps projects. Capgemini is a global business and technology transformation partner, trusted by clients worldwide to address their business needs through a diverse and responsible approach that leverages digital, cloud, and data capabilities for an inclusive and sustainable future.
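The monitoring duties above (CloudWatch, Prometheus) ultimately come down to evaluating metrics against thresholds. A toy Python sketch of CloudWatch-style consecutive-period alarm evaluation follows (the metric values and thresholds are invented for illustration):

```python
def alarm_state(datapoints, threshold, periods):
    """Evaluate a CloudWatch-style alarm: return ALARM when the metric
    breaches the threshold for `periods` consecutive datapoints, else OK.
    """
    streak = 0
    for value in datapoints:
        # Reset the streak whenever a datapoint falls back under the line.
        streak = streak + 1 if value > threshold else 0
        if streak >= periods:
            return "ALARM"
    return "OK"

cpu = [55, 61, 92, 95, 97, 70]  # percent utilisation per period
print(alarm_state(cpu, threshold=90, periods=3))  # ALARM
print(alarm_state(cpu, threshold=90, periods=4))  # OK
```

Requiring several consecutive breaching periods is the standard way to avoid paging on a single noisy datapoint.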
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
Choosing Capgemini means choosing a company where you will be empowered to shape your career the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Your role will involve designing and managing AWS infrastructure using services like EC2, S3, VPC, IAM, CloudFormation, and Lambda. You will implement CI/CD pipelines using tools such as Jenkins, GitHub Actions, or AWS CodePipeline, and automate infrastructure provisioning using Infrastructure-as-Code (IaC) tools like Terraform or AWS CloudFormation. Monitoring and optimizing system performance using CloudWatch, Datadog, or Prometheus will be crucial, as will ensuring cloud security and compliance by applying best practices in IAM, encryption, and network security. You will collaborate with development teams to streamline deployment processes and improve release cycles, troubleshoot and resolve issues in development, test, and production environments, and maintain documentation for infrastructure, processes, and configurations. You are also expected to keep up with AWS services and DevOps trends to continuously improve systems.

Your profile should include 3+ years of experience in DevOps or cloud engineering roles; strong hands-on experience with AWS services and architecture; proficiency in scripting languages (e.g., Python, Bash); experience with containerization tools (Docker, Kubernetes); familiarity with CI/CD tools and version control systems (Git); knowledge of monitoring and logging tools; and excellent problem-solving and communication skills. AWS certifications (e.g., AWS Certified DevOps Engineer) are a plus.

At Capgemini, we are committed to ensuring that people of all backgrounds feel encouraged and have a sense of belonging. You are valued for who you are, and you can bring your original self to work. Every Monday, kick off the week with a musical performance by our in-house band, The Rubber Band, and take part in internal sports events, yoga challenges, or marathons. You can work on cutting-edge projects in tech and engineering with industry leaders, or create solutions to overcome societal and environmental challenges.

Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. With a heritage of over 55 years, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions, from strategy and design to engineering, fueled by its market-leading capabilities in AI, generative AI, cloud, and data, combined with deep industry expertise and a strong partner ecosystem.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
vadodara, gujarat
On-site
We are seeking a skilled Node.js Backend Developer with over 4 years of experience in constructing scalable, enterprise-level applications using Node.js, TypeScript, and AWS services. As a crucial member of our development team, you will be responsible for the design, development, and deployment of backend applications leveraging AWS cloud technologies. Your role will involve contributing to architectural decisions, collaborating with cross-functional teams, and ensuring that applications are optimized for performance, scalability, and portability across various environments. Your responsibilities will include designing, developing, and maintaining scalable Node.js (TypeScript) applications and microservices on AWS through services like EventBridge, Lambda, and API Gateway. You will also be utilizing AWS services such as EC2, SQS, SNS, ElastiCache, and CloudWatch to build cloud-native solutions that prioritize high performance, scalability, and availability. Furthermore, you will develop containerized solutions using Docker and AWS ECS/ECR to facilitate portability and cloud-agnostic deployments. Additionally, you will build and uphold RESTful APIs that are integrated with AWS cloud services. In this role, you will actively contribute to the design of new features, ensuring that applications remain scalable, portable, and maintainable. You will troubleshoot and optimize cloud-based applications to enhance their performance. Implementing monitoring, logging, and tracing solutions using AWS CloudWatch, X-Ray, and other third-party tools will also be part of your responsibilities. Your code will be expected to be clean, well-documented, and maintainable, following industry best practices. Moreover, you will participate in code reviews, mentor junior developers, and advocate for coding best practices. It is essential to stay informed about new AWS services and industry best practices and apply them to your development work effectively. 
Key Skills and Qualifications:
- Over 4 years of experience in Node.js and TypeScript development, with a solid grasp of asynchronous programming and event-driven architecture.
- Proficiency in frameworks like Express.js, Nest.js, or similar Node.js-based frameworks.
- Strong understanding of TypeScript, including type definitions, interfaces, and decorators for building maintainable code.
- Experience working with AWS services like EC2, EventBridge, VPC, API Gateway, Lambda, SQS, SNS, ElastiCache, CloudWatch, and S3 for cloud-native application development.
- Hands-on experience with Docker for containerization, and familiarity with AWS Elastic Container Registry (ECR) and Elastic Container Service (ECS).
- Sound knowledge of microservices architecture and building RESTful APIs integrated with AWS services.
- Proficiency with Git for version control.
- Strong troubleshooting and performance-optimization skills in cloud environments.
- Experience working in an Agile/Scrum environment, participating in sprint planning, stand-ups, and retrospectives.
- Excellent written and verbal communication skills for effective collaboration with technical and non-technical team members.
Desired Skills:
- Experience with unit-testing frameworks like Mocha, Chai, Jest, or similar for automated testing.
- Familiarity with monitoring, logging, and tracing tools such as AWS CloudWatch, X-Ray, OpenTelemetry, or third-party integrations.
- Knowledge of CI/CD pipelines using tools like Jenkins, AWS CodePipeline, or similar for automating deployment processes.
- Understanding of AWS security best practices, including IAM roles, policies, encryption techniques, and securing AWS resources.
This role does not require knowledge of Java or Spring Boot.
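The event-driven EventBridge work described above routes published events to consumers by matching rules. A much-simplified Python sketch of that routing model follows (real EventBridge patterns are richer than this exact-value matching, and the `orders`/`invoice` names are invented):

```python
def matches(pattern, event):
    """Simplified EventBridge-style matching: every pattern field must
    list an allowed value for the corresponding event field."""
    return all(event.get(key) in allowed for key, allowed in pattern.items())

class EventBus:
    def __init__(self):
        self.rules = []  # (pattern, handler) pairs, like EventBridge rules

    def subscribe(self, pattern, handler):
        self.rules.append((pattern, handler))

    def publish(self, event):
        # Deliver the event to every handler whose rule matches it.
        return [h(event) for p, h in self.rules if matches(p, event)]

bus = EventBus()
bus.subscribe({"source": ["orders"], "type": ["created"]},
              lambda e: f"invoice for {e['id']}")
print(bus.publish({"source": "orders", "type": "created", "id": "o-1"}))
```

The decoupling shown here is the point of the architecture: publishers never know which Lambda targets (if any) will consume an event.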
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
You will join our team as a Full Stack Developer with 5+ years of experience. Your main responsibilities will revolve around designing, developing, testing, and deploying applications using ReactJS for the front end and Node.js for the back end, while ensuring performance, scalability, and security. You will utilize AWS Cloud services such as EC2, S3, Lambda, API Gateway, DynamoDB, and RDS to build scalable, fault-tolerant solutions.

Your role will involve implementing scalable, cost-effective, and resilient cloud architectures; managing application state using tools like Redux, MobX, or Zustand; and writing unit and integration tests using frameworks such as Jest, Mocha, or Chai. You will develop modern user interfaces using frameworks like Tailwind CSS, Material UI, or Bootstrap, lead the development process, collaborate with cross-functional teams, and stay updated with the latest technologies and industry trends. Mentoring junior developers, guiding their career growth, and ensuring best practices are followed within the team will also be part of your tasks.

You should have hands-on experience in ReactJS, Node.js, HTML, CSS, JavaScript, and TypeScript, along with expertise in AWS Cloud services, unit testing, state management, UI frameworks, version control using Git, and effective collaboration in agile environments. Preferred skills include experience with Next.js, Docker, automated deployment pipelines, and security best practices for web applications and cloud environments.
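State management with Redux-style tools rests on one contract: a pure reducer, `(state, action) -> new state`, that never mutates in place. A minimal sketch of that contract (written in Python for consistency with the other examples here; the `ADD_ITEM`/`CLEAR` action names are invented):

```python
def reducer(state, action):
    """Redux-style reducer: a pure function of (state, action).

    Each branch returns a fresh state object instead of mutating the
    old one, which is what makes time-travel debugging and change
    detection possible in Redux-like stores.
    """
    if action["type"] == "ADD_ITEM":
        return {**state, "items": state["items"] + [action["payload"]]}
    if action["type"] == "CLEAR":
        return {**state, "items": []}
    return state  # unknown actions leave state untouched

state = {"items": []}
state = reducer(state, {"type": "ADD_ITEM", "payload": "book"})
state = reducer(state, {"type": "ADD_ITEM", "payload": "pen"})
print(state["items"])  # ['book', 'pen']
```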
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a qualified candidate for this position, you should have experience in at least one high-level programming language such as Python, Ruby, or Go, and a strong understanding of object-oriented programming concepts. You must be proficient in designing, deploying, and managing distributed systems and service-oriented architectures.

Your responsibilities will include designing and implementing Continuous Integration, Continuous Deployment, and Continuous Testing pipelines using tools like Jenkins, Bamboo, Azure DevOps, and AWS CodePipeline, alongside supporting DevOps tools such as Sonar, Maven, Git, Nexus, and UCD. You should also have experience deploying, managing, and monitoring applications and services on both cloud and on-premises infrastructure such as AWS, Azure, OpenStack, Cloud Foundry, and OpenShift, and be proficient with Infrastructure as Code tools like Terraform, CloudFormation, and Azure ARM templates.

Your role will involve developing and managing monitoring and log-analysis tooling to handle operations efficiently; knowledge of tools like AppDynamics, Datadog, Splunk, Kibana, Prometheus, Grafana, and Elasticsearch will be beneficial. You should demonstrate the ability to maintain enterprise-scale production software and possess knowledge of diverse system landscapes such as Linux and Windows. Expertise in analyzing and troubleshooting large-scale distributed systems and microservices is essential, as is experience with Unix/Linux operating system internals and administration, including file systems, inodes, system calls, networking, TCP/IP routing, and network topologies.

Preferred skills for this role include expertise in Continuous Integration within the mainframe environment and Continuous Testing practices.
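For the log-analysis side of the role, the core task is turning raw log lines into metrics a monitor can alert on. A small stdlib sketch (the log format and the error-rate use case are assumptions for illustration):

```python
import re
from collections import Counter

# Assumed line shape: "<date> <time> <LEVEL> <message>"
LOG_LINE = re.compile(r"^\S+ \S+ (?P<level>[A-Z]+) ")

def error_rate(lines):
    """Count log levels and return (counts, fraction of ERROR lines),
    the kind of summary a log-analysis alert would automate."""
    levels = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            levels[m.group("level")] += 1
    total = sum(levels.values())
    return levels, (levels["ERROR"] / total if total else 0.0)

logs = [
    "2024-05-01 10:00:01 INFO request served",
    "2024-05-01 10:00:02 ERROR upstream timeout",
    "2024-05-01 10:00:03 INFO request served",
    "2024-05-01 10:00:04 WARN slow query",
]
levels, rate = error_rate(logs)
print(levels["ERROR"], rate)  # 1 0.25
```

Tools like Splunk or Kibana generalize exactly this: parse, bucket, and threshold.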
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
ahmedabad, gujarat
On-site
As a Senior DevOps Engineer at HolboxAI in Ahmedabad, India, you will play a crucial role in designing, implementing, and managing our AWS cloud infrastructure. Your primary focus will be on automating deployment processes and migrating production workloads to AWS, while ensuring robust, scalable, and resilient systems that adhere to industry best practices.

Your key responsibilities will include leading the assessment, planning, and execution of AWS production workload migrations with minimal disruption; building and managing Infrastructure as Code (IaC) using tools like Terraform or CloudFormation; optimizing CI/CD pipelines for seamless deployment through Jenkins, GitLab CI, or AWS CodePipeline; implementing proactive monitoring and alerting using tools such as Prometheus, Grafana, the ELK Stack, or AWS CloudWatch; and enforcing best practices in cloud security, including access controls, encryption, and vulnerability assessments.

To excel in this role, you should have 3 to 5 years of experience in DevOps, cloud infrastructure, or SRE roles with a focus on AWS environments. Proficiency in core AWS services like EC2, S3, RDS, VPC, IAM, and Lambda, along with cloud migration experience, is essential. You should also be skilled in Terraform or CloudFormation, proficient in scripting and automation using Python or Bash, experienced with CI/CD tools like Jenkins, GitLab CI, or AWS CodePipeline, familiar with monitoring tools such as Prometheus, Grafana, the ELK Stack, or AWS CloudWatch, and possess a strong understanding of cloud security principles.

Preferred qualifications include certifications such as AWS Certified Solutions Architect and AWS Certified DevOps Engineer, expertise in Docker and Kubernetes for containerization, familiarity with AWS Lambda and Fargate for serverless applications, and skills in performance optimization focusing on high availability, scalability, and cost-efficiency tuning.

At HolboxAI, you will have the opportunity to work on cutting-edge AI technologies in a collaborative environment that encourages open knowledge sharing and rapid decision-making, with generous sponsorship for learning and research initiatives. If you are excited about this opportunity, we invite you to apply now and join us in shaping the future of cloud engineering at HolboxAI.
Posted 2 weeks ago
6.0 - 11.0 years
15 - 30 Lacs
hyderabad
Hybrid
Title: MLOps Engineer (Senior / Manager)
Experience: 6 to 12 years
Location: Hyderabad
Interview Date: 6th September 2025, at Hyderabad
Mode of Interview: In-person (scheduled)
Note: This is a scheduled interview process. Candidates will be invited based on shortlisting and prior appointment. Kindly note that this is not a walk-in interview.
Job Summary:
We are seeking a skilled and proactive MLOps Engineer to join our AI/ML team. The ideal candidate will be responsible for designing, implementing, and maintaining robust MLOps pipelines and infrastructure across cloud platforms. You will collaborate closely with data scientists, software engineers, and DevOps teams to operationalize machine learning models and ensure scalable, secure, and automated deployment and monitoring.
Key Responsibilities:
- Design and implement model deployment, monitoring, and retraining pipelines.
- Build and maintain CI/CD/CT pipelines for ML workflows using tools like MLflow, Kubeflow, Airflow, GitHub Actions, and AWS CodePipeline.
- Develop and manage inference, monitoring, and drift-detection pipelines (data drift, model drift).
- Architect scalable and secure MLOps infrastructure using Kubernetes, AKS, and Terraform.
- Publish and manage REST APIs for model inference using FastAPI.
- Track experiments and model performance metrics.
- Collaborate with cross-functional teams to raise MLOps maturity across the organization.
- Conduct internal training and presentations on MLOps tools and best practices.
Required Skills & Technologies:
- Cloud platforms: AWS SageMaker, Azure ML Studio, GCP Vertex AI
- Big data & processing: PySpark, Azure Databricks
- MLOps tools: MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline
- Infrastructure & automation: Kubernetes, AKS, Terraform, FastAPI
- Programming languages: Python (ML and automation), Bash, Unix CLI
Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Proven experience operationalizing ML models using MLOps frameworks.
- Strong understanding of ML/AI concepts and hands-on experience in model development.
- Experience with cloud-native development and container orchestration.
- Familiarity with agile methodologies and DevOps practices.
Preferred Qualifications:
- Certification in AWS, Azure, or GCP.
- Experience with DataRobot, DKube, or similar platforms.
- Exposure to enterprise-level ML systems and governance.
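Model versioning and rollback, mentioned under the responsibilities, can be pictured with a toy registry. MLflow's real Model Registry tracks stages per version; this stripped-down Python sketch only keeps a promotion history to show the rollback idea:

```python
class ModelRegistry:
    """Toy MLflow-style registry: register versions, promote one to
    production, and roll back to the previously promoted version."""

    def __init__(self):
        self.versions = []  # all registered version names, in order
        self.history = []   # promotion history; last item is live

    def register(self, version):
        self.versions.append(version)

    def promote(self, version):
        assert version in self.versions, "unknown version"
        self.history.append(version)

    def rollback(self):
        # Drop the current production version, restore the previous one.
        assert len(self.history) >= 2, "nothing to roll back to"
        self.history.pop()
        return self.history[-1]

reg = ModelRegistry()
for v in ("v1", "v2"):
    reg.register(v)
reg.promote("v1")
reg.promote("v2")
print(reg.rollback())  # v1
```

In a CI/CD/CT pipeline, a failed validation or drift alarm on the live version would trigger exactly this kind of rollback step automatically.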
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
hyderabad, telangana
On-site
As an Automation Testing Engineer specializing in Property & Casualty (P&C) insurance at ValueMomentum's Engineering Center in Hyderabad, you will play a crucial role in ensuring the quality and reliability of insurance software applications. Your responsibilities will include developing and executing test automation strategies, guiding junior team members, and collaborating with various teams to deliver high-quality software solutions tailored to the P&C insurance domain.

You will join a team of passionate engineers dedicated to solving complex business challenges with innovative solutions across the P&C insurance value chain. The Engineering Center focuses on Cloud Engineering, Application Engineering, Data Engineering, Core Engineering, Quality Engineering, and domain expertise to drive continuous improvement and deliver impactful projects.

Your main responsibilities will involve developing and implementing comprehensive test strategies; creating scalable test-automation frameworks using tools like Selenium and REST Assured; integrating automated tests into CI/CD pipelines; leading and mentoring a team of SDETs; collaborating with stakeholders to define testing requirements; overseeing test execution and analysis; and ensuring alignment with P&C insurance business workflows and regulatory requirements.

To qualify for this role, you should have a minimum of 5 years of software testing experience, including at least 3 years in a senior or lead position; a strong understanding of P&C insurance processes; proficiency in programming languages like Java, Python, or C#; hands-on experience with Selenium and REST Assured; familiarity with CI/CD tools such as Jenkins or AWS CodePipeline; and experience with Agile methodologies.

ValueMomentum, headquartered in New Jersey, US, is a leading provider of IT services and solutions to insurers, focusing on Digital, Data, Core, and IT Transformation initiatives. As part of the team, you will have access to competitive compensation, career advancement opportunities, comprehensive training programs, and a supportive work environment to help you grow both professionally and personally. Embrace this opportunity to contribute to transformative projects and enhance your skills in a dynamic and collaborative setting.
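Data-driven execution is the backbone of the automation frameworks described above: one runner, many cases, a full report rather than a stop at the first failure. A tiny sketch of the idea (the `premium` rating rule and its cases are invented stand-ins for a real P&C system under test):

```python
def run_cases(func, cases):
    """Minimal data-driven test runner: each case is (args, expected).

    Returns (args, passed) for every case instead of stopping at the
    first failure, so a complete run can be reported to stakeholders.
    """
    results = []
    for args, expected in cases:
        try:
            passed = func(*args) == expected
        except Exception:
            passed = False  # an exception counts as a failed case
        results.append((args, passed))
    return results

def premium(base, risk_factor):
    # Toy P&C rating rule playing the role of the system under test.
    return round(base * risk_factor, 2)

cases = [((1000, 1.2), 1200.0), ((500, 0.9), 450.0), ((100, 1.0), 101.0)]
results = run_cases(premium, cases)
print([p for _, p in results])  # [True, True, False]
```

Frameworks like TestNG data providers or pytest's `parametrize` are industrial versions of this same loop.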
Posted 2 weeks ago
1.0 - 5.0 years
0 Lacs
karnataka
On-site
You are a skilled Tester with 2 years of experience, responsible for ensuring the quality and performance of MERN-stack applications hosted on AWS. You will collaborate with developers, product managers, and DevOps teams to deliver robust, scalable, high-quality web applications.

Your key responsibilities include developing and executing comprehensive test plans, test cases, and test scripts for MERN-stack applications; performing functional, integration, regression, and performance testing across front-end and back-end components; validating the data integrity and performance of MongoDB databases; testing RESTful APIs; and identifying bugs, bottlenecks, and security vulnerabilities through automated and manual testing. You will collaborate with DevOps to test and validate application deployments on AWS, monitor application performance, scalability, and reliability, and document test results and defects. Participation in Agile ceremonies to ensure quality at every development stage is crucial, along with staying updated on the latest testing tools and methodologies.

Required skills include 2 years of experience in software testing; proficiency in testing React and Node.js applications; familiarity with MongoDB; hands-on experience with testing tools; knowledge of AWS services; and an understanding of CI/CD pipelines. Strong analytical, problem-solving, communication, and collaboration skills are essential. A Bachelor's degree in Computer Science or a related field is required, and ISTQB Foundation Level certification is preferred. Nice-to-have skills include experience with JMeter or LoadRunner for performance testing, knowledge of Docker and Kubernetes for microservices testing, and exposure to GraphQL testing.

You will have the opportunity to work on cutting-edge MERN-stack applications hosted on AWS in a collaborative and inclusive work environment. Health insurance, paid sick time, paid time off, Provident Fund, and work-from-home options are provided as benefits. The position is full-time on a day shift, Monday to Friday, with a performance bonus. Applicants may be asked about immediate availability, experience in MERN-stack testing, current CTC, and budget acceptance. If interested, please apply for this exciting opportunity.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
delhi
On-site
As an AWS Technical Lead / Architect at Hitachi Digital, you will play a crucial role in driving the design, implementation, and optimization of scalable cloud architectures on AWS. Your expertise will be instrumental in leading cloud transformation initiatives, guiding DevOps practices, and acting as a subject matter expert for cloud infrastructure, security, and performance.

Your responsibilities will include designing and implementing robust, scalable, and secure AWS cloud architectures for enterprise-level applications and workloads. You will lead cloud solution development, including architecture design, automation, and deployment strategies, while providing technical leadership to cloud engineers and developers and mentoring junior team members. Collaboration with stakeholders to translate business needs into cost-effective cloud solutions aligned with best practices will be essential. Additionally, you will be responsible for implementing security, governance, and compliance controls across cloud infrastructure, evaluating and recommending AWS services and tools for specific business use cases, and supporting the migration of legacy systems to the cloud with minimal disruption and high performance. Monitoring and optimizing cloud workloads for performance, availability, and cost, and staying current with AWS technologies and trends, incorporating innovations where appropriate, are also key aspects of your role.

To qualify for this position, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with at least 5 years of hands-on experience in designing and managing AWS infrastructure. Your proven experience in cloud architecture, DevOps practices, and automation tools such as Terraform, CloudFormation, and Ansible will be crucial.

A strong understanding of AWS core services, networking, security best practices, and identity and access management in AWS, as well as proficiency in scripting and infrastructure-as-code (IaC), are necessary for success in this role. Familiarity with CI/CD tools and pipelines like Jenkins, GitLab, and AWS CodePipeline, along with excellent communication, leadership, and problem-solving skills, will further enhance your effectiveness as an AWS Technical Lead / Architect at Hitachi Digital. Preferred certifications for this role include AWS Certified Solutions Architect - Professional, AWS Certified DevOps Engineer, and AWS Certified Security - Specialty.

At Hitachi Digital, we are a global team of professional experts dedicated to promoting and delivering Social Innovation through our One Hitachi initiative. We value diversity, equity, and inclusion, and encourage individuals from all backgrounds to apply and contribute to our mission of creating a digital future. We offer industry-leading benefits, support, and services to take care of your holistic health and wellbeing, along with flexible arrangements that prioritize life balance. At Hitachi Digital, you will experience a sense of belonging, autonomy, freedom, and ownership as you collaborate with talented individuals and share knowledge in a supportive and innovative environment.
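The infrastructure-as-code skills this role asks for boil down to treating cloud resources as versioned, reviewable artifacts. As a minimal sketch (not tied to any employer's stack), a CloudFormation template can be built as a plain Python dict and emitted as JSON; the logical ID and bucket settings below are illustrative only:

```python
import json


def make_bucket_template(bucket_logical_id: str) -> dict:
    """Build a minimal CloudFormation template as a plain dict.

    The logical ID and properties are illustrative, not a
    production configuration.
    """
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            bucket_logical_id: {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    # Block all public access -- a common security baseline.
                    "PublicAccessBlockConfiguration": {
                        "BlockPublicAcls": True,
                        "BlockPublicPolicy": True,
                        "IgnorePublicAcls": True,
                        "RestrictPublicBuckets": True,
                    }
                },
            }
        },
    }


if __name__ == "__main__":
    # The rendered JSON is what you would commit, review, and deploy.
    print(json.dumps(make_bucket_template("ArtifactBucket"), indent=2))
```

Tools like Terraform or CDK add state management and higher-level abstractions on top, but the underlying artifact is the same kind of declarative document.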
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
kochi, kerala
On-site
As a PHP Developer at LiteBreeze, you will have the opportunity to utilize your strong backend foundation and knowledge of UI and DevOps workflows to build scalable web applications. Your role will involve contributing across the entire development lifecycle, from requirements gathering to deployment.

Your responsibilities will include developing and maintaining PHP 8 applications using clean, object-oriented code; designing and implementing business logic, APIs, and database interactions; participating in sprint planning, estimations, and code reviews; collaborating with UI/UX and DevOps teams for seamless delivery; and taking ownership of end-to-end development of custom web projects.

You should have in-depth experience with PHP frameworks like Laravel, Symfony, or CodeIgniter, as well as with RDBMS systems such as MySQL and PostgreSQL. Proficiency in HTML, CSS, and JavaScript for basic frontend collaboration, version control using Git, and containerization with Docker is essential. Additionally, exposure to cloud platforms (AWS, Azure, Google Cloud), CI/CD tools (Bitbucket Pipelines, AWS CodePipeline, Jenkins), testing tools (PHPUnit and PEST), search technologies (ElasticSearch, Algolia, Apache Solr), frontend frameworks (Angular, React, Vue), and basic scripting in Bash or Python for task automation will be valuable.

LiteBreeze offers a stimulating work environment with complex, customized team projects, the chance to lead projects, work on assignments from North European clients, clear career growth opportunities, and the freedom to implement new ideas and technologies. Employees also benefit from free technical certifications such as AWS, opportunities to learn other backend technologies like Go and Node.js, and a workplace that has been Great Place to Work certified for three consecutive years.

Join LiteBreeze to work on cutting-edge, customized web projects for North European clients, with clear growth paths, opportunities to enhance your technical skills, and a supportive work-life balance. Top talent is rewarded with competitive salaries, opportunities for professional development and coaching, and the possibility of on-site visits to Europe. LiteBreeze is seeking individuals with excellent English communication skills, a self-directed learning approach, enthusiasm for client service, and the ability to swiftly understand client requirements and deliver high-value customized solutions. If you are passionate about improving your skillset, working on challenging projects, obtaining free AWS certification, achieving work-life balance, enhancing your professionalism, traveling to Europe, refining your coding abilities, and thriving in a relaxed work environment, LiteBreeze could be the perfect fit for you.
Posted 2 weeks ago
6.0 - 10.0 years
25 - 32 Lacs
pune
Hybrid
We are seeking a hands-on, results-driven Engineering Consultant with deep expertise in AWS services, containerization (Docker & Kubernetes), microservices architecture, and system design. In this role, you will lead a team of engineers in designing, building, and deploying cloud-native applications while working closely with clients to understand their needs and deliver high-impact solutions. You will also be responsible for ensuring the technical excellence of the team, managing engineering best practices, and mentoring junior engineers.

Key Responsibilities:
- Technical Leadership & Team Management: Lead a cross-functional team of engineers, ensuring effective collaboration, high-quality code, and adherence to best practices. Foster a culture of continuous improvement, technical learning, and innovation within the team.
- Cloud Architecture & Design: Design and architect scalable, resilient, and highly available cloud solutions on AWS, utilizing services such as EKS, Lambda, RDS, S3, and CloudFormation. Lead the development and deployment of microservices-based architectures, leveraging Docker and Kubernetes for containerization and orchestration, while optimizing for performance, security, and scalability.
- Client Engagement & Consulting: Act as the technical expert in client-facing engagements, advising on best practices for cloud adoption, microservices, and infrastructure design. Collaborate with clients to assess their technical needs, define architectures, and deliver solutions.
- Mentorship & Skill Development: Mentor and coach engineers, helping them grow their technical skills. Provide guidance on design patterns, architecture decisions, and coding practices.
- Reporting & Dashboards (Good to Have): Build insightful reporting dashboards for clients, leveraging AWS services like QuickSight or integrating third-party reporting tools to drive data-driven decisions.

Required Qualifications:
- Experience: 6+ years of hands-on software engineering experience, with at least 2 years in a technical leadership or engineering lead role. Proven experience delivering complex, cloud-native applications for enterprise clients.
- AWS Expertise: Strong experience with AWS services such as EKS, S3, Lambda, RDS, VPC, and CloudFormation. Experience in designing highly available, scalable architectures on AWS.
- Containerization & Orchestration: Extensive experience with Docker and Kubernetes for containerizing and orchestrating microservices. Experience with CI/CD tools like Jenkins, GitLab CI, or AWS CodePipeline.
- Microservices Architecture: Hands-on experience designing and building microservices architectures. Knowledge of distributed systems, fault tolerance, and service discovery patterns.
- Cloud-Native Design: Strong understanding of cloud-native application design principles, including event-driven architectures, serverless computing, and infrastructure-as-code (e.g., Terraform, CloudFormation).
- Client-Facing Consulting Experience: Experience working directly with clients to define technical solutions, align on project goals, and manage expectations.

Good to Have:
- Experience with data reporting tools and building visual dashboards (e.g., AWS QuickSight, Power BI, or Tableau) to deliver insights for clients.

Preferred Skills:
- Experience with advanced AWS services such as Amazon ECS, EKS, SQS, SNS, and CloudWatch.
- Experience with infrastructure-as-code tools like Terraform or CloudFormation.
- Familiarity with DevOps principles and best practices.
- Experience with serverless architecture using AWS Lambda or similar technologies.
- Strong communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
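The event-driven, loosely coupled microservices this role calls for rest on one idea: publishers and subscribers share only a topic name, never a direct reference. A toy in-memory bus (standing in for a managed broker such as SNS/SQS; the topic and service names are hypothetical) makes the decoupling concrete:

```python
from collections import defaultdict


class EventBus:
    """Tiny in-memory publish/subscribe bus.

    Stands in for a managed broker such as SNS/SQS to show how
    event-driven microservices stay decoupled: publishers never
    hold a reference to their subscribers, only to a topic name.
    """

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> [handler, ...]

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Deliver the event to every handler registered for the topic.
        for handler in self._subscribers[topic]:
            handler(event)


if __name__ == "__main__":
    bus = EventBus()
    shipped = []
    # The "shipping" service reacts to order events without the
    # "orders" service ever knowing it exists.
    bus.subscribe("order.created", lambda e: shipped.append(e["order_id"]))
    bus.publish("order.created", {"order_id": "A-1"})
    print(shipped)  # -> ['A-1']
```

A real broker adds durability, retries, and fan-out across processes, but the contract between services is the same: agree on a topic and an event schema, nothing more.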
Posted 2 weeks ago
3.0 - 5.0 years
15 - 19 Lacs
bengaluru
Work from Office
Immediate Joiners

Job Summary: We are seeking an experienced DevOps Engineer to join our team and help us build and maintain scalable, secure, and efficient infrastructure on Amazon Web Services (AWS). The ideal candidate will have a strong background in DevOps practices, AWS services, and scripting languages.

Key Responsibilities:
- Design and Implement Infrastructure: Design and implement scalable, secure, and efficient infrastructure on AWS using services such as EC2, S3, RDS, and VPC.
- Automate Deployment Processes: Automate deployment processes using tools such as AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy.
- Implement Continuous Integration and Continuous Deployment (CI/CD): Implement CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, and CircleCI.
- Monitor and Troubleshoot Infrastructure: Monitor and troubleshoot infrastructure issues using tools such as Amazon CloudWatch, AWS X-Ray, and AWS CloudTrail.
- Collaborate with Development Teams: Collaborate with development teams to ensure smooth deployment of applications and infrastructure.
- Implement Security Best Practices: Implement security best practices and ensure compliance with organizational security policies.
- Optimize Infrastructure for Cost and Performance: Optimize infrastructure for cost and performance using tools such as AWS Cost Explorer and AWS Trusted Advisor.

Requirements:
- Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience: Minimum 3 years of experience in DevOps engineering, with a focus on AWS services.
- Technical Skills: AWS services such as EC2, S3, RDS, VPC, and Lambda; scripting languages such as Python, Ruby, or PowerShell; CI/CD tools such as Jenkins, GitLab CI/CD, and CircleCI; monitoring and troubleshooting tools such as Amazon CloudWatch, AWS X-Ray, and AWS CloudTrail.
- Soft Skills: Excellent communication and interpersonal skills; strong problem-solving and analytical skills; ability to work in a team environment and collaborate with development teams.

Nice to Have:
- Certifications: AWS certifications such as AWS Certified DevOps Engineer or AWS Certified Solutions Architect.
- Experience with Containerization: Experience with containerization using Docker or Kubernetes.
- Experience with Serverless Computing: Experience with serverless computing using AWS Lambda or Azure Functions.
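The CI/CD pipelines mentioned above, whether built with CodePipeline, Jenkins, or GitLab CI/CD, share the same fail-fast shape: named stages run in order and the pipeline halts at the first failure. A minimal sketch of that behaviour (the stage names are illustrative):

```python
def run_pipeline(stages):
    """Run (name, action) stages in order, stopping at the first failure.

    `action` is a zero-argument callable returning True on success --
    the same fail-fast behaviour a CodePipeline or Jenkins pipeline
    gives you. Returns the list of stages that completed successfully.
    """
    completed = []
    for name, action in stages:
        if not action():
            print(f"stage '{name}' failed; halting pipeline")
            break
        completed.append(name)
    return completed


if __name__ == "__main__":
    result = run_pipeline([
        ("build", lambda: True),
        ("test", lambda: False),   # simulate a failing test stage
        ("deploy", lambda: True),  # never reached
    ])
    print(result)  # -> ['build']
```

Real pipeline tools add artifact passing, approvals, and parallel stages on top, but the ordering-and-halt contract is the core idea.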
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As an MLOps Engineer, you will be responsible for researching and implementing MLOps tools, frameworks, and platforms for Data Science projects. You will work on enhancing MLOps maturity within the organization by introducing modern, agile, and automated approaches to Data Science. Conducting internal training and presentations on the benefits and usage of MLOps tools will also be part of your responsibilities.

Your key responsibilities will include model deployment, monitoring, and retraining, along with managing deployment, inference, monitoring, and retraining pipelines. You will be involved in drift detection, experiment tracking, MLOps architecture, and REST API publishing.

To excel in this role, you should have extensive experience with Kubernetes and the operationalization of Data Science projects using popular frameworks or platforms such as Kubeflow, AWS SageMaker, Google AI Platform, and Azure Machine Learning. A solid understanding of ML and AI concepts, hands-on experience in ML model development, and proficiency in Python for both ML and automation tasks are essential. Additionally, knowledge of Bash and the Unix command-line toolkit, experience implementing CI/CD/CT pipelines, and experience with cloud platforms, especially AWS, would be advantageous.
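Drift detection, one of the responsibilities above, can be as simple as comparing a serving batch's statistics against the training baseline. A deliberately simplified sketch using a mean-shift rule (production systems typically use metrics such as PSI or KL divergence, and the threshold here is arbitrary):

```python
import statistics


def detect_drift(baseline, current, threshold=2.0):
    """Flag drift when the current batch mean moves more than
    `threshold` baseline standard deviations from the baseline mean.

    A deliberately simple stand-in for production drift metrics
    such as PSI or KL divergence.
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    shift = abs(statistics.mean(current) - mu)
    return shift > threshold * sigma


if __name__ == "__main__":
    baseline = [10.0, 10.5, 9.8, 10.2, 9.9, 10.1]  # training-time feature values
    print(detect_drift(baseline, [10.0, 10.3, 9.7]))   # stable batch -> False
    print(detect_drift(baseline, [14.9, 15.2, 15.1]))  # shifted batch -> True
```

In a retraining pipeline, a `True` result would typically raise an alert or trigger the retraining DAG rather than silently continuing to serve the stale model.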
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
maharashtra
On-site
You will be a part of an experienced team as a Java AWS professional with 6-9 years of experience in the role of T2. Your responsibilities will include:

- Demonstrating strong proficiency in Java (8 or higher) and the Spring Boot framework.
- Having hands-on experience with various AWS services such as EC2, Lambda, API Gateway, S3, CloudFormation, DynamoDB, and RDS.
- Developing microservices and RESTful APIs while understanding cloud architecture and deployment strategies.
- Utilizing CI/CD pipelines and tools like Jenkins, GitHub Actions, or AWS CodePipeline.
- Knowledge of containerization (Docker) and orchestration tools like ECS/Kubernetes would be a plus.
- Experience with monitoring/logging tools such as CloudWatch, ELK Stack, or Prometheus is desirable.
- Familiarity with security best practices for cloud-native apps (IAM roles, encryption, etc.).
- Developing and maintaining robust backend services and RESTful APIs using Java and Spring Boot.
- Designing and implementing scalable, maintainable, and deployable microservices in AWS.
- Integrating backend systems with AWS services like Lambda, S3, DynamoDB, RDS, SNS/SQS, and CloudFormation.
- Collaborating with product managers, architects, and other developers to deliver end-to-end features.
- Participating in code reviews, design discussions, and agile development processes.

About Virtusa: Virtusa embodies values such as teamwork, quality of life, and professional and personal development. Join a global team of 27,000 professionals who care about your growth and provide exciting projects, opportunities, and exposure to state-of-the-art technologies throughout your career. At Virtusa, collaboration and a team-oriented environment are highly valued. We provide a dynamic platform for great minds to nurture new ideas and strive for excellence.
Posted 2 weeks ago
5.0 - 10.0 years
5 - 15 Lacs
bengaluru
Work from Office
- AWS infrastructure engineering experience.
- Expert-level knowledge of AWS core and advanced services.
- Proficiency in Terraform, CloudFormation, or AWS CDK.
- Proven experience with CI/CD pipelines.
Posted 3 weeks ago
7.0 - 12.0 years
10 - 15 Lacs
hyderabad
Work from Office
We are seeking a Lead/Senior Data Engineer with 7-12 years of experience to architect, develop, and optimize data solutions in a cloud-native environment. The role requires strong expertise in AWS Glue, PySpark, and Python, with a proven ability to design scalable data pipelines and frameworks for large-scale enterprise systems. Prior exposure to financial services or regulated environments is a strong advantage.

Key Responsibilities:
- Design and implement secure, scalable pipelines using AWS Glue, PySpark, EMR, S3, Lambda, and other AWS services.
- Lead ETL development for structured and semi-structured data, ensuring high performance and reliability.
- Build reusable frameworks, automation tools, and CI/CD pipelines with AWS CodePipeline, Jenkins, or GitLab.
- Mentor junior engineers, conduct code reviews, and enforce best practices.
- Implement data governance practices, including quality, lineage, and compliance standards.
- Collaborate with product, analytics, compliance, and DevOps teams to align technical solutions with business goals.
- Optimize workflows for cost efficiency, scalability, and speed.
- Prepare technical documentation and present architectural solutions to stakeholders.

Requirements:
- Strong hands-on experience with AWS Glue, PySpark, Python, and AWS services (EMR, S3, Lambda, Redshift, Athena).
- Proficiency in ETL workflows, Airflow (or equivalent), and DevOps practices.
- Solid knowledge of data governance, lineage, and agile methodologies.
- Excellent communication and stakeholder engagement skills.
- Financial services or regulated-environment background preferred.
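The ETL work described above centres on quality-checked transforms. Here is a sketch of such a transform step in plain Python so the logic stays visible without a Spark cluster; in a Glue/PySpark job the same rules would be expressed as DataFrame operations, and the field names below are hypothetical:

```python
def transform(records):
    """Filter malformed rows and normalise fields.

    The kind of transform step a Glue/PySpark job would express with
    DataFrame filters and column expressions; field names ("trade_id",
    "amount", "currency") are illustrative only.
    """
    cleaned = []
    for r in records:
        # Basic data-quality gate: drop rows missing required fields.
        if not r.get("trade_id") or r.get("amount") is None:
            continue
        cleaned.append({
            "trade_id": str(r["trade_id"]).strip(),
            "amount": round(float(r["amount"]), 2),
            "currency": (r.get("currency") or "USD").upper(),
        })
    return cleaned


if __name__ == "__main__":
    raw = [
        {"trade_id": " T1 ", "amount": "100.456", "currency": "usd"},
        {"trade_id": None, "amount": "5"},                # dropped: no id
        {"trade_id": "T2", "amount": 7, "currency": None},
    ]
    print(transform(raw))
```

Keeping the transform a pure function of its input also makes it trivial to unit-test, which is what makes the "reusable frameworks" requirement above practical.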
Posted 3 weeks ago
5.0 - 10.0 years
0 Lacs
karnataka
On-site
Job Description: You are a Backend Developer with 5 to 10 years of experience, specializing in Python. This role requires expertise in Python, OpenSearch, AWS Lambda, AWS AppSync, GraphQL, DynamoDB, and REST APIs. It is a 6-month contract position, extendable based on performance, and involves working with AWS serverless architecture and modern backend frameworks.

Your primary responsibilities will include developing and maintaining backend systems using Python at an expert level. You should also have intermediate proficiency in OpenSearch, AWS Lambda, AWS AppSync, GraphQL, DynamoDB, and REST API development. Additionally, familiarity with JavaScript/TypeScript, ArangoDB, AWS CloudFormation, AWS CLI, Docker, AWS CodePipeline, and AWS CloudWatch would be beneficial.

Strong analytical and problem-solving skills are essential for this role, along with the ability to thrive in a fast-paced environment. Immediate joiners are preferred for this position, located in Bangalore, Hyderabad, Chennai, Gurgaon, or Pune. If you are a proactive Backend Developer with a passion for cutting-edge technologies and a drive for excellence, we encourage you to apply for this opportunity.
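The serverless REST work described above typically follows the Lambda proxy-integration convention: a handler receives an event dict and returns a statusCode/body response. A minimal sketch (the `item_id` resource is hypothetical, and a real handler would query DynamoDB rather than echoing the parameter):

```python
import json


def lambda_handler(event, context=None):
    """Minimal AWS Lambda-style REST handler.

    The (event, context) signature and the statusCode/body response
    shape follow the Lambda proxy-integration convention; the
    "item_id" path parameter is a hypothetical example.
    """
    item_id = (event.get("pathParameters") or {}).get("item_id")
    if item_id is None:
        return {"statusCode": 400,
                "body": json.dumps({"error": "item_id required"})}
    # A real handler would look the item up in DynamoDB here.
    return {"statusCode": 200, "body": json.dumps({"item_id": item_id})}


if __name__ == "__main__":
    print(lambda_handler({"pathParameters": {"item_id": "42"}})["statusCode"])  # 200
    print(lambda_handler({})["statusCode"])  # 400
```

Because the handler is an ordinary function of a dict, it can be exercised locally with plain test events before ever being deployed behind API Gateway or AppSync.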
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
coimbatore, tamil nadu
On-site
You are an experienced AWS Cloud & DevOps Architect with 5-6 years of experience, based in Coimbatore. Your primary responsibility is to design, implement, and manage scalable, secure, and high-performing cloud infrastructure using modern DevOps practices, Kubernetes, automation, and observability tools. You will also provide technical leadership and guidance to the team.

Your key responsibilities include designing and deploying scalable, fault-tolerant systems on AWS using services like EC2, S3, VPC, RDS, and Lambda. You will automate infrastructure provisioning with IaC tools like Terraform, CloudFormation, and AWS CDK. Additionally, you will deploy, manage, and optimize Kubernetes clusters (EKS) and containerized applications, ensuring best practices for performance, networking, and security.

In terms of observability and monitoring, you will implement solutions such as Prometheus, Grafana, ELK, and CloudWatch for proactive performance management. You will build and maintain CI/CD pipelines using tools like Jenkins, GitLab CI, and AWS CodePipeline, as well as develop automation tools to streamline operations.

Security and compliance are also key aspects of your role. You will enforce IAM, network security, and data protection best practices, and perform regular security audits to ensure compliance. Collaboration and leadership skills are essential, as you will partner with cross-functional teams, mentor junior engineers, and stay updated on emerging cloud and DevOps technologies.

To qualify for this role, you should have strong hands-on experience with AWS core services, Kubernetes (EKS), and IaC tools, along with a proven track record in CI/CD, automation, observability, and cloud security. Excellent problem-solving, collaboration, and leadership skills are also important.

This is a full-time position with health insurance benefits. The work location is in person.
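The proactive monitoring responsibility above usually reduces to threshold alarms evaluated over consecutive datapoints, so a single spike does not page anyone. A sketch of CloudWatch-style static-threshold evaluation (the threshold and period values are illustrative):

```python
def alarm_state(datapoints, threshold, periods):
    """Evaluate a CloudWatch-style static-threshold alarm.

    Fire only when the metric breaches the threshold for `periods`
    consecutive datapoints, which avoids paging on a single spike.
    """
    if len(datapoints) < periods:
        return "INSUFFICIENT_DATA"
    recent = datapoints[-periods:]
    return "ALARM" if all(v > threshold for v in recent) else "OK"


if __name__ == "__main__":
    cpu = [42.0, 55.0, 91.0, 93.0, 95.0]  # CPU utilisation samples (%)
    print(alarm_state(cpu, threshold=90.0, periods=3))  # -> ALARM
    print(alarm_state(cpu, threshold=90.0, periods=5))  # -> OK
```

Prometheus expresses the same idea with a `for:` duration on alerting rules; the tuning question in both systems is the trade-off between alert latency and noise.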
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
ahmedabad, gujarat
On-site
As a skilled professional in the field of software engineering, your role will involve designing and implementing top-quality technical solutions using cutting-edge technologies and tools. Your expertise in NodeJS, JavaScript, and TypeScript, along with a deep understanding of AWS services such as Lambda, API Gateway, Route53, ECS, Systems Manager, EC2, S3, IAM roles, CloudFront, and CloudWatch, will be essential for successfully executing projects. Additionally, your proficiency in Serverless with NodeJS, VueJS/ReactJS, creating Bitbucket pipelines, AWS CodePipeline, and headless CMS platforms will be a valuable asset in this role.

Your exceptional logical, analytical, and troubleshooting skills will play a crucial role in handling end-to-end responsibilities, including requirement gathering, development, and deployment. Being open to on-call support when necessary and possessing excellent verbal and written communication skills are key aspects of this position. Your dedication to reusability, performance optimization, and adherence to design patterns, as well as experience in REST or GraphQL, automated unit testing, code review, refactoring, and technical documentation, will be highly appreciated.

You will find excitement in the opportunity to work on enterprise-grade applications, take complete ownership and independence in project execution, be rewarded for innovation, and learn from experienced UI tech architects. The work environment offers a great and rewarding atmosphere for your professional growth and development. This position is based in Ahmedabad and requires working from the office.
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
coimbatore, tamil nadu
On-site
Kovai.co is a catalyst, sparking a revolution in the world of enterprise software and B2B SaaS. As a technology powerhouse, we deliver best-in-class enterprise software and game-changing SaaS solutions across industries. We are rewriting the B2B landscape by empowering over 2,500 businesses worldwide with our award-winning SaaS solutions, including Biztalk360, Turbo360, and Document360. With our UK headquarters, Indian innovation, and global impact, our journey has been remarkable, witnessing exponential growth and profitability right from our inception. At Kovai.co, we are on track towards $30 million in annual revenue, and we are just getting started. Fueled by a tribe of thoughtful helpers, we are obsessed with empowering customers, uplifting colleagues, and igniting our own journeys. Redefining tech is our game, and we invite you to join Kovai.co, where passion meets purpose.

As a Lead SDET, your responsibilities will include:
- Advanced Test Automation (C# Focus): Architecting and maintaining enterprise-grade test automation frameworks using Selenium WebDriver (C#/.NET Core), Playwright, or Cypress, following the Page Object Model (POM) design pattern. Developing reusable libraries for cross-browser testing and parallel execution via Selenium Grid.
- Performance Testing: Designing and executing performance testing strategies to assess system scalability, stability, and responsiveness. Utilizing tools such as JMeter, LoadRunner, and Gatling to simulate user load and measure application performance.
- Functional Testing of Enterprise Products with Customer Focus: Conducting thorough functional testing of enterprise-level products to ensure they meet customer requirements. Ensuring zero defect leakage by rigorously validating new features and updates before release.
- Security Testing: Conducting vulnerability scans and penetration tests using tools like Burp Suite or OWASP ZAP. Validating compliance with GDPR and SOC2 standards during test cycles.
- In-Sprint Automation: Automating test scenarios within sprint development cycles. Shifting security and performance testing left into CI/CD pipelines.
- Tooling & Framework Ownership: Architecting tools for test data generation, environment provisioning, and parallel execution. Mentoring teams on automation best practices and implementing scalable Selenium automation frameworks.

To be a good fit for this role, you must have:
- A Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 8+ years of experience in software testing, automation, and quality assurance.
- Strong programming experience with data structures and algorithms.
- Expertise in C# and Selenium WebDriver.
- Hands-on experience with various testing tools and frameworks.
- Familiarity with CI/CD pipelines, version control, and DevOps tools.
- Strong understanding of Agile methodologies and shift-left testing practices.
- Excellent problem-solving, debugging, and analytical skills.
- Strong communication and collaboration skills to work effectively with cross-functional teams.

Additionally, it would be good to have:
- Open-source contributions and a GitHub presence.
- Exposure to AI-driven testing innovation and knowledge of tools like TensorFlow, Hugging Face, ChatGPT, and GitHub Copilot for test script generation.

Kovai.co is committed to building a diverse workforce and fostering a culture of belonging and respect for all. We stand against discrimination and ensure equal opportunity for everyone to build a successful career.
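The Page Object Model named in the first responsibility keeps locators and page actions inside one class, so tests never touch raw selectors. The posting's stack is C#/Selenium; the sketch below uses Python with a stubbed driver purely to show the pattern's shape (all names are illustrative):

```python
class FakeDriver:
    """Stand-in for a Selenium WebDriver so the pattern runs anywhere."""

    def __init__(self, fields):
        self.fields = fields
        self.submitted = False


class LoginPage:
    """Page Object: selectors and page actions live here, so tests
    depend on intent-revealing methods rather than raw locators."""

    def __init__(self, driver):
        self.driver = driver

    def enter_username(self, value):
        self.driver.fields["username"] = value
        return self  # fluent interface, common in C# POM frameworks

    def submit(self):
        self.driver.submitted = True
        # Stubbed page rule: submission succeeds if a username was entered.
        return bool(self.driver.fields.get("username"))


if __name__ == "__main__":
    page = LoginPage(FakeDriver({}))
    ok = page.enter_username("qa-user").submit()
    print(ok)  # -> True
```

The payoff is maintenance cost: when the UI changes, only the page class is updated, while every test that chains its methods stays untouched.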
Posted 1 month ago